12:00
Twistor sigma models, Plebanski generating functions and graviton scattering
Abstract
Plebanski generating functions give a compact encoding of the geometry of self-dual Ricci-flat space-times or hyper-Kähler spaces. They have applications as generating functions for BPS/DT/Gromov-Witten invariants. We first show that Plebanski's first fundamental form also provides a generating function for the gravitational MHV amplitude. We then obtain these Plebanski generating functions from the corresponding twistor spaces as the value of the action of new sigma models for holomorphic curves in twistor space.
In four dimensions, perturbations of the hyperkähler structure correspond to positive-helicity gravitons. The sigma model's perturbation theory gives rise to a sum of tree diagrams for the gravity MHV amplitude observed previously in the literature, and their summation via a matrix tree theorem gives a first-principles derivation of Hodges' determinant formula directly from general relativity. We generalise the twistor sigma model to higher degree (defined in the first instance with a cosmological constant), giving a new generating principle for the full tree-level graviton S-matrix, with or without cosmological constant. This is joint work with Tim Adamo and Atul Sharma in https://arxiv.org/abs/2103.16984.
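For orientation, the combinatorial identity behind the summation of tree diagrams mentioned above is the weighted matrix-tree theorem; the schematic statement below is a standard version of that identity, with the graviton-specific weights and normalisations left to the paper.

```latex
% Weighted matrix-tree theorem (schematic). For a graph on vertices 1,...,n
% with symmetric edge weights w_{ij}, define the weighted Laplacian
%   L_{ij} = -w_{ij}  for i \neq j,   L_{ii} = \sum_{k \neq i} w_{ik}.
\[
  \det\!\big(L^{(r)}\big) \;=\; \sum_{T \in \mathcal{T}} \;\prod_{(ij)\in T} w_{ij},
\]
% where \mathcal{T} is the set of spanning trees of the graph and L^{(r)} is
% L with the r-th row and column removed. In the MHV application the weights
% w_{ij} are built from spinor-helicity data, with the precise weights and
% normalisation as in the paper linked above.
```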
16:00
Conformal Block Expansion in Celestial CFT
Abstract
The 4D 4-point scattering amplitude of massless scalars via a massive exchange can be expressed in a basis of conformal primary particle wavefunctions. In this talk I will show that the resulting celestial amplitude admits a decomposition as a sum over 2D conformal blocks. This decomposition is obtained by contour deformation upon expanding the celestial amplitude in a basis of conformal partial waves. The conformal blocks include intermediate exchanges of spinning light-ray states, as well as scalar states with positive integer conformal weights. The conformal block prefactors are found, as expected, to be quadratic in the celestial OPE coefficients. Finally, I will comment on implications of this result for celestial holography and discuss some open questions.
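Schematically, and purely for orientation, a decomposition of this kind has the shape shown below; the measure, density and contour conventions here are placeholders, not those of the talk.

```latex
% Schematic partial-wave-to-block decomposition of a celestial 4-point amplitude.
\[
  \widetilde{\mathcal{A}}(z,\bar z)
  \;=\; \sum_{J} \int_{\mathcal{C}} \frac{d\Delta}{2\pi i}\,
        \rho_{J}(\Delta)\, \Psi_{\Delta,J}(z,\bar z)
  \;\longrightarrow\;
  \sum_{k} c_{\Delta_k, J_k}\, g_{\Delta_k, J_k}(z,\bar z).
\]
% Here \Psi_{\Delta,J} are conformal partial waves, the contour \mathcal{C}
% runs along the principal series, and deforming it picks up poles of the
% density \rho_J(\Delta), leaving a discrete sum of conformal blocks
% g_{\Delta,J} whose coefficients are quadratic in the celestial OPE data.
```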
Geometric Methods for Machine Learning and Optimization
Abstract
Many machine learning applications involve non-Euclidean data, such as graphs, strings or matrices. In such cases, exploiting Riemannian geometry can deliver algorithms that are computationally superior to standard (Euclidean) nonlinear programming approaches. This observation has resulted in an increasing interest in Riemannian methods in the optimization and machine learning community.
In the first part of the talk, we consider the task of learning a robust classifier in hyperbolic space. Such spaces have received a surge of interest for representing large-scale, hierarchical data, because they achieve better representation accuracy with fewer dimensions. We present the first theoretical guarantees for the (robust) large-margin learning problem in hyperbolic space and discuss conditions under which hyperbolic methods are guaranteed to surpass the performance of their Euclidean counterparts. In the second part, we introduce Riemannian Frank-Wolfe (RFW) methods for constrained optimization on manifolds. Here, we discuss matrix-valued tasks for which such Riemannian methods are more efficient than classical Euclidean approaches. In particular, we consider applications of RFW to the computation of Riemannian centroids and Wasserstein barycenters, both of which are crucial subroutines in many machine learning methods.
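As a rough illustration of the Frank-Wolfe template underlying RFW, the sketch below assumes a generic manifold interface (exponential and logarithm maps, a Riemannian gradient) and a linear minimization oracle over the constraint set; all names and the toy Euclidean example are ours, not the implementation discussed in the talk.

```python
# Minimal sketch of a Riemannian Frank-Wolfe (RFW) loop. Hypothetical
# interface: riem_grad(x), exp_map(x, v), log_map(x, z), and lmo(g, x),
# which returns argmin_z <g, log_x(z)> over the constraint set.
import numpy as np

def riemannian_frank_wolfe(x0, riem_grad, exp_map, log_map, lmo, n_iters=100):
    """Linearise along geodesics, call the oracle, step towards its answer."""
    x = x0
    for k in range(n_iters):
        g = riem_grad(x)                      # Riemannian gradient at x
        z = lmo(g, x)                         # oracle solution in the constraint set
        eta = 2.0 / (k + 2.0)                 # classical Frank-Wolfe step size
        x = exp_map(x, eta * log_map(x, z))   # geodesic step towards z
    return x

if __name__ == "__main__":
    # Toy check in Euclidean space (exp/log reduce to vector addition/subtraction):
    # minimise ||x - b||^2 over the probability simplex.
    b = np.array([0.2, 0.5, 0.3])
    grad = lambda x: 2.0 * (x - b)
    exp_map = lambda x, v: x + v
    log_map = lambda x, z: z - x

    def lmo(g, x):
        # The linear subproblem over the simplex is minimised at a vertex.
        z = np.zeros_like(g)
        z[np.argmin(g)] = 1.0
        return z

    x_star = riemannian_frank_wolfe(np.ones(3) / 3, grad, exp_map, log_map, lmo)
    print(x_star)  # approaches b, which already lies on the simplex
```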
Fast Symmetric Tensor Decomposition
Abstract
From latent variable models in machine learning to inverse problems in computational imaging, tensors pervade the data sciences. Often, the goal is to decompose a tensor into a particular low-rank representation, thereby recovering quantities of interest about the application at hand. In this talk, I will present a recent method for low-rank CP symmetric tensor decomposition. The key ingredients are Sylvester’s catalecticant method from classical algebraic geometry and the power method from numerical multilinear algebra. In simulations, the method is roughly one order of magnitude faster than existing CP decomposition algorithms, with similar accuracy. I will state guarantees for the relevant non-convex optimization problem, and robustness results when the tensor is only approximately low-rank (assuming an appropriate random model). Finally, if the tensor being decomposed is a higher-order moment of data points (as in multivariate statistics), our method may be performed without explicitly forming the moment tensor, opening the door to high-dimensional decompositions. This talk is based on joint works with João Pereira, Timo Klock and Tammy Kolda.
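To fix ideas about the power-method ingredient mentioned above, here is a minimal symmetric tensor power iteration with deflation for the simplest (orthogonally decomposable) case. This is a standard textbook routine rather than the catalecticant-based algorithm of the talk, and all function names are illustrative.

```python
# Symmetric tensor power iteration with greedy deflation, for a 3rd-order
# orthogonally decomposable tensor T = sum_i lambda_i a_i (x) a_i (x) a_i.
import numpy as np

def tensor_apply(T, x):
    """Contract a symmetric 3rd-order tensor with x twice: T(I, x, x)."""
    return np.einsum('ijk,j,k->i', T, x, x)

def power_iteration(T, n_restarts=10, n_iters=100, rng=None):
    """Recover one (eigenvalue, eigenvector) pair by repeated contraction."""
    rng = np.random.default_rng() if rng is None else rng
    best_val, best_vec = -np.inf, None
    for _ in range(n_restarts):
        x = rng.standard_normal(T.shape[0])
        x /= np.linalg.norm(x)
        for _ in range(n_iters):
            x = tensor_apply(T, x)
            x /= np.linalg.norm(x)
        val = np.einsum('ijk,i,j,k->', T, x, x, x)
        if val > best_val:
            best_val, best_vec = val, x
    return best_val, best_vec

def symmetric_decompose(T, rank):
    """Greedy deflation: extract one rank-1 component at a time."""
    components = []
    for _ in range(rank):
        lam, a = power_iteration(T)
        components.append((lam, a))
        T = T - lam * np.einsum('i,j,k->ijk', a, a, a)   # deflate
    return components

if __name__ == "__main__":
    # Build a random orthogonally decomposable tensor and recover its components.
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))      # 3 orthonormal vectors in R^5
    lams = np.array([3.0, 2.0, 1.0])
    T = sum(l * np.einsum('i,j,k->ijk', q, q, q) for l, q in zip(lams, Q.T))
    for lam, a in symmetric_decompose(T, 3):
        print(round(lam, 3))   # prints the three lambdas (typically largest first)
```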