13:00
The Geometry of Gravitational Radiation
Abstract
In this talk, I will examine the dynamics of the fermion–rotor system, originally introduced by Polchinski as a toy model for monopole–fermion scattering. Despite its simplicity, the system is surprisingly subtle, with ingoing and outgoing fermion fields carrying different quantum numbers. I will show that the rotor acts as a twist operator in the low-energy theory, changing the quantum numbers of excitations that have previously passed through the origin to ensure scattering consistent with all symmetries, thereby resolving the long-standing unitarity puzzle. I will then discuss generalizations of this setup with multiple rotors and unequal charges, and demonstrate how the system can be viewed as a UV completion of boundary states for chiral theories, establishing a connection to the proposed resolution of the puzzle using boundary conformal field theory.
Recently, a close parallel has emerged between conformal field theory in general dimension and the theory of automorphic forms. I will review this connection and explain how it can be leveraged to make rigorous progress on central open problems of number theory, using methods borrowed from the conformal bootstrap. In particular, I will use the crossing equation to prove new subconvex bounds on L-functions. Based on work with Adve, Bonifacio, Kravchuk, Pal, Radcliffe, and Rogelberg: https://arxiv.org/abs/2508.20576.
Neurons interact via spikes, a pulse-like, discontinuous mechanism. Their mean-field PDE description gives Fokker-Planck equations with novel nonlinearities. From a probability point of view, these give rise to McKean-Vlasov equations involving hitting times. Similar mechanisms also arise in models for systemic risk in mathematical finance and in the supercooled Stefan problem. In this talk, we will first present models for spiking neurons: both microscopic particle models and macroscopic PDE models, with an emphasis on the general mathematical structure. A central question for these equations is the finite-time blow-up of the firing rate, which scientifically corresponds to the synchronization of a neuronal network. We will discuss how to continue the solution physically after the blow-up by introducing a new timescale. The new timescale also helps us to understand the long-term behavior of the equation, as it reveals a hidden contraction structure in the hyperbolic case. Finally, we will present a recently developed numerical solver based on this framework. Numerical tests show that during synchronization the standard microscopic solver suffers from a rather demanding time-step requirement, while our macro-mesoscopic solver does not.
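As a concrete illustration of the microscopic particle models mentioned above, the following minimal sketch simulates a network of integrate-and-fire neurons with mean-field interaction through hitting times: each potential diffuses, resets on reaching a threshold, and kicks the others upward by alpha/N. All parameter names and values here are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def simulate_if_particles(n=1000, T=1.0, dt=1e-3, alpha=0.5,
                          mu=1.0, sigma=1.0, threshold=1.0, seed=0):
    """Euler scheme for N integrate-and-fire neurons (illustrative sketch).

    Each potential follows dX = mu dt + sigma dW; when a neuron hits
    `threshold` it resets to 0 and every other potential jumps by
    alpha / n -- the mean-field coupling through hitting times.
    Returns the empirical firing rate on each time step.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    steps = int(T / dt)
    rate = np.zeros(steps)
    for k in range(steps):
        # diffusion step
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        fired = x >= threshold
        # cascade: the kick may push further neurons over the threshold
        while fired.any():
            rate[k] += fired.sum()
            kick = alpha * fired.sum() / n
            x[fired] = 0.0
            x[~fired] += kick
            fired = x >= threshold
    return rate / (n * dt)
```

For alpha large enough, the cascade loop fires a macroscopic fraction of neurons in a single time step; this is the discrete counterpart of the finite-time blow-up of the firing rate (synchronization) discussed in the abstract, and the regime where the time-step restriction of microscopic solvers becomes demanding.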
Irina-Beatrice Haas will talk about: 'Sharp error bounds for approximate eigenvalues and singular values from subspace methods'
Subspace methods are commonly used for finding approximate eigenvalues and singular values of large-scale matrices. Once a subspace is found, the Rayleigh-Ritz method (for symmetric eigenvalue problems) and Petrov-Galerkin projection (for singular values) are the de facto methods for extracting eigenvalues and singular values. In this work we derive error bounds for approximate eigenvalues obtained via the Rayleigh-Ritz process. Our bounds are quadratic in the residual corresponding to each Ritz value while also being robust to clustered Ritz values, which is a key improvement over existing results. We apply these bounds to several methods for computing eigenvalues and singular values, including Krylov methods and randomized algorithms.
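For readers unfamiliar with the extraction step, here is a minimal sketch of Rayleigh-Ritz for a symmetric matrix: project onto an orthonormal subspace basis, diagonalize the small projected matrix, and map back. It also returns the per-Ritz-value residual norms, the quantities in which the quadratic bounds of the talk are stated. The function name and interface are illustrative, not from the paper.

```python
import numpy as np

def rayleigh_ritz(A, Q):
    """Rayleigh-Ritz extraction of approximate eigenpairs of symmetric A
    from an orthonormal subspace basis Q (columns), as an illustration.

    Returns Ritz values theta, Ritz vectors V, and residual norms
    ||A v_i - theta_i v_i|| for each Ritz pair.
    """
    B = Q.T @ A @ Q                 # small projected matrix
    theta, S = np.linalg.eigh(B)    # Ritz values and eigenvector coefficients
    V = Q @ S                       # Ritz vectors in the original space
    R = A @ V - V * theta           # columnwise residuals A v - theta v
    res = np.linalg.norm(R, axis=0)
    return theta, V, res
```

When the columns of Q span an exact invariant subspace, the residuals vanish and the Ritz values are exact eigenvalues; in general, classical bounds give eigenvalue errors on the order of the residual norm squared divided by a spectral gap, which is where robustness to clustered Ritz values becomes the delicate point.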
Casey Garner will talk about: 'General Matrix Optimization'
Since our early days in mathematics, we have been aware of two important characteristics of a matrix, namely, its coordinates and its spectrum. We have also witnessed the growth of matrix optimization models from matrix completion to semidefinite programming; however, only recently has the question of solving matrix optimization problems with general spectral and coordinate constraints been studied. In this talk, we shall discuss recent work done to study these general matrix optimization models and how they relate to topics such as Riemannian optimization, approximation theory, and more.