14:00
Classifying Superconformal Defects in Diverse Dimensions
Abstract
We explore general constraints from unitarity, defect superconformal symmetry and locality of bulk-defect couplings to classify possible superconformal defects in superconformal field theories (SCFTs) of spacetime dimension d>2. Despite the general absence of locally conserved currents, the defect CFT contains new distinguished operators with protected quantum numbers that account for the broken bulk symmetries. Consistency with the preserved superconformal symmetry and unitarity requires that such operators arrange into unitary multiplets of the defect superconformal algebra, which in turn leads to nontrivial constraints on what kinds of defects are admissible in a given SCFT. We will focus on the case of superconformal lines in this talk and comment on several interesting implications of our analysis, such as symmetry-enforced defect conformal manifolds, defect RG flows and possible nontrivial one-form symmetries in various SCFTs.
Selection Dynamics for Deep Neural Networks
Abstract
We present a partial differential equation framework for deep residual neural networks and for the associated learning problem. This is done by carrying out the continuum limits of neural networks with respect to width and depth. We study the well-posedness, the large-time solution behavior, and the characterization of the steady states of the forward problem. Several useful time-uniform estimates and stability/instability conditions are presented. We state and prove optimality conditions for the inverse deep learning problem, using standard variational calculus, the Hamilton-Jacobi-Bellman equation and the Pontryagin maximum principle. This serves to establish a mathematical foundation for investigating the algorithmic and theoretical connections between neural networks, PDE theory, variational analysis, optimal control, and deep learning.
This is based on joint work with Hailiang Liu.
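The depth continuum limit mentioned above can be illustrated numerically. The sketch below (my own toy example, not the speakers' formulation) treats a residual network x_{k+1} = x_k + h*f(x_k) as the forward-Euler discretization of the ODE dx/dt = f(x): as the depth grows with step h = T/depth, the network output converges to the time-T flow of the ODE.

```python
import numpy as np

def f(x, W, b):
    """Vector field of one residual block: a tanh layer (illustrative choice)."""
    return np.tanh(W @ x + b)

def resnet_forward(x0, W, b, depth, T=1.0):
    """Run `depth` residual blocks with shared weights; step size h = T/depth.
    This is exactly forward Euler for dx/dt = f(x, W, b) on [0, T]."""
    h = T / depth
    x = x0.copy()
    for _ in range(depth):
        x = x + h * f(x, W, b)
    return x

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) / 4
b = rng.normal(size=4)
x0 = rng.normal(size=4)

# Deepening the network refines the time discretization: the outputs converge.
shallow = resnet_forward(x0, W, b, depth=10)
deep = resnet_forward(x0, W, b, depth=10000)
print(np.linalg.norm(shallow - deep))
```

The forward problem studied in the talk is the continuum analogue of this iteration, with the learning problem becoming an optimal control problem for the ODE/PDE.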
Computational Hardness of Hypothesis Testing and Quiet Plantings
Abstract
When faced with a data analysis, learning, or statistical inference problem, the amount and quality of data available fundamentally determines whether such tasks can be performed with certain levels of accuracy. With the growing size of datasets, however, it is crucial not only that the underlying statistical task is possible, but also that it is doable by means of efficient algorithms. In this talk we will discuss methods aiming to establish limits of when statistical tasks are possible with computationally efficient methods or when there is a fundamental ``Statistical-to-Computational gap'' in which an inference task is statistically possible but inherently computationally hard. We will focus on Hypothesis Testing and the ``Low Degree Method'' and also address hardness of certification via ``quiet plantings''. Guiding examples will include Sparse PCA, bounds on the Sherrington-Kirkpatrick Hamiltonian, and lower bounds on Chromatic Numbers of random graphs.
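The basic hypothesis-testing setup behind examples like Sparse PCA can be sketched in a few lines. The toy example below (my own illustration, not the methods of the talk) draws data from a null model of pure noise and from an alternative with a planted sparse spike, and separates them with a simple spectral statistic; the computational-hardness questions of the talk concern regimes where no efficient statistic succeeds.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k, lam = 400, 100, 10, 3.0  # samples, dimension, sparsity, signal strength

# Planted sparse direction for the alternative hypothesis H1.
v = np.zeros(d)
v[:k] = 1.0 / np.sqrt(k)

def top_eigenvalue(X):
    """Largest eigenvalue of the empirical covariance (a simple test statistic)."""
    cov = X.T @ X / X.shape[0]
    return np.linalg.eigvalsh(cov)[-1]

# H0: pure Gaussian noise.
X0 = rng.normal(size=(n, d))
# H1: noise plus a rank-one spike along the sparse direction v.
X1 = rng.normal(size=(n, d)) + np.sqrt(lam) * rng.normal(size=(n, 1)) * v

print(top_eigenvalue(X0), top_eigenvalue(X1))
```

With the signal strength chosen well above the spectral threshold, the top eigenvalue cleanly distinguishes the two hypotheses; the interesting hard regimes are weaker signals, where the gap between what is statistically and computationally detectable opens up.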
Bridging GANs and Stochastic Analysis
Abstract
Generative adversarial networks (GANs) have enjoyed tremendous success in image generation and processing, and have recently attracted growing interest in other fields of application. In this talk we will start by analyzing the connection between GANs and mean field games (MFGs) as well as optimal transport (OT). We will first show a conceptual connection between GANs and MFGs: MFGs have the structure of GANs, and GANs are MFGs under the Pareto Optimality criterion. Interpreting MFGs as GANs, on one hand, will enable a GANs-based algorithm (MFGANs) to solve MFGs: one neural network (NN) for the backward Hamilton-Jacobi-Bellman (HJB) equation and one NN for the Fokker-Planck (FP) equation, with the two NNs trained in an adversarial way. Viewing GANs as MFGs, on the other hand, will reveal a new and probabilistic aspect of GANs. This new perspective, moreover, will lead to an analytical connection between GANs and Optimal Transport (OT) problems, and sufficient conditions for the minimax games of GANs to be reformulated in the framework of OT. Building up from the probabilistic views of GANs, we will then establish the approximation of GANs training via stochastic differential equations and demonstrate the convergence of GANs training via invariant measures of SDEs under proper conditions. This stochastic analysis for GANs training can serve as an analytical tool to study its evolution and stability.
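The idea of approximating stochastic-gradient training by an SDE, and studying convergence through the SDE's invariant measure, can be illustrated on a toy problem. The sketch below (my own schematic example, not the talk's model) runs SGD with noisy gradients on a 1-D quadratic loss and, alongside it, an Euler-Maruyama discretization of the approximating Ornstein-Uhlenbeck SDE; both share the same Gaussian invariant measure.

```python
import numpy as np

rng = np.random.default_rng(2)
eta, sigma, n_sgd = 0.01, 1.0, 50000  # learning rate, gradient-noise scale, SGD steps

# SGD on L(theta) = theta^2/2 with noisy gradient theta + sigma*xi:
# theta_{k+1} = theta_k - eta * (theta_k + sigma * xi_k).
theta = 0.0
sgd_samples = []
for _ in range(n_sgd):
    theta -= eta * (theta + sigma * rng.normal())
    sgd_samples.append(theta)

# Approximating SDE in rescaled time t = k*eta:
#   dX = -X dt + sqrt(eta) * sigma dW,
# simulated by Euler-Maruyama with its own (finer) time step dt.
dt = 0.005
n_sde = int(n_sgd * eta / dt)  # same total time horizon
x = 0.0
sde_samples = []
for _ in range(n_sde):
    x += -x * dt + np.sqrt(eta) * sigma * np.sqrt(dt) * rng.normal()
    sde_samples.append(x)

# Both stationary variances should be close to eta * sigma**2 / 2 = 0.005,
# the variance of the SDE's Gaussian invariant measure.
var_sgd = np.var(sgd_samples[n_sgd // 2:])
var_sde = np.var(sde_samples[n_sde // 2:])
print(var_sgd, var_sde)
```

For GANs the analogous analysis couples two such dynamics (generator and discriminator) in a minimax game, which is where the stability questions of the talk become delicate.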
Oxford Mathematics Online Public Lecture: Anna Seigal - Ideas for a Complex World
Humans have been processing information in the world for a long time, finding patterns and learning from our surroundings to solve problems. Today, scientists make sense of complex problems by gathering vast amounts of data, and analysing them with quantitative methods. These methods are important tools to understand the issues facing us: the spread of disease, climate change, or even political movements. But this quantitative toolbox can seem far removed from our individual approaches for processing information in our day-to-day lives. This disconnect and inaccessibility leads to the scientific tools becoming entangled in politics and questions of trust.
In this talk, Anna will describe how some of the ideas at the heart of science’s quantitative tools are familiar to us all. We’ll see how mathematics enables us to turn the ideas into tools. As a society, if we can better connect with the ideas driving this toolbox, we can see when to use (and not to use) the available tools, what’s missing from the toolbox, and how we might come up with new ideas to drive our future understanding of the world around us.
Anna Seigal is a Hooke Research Fellow in the Mathematical Institute at the University of Oxford and a Junior Research Fellow at The Queen's College.
Watch live (no need to register):
Oxford Mathematics Twitter
Oxford Mathematics Facebook
Oxford Mathematics Livestream
Oxford Mathematics YouTube
The Oxford Mathematics Public Lectures are generously supported by XTX Markets.
Core-Periphery Structure in Directed Networks
Abstract
Empirical networks often exhibit different meso-scale structures, such as community and core-periphery structure. Core-periphery structure typically consists of a well-connected core, and a periphery that is well-connected to the core but sparsely connected internally. Most core-periphery studies focus on undirected networks. In this talk we discuss a generalisation of core-periphery to directed networks which yields a family of core-periphery blockmodel formulations in which, contrary to many existing approaches, core and periphery sets are edge-direction dependent. Then we shall focus on a particular structure consisting of two core sets and two periphery sets, and we introduce two measures to assess the statistical significance and quality of this structure in empirical data, where one often has no ground truth. The idea will be illustrated on three empirical networks -- faculty hiring, a world trade dataset, and political blogs.
This is based on joint work with Andrew Elliott, Angus Chiu, Marya Bazzi and Mihai Cucuringu, available at https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.2019.0783
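The block-density picture behind core-periphery structure can be made concrete with a toy directed blockmodel. The sketch below (my own construction, simpler than the formulations of the paper) generates a directed network in which core-core and core-periphery edges are dense while periphery-periphery edges are sparse, then compares block densities.

```python
import numpy as np

rng = np.random.default_rng(3)
n_core, n_per = 20, 80
n = n_core + n_per
core = np.arange(n_core)
per = np.arange(n_core, n)

p_dense, p_sparse = 0.5, 0.02

# Edge-probability matrix of the directed blockmodel.
P = np.full((n, n), p_sparse)          # default: sparse (periphery -> periphery)
P[np.ix_(core, core)] = p_dense        # core -> core
P[np.ix_(core, per)] = p_dense         # core -> periphery
P[np.ix_(per, core)] = p_dense         # periphery -> core
np.fill_diagonal(P, 0.0)               # no self-loops

# Sample a directed adjacency matrix.
A = (rng.random((n, n)) < P).astype(int)

def block_density(A, rows, cols):
    """Fraction of possible directed edges present from `rows` to `cols`."""
    return A[np.ix_(rows, cols)].mean()

print(block_density(A, core, core), block_density(A, per, per))
```

In the direction-dependent formulations of the talk, the core and periphery sets for out-edges and in-edges need not coincide, which is what the two-core/two-periphery structure captures.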
Parametric estimation via MMD optimization: robustness to outliers and dependence
Abstract
In this talk, I will study the properties of parametric estimators based on the Maximum Mean Discrepancy (MMD) defined by Briol et al. (2019). First, I will show that these estimators are universal in the i.i.d. setting: even in the case of misspecification, they converge to the best approximation of the distribution of the data in the model, without ANY assumption on this model. This leads to very strong robustness properties. Second, I will show that these results remain valid when the data are not independent but instead satisfy a weak-dependence condition. This condition is based on a new dependence coefficient, which is itself defined via the MMD. I will show through examples that this new notion of dependence is actually quite general. This talk is based on published works, and works in progress, with Badr-Eddine Chérief-Abdellatif (ENSAE Paris), Mathieu Gerber (University of Bristol), Jean-David Fermanian (ENSAE Paris) and Alexis Derumigny (University of Twente):
http://arxiv.org/abs/1912.05737
http://proceedings.mlr.press/v118/cherief-abdellatif20a.html
http://arxiv.org/abs/2006.00840
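The robustness of MMD-based estimation is easy to see in a small experiment. The sketch below (my own minimal illustration, with kernel, bandwidth and optimization-by-grid choices that are assumptions, not the papers' procedures) estimates the location of a Gaussian model by minimizing an empirical MMD^2 between the data and model samples; a handful of gross outliers barely moves the estimate, unlike the sample mean.

```python
import numpy as np

rng = np.random.default_rng(4)

def gauss_kernel(x, y, bw=1.0):
    """Gaussian kernel matrix between two 1-D samples."""
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * bw ** 2))

def mmd2(x, y):
    """Plug-in estimate of MMD^2 between the samples x and y."""
    return (gauss_kernel(x, x).mean()
            + gauss_kernel(y, y).mean()
            - 2 * gauss_kernel(x, y).mean())

# Data from N(2, 1), contaminated with a few gross outliers at 50.
data = np.concatenate([rng.normal(2.0, 1.0, 300), np.full(15, 50.0)])

# Minimize MMD^2 over a grid of location parameters mu, reusing one fixed
# noise sample so that model samples are mu + noise.
grid = np.linspace(0.0, 6.0, 121)
model_noise = rng.normal(0.0, 1.0, 300)
scores = [mmd2(data, mu + model_noise) for mu in grid]
mu_hat = grid[int(np.argmin(scores))]

print(mu_hat, data.mean())  # MMD estimate stays near 2; the mean is dragged up
```

The bounded kernel is what buys the robustness: the outliers contribute an almost constant term to MMD^2 over the whole grid, so they hardly affect the minimizer.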
Oxford Mathematics Online Public Lecture
David Sumpter: How Learning Ten Equations Can Improve Your Life
Wednesday 28 October 2020
5.00-6.00pm
Is there a secret formula for becoming rich? Or for happiness? Or for becoming popular? Or for self-confidence and good judgement? David Sumpter answers these questions with an emphatic ‘Yes!' All YOU need are The Ten Equations.
Representation theory of wreath products
Abstract
The wreath product of a finite group, or more generally an algebra, with a symmetric group is a familiar and important construction in representation theory and other areas of mathematics. I shall present some highlights from my work on the representation theory of wreath products. These will include both structural properties (for example, that the wreath product of a cellular algebra with a symmetric group is again a cellular algebra) and cohomological ones (one particular point of interest being a generalisation of the result of Hemmer and Nakano on filtration multiplicities to the wreath product of two symmetric groups). I will also give an outline of some potential applications of this and related theory to important open problems in algebraic combinatorics.