A link for this talk will be sent to our mailing list a day or two in advance. If you are not on the list and wish to be sent a link, please contact Benjamin Fehrman.
In General Relativity, an impulsive gravitational wave is a localized and singular solution of the
Einstein equations modeling the spacetime distortions created by a strongly gravitating source.
I will present a comprehensive theory allowing for ternary interactions of such impulsive gravitational waves under translational symmetry, offering the first examples of such interactions.
The proof combines new techniques from harmonic analysis, Lorentzian geometry, and hyperbolic PDEs, which are also useful for highly anisotropic low-regularity questions beyond the problem considered here.
This is joint work with Jonathan Luk.
In joint work with Peter Ozsvath, we have developed algebraic invariants for knots using a family of bordered knot algebras. The goal of this lecture is to review these constructions and discuss some of the latest developments.
The problem of identifying geometric structure in heterogeneous, high-dimensional data is a cornerstone of representation learning. In this talk, we study the problem of data geometry from the perspective of Discrete Geometry. We focus specifically on the analysis of relational data, i.e., data that is given as a graph or can be represented as such.
We start by reviewing discrete notions of curvature, where we focus especially on discrete Ricci curvature. Then we discuss the problem of embeddability: For downstream machine learning and data science applications, it is often beneficial to represent data in a continuous space, i.e., Euclidean, hyperbolic, or spherical space. How can we decide on a suitable representation space? While there exists a large body of literature on the embeddability of canonical graphs, such as lattices or trees, the heterogeneity of real-world data limits the applicability of these classical methods. We discuss a combinatorial approach for evaluating embeddability, where we analyze nearest-neighbor structures and local neighborhood growth rates to identify the geometric priors of suitable embedding spaces. For canonical graphs, the algorithm’s prediction provably matches classical results. For large, heterogeneous graphs, we introduce an efficiently computable statistic that approximates the algorithm’s decision rule. We validate our method over a range of benchmark data sets and compare with recently published optimization-based embeddability methods.
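To illustrate the neighborhood-growth heuristic behind this style of analysis (this is a toy sketch for intuition, not the speaker's actual algorithm), one can compare how the number of nodes within graph distance r grows with r: exponential growth suggests a hyperbolic embedding space, while polynomial growth suggests a Euclidean one. A minimal stdlib-only example on a tree versus a lattice:

```python
from collections import deque

def ball_sizes(adj, root, max_r):
    """Number of nodes within graph distance r of root, for r = 0..max_r."""
    dist = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return [sum(1 for d in dist.values() if d <= r) for r in range(max_r + 1)]

def binary_tree(depth):
    """Adjacency lists of a complete binary tree with nodes 1..2^(depth+1)-1."""
    adj = {i: [] for i in range(1, 2 ** (depth + 1))}
    for i in range(1, 2 ** depth):
        for c in (2 * i, 2 * i + 1):
            adj[i].append(c)
            adj[c].append(i)
    return adj

def grid(n):
    """Adjacency lists of an n-by-n square lattice."""
    adj = {(i, j): [] for i in range(n) for j in range(n)}
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):
                if i + di < n and j + dj < n:
                    adj[(i, j)].append((i + di, j + dj))
                    adj[(i + di, j + dj)].append((i, j))
    return adj

# Ball sizes grow exponentially in the tree, quadratically in the grid.
print(ball_sizes(binary_tree(8), 1, 5))   # tree, from the root: 2^(r+1) - 1
print(ball_sizes(grid(21), (10, 10), 5))  # grid, from the center: 2r^2 + 2r + 1
```

The tree's balls double at every radius (a hyperbolic signature), while the grid's grow quadratically (a Euclidean one); real heterogeneous graphs interpolate between such regimes, which is what makes an efficiently computable decision statistic valuable.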
Many complex networks depend upon biological entities for their preservation. Such entities, from human cognition to evolution, must first encode and then replicate those networks under marked resource constraints. Networks that survive are those that are amenable to constrained encoding, or, in other words, are compressible. But how compressible is a network? And what features make one network more compressible than another? Here we answer these questions by modeling networks as information sources before compressing them using rate-distortion theory. Each network yields a unique rate-distortion curve, which specifies the minimal amount of information that remains at a given scale of description. A natural definition then emerges for the compressibility of a network: the amount of information that can be removed via compression, averaged across all scales. Analyzing an array of real and model networks, we demonstrate that compressibility increases with two common network properties: transitivity (or clustering) and degree heterogeneity. These results indicate that hierarchical organization -- which is characterized by modular structure and heavy-tailed degrees -- facilitates compression in complex networks. Generally, our framework sheds light on the interplay between a network's structure and its capacity to be compressed, enabling investigations into the role of compression in shaping real-world networks.
arXiv link: https://arxiv.org/abs/2011.08994
Mean field games (MFG) and mean field control problems (MFC) are frameworks to study Nash equilibria or social optima in games with a continuum of agents. These problems can be used to approximate competitive or cooperative situations with a large finite number of agents. They have found a broad range of applications, from economics to crowd motion, energy production, and risk management. Scalable numerical methods are a key step towards concrete applications. In this talk, we propose several numerical methods for MFG and MFC. These methods are based on machine learning tools such as function approximation via neural networks and stochastic optimization. We provide numerical results, and we investigate the numerical analysis of these methods by proving error bounds for the approximation scheme. If time permits, we will also discuss model-free methods based on extensions of the traditional reinforcement learning setting to the mean-field regime.
The interplay between fluid flows and fractures is ubiquitous in nature and technology, from hydraulic fracturing in shale formations to supraglacial lake drainage in Greenland and hydrofracture on Antarctic ice shelves.
In this talk I will discuss the above three examples, focusing on the scaling laws and their agreement with lab experiments and field observations. As the climate warms, meltwater on Antarctic ice shelves could threaten their structural integrity through the propagation of water-driven fractures. We used a combination of machine learning and fracture mechanics to understand the stability of fractures on ice shelves. Our results also indicate that as meltwater inundates the surface of ice shelves in a warming climate, their collapse driven by hydrofracture could significantly influence the flow of the Antarctic Ice Sheet.
The heart of modern machine learning (ML) is the approximation of high dimensional functions. Traditional approaches, such as approximation by piecewise polynomials, wavelets, or other linear combinations of fixed basis functions, suffer from the curse of dimensionality (CoD). We will present a mathematical perspective on ML, focusing on the issue of CoD. We will discuss three major issues: approximation theory and error analysis of modern ML models, dynamics and qualitative behavior of gradient descent algorithms, and ML from a continuous viewpoint. We will see that at the continuous level, ML can be formulated as a series of reasonably nice variational and PDE-like problems. Modern ML models/algorithms, such as the random feature model and two-layer and residual neural network models, can all be viewed as special discretizations of such continuous problems. We will also present a framework that is suited for analyzing ML models and algorithms in high dimension, and present results that are free of CoD. Finally, we will discuss the fundamental reasons that are responsible for the success of modern ML, as well as the subtleties and mysteries that still remain to be understood.
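The random feature model mentioned above has a particularly simple form that can be sketched in a few lines: a two-layer network whose inner weights are sampled once and frozen, so that only the outer coefficients are trained, reducing the fit to linear least squares. The following is a minimal illustration (dimensions, target function, and feature count are arbitrary choices for the demo, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random feature model: f(x) = sum_j a_j * relu(w_j . x + b_j), with the
# features (w_j, b_j) sampled once and frozen; only the outer coefficients
# a_j are trained, which makes fitting a linear least-squares problem.
d, m, n = 2, 200, 500                 # input dim, number of features, samples
W = rng.normal(size=(m, d))
b = rng.normal(size=m)

X = rng.uniform(-1, 1, size=(n, d))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]          # target function to learn

Phi = np.maximum(W @ X.T + b[:, None], 0).T    # (n, m) frozen feature matrix
a, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # fit outer coefficients only

train_rmse = np.sqrt(np.mean((Phi @ a - y) ** 2))
print(f"train RMSE: {train_rmse:.4f}")
```

In the continuous viewpoint of the talk, letting the number of features m grow corresponds to refining the discretization of an underlying expectation over the feature distribution.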
The calibration problem for local stochastic volatility models leads to two-dimensional stochastic differential equations of McKean-Vlasov type. In these equations, the conditional distribution of the second component of the solution given the first enters the equation for the first component of the solution. While such equations enjoy frequent application in the financial industry, their mathematical analysis poses a major challenge. I will explain how to prove the strong existence of stationary solutions for these equations, as well as the strong uniqueness in an important special case. Based on joint work with Daniel Lacker and Jiacheng Zhang.
Topological data analysis (TDA) is a robust field of mathematical data science specializing in complex, noisy, and high-dimensional data. While the elements of modern TDA have existed since the mid-1980s, applications over the past decade have seen a dramatic increase in systems analysis, engineering, medicine, and the sciences. Two of the primary challenges in this field concern modeling and computation: what do topological features mean, and are they computable? While these questions remain open for some of the simplest structures considered in TDA — homological persistence modules and their indecomposable submodules — in the past two decades researchers have made great progress in algorithms, modeling, and mathematical foundations through diverse connections with other fields of mathematics. This talk will give a first perspective on the idea of matroid theory as a framework for unifying and relating some of these seemingly disparate connections (e.g. with quiver theory, classification, and algebraic stability), and some questions that the fields of matroid theory and TDA may mutually pose to one another. No expertise in homological persistence or general matroid theory will be assumed, though prior exposure to the definition of a matroid and/or persistence module may be helpful.