Mirror Symmetry (Part II)
Contact organisers for access to meeting (Carmen Jorge-Diaz, Connor Behan or Sujay Nair)
Computer-based simulation of partial differential equations (PDEs) involves approximating the unknowns and relies on a suitable description of geometrical entities such as the computational domain and its properties. The Finite Element Method (FEM) is by far the most popular technique for the computer-based simulation of PDEs and hinges on the assumption that the discretized domain and the unknown fields are both represented by piecewise polynomials on tetrahedral or hexahedral partitions. In reality, the simulation of PDEs is one brick within a workflow in which, at the beginning, the geometrical entities are created, described and manipulated with a geometry processor, often through Computer-Aided Design (CAD) systems, and then used for the simulation of the mechanical behaviour of the designed object. This workflow is often repeated many times as part of a shape optimisation loop. Within this loop, the use of FEM on CAD geometries (which are mainly represented through their boundaries) then calls for (re-)meshing and re-interpolation techniques that often require human intervention and result in inaccurate solutions and a lack of robustness in the whole process. In my talk, I will present the mathematical counterpart of this problem: I will discuss the mismatch between the mathematical representations of geometries and of PDE unknowns, and introduce a promising framework in which geometric objects and PDE unknowns are represented in a compatible way. Within this framework, many challenges must be addressed in order to construct robust PDE solvers, and I will discuss some of them. Mathematical results will be supported by numerical validation.
We continue this term with our flagship seminars given by notable scientists on topics that are relevant to Industrial and Applied Mathematics.
Note the new time of 12:00-13:00 on Thursdays.
This will give an opportunity for the entire community to attend and for speakers with childcare responsibilities to present.
This talk will be three short stories on the general theme of elastic
instabilities in soft solids. First I will discuss the inflation of a
cylindrical cavity through a bulk soft solid, and show that such a
channel ultimately becomes unstable to a finite wavelength peristaltic
undulation. Secondly, I will introduce the elastic Rayleigh-Plateau
instability, and explain that it is simply 1-D phase separation, much
like the inflationary instability of a cylindrical party balloon. I will
then construct a universal near-critical analytic solution for such 1-D
elastic instabilities, that is strongly reminiscent of the
Ginzburg-Landau theory of magnetism. Thirdly, and finally, I will
discuss pattern formation in layer-substrate buckling under equi-biaxial
compression, and argue, on symmetry grounds, that such buckling will
inevitably produce patterns of hexagonal dents near threshold.
Totally geodesic submanifolds are perhaps one of the easiest types of submanifolds of Riemannian manifolds one can study, since a maximal totally geodesic submanifold is completely determined by any one of its points and the tangent space at that point. It comes as a bit of a surprise then that the classification of such submanifolds — up to an ambient isometry — is a nightmarish and wide-open question, even on such a manageable and well-understood class of Riemannian manifolds as symmetric spaces.
We will discuss the theory of totally geodesic submanifolds of symmetric spaces and see that any maximal such submanifold is homogeneous and thus can be completely encoded by some Lie algebraic data called a 'Lie triple'. We will then talk about the duality between symmetric spaces of compact and noncompact type and discover that there is a one-to-one correspondence between totally geodesic submanifolds of a symmetric space and its dual. Finally, we will touch on the known classification in rank one symmetric spaces, namely in spheres and projective/hyperbolic spaces over real normed division algebras. Time permitting, I will demonstrate how all this business comes in handy in other geometric problems on symmetric spaces, e.g. in the classification of isometric cohomogeneity one actions.
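For readers unfamiliar with the term, the standard definition runs roughly as follows (a sketch in the usual notation, where $\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{m}$ is the Cartan decomposition of the isometry Lie algebra and $\mathfrak{m}$ is identified with the tangent space at a base point):

```latex
% A subspace of the tangent part of the Cartan decomposition
%   g = k (+) m
% is a Lie triple (system) if it is closed under the double bracket:
\mathfrak{m}' \subseteq \mathfrak{m}
\quad \text{is a Lie triple if} \quad
[[\mathfrak{m}', \mathfrak{m}'], \mathfrak{m}'] \subseteq \mathfrak{m}'.
```

Exponentiating a Lie triple produces a totally geodesic submanifold through the base point, which is the correspondence the abstract alludes to.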
Link: https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZGRiMTM1ZjQtZWNi…
The course covers the standard material on nonlinear wave equations, including local existence, breakdown criteria, global existence for small data for semi-linear equations, and Strichartz estimates, if time allows.
We will discuss a generalisation of hyperbolic groups, from the group actions point of view. By studying torsion, we will see how this can help to answer questions about ordinary hyperbolic groups.
Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.
I will introduce product structure theory of graphs and show how families of graphs that have such a structure admit short adjacency labeling scheme and small induced universal graphs. Time permitting, I will talk about another recent application of product structure theory, namely vertex ranking (coloring).
A quantum circuit defines a discrete-time evolution for a set of quantum spins/qubits, via a sequence of unitary 'gates' coupling nearby spins. I will describe how random quantum circuits, where each gate is a random unitary matrix, serve as minimal models for various universal features of many-body dynamics. These include the dynamical generation of entanglement between distant spatial regions, and the quantum "butterfly effect". I will give a very schematic overview of mappings that relate averages in random circuits to the classical statistical mechanics of random paths. Time permitting, I will describe a new phase transition in the dynamics of a many-body wavefunction, due to repeated measurements by an external observer.
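A minimal numerical sketch of the setup (not from the talk; all parameters are illustrative): a brickwork circuit of Haar-random two-qubit gates applied to a product state, with the half-chain entanglement entropy tracked layer by layer.

```python
import numpy as np

def haar_unitary(dim, rng):
    """Haar-random unitary: QR of a complex Gaussian matrix, with the
    column phases fixed so the distribution is exactly Haar."""
    Z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))

def apply_gate(psi, U, i, n):
    """Apply a 2-qubit gate U to qubits (i, i+1) of an n-qubit state psi."""
    psi = psi.reshape(2**i, 4, 2**(n - i - 2))
    return np.einsum('ba,iaj->ibj', U, psi).reshape(-1)

def half_chain_entropy(psi, n):
    """Von Neumann entropy of the left half, from the bipartition's singular values."""
    s = np.linalg.svd(psi.reshape(2**(n // 2), -1), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

n, rng = 8, np.random.default_rng(0)
psi = np.zeros(2**n, dtype=complex)
psi[0] = 1.0                                  # product state |00...0>
entropies = [half_chain_entropy(psi, n)]
for layer in range(4):                        # brickwork: alternate even/odd bonds
    for i in range(layer % 2, n - 1, 2):
        psi = apply_gate(psi, haar_unitary(4, rng), i, n)
    entropies.append(half_chain_entropy(psi, n))
```

Starting from zero, the entropy grows towards its maximal (Page-like) value, which is the "dynamical generation of entanglement" the abstract refers to.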
A wide variety of fixed-point iterative methods for the solution of nonlinear operator equations in Hilbert spaces exists. In many cases, such schemes can be interpreted as iterative local linearisation methods, which can be obtained by applying a suitable preconditioning operator to the original (nonlinear) equation. Based on this observation, we will derive a unified abstract framework which recovers some prominent iterative methods. It will be shown that for strongly monotone operators this unified iteration scheme satisfies an energy contraction property. Consequently, the generated sequence converges to a solution of the original problem.
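One of the simplest instances of such a preconditioned local linearisation is the Zarantonello iteration; a scalar sketch (the operator and the constants nu, L below are illustrative assumptions, not from the talk):

```python
import math

def zarantonello(F, u0, nu, L, tol=1e-10, max_iter=500):
    """Preconditioned fixed-point iteration u <- u - (nu/L^2) F(u).

    For a strongly monotone (constant nu) and Lipschitz continuous
    (constant L) operator F, this map is a contraction with factor
    sqrt(1 - nu^2/L^2), so the iterates converge to the unique root."""
    delta = nu / L**2
    u = u0
    for _ in range(max_iter):
        r = F(u)
        if abs(r) < tol:
            break
        u = u - delta * r
    return u

# Toy strongly monotone operator: F(u) = u + atan(u) - 2,
# with monotonicity constant nu = 1 and Lipschitz constant L = 2.
F = lambda u: u + math.atan(u) - 2.0
u_star = zarantonello(F, 0.0, nu=1.0, L=2.0)
```

The energy contraction property mentioned in the abstract is the Hilbert-space analogue of the contraction factor above.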
--
A link for this talk will be sent to our mailing list a day or two in advance. If you are not on the list and wish to be sent a link, please contact @email.
Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.
We consider the random directed graph $D(n,p)$ with vertex set $\{1,2,\ldots,n\}$ in which each of the $n(n-1)$ possible directed edges is present independently with probability $p$. We are interested in the strongly connected components of this directed graph. A phase transition for the emergence of a giant strongly connected component is known to occur at $p = 1/n$, with critical window $p = 1/n + \lambda n^{-4/3}$ for $\lambda \in \mathbb{R}$. We show that, within this critical window, the strongly connected components of $D(n,p)$, ranked in decreasing order of size and rescaled by $n^{-1/3}$, converge in distribution to a sequence $(C_1,C_2,\ldots)$ of finite strongly connected directed multigraphs with edge lengths which are either 3-regular or loops. The convergence occurs in the sense of an $L^1$ sequence metric for which two directed multigraphs are close if there are compatible isomorphisms between their vertex and edge sets which roughly preserve the edge lengths. Our proofs rely on a depth-first exploration of the graph which enables us to relate the strongly connected components to a particular spanning forest of the undirected Erdős-Rényi random graph $G(n,p)$, whose scaling limit is well understood. We show that the limiting sequence $(C_1,C_2,\ldots)$ contains only finitely many components which are not loops. If we ignore the edge lengths, any fixed finite sequence of 3-regular strongly connected directed multigraphs occurs with positive probability.
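As a concrete toy, $D(n,p)$ can be sampled and its strongly connected components extracted with a standard two-pass (Kosaraju) depth-first search; a minimal pure-Python sketch, with illustrative parameter choices inside the critical window (not taken from the paper):

```python
import random
from collections import defaultdict

def sample_digraph(n, p, seed=0):
    """Sample D(n, p): each of the n(n-1) ordered pairs is an edge w.p. p."""
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(n)
            if u != v and rng.random() < p]

def strongly_connected_components(n, edges):
    """Kosaraju: DFS finishing order on G, then DFS on the reverse graph."""
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)
    order, seen = [], [False] * n
    for s in range(n):                      # first pass: post-order on G
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(fwd[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(fwd[v])))
                    break
            else:
                order.append(u)
                stack.pop()
    comps, seen = [], [False] * n
    for s in reversed(order):               # second pass: DFS on reversed G
        if seen[s]:
            continue
        seen[s], comp, stack = True, [], [s]
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in rev[u]:
                if not seen[v]:
                    seen[v] = True
                    stack.append(v)
        comps.append(sorted(comp))
    return sorted(comps, key=len, reverse=True)

# Inside the critical window p = 1/n + lambda * n^(-4/3):
n, lam = 1000, 1.0
p = 1 / n + lam * n ** (-4 / 3)
sccs = strongly_connected_components(n, sample_digraph(n, p))
```

In this regime most components are single vertices or short cycles, with the large ones carrying the scaling-limit structure described in the abstract.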
Point cloud registration is the task of finding the transformation that aligns two data sets. We make the assumption that the data lies on a low-dimensional algebraic variety. The task is phrased as an optimization problem over the special orthogonal group of rotations. We solve this problem using Riemannian optimization algorithms and show numerical examples that illustrate the efficiency of this approach for point cloud registration.
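The talk treats the general problem by Riemannian optimisation over the rotation group; as a simpler baseline for the special case where point correspondences are known, the optimal rotation has a classical closed form (orthogonal Procrustes / Kabsch). A sketch, with illustrative data:

```python
import numpy as np

def register_rotation(X, Y):
    """Find R in SO(3) minimising ||R X - Y||_F for 3 x N corresponding points.

    Closed form: with SVD Y X^T = U S V^T, the minimiser over the special
    orthogonal group is R = U diag(1, 1, det(U V^T)) V^T; the determinant
    correction keeps R a proper rotation rather than a reflection."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

# Recover a known rotation from noiseless correspondences.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 50))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
R_est = register_rotation(X, R_true @ X)
```

When correspondences are unknown, or the data is constrained to lie on a variety as in the abstract, no such closed form exists and iterative optimisation on the manifold SO(3) is the natural tool.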
--
A link for this talk will be sent to our mailing list a day or two in advance. If you are not on the list and wish to be sent a link, please contact @email.
The science of cities seeks to understand and explain regularities observed in the world's major urban systems. Modelling the population evolution of cities is at the core of this science and of all urban studies. Quantitatively, the most fundamental problem is to understand the hierarchical organization of cities and the statistical occurrence of megacities, first thought to be described by a universal law due to Zipf, but whose validity has been challenged by recent empirical studies. A theoretical model must also be able to explain the relatively frequent rises and falls of cities and civilizations, and despite many attempts these fundamental questions have not been satisfactorily answered yet. Here we fill this gap by introducing a new kind of stochastic equation for modelling population growth in cities, which we construct from an empirical analysis of recent datasets (for Canada, France, UK and USA) that reveals how rare but large interurban migratory shocks dominate city growth. This equation predicts a complex shape for the city distribution and shows that Zipf's law does not hold in general due to finite-time effects, implying a more complex organization of cities. It also predicts the existence of multiple temporal variations in the city hierarchy, in agreement with observations. Our result underlines the importance of rare events in the evolution of complex systems and at a more practical level in urban planning.
arXiv link: https://arxiv.org/abs/2011.09403
Topological data analysis is a growing area of research where topology and geometry meet data analysis. Many data science problems have a geometric flavor, and so computational tools like persistent homology and Mapper are often found to be useful. Domains of application include cosmology, material science, diabetes and cancer research. We will discuss some main tools of the field and some prominent applications.
Spacetimes with compact directions play an important role in supergravity and string theory. The simplest such example is the Kaluza-Klein spacetime, where the compact space is a flat torus. An interesting question to ask is whether this spacetime, when viewed as an initial value problem, is stable to small perturbations of initial data. In this talk I will discuss the global, non-linear stability of the Kaluza-Klein spacetime to toroidal-independent perturbations and the particular nonlinear structure appearing in the associated PDE system.
The goal of sequential learning is to draw inference from data that is gathered gradually through time. This is a typical situation in many applications, including finance. A sequential inference procedure is `anytime-valid’ if the decision to stop or continue an experiment can depend on anything that has been observed so far, without compromising statistical error guarantees. A recent approach to anytime-valid inference views a test statistic as a bet against the null hypothesis. These bets are constrained to be supermartingales - hence unprofitable - under the null, but designed to be profitable under the relevant alternative hypotheses. This perspective opens the door to tools from financial mathematics. In this talk I will discuss how notions such as supermartingale measures, log-optimality, and the optional decomposition theorem shed new light on anytime-valid sequential learning. (This talk is based on joint work with Wouter Koolen (CWI), Aaditya Ramdas (CMU) and Johannes Ruf (LSE).)
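A textbook illustration of the betting perspective (a standard example, not taken from the talk): to test whether a coin is fair, bet a fraction of current wealth on heads at each flip. The wealth process is a nonnegative supermartingale under the null, so by Ville's inequality it may be monitored and stopped at any time without inflating the error rate.

```python
import random

def anytime_valid_coin_test(flips, bet=1.0, alpha=0.05):
    """Test-martingale ('betting') test of H0: P(heads) = 1/2.

    Wealth W_k = prod_i (1 + bet * (x_i - 1/2)) is a nonnegative
    martingale under H0 (for |bet| <= 2), so by Ville's inequality
    P(sup_k W_k >= 1/alpha) <= alpha; we reject whenever wealth
    crosses 1/alpha, at any data-dependent stopping time."""
    wealth, path = 1.0, []
    for x in flips:                       # x in {0, 1}
        wealth *= 1.0 + bet * (x - 0.5)
        path.append(wealth)
        if wealth >= 1.0 / alpha:         # anytime-valid rejection
            return True, path
    return False, path

rng = random.Random(1)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(500)]  # P(heads) = 0.8
rejected, wealth_path = anytime_valid_coin_test(biased)
```

Choosing the bet size to maximise the expected log-wealth under the alternative is exactly the log-optimality the abstract mentions.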
For some nonlocal PDEs, the steady states can be seen as critical points of an associated energy functional. Therefore, if one can construct perturbations around a function such that the energy decreases to first order along the perturbation, this function cannot be a steady state. In this talk, I will discuss how this simple variational approach has led to some recent progress on the following equations, where the key is to carefully construct a suitable perturbation.
I will start with the aggregation-diffusion equation, which is a nonlocal PDE driven by two competing effects: nonlinear diffusion and long-range attraction. We show that all steady states are radially symmetric up to a translation (joint with Carrillo, Hittmeir and Volzone), and give some criteria on the uniqueness/non-uniqueness of steady states within the radial class (joint with Delgadino and Yan).
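In a common formulation (stated here as background, hedged since the talk may use different conventions), the aggregation-diffusion equation and its associated energy read:

```latex
% Aggregation-diffusion: nonlinear diffusion vs. nonlocal attraction
\partial_t \rho = \Delta \rho^m + \nabla \cdot \big( \rho \, \nabla (W * \rho) \big),
\qquad m > 1,
```

with steady states arising as critical points of the free energy

```latex
E[\rho] = \frac{1}{m-1} \int_{\mathbb{R}^d} \rho^m(x)\,dx
  + \frac{1}{2} \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} W(x-y)\,\rho(x)\,\rho(y)\,dx\,dy,
```

where the first term drives diffusion and the interaction kernel $W$ encodes the long-range attraction.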
I will also discuss the 2D Euler equation, where we aim to understand under what condition must a stationary/uniformly-rotating solution be radially symmetric. Using a variational approach, we settle some open questions on the radial symmetry of rotating patches, and also show that any smooth stationary solution with compactly supported and nonnegative vorticity must be radial (joint with Gómez-Serrano, Park and Shi).
Chowla's conjecture from the 1960s is the assertion that the Möbius function does not correlate with its own shifts. I'll discuss some recent works in which my collaborators and I have made progress on this conjecture.
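In its simplest form (stated here for orientation), the conjecture asserts that for any fixed distinct non-negative integers $h_1 < h_2 < \cdots < h_k$ with $k \ge 2$,

```latex
\sum_{n \le x} \mu(n + h_1)\,\mu(n + h_2) \cdots \mu(n + h_k) = o(x)
\qquad (x \to \infty),
```

where $\mu$ is the Möbius function; the case $k = 1$ is equivalent to the prime number theorem, while every case $k \ge 2$ remains open in general.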
Veering triangulations are a special class of ideal triangulations with a rather mysterious combinatorial definition. Their importance follows from a deep connection with pseudo-Anosov flows on 3-manifolds. Recently Landry, Minsky and Taylor introduced a polynomial invariant of veering triangulations called the taut polynomial. During the talk I will discuss how and why it is connected to the Alexander polynomial of the underlying manifold.
Associativity in quantum cohomology is proven using a gluing formula for Gromov-Witten invariants. The gluing formula underlying orbifold quantum cohomology has additional interesting features. The Gross-Siebert program requires an analogue of quantum cohomology in logarithmic geometry, with underlying gluing formula for punctured logarithmic invariants. I'll attempt to explain how this works and what new subtle features arise. This is based on joint work with Q. Chen, M. Gross and B. Siebert (https://arxiv.org/pdf/2009.07720.pdf).
We will discuss confinement in 4d N=1 theories obtained after soft supersymmetry breaking deformations of 4d N=2 Class S theories. Confinement is characterised by a subgroup of the 1-form symmetry group of the theory that is left unbroken in a massive vacuum of the theory. The 1-form symmetry group is encoded in the Gaiotto curve associated to the Class S theory, and its spontaneous breaking in a vacuum is encoded in the N=1 curve (which plays the role of Seiberg-Witten curve for N=1) associated to that vacuum. Using this proposal, we will recover the expected properties of confinement in N=1 SYM theories, and the theories studied by Cachazo, Douglas, Seiberg and Witten. We will also recover the dependence of confinement on the choice of gauge group and discrete theta parameters in these theories.
We investigate whether Swampland constraints on the low-energy dynamics of weakly coupled string vacua in AdS can be related to inconsistencies of their putative holographic duals or, more generally, recast in terms of CFT data. In the main part of the talk, we shall illustrate how various swampland consistency constraints are equivalent to a negativity condition on the sign of certain mixed anomalous dimensions. This condition is similar to established CFT positivity bounds arising from causality and unitarity, but not known to hold in general. Our analysis will include LVS, KKLT, perturbative and racetrack stabilisation, and we shall also point out an intriguing connection to the Distance Conjecture. In the final part we will take a complementary approach, and show how a recent, more rigorous CFT inequality maps to non-trivial constraints on AdS, mentioning possible applications along the way.
Speaker: Katherine Staden
Introduced by: Frances Kirwan
Title: Inducibility in graphs
Abstract: What is the maximum number of induced copies of a fixed graph H inside any graph on n vertices? Here, induced means that both edges and non-edges have to be correct. This basic question turns out to be surprisingly difficult, and it is not even known for all 4-vertex graphs H. I will survey the area and discuss some key results, ideas and techniques -- combinatorial, analytical and computer-assisted.
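The definition in the abstract can be checked by brute force for tiny graphs (a definition-checking sketch, with no pretence of the extremal techniques the talk covers; the example graphs below are illustrative):

```python
from itertools import combinations, permutations

def count_induced_copies(H_edges, k, G_edges, n):
    """Count induced copies of a k-vertex graph H in an n-vertex graph G.

    A k-subset S of V(G) hosts an induced copy if some bijection S -> V(H)
    matches edges to edges AND non-edges to non-edges. Brute force, so only
    sensible for small k and n."""
    H = {frozenset(e) for e in H_edges}
    G = {frozenset(e) for e in G_edges}
    count = 0
    for S in combinations(range(n), k):
        for perm in permutations(range(k)):
            if all((frozenset((S[i], S[j])) in G) == (frozenset((perm[i], perm[j])) in H)
                   for i, j in combinations(range(k), 2)):
                count += 1
                break   # one witness per vertex subset is enough
    return count

# Induced copies of the path P3 (edges 0-1, 1-2) in the 4-cycle C4:
p3_in_c4 = count_induced_copies([(0, 1), (1, 2)], 3,
                                [(0, 1), (1, 2), (2, 3), (3, 0)], 4)
```

Every 3-subset of the 4-cycle induces a path, so the count here is 4; the inducibility question asks how large such counts can be, normalised by the number of k-subsets, as n grows.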
Speaker: Pierre Haas
Introduced by: Alain Goriely
Title: Shape-Shifting Droplets
Abstract: Experiments show that small oil droplets in aqueous surfactant solution flatten, upon slow cooling, into a host of polygonal shapes with straight edges and sharp corners. I will begin by showing how plane (and rather plain) geometry explains the sequence of these polygonal shapes. I will go on to show that geometric considerations of that ilk cannot however explain the three-dimensional polyhedral shapes that the initially spherical droplets evolve through while flattening. I will conclude by showing that the experimental data agree with the predictions of a model based on a partial phase transition of the oil near the droplet edges.
Buildings are geometric structures useful in understanding certain classes of groups. In a series of papers written during the 1980s, Ronan and Smith developed the theory of “presheaves on buildings”. By constructing a coefficient system consisting of kP-modules (where P is the stabiliser of a given simplex), and computing the sheaf homology, they proved several results relating the homology spaces with the irreducible G-modules. In this talk we discuss their methods as well as our implementation of the algorithms, which has allowed us to efficiently compute the irreducible representations of some groups of Lie type.
Our current approach to cancer treatment has been largely driven by finding molecular targets; those patients fortunate enough to have a targetable mutation will receive a fixed treatment schedule designed to deliver the maximum tolerated dose (MTD). These therapies generally achieve impressive short-term responses that unfortunately give way to treatment resistance and tumor relapse. The importance of evolution during tumor progression, metastasis and treatment response is becoming more widely accepted. However, MTD treatment strategies continue to dominate the precision oncology landscape and ignore the fact that treatments drive the evolution of resistance. Here we present an integrated theoretical, experimental and clinical approach to developing treatment strategies that specifically embrace cancer evolution. We will consider the importance of using treatment response as a critical driver of subsequent treatment decisions, rather than fixed strategies that ignore it. Through the integrated application of drug treatments and drug holidays, we will illustrate that evolutionary therapy can drive either tumor control or extinction. Our results strongly indicate that the future of precision medicine lies not in the development of new drugs but rather in the smarter evolutionary application of preexisting ones.
Abstract: Option price data are used as inputs for model calibration, risk-neutral density estimation and many other financial applications. The presence of arbitrage in option price data can lead to poor performance or even failure of these tasks, making pre-processing of the data to eliminate arbitrage necessary. Most attention in the relevant literature has been devoted to arbitrage-free smoothing and filtering (i.e. removing) of data. In contrast to smoothing, which typically changes nearly all data, or filtering, which truncates data, we propose to repair data by only necessary and minimal changes. We formulate the data repair as a linear programming (LP) problem, where the no-arbitrage relations are constraints, and the objective is to minimise changes to prices within their bid and ask bounds. Through empirical studies, we show that the proposed arbitrage repair method gives sparse perturbations on data, and is fast when applied to real-world large-scale problems due to the LP formulation. In addition, we show that removing arbitrage from price data by our repair method can improve model calibration with enhanced robustness and reduced calibration error.
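A much-simplified sketch of the LP idea (the paper's constraint set is richer; here only monotonicity and convexity of call prices across equally spaced strikes are enforced, bid/ask bounds are omitted, and the example prices are made up):

```python
import numpy as np
from scipy.optimize import linprog

def repair_call_prices(C):
    """L1-minimal repair of call prices C at equally spaced strikes so that
    repaired prices are non-increasing and convex in strike, posed as an LP.

    Variables: perturbation d = u - v with u, v >= 0, so that the L1 norm
    of d is the linear objective sum(u + v)."""
    C = np.asarray(C, dtype=float)
    n = len(C)
    rows, rhs = [], []
    for i in range(n - 1):        # monotone: d[i+1] - d[i] <= C[i] - C[i+1]
        a = np.zeros(n)
        a[i + 1], a[i] = 1.0, -1.0
        rows.append(a)
        rhs.append(C[i] - C[i + 1])
    for i in range(n - 2):        # convex: -d[i] + 2 d[i+1] - d[i+2]
        a = np.zeros(n)           #         <= C[i] - 2 C[i+1] + C[i+2]
        a[i], a[i + 1], a[i + 2] = -1.0, 2.0, -1.0
        rows.append(a)
        rhs.append(C[i] - 2 * C[i + 1] + C[i + 2])
    A = np.array(rows)
    res = linprog(c=np.ones(2 * n), A_ub=np.hstack([A, -A]), b_ub=rhs,
                  bounds=[(0, None)] * (2 * n), method="highs")
    d = res.x[:n] - res.x[n:]
    return C + d

# Toy quotes violating monotonicity in strike:
repaired = repair_call_prices([5.0, 3.0, 3.5, 1.0])
```

Because the objective is an L1 norm, the optimal perturbation is sparse: quotes already consistent with the constraints are typically left untouched, which is the "minimal changes" property the abstract emphasises.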
=================================================