Ultrafilters on omega versus forcing
Abstract
I plan to survey known facts and open questions about ultrafilters on omega generating (or not generating) ultrafilters in forcing extensions.
Primitive elements are elements that form part of a basis of a free group. We present the classical Whitehead algorithm for recognising such elements and discuss the ideas behind the proof. We also present a second, more recent algorithm that takes a completely different approach.
This seminar will be held via zoom. Meeting link will be sent to members of our mailing list (https://lists.maths.ox.ac.uk/mailman/listinfo/random-matrix-theory-anno…) in our weekly announcement on Monday.
I will describe a general method for comparing the counting functions of determinantal point processes in terms of trace class norm distances between their kernels (and review what all of those words mean). Then I will outline joint work with Elizabeth Meckes using this method to prove a version of a self-similarity property of eigenvalues of Haar-distributed unitary matrices conjectured by Coram and Diaconis. Finally, I will discuss ongoing work by my PhD student Kyle Taljan, bounding the rate of convergence for counting functions of GUE eigenvalues to the Sine or Airy process counting functions.
Smectic A liquid crystals are of great interest in physics for their striking defect structures, including curvature walls and focal conics. However, the mathematical modeling of smectic liquid crystals has not been extensively studied. This work takes a step forward in understanding these fascinating topological defects from both mathematical and numerical viewpoints. In this talk, we propose a new (two- and three-dimensional) mathematical continuum model for the transition between the smectic A and nematic phases, based on a real-valued smectic order parameter for the density perturbation and a tensor-valued nematic order parameter for the orientation. Our work expands on an idea mentioned by Ball & Bedford (2015). By doing so, the physical head-to-tail symmetry in half-charge defects is respected, which is not possible with a vector-valued nematic order parameter.
A link for this talk will be sent to our mailing list a day or two in advance. If you are not on the list and wish to be sent a link, please send email to @email.
I will explain how the representation theory of rational Cherednik algebras interacts with the commutative algebra of certain subspace arrangements arising from the reflection arrangement of a complex reflection group. Potentially, the representation theory allows one to study both qualitative questions (e.g., is the arrangement Cohen-Macaulay or not?) and quantitative questions (e.g., what is the Hilbert series of the ideal of the arrangement, or even, what are its graded Betti numbers?), by applying the tools (such as orthogonal polynomials, Kazhdan-Lusztig characters, and Dirac cohomology) that representation theory provides. This talk is partly based on joint work with Susanna Fishel and Elizabeth Manosalva.
Modular and hierarchical structures are pervasive in real-world complex systems. A great deal of effort has gone into trying to detect and study these structures. Important theoretical advances in the detection of modular, or "community", structures have included identifying fundamental limits of detectability by formally defining community structure using probabilistic generative models. Detecting hierarchical community structure introduces additional challenges alongside those inherited from community detection. Here we present a theoretical study on hierarchical community structure in networks, which has thus far not received the same rigorous attention. We address the following questions: 1) How should we define a valid hierarchy of communities? 2) How should we determine if a hierarchical structure exists in a network? and 3) How can we detect hierarchical structure efficiently? We approach these questions by introducing a definition of hierarchy based on the concept of stochastic externally equitable partitions and their relation to probabilistic models, such as the popular stochastic block model. We enumerate the challenges involved in detecting hierarchies and, by studying the spectral properties of hierarchical structure, present an efficient and principled method for detecting them.
https://arxiv.org/abs/2009.07196 (15 Sept.)
Motivated by the advent of machine learning, the last few years saw the return of hardware-supported low-precision computing. Computations with fewer digits are faster and more memory and energy efficient, but can be extremely susceptible to rounding errors. An application that can largely benefit from the advantages of low-precision computing is the numerical solution of partial differential equations (PDEs), but a careful implementation and rounding error analysis are required to ensure that sensible results can still be obtained. In this talk we study the accumulation of rounding errors in the solution of the heat equation, a proxy for parabolic PDEs, via Runge-Kutta finite difference methods using round-to-nearest (RtN) and stochastic rounding (SR). We demonstrate how to implement the numerical scheme to reduce rounding errors and we present \emph{a priori} estimates for local and global rounding errors. Let $u$ be the roundoff unit. While the worst-case local errors are $O(u)$ with respect to the discretization parameters, the RtN and SR error behaviour is substantially different. We show that the RtN solution is discretization, initial condition and precision dependent, and always stagnates for small enough $\Delta t$. Until stagnation, the global error grows like $O(u\Delta t^{-1})$. In contrast, the leading order errors introduced by SR are zero-mean, independent in space and mean-independent in time, making SR resilient to stagnation and rounding error accumulation. In fact, we prove that for SR the global rounding errors are only $O(u\Delta t^{-1/4})$ in 1D and are essentially bounded (up to logarithmic factors) in higher dimensions.
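The contrast between RtN stagnation and SR resilience can be seen in a toy accumulation on a coarse rounding grid (a minimal sketch, not the numerical scheme of the talk; the grid spacing and increment size are arbitrary choices for illustration):

```python
import random

def round_to_nearest(x, step):
    # Round x to the nearest multiple of the rounding grid `step`.
    return round(x / step) * step

def stochastic_round(x, step):
    # Round x down or up to an adjacent grid point, with probabilities
    # chosen so that the rounding error has zero mean.
    lo = (x // step) * step
    frac = (x - lo) / step
    return lo + step if random.random() < frac else lo

# Accumulate n tiny increments dt on a coarse grid of spacing 2**-10.
step, dt, n = 2.0**-10, 2.0**-12, 4096
acc_rtn = acc_sr = 0.0
for _ in range(n):
    acc_rtn = round_to_nearest(acc_rtn + dt, step)
    acc_sr = stochastic_round(acc_sr + dt, step)

# The exact sum is n * dt = 1.0. Under RtN each increment is rounded
# away (dt is below half the grid spacing), so the sum stagnates at 0;
# under SR the zero-mean errors average out and the sum stays close to 1.
print(acc_rtn, acc_sr)
```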
A link for this talk will be sent to our mailing list a day or two in advance. If you are not on the list and wish to be sent a link, please send email to @email.
We propose a subspace Gauss-Newton method for nonlinear least squares problems that builds a sketch of the Jacobian on each iteration. We provide global rates of convergence for regularization and trust-region variants, both in expectation and as a tail bound, for diverse choices of the sketching matrix that are suitable for dense and sparse problems. We also have encouraging computational results on machine learning problems.
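The basic idea can be sketched as follows (an illustration only, not the authors' implementation: the toy problem, Gaussian sketch, sketch size, and iteration count are all arbitrary choices). Each step is restricted to a random subspace of parameter space, so only a sketch of the Jacobian is ever formed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear least-squares problem min_x ||A x - b||; a nonlinear residual
# would be linearized the same way on each iteration.
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def subspace_gauss_newton(x, iters=300, s=2):
    """Gauss-Newton with each step restricted to a random s-dimensional
    subspace, so only the sketch J @ S of the Jacobian (here J = A)
    is needed per iteration."""
    for _ in range(iters):
        r = A @ x - b
        S = rng.standard_normal((A.shape[1], s))     # Gaussian sketching matrix
        JS = A @ S                                   # sketched Jacobian, 20 x s
        z, *_ = np.linalg.lstsq(JS, -r, rcond=None)  # least-squares step in the subspace
        x = x + S @ z                                # lift the step back to full space
    return x

x = subspace_gauss_newton(np.zeros(5))
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)       # full-space solution for comparison
print(np.linalg.norm(A @ x - b), np.linalg.norm(A @ x_full - b))
```

Because the subspace step never increases the residual (the choice z = 0 is always available), the iteration is monotone, and with fresh random sketches it approaches the full least-squares residual.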
We construct a class of Cauchy initial data without (marginally) trapped surfaces whose future evolution is a trapped region bounded by an apparent horizon, i.e., a smooth hypersurface foliated by marginally outer trapped surfaces (MOTS). The estimates obtained in the evolution lead to the following conditional statement: if Kerr Stability holds, then this kind of initial data yields a class of scale critical vacuum examples of Weak Cosmic Censorship and the Final State Conjecture. Moreover, owing to estimates for the ADM mass of the data and the area of the MOTS, the construction gives a fully dynamical vacuum setting in which to study the Spacetime Penrose Inequality. We show that the inequality is satisfied for an open region in the Cauchy development of this kind of initial data, which itself is controllable by the initial data. This is joint work with Nikos Athanasiou https://arxiv.org/abs/2009.03704.
Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.
Let $r>3$ be an integer and consider the following game on the complete graph $K_n$ for $n$ a multiple of $r$: Two players, Maker and Breaker, alternately claim previously unclaimed edges of $K_n$ such that in each turn Maker claims one and Breaker claims $b$ edges. Maker wins if her graph contains a $K_r$-factor, that is, a collection of $n/r$ vertex-disjoint copies of $K_r$, and Breaker wins otherwise. In other words, we consider the $b$-biased $K_r$-factor Maker-Breaker game. We show that the threshold bias for this game is of order $n^{2/(r+2)}$. This makes a step towards determining the threshold bias for making bounded-degree spanning graphs and extends a result of Allen, Böttcher, Kohayakawa, Naves and Person who resolved the case $r=3$ or $4$ up to a logarithmic factor.
Joint work with Rajko Nenadov.
Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.
I will introduce recent work on the two- and three-dimensional uniform spanning trees (USTs) establishing that the laws of these random objects converge under rescaling in a space whose elements are measured, rooted real trees continuously embedded into Euclidean space. (In the three-dimensional case, the scaling result is currently only known along a particular scaling sequence.) I will also discuss various properties of the intrinsic metrics and measures of the limiting spaces, including their Hausdorff dimension, as well as the scaling limits of the random walks on the two- and three-dimensional USTs. In the talk, I will attempt to emphasise where the differences lie between the two cases, and in particular the additional challenges that arise when it comes to the three-dimensional model.
The two-dimensional results are joint with Martin Barlow (UBC) and Takashi Kumagai (Kyoto). The three-dimensional results are joint with Omer Angel (UBC) and Sarai Hernandez-Torres (UBC).
A remarkable theorem due to Khovanskii asserts that for any finite subset $A$ of an abelian group, the cardinality of the $h$-fold sumset $hA$ grows like a polynomial for all sufficiently large $h$. However, neither the polynomial nor what "sufficiently large" means is understood in general. We obtain an effective version of Khovanskii's theorem for any $A \subset \mathbb{Z}^d$ whose convex hull is a simplex; previously such results were only available in dimension $d = 1$. Our approach also gives information about the structure of $hA$, answering a recent question posed by Granville and Shakan. The work is joint with Leo Goldmakher at Williams College.
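The one-dimensional case of Khovanskii's theorem is easy to observe computationally. For instance (a toy illustration with $A$ chosen arbitrarily), the sizes of the $h$-fold sumsets of $A = \{0, 2, 3\}$ agree with the linear polynomial $3h$ for every $h \ge 2$:

```python
from itertools import combinations_with_replacement

def h_fold_sumset(A, h):
    # hA = all sums of exactly h elements of A, repetitions allowed.
    return {sum(c) for c in combinations_with_replacement(A, h)}

A = [0, 2, 3]
sizes = [len(h_fold_sumset(A, h)) for h in range(1, 11)]
print(sizes)  # [3, 6, 9, 12, 15, 18, 21, 24, 27, 30]
# From h = 2 onwards, |hA| equals the polynomial 3h, whose leading
# coefficient is max(A) - min(A), as Khovanskii's theorem predicts.
```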
In joint work with Modj Shokrian-Zini, we study (numerically) our proposal that interacting physics can arise from single-particle quantum mechanics through spontaneous symmetry breaking (SSB). The starting point is the claim that the difference between single- and many-particle physics amounts to the probability distribution on the space of Hamiltonians. Hamiltonians for interacting systems seem to know about some local, say qubit, structure on the Hilbert space, whereas typical quantum-mechanical systems need not have such internal structure. I will discuss how the former might arise from the latter in a toy model. This story is intended as a “prequel” to the decades-old reductionist story in which low-energy standard model physics is supposed to arise from something quite different at high energy. We ask the question: can interacting physics itself arise from something simpler?
Abstract: A recent paradigm views deep neural networks as discretizations of certain controlled ordinary differential equations, sometimes called neural ordinary differential equations. We make use of this perspective to link expressiveness of deep networks to the notion of controllability of dynamical systems. Using this connection, we study an expressiveness property that we call universal interpolation, and show that it is generic in a certain sense. The universal interpolation property is slightly weaker than universal approximation, and disentangles supervised learning on finite training sets from generalization properties. We also show that universal interpolation holds for certain deep neural networks even if large numbers of parameters are left untrained, and are instead chosen randomly. This lends theoretical support to the observation that training with random initialization can be successful even when most parameters are largely unchanged through the training. Our results also explore what a minimal amount of trainable parameters in neural ordinary differential equations could be without giving up on expressiveness.
Joint work with Martin Larsson, Josef Teichmann.
The Spin(7) and SU(4) structures on a Calabi-Yau 4-fold give rise to certain first order PDEs defining special Yang-Mills connections: the Spin(7) instanton equations and the Hermitian Yang-Mills (HYM) equations respectively. The latter are stronger than the former. In 1998 C. Lewis proved that, over a compact base space, the existence of an HYM connection implies the converse. In this talk we demonstrate that the equivalence of the two gauge-theoretic problems fails to hold in general. We do this by studying the invariant solutions on a highly symmetric noncompact Calabi-Yau 4-fold: the Stenzel manifold. We give a complete description of the moduli space of irreducible invariant Spin(7) instantons with structure group SO(3) on this space and find that the HYM connections are properly embedded in it. This moduli space reveals an explicit example of a sequence of Spin(7) instantons bubbling off near a Cayley submanifold. The missing limit is an HYM connection, revealing a potential relationship between the two equation systems.
I will review what is known and not known about the joint moments of the characteristic polynomials of random unitary matrices and their derivatives. I will then explain some recent results which relate the joint moments to an interesting class of measures, known as Hua-Pickrell measures. This leads to the proof of a conjecture, due to Chris Hughes in 2000, concerning the asymptotics of the joint moments, as well as establishing a connection between the measures in question and one of the Painlevé equations.
In this interactive workshop, we'll discuss what mathematicians are looking for in written solutions. How can you set out your ideas clearly, and what are the standard mathematical conventions?
This session is likely to be most relevant for first-year undergraduates, but all are welcome.
Inherent fluctuations may play an important role in biological and chemical systems when the copy number of some chemical species is small. This talk will present recent work on the stochastic modeling of reaction-diffusion processes in biochemical systems. First, I will introduce several stochastic models, which describe system features at different scales of interest. Then, model reduction and coarse-graining methods will be discussed to reduce model complexity. Next, I will show multiscale algorithms for stochastic simulation of reaction-diffusion processes that couple different modeling schemes for better efficiency of the simulation. The algorithms apply to systems whose domain is partitioned into two regions, one containing a few molecules and the other a large number of molecules.
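A minimal example of the kind of stochastic model in question is Gillespie's stochastic simulation algorithm applied to a single-species birth-death system (a generic sketch with placeholder rates, not one of the systems from the talk):

```python
import random

def gillespie(k_prod, k_deg, n0, t_end, seed=1):
    """Minimal Gillespie simulation of a birth-death system:
    production 0 -> A at rate k_prod, degradation A -> 0 at rate k_deg * n.
    Returns the copy number at time t_end."""
    random.seed(seed)
    t, n = 0.0, n0
    while True:
        rates = [k_prod, k_deg * n]
        total = sum(rates)
        t += random.expovariate(total)   # waiting time to the next event
        if t > t_end:
            return n
        # Choose which reaction fires, proportionally to its rate.
        if random.random() * total < rates[0]:
            n += 1
        else:
            n -= 1

# The stationary mean copy number is k_prod / k_deg; average a few runs.
samples = [gillespie(10.0, 1.0, 0, 50.0, seed=s) for s in range(200)]
print(sum(samples) / len(samples))   # close to 10
```

With small copy numbers like these, individual trajectories fluctuate strongly around the mean, which is exactly the regime where deterministic rate equations break down.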
Topological data analysis has proven to be an effective tool in machine learning, supporting the analysis of neural networks, but also driving the development of new algorithms that make use of topological features. Graph classification is of particular interest here, since graphs are inherently amenable to a topological description in terms of their connected components and cycles. This talk will briefly summarise recent advances in topology-based graph classification, focussing equally on ‘shallow’ and ‘deep’ approaches. Starting from an intuitive description of persistent homology, we will discuss how to incorporate topological features into the Weisfeiler–Lehman colour refinement scheme, thus obtaining a simple feature-based graph classification algorithm. We will then build a bridge to graph neural networks and demonstrate a topological variant of ‘readout’ functions, which can be learned in an end-to-end fashion. Care has been taken to make the talk accessible to an audience that might not have been exposed to machine learning or topological data analysis.
We consider continuous time financial models with continuous paths, in a pathwise setting using functional Ito calculus. We look at applications of optimal transport duality in the context of robust pricing and hedging, and that of calibration. First, we explore extensions of the discrete-time results in Aksamit et al. [Math. Fin. 29(3), 2019] to a continuous time setting. Second, we address the joint calibration problem of SPX options and VIX options or futures. We show that the problem can be formulated as a semimartingale optimal transport problem under a finite number of discrete constraints, in the spirit of [arXiv:1906.06478]. We introduce a PDE formulation along with its dual counterpart. The solution, a calibrated diffusion process, can be represented via the solutions of Hamilton--Jacobi--Bellman equations arising from the dual formulation. The method is tested on both simulated data and market data. Numerical examples show that the model can be accurately calibrated to SPX options, VIX options and VIX futures simultaneously.
Based on joint works with Ivan Guo, Gregoire Loeper, Shiyi Wang.
Tissue folding during animal development involves an intricate interplay of cell shape changes, cell division, cell migration, cell intercalation, and cell differentiation that obfuscates the underlying mechanical principles. However, a simpler instance of tissue folding arises in the green alga Volvox: its spherical embryos turn themselves inside out at the close of their development. This inversion arises from cell shape changes only.

In this talk, I will present a model of tissue folding in which these cell shape changes appear as variations of the intrinsic stretches and curvatures of an elastic shell. I will show how this model reproduces Volvox inversion quantitatively, explains mechanically the arrest of inversion observed in mutants, and reveals the spatio-temporal regulation of different biological driving processes. I will close with two examples illustrating the challenges of nonlinearity in tissue folding: (i) constitutive nonlinearity leading to nonlocal elasticity in the continuum limit of discrete cell sheet models; (ii) geometric nonlinearity in large bending deformations of morphoelastic shells.
One of the standard methods for the solution of elliptic boundary value problems calls for reformulating them as systems of integral equations. The integral operators that arise in this fashion typically have singular kernels, and, in many cases of interest, the solutions of these equations are themselves singular. This makes the accurate discretization of the systems of integral equations arising from elliptic boundary value problems challenging.
Over the last decade, generalized Gaussian quadrature rules, which are $n$-point quadrature rules that are exact for a collection of $2n$ functions, have emerged as one of the most effective tools for discretizing singular integral equations. Among other things, they have been used to accelerate the discretization of singular integral operators on curves, to enable the accurate discretization of singular integral operators on complex surfaces and to greatly reduce the cost of representing the (singular) solutions of integral equations given on planar domains with corners.
We will first briefly outline a standard method for the discretization of integral operators given on curves which is highly amenable to acceleration through generalized Gaussian quadratures. We will then describe a numerical procedure for the construction of generalized Gaussian quadrature rules.
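The defining exactness property is easy to check in the classical polynomial case, which generalized Gaussian rules extend to families of singular functions (a standard illustration, not the construction procedure of the talk):

```python
import numpy as np

# Classical Gauss-Legendre quadrature is the prototype: n nodes and n
# weights chosen so the rule is exact for the 2n functions 1, x, ..., x^(2n-1).
n = 5
nodes, weights = np.polynomial.legendre.leggauss(n)

for k in range(2 * n):
    approx = np.sum(weights * nodes**k)
    exact = (1 - (-1)**(k + 1)) / (k + 1)   # integral of x^k over [-1, 1]
    assert abs(approx - exact) < 1e-12, k

# A generalized Gaussian rule replaces the monomials by 2n functions
# adapted to the singularity at hand, e.g. mixtures of x^j and x^j*log|x|.
print("5-point rule integrates all polynomials of degree <= 9 exactly")
```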
Much of this is joint work with Zydrunas Gimbutas (NIST Boulder) and Vladimir Rokhlin (Yale University).
A link for this talk will be sent to our mailing list a day or two in advance. If you are not on the list and wish to be sent a link, please send email to @email.
In the study of geometric flows it is often important to understand when a flow which converges along a sequence of times going to infinity will, in fact, converge along every such sequence of times to the same limit. While examples of finite dimensional gradient flows that asymptote to a circle of critical points show that this cannot hold in general, a positive result can be obtained in the presence of a so-called Lojasiewicz-Simon inequality. In this talk we will introduce this problem of uniqueness of asymptotic limits and discuss joint work with Melanie Rupflin and Peter M. Topping in which we examined the situation for a geometric flow that is designed to evolve a map describing a closed surface in a given target manifold into a parametrization of a minimal surface.
The Landau equation is an important PDE in kinetic theory modelling plasma particles in a gas. It can be derived as a limiting process from the famous Boltzmann equation. From the mathematical point of view, the Landau equation can be very challenging to study; many partial results require, for example, stochastic analysis as well as a delicate combination of kinetic and parabolic theory. The major open question is uniqueness in the physically relevant Coulomb case. I will present joint work with Jose Carrillo, Matias Delgadino, and Laurent Desvillettes where we cast the Landau equation as a generalized gradient flow from the optimal transportation perspective motivated by analogous results on the Boltzmann equation. A direct outcome of this is a numerical scheme for the Landau equation in the spirit of de Giorgi and Jordan, Kinderlehrer, and Otto. An extended area of investigation is to use the powerful gradient flow techniques to resolve some of the open problems and recover known results.
I will explain what it means for a manifold to have an affine structure and give an introduction to Benzecri's theorem stating that a closed surface admits an affine structure if and only if its Euler characteristic vanishes. I will also talk about an algebraic-topological generalization, due to Milnor and Wood, that bounds the Euler class of a flat circle bundle. No prior familiarity with the concepts is necessary.
Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.
A hereditary graph property is a class of finite graphs closed under isomorphism and induced subgraphs. Given a hereditary graph property $H$, the speed of $H$ is the function which sends an integer $n$ to the number of distinct elements in $H$ with underlying set $\{1,...,n\}$. Not just any function can occur as the speed of a hereditary graph property. Specifically, there are discrete "jumps" in the possible speeds. Study of these jumps began with work of Scheinerman and Zito in the 90s, and culminated in a series of papers from the 2000s by Balogh, Bollobás, and Weinreich, in which essentially all possible speeds of a hereditary graph property were characterized. In contrast to this, many aspects of this problem in the hypergraph setting remained unknown. In this talk we present new hypergraph analogues of many of the jumps from the graph setting, specifically those involving the polynomial, exponential, and factorial speeds. The jumps in the factorial range turned out to have surprising connections to the model theoretic notion of mutual algebraicity, which we also discuss. This is joint work with Chris Laskowski.
This seminar will be held via zoom. Meeting link will be sent to members of our mailing list (https://lists.maths.ox.ac.uk/mailman/listinfo/random-matrix-theory-anno…) in our weekly announcement on Monday.
Classical random matrix theory begins with a random matrix model and analyzes the distribution of the resulting eigenvalues. In this work, we treat the reverse question: if the eigenvalues are specified but the matrix is "otherwise random", what do the entries typically look like? I will describe a natural model of random matrices with prescribed eigenvalues and discuss a central limit theorem for projections, which in particular shows that relatively large subcollections of entries are jointly Gaussian, no matter what the eigenvalue distribution looks like. I will discuss various applications and interpretations of this result, in particular to a probabilistic version of the Schur--Horn theorem and to models of quantum systems in random states. This work is joint with Mark Meckes.
In the classical setting of real semisimple Lie groups, the Dirac inequality (due to Parthasarathy) gives a necessary condition that the infinitesimal character of an irreducible unitary representation needs to satisfy in terms of the restriction of the representation to the maximal compact subgroup. A similar tool was introduced in the setting of representations of p-adic groups in joint work with Barbasch and Trapa, where the necessary unitarity condition is phrased in terms of the semisimple parameter in the Kazhdan-Lusztig parameterization and the hyperspecial parahoric restriction. I will present several consequences of this inequality to the problem of understanding the unitary dual of the p-adic group, in particular, how it can be used in order to exhibit several isolated "extremal" unitary representations and to compute precise "spectral gaps" for them.
We develop a theory to measure the variance and covariance of probability distributions defined on the nodes of a graph, which takes into account the distance between nodes. Our approach generalizes the usual (co)variance to the setting of weighted graphs and retains many of its intuitive and desired properties. As a particular application, we define the maximum-variance problem on graphs with respect to the effective resistance distance, and characterize the solutions to this problem both numerically and theoretically. We show how the maximum-variance distribution can be interpreted as a core-periphery measure, illustrated by the fact that these distributions are supported on the leaf nodes of tree graphs, low-degree nodes in a configuration-like graph, and boundary nodes in random geometric graphs. Our theoretical results are supported by a number of experiments on a network of mathematical concepts, where we use the variance and covariance as analytical tools to study the (co-)occurrence of concepts in scientific papers with respect to the (network) relations between these concepts. Finally, I will draw connections to the related notion of assortativity on networks, a network analogue of correlation used to describe how the presence and absence of edges covaries with the properties of nodes.
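A simplified, hypothetical illustration of such a variance (using shortest-path rather than effective resistance distance, and a form of the definition chosen for brevity rather than taken from the paper): on a path graph, concentrating mass on the two leaves gives a larger variance than spreading it uniformly, in line with the core-periphery interpretation.

```python
import numpy as np

# Pairwise shortest-path distances on a path graph with 5 nodes.
n = 5
D = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)

def variance(p, D):
    # One natural way to generalize variance to a metric space:
    # var(p) = 1/2 * sum_{u,v} p_u p_v d(u,v)^2.
    return 0.5 * p @ (D**2) @ p

uniform = np.full(n, 1.0 / n)
endpoints = np.zeros(n)
endpoints[0] = endpoints[-1] = 0.5

# Mass on the two leaf nodes maximizes spread on the path graph.
print(variance(uniform, D), variance(endpoints, D))  # 2.0 and 4.0
```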
Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.
Let $G_n$ be a sequence of finite, simple, connected, regular graphs with degrees tending to infinity and let $T_n$ be a uniformly drawn spanning tree of $G_n$. In joint work with Yuval Peres we show that the local limit of $T_n$ is the $\text{Poisson}(1)$ branching process conditioned to survive forever (that is, the asymptotic frequency of the appearance of any small subtree is given by the branching process). The proof is based on electric network theory and I hope to show most of it.
The Dirichlet class number formula gives an expression for the residue at $s=1$ of the Dedekind zeta function of a number field K in terms of certain quantities associated to K. Among those is the regulator of K, a certain determinant involving logarithms of units in K. In the 1980s, Don Zagier gave a conjectural expression for the values at integers $s \geq 2$ in terms of "higher regulators", with polylogarithms in place of logarithms. The goal of this talk is to give an algebraic-geometric interpretation of these polylogarithms. Time permitting, we will also discuss a similar picture for Hasse--Weil L-functions of elliptic curves.
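For reference, the class number formula in question reads, in standard notation ($r_1, r_2$ the numbers of real and complex places of $K$, $h_K$ the class number, $\mathrm{Reg}_K$ the regulator, $w_K$ the number of roots of unity in $K$, and $d_K$ the discriminant):

```latex
\operatorname*{Res}_{s=1} \zeta_K(s)
  = \frac{2^{r_1} (2\pi)^{r_2} \, h_K \, \mathrm{Reg}_K}{w_K \sqrt{|d_K|}} .
```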
In the talk, we will discuss the connection between quantitative hypoelliptic PDE methods and the long-time dynamics of stochastic differential equations (SDEs). In a recent joint work with Alex Blumenthal and Sam Punshon-Smith, we put forward a new method for obtaining quantitative lower bounds on the top Lyapunov exponent of SDEs. Our method combines (i) an (apparently new) identity connecting the top Lyapunov exponent to a degenerate Fisher information-like functional of the stationary density of the Markov process tracking tangent directions with (ii) a quantitative version of Hörmander's hypoelliptic regularity theory in an $L^1$ framework which estimates this (degenerate) Fisher information from below by a $W^{s,1}$ Sobolev norm using the associated Kolmogorov equation for the stationary density. As an initial application, we prove the positivity of the top Lyapunov exponent for a class of weakly dissipative, weakly forced SDEs and we prove that this class includes the classical Lorenz 96 model in any dimension greater than 6, provided the additive stochastic driving is applied to any consecutive pair of modes. This is the first mathematically rigorous proof of chaos (in the sense of positive Lyapunov exponents) for Lorenz 96 and, more recently, for finite-dimensional truncations of the shell models GOY and SABRA (stochastically driven or otherwise), despite the overwhelming numerical evidence. If time permits, I will also discuss joint work with Kyle Liss, in which we obtain sharp, quantitative estimates on the spectral gap of the Markov semigroups. In both of these works, obtaining various kinds of quantitative hypoelliptic regularity estimates that are uniform in certain parameters plays a pivotal role.
We revisit the variational characterization of conservative diffusion as entropic gradient flow and provide for it a probabilistic interpretation based on stochastic calculus. It was shown by Jordan, Kinderlehrer, and Otto that, for diffusions of Langevin–Smoluchowski type, the Fokker–Planck probability density flow maximizes the rate of relative entropy dissipation, as measured by the distance traveled in the ambient space of probability measures with finite second moments, in terms of the quadratic Wasserstein metric. We obtain novel, stochastic-process versions of these features, valid along almost every trajectory of the diffusive motion in the backward direction of time, using a very direct perturbation analysis. By averaging our trajectorial results with respect to the underlying measure on path space, we establish the maximal rate of entropy dissipation along the Fokker–Planck flow and measure exactly the deviation from this maximum that corresponds to any given perturbation. As a bonus of our trajectorial approach we derive the HWI inequality relating relative entropy (H), Wasserstein distance (W) and relative Fisher information (I).
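In standard notation (a sketch of the known Jordan--Kinderlehrer--Otto picture, not the new trajectorial results of the talk): for Langevin--Smoluchowski dynamics $dX_t = -\nabla \Psi(X_t)\,dt + \sqrt{2}\,dW_t$ with invariant density $q = e^{-\Psi}$, the Fokker--Planck flow dissipates relative entropy at the rate given by the relative Fisher information:

```latex
\partial_t \rho_t = \Delta \rho_t + \nabla \cdot (\rho_t \nabla \Psi),
\qquad
\frac{\mathrm{d}}{\mathrm{d}t} H(\rho_t \,|\, q)
  = - I(\rho_t \,|\, q)
  = - \int \rho_t \left| \nabla \log \frac{\rho_t}{q} \right|^2 \mathrm{d}x .
```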
I will discuss connections between the ambient geometry of moduli spaces and Teichmüller dynamics. This includes the recent resolution of Siu's conjecture about the convexity of Teichmüller spaces, and the (conjectural) topological description of the Carathéodory metric on moduli spaces of Riemann surfaces.
This is a report on joint work with Martijn Kool.
Recently, Marian-Oprea-Pandharipande established a generalization of Lehn’s conjecture for Segre numbers associated to Hilbert schemes of points on surfaces. Extending work of Johnson, they provided a conjectural correspondence between Segre and Verlinde numbers. For surfaces with holomorphic 2-form, we propose conjectural generalizations of their results to moduli spaces of stable sheaves of higher rank.
Using Mochizuki’s formula, we derive a universal function which expresses virtual Segre and Verlinde numbers of surfaces with holomorphic 2-form in terms of Seiberg-Witten invariants and intersection numbers on products of Hilbert schemes of points. We use this to verify our conjectures in examples.
I will discuss an analogue of the CHY formalism in AdS. Considering the biadjoint scalar theory on AdS, I will explain how to rewrite all the tree-level amplitudes as an integral over the moduli space of punctured Riemann spheres. In contrast to flat space, the scattering equations are operator-valued. The resulting formula is motivated via a bosonic ambitwistor string on AdS and can be proven equivalent to the corresponding Witten diagram computation by applying a series of contour deformations.
Persistence theory provides useful tools to extract information from real-world data sets, and profits from techniques of different mathematical disciplines, such as Morse theory and quiver representation theory. In this seminar, I am going to present a new approach for studying persistence theory using model categories. I will briefly introduce model categories and then describe how to define a model structure on the category of tame parametrised chain complexes, which are chain complexes that evolve in time. Using this model structure, we can define new invariants for tame parametrised chain complexes, which are in perfect accordance with the standard barcode when restricting to persistence modules. I will illustrate with some examples why such an approach can be useful in topological data analysis and what new insights it can give us into standard persistence.
When was the last time you read a grand statement, accompanied by a large number, and wondered whether it could really be true?
Statistics are vital in helping us tell stories – we see them in the papers, on social media, and we hear them used in everyday conversation – and yet we doubt them more than ever. But numbers, in the right hands, have the power to change the world for the better. Contrary to popular belief, good statistics are not a trick, although they are a kind of magic. Good statistics are like a telescope for an astronomer, or a microscope for a bacteriologist. If we are willing to let them, good statistics help us see things about the world around us and about ourselves.
Tim Harford is a senior columnist for the Financial Times, the presenter of Radio 4’s More or Less and is a visiting fellow at Nuffield College, Oxford. His books include The Fifty Things that Made the Modern Economy, Messy, and The Undercover Economist.
To order a personalised copy of Tim's book email @email, providing your name and contact phone number/email and the personalisation you would like. You can then pick up from 16/10 or contact Blackwell's on 01865 792792 from that date to pay and have it sent.
Watch online (no need to register):
Oxford Mathematics Twitter
Oxford Mathematics Facebook
Oxford Mathematics Livestream
Oxford Mathematics YouTube
The Oxford Mathematics Public Lectures are generously supported by XTX Markets.
Part of UK virtual operator algebras seminar: https://sites.google.com/view/uk-operator-algebras-seminar/home
Cuntz introduced pure infiniteness for simple C*-algebras as a C*-algebraic analogue of type III von Neumann factors. Notable examples include the Calkin algebra B(H)/K(H), the Cuntz algebras O_n, simple Cuntz-Krieger algebras, and other C*-algebras you would encounter in the wild. The separable, nuclear ones were classified in celebrated work by Kirchberg and Phillips in the mid 90s. I will talk about these topics including the non-simple case if time permits.
Part of UK virtual operator algebras seminar: https://sites.google.com/view/uk-operator-algebras-seminar/home
Unitary solutions of the Yang-Baxter equation ("R-matrices") play a prominent role in several fields, such as quantum field theory and topological quantum computing, but are difficult to find directly and remain somewhat mysterious. In this talk I want to explain how one can use subfactor techniques to learn something about unitary R-matrices, and a research programme aiming at the classification of unitary R-matrices up to a natural equivalence relation. This talk is based on joint work with Roberto Conti, Ulrich Pennig, and Simon Wood.
Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.
Liouville quantum gravity (LQG) is a theory of random fractal surfaces with origin in the physics literature in the 1980s. Most literature is about LQG with matter central charge $c\in (-\infty,1]$. We study a discretization of LQG which makes sense for all $c\in (-\infty,25)$. Based on a joint work with Gwynne, Pfeffer, and Remy.
Systems with lattice geometry can be renormalized exploiting their embedding in metric space, which naturally defines the coarse-grained nodes. By contrast, complex networks defy the usual techniques because of their small-world character and lack of explicit metric embedding. Current network renormalization approaches require strong assumptions (e.g. community structure, hyperbolicity, scale-free topology), thus remaining incompatible with generic graphs and ordinary lattices. Here we introduce a graph renormalization scheme valid for any hierarchy of coarse-grainings, thereby allowing for the definition of block-nodes across multiple scales. This approach reveals a necessary and specific dependence of network topology on an additive hidden variable attached to nodes, plus optional dyadic factors. Renormalizable networks turn out to be consistent with a unique specification of the fitness model, while they are incompatible with preferential attachment, the configuration model or the stochastic blockmodel. These results highlight a deep conceptual distinction between scale-free and scale-invariant networks, and provide a geometry-free route to renormalization. If the hidden variables are annealed, the model spontaneously leads to realistic scale-free networks with cut-off. If they are quenched, the model can be used to renormalize real-world networks with node attributes and distance-dependence or communities. As an example we derive an accurate multiscale model of the International Trade Network applicable across arbitrary geographic resolutions.
https://arxiv.org/abs/2009.11024 (23 Sept.)
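The renormalizability condition described above can be illustrated with a minimal numerical sketch. Here we assume, purely for illustration, the scale-invariant fitness form $p_{ij} = 1 - e^{-x_i x_j}$ with additive hidden variables (the precise functional form used in the paper may differ): merging two nodes into a block-node with fitness equal to the sum of the constituents' fitnesses reproduces the same connection probability at the coarser scale.

```python
import math

# Hypothetical node fitnesses (the additive hidden variables); values are
# illustrative only.
x = {"a": 0.7, "b": 1.3, "c": 2.1}

def p_link(xi, xj):
    # Assumed scale-invariant fitness form: P(i ~ j) = 1 - exp(-xi * xj)
    return 1.0 - math.exp(-xi * xj)

# Coarse-grain nodes a and b into a block-node A: the hidden variable is additive.
X_A = x["a"] + x["b"]

# Probability that block A links to c, computed at the fine scale:
# 1 - P(no a-c link) * P(no b-c link)
p_block_direct = 1.0 - (1.0 - p_link(x["a"], x["c"])) * (1.0 - p_link(x["b"], x["c"]))

# The same functional form evaluated directly at the block fitness.
p_block_form = p_link(X_A, x["c"])

# The two agree exactly: the model is form-invariant under coarse-graining.
assert abs(p_block_direct - p_block_form) < 1e-12
print(p_block_direct)
```

The exponential form is what makes the "no link at the coarse scale = no links at the fine scale" product collapse into the same expression with summed fitnesses; a preferential-attachment or configuration-model probability would not survive this check.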
Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.
There is a growing body of results in extremal combinatorics and Ramsey theory which give better bounds or stronger conclusions under the additional assumption of bounded VC-dimension. Schur and Erdős conjectured that there exists a suitable constant $c$ with the property that every graph with at least $2^{cm}$ vertices, whose edges are colored by $m$ colors, contains a monochromatic triangle. We prove this conjecture for edge-colored graphs such that the set system induced by the neighborhoods of the vertices with respect to each color class has bounded VC-dimension. This result is best possible up to the value of $c$.
Joint work with Jacob Fox and Andrew Suk.
More information on the Reddick Lecture.
This talk is a personal how-to (and how-not-to) manual for doing Maths with industry, or indeed with government. The Maths element is essential but lots of other skills and activities are equally necessary. Examples: problem elicitation; understanding the environmental constraints; power analysis; understanding world-views and aligning personal motivations; and finally, understanding the wider systems in which the Maths element will sit. These issues have been discussed for some time in the management science community, where their generic umbrella name is Problem Structuring Methods (PSMs).
Driven by the need for principled extraction of features from time series, we introduce the iterated-sums signature over any commutative semiring. The case of the tropical semiring is a central, and our motivating, example, as it leads to features of (real-valued) time series that are not easily available using existing signature-type objects.
This is joint work with Kurusch Ebrahimi-Fard (NTNU Trondheim) and Nikolas Tapia (WIAS Berlin).
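A minimal sketch of the underlying idea, not the authors' construction: an iterated sum of a word $(a_1,\dots,a_k)$ over increments $\delta x_i$ replaces ordinary addition and multiplication by the operations of a chosen semiring. Instantiating the min-plus (tropical) semiring turns sums of products into minima of sums, yielding features invisible to the ordinary signature. The `iterated_sum` helper below is a naive brute-force implementation for illustration.

```python
from itertools import combinations

# A semiring as a tuple (add, mul, zero, one); interface assumed for illustration.
ordinary = (lambda a, b: a + b, lambda a, b: a * b, 0, 1)
tropical = (min, lambda a, b: a + b, float("inf"), 0)  # min-plus semiring

def iterated_sum(increments, word, semiring):
    """Semiring-add, over increasing index tuples i1 < ... < ik, the
    semiring-product of each increment raised to its word exponent."""
    add, mul, zero, one = semiring
    total = zero
    for idx in combinations(range(len(increments)), len(word)):
        term = one
        for pos, a in zip(idx, word):
            for _ in range(a):              # semiring power of the increment
                term = mul(term, increments[pos])
        total = add(total, term)
    return total

dx = [1, -2, 3, -1]                         # increments of a toy time series
# Ordinary semiring, word (1, 1): sum over i < j of dx_i * dx_j
print(iterated_sum(dx, (1, 1), ordinary))   # -> -7
# Tropical semiring, word (1, 1): min over i < j of dx_i + dx_j
print(iterated_sum(dx, (1, 1), tropical))   # -> -3
```

The tropical value here is the minimal sum of two increments taken in time order, a piecewise-linear feature of the path that no polynomial in the ordinary iterated sums computes.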
Distinguishing classes of surfaces in $\mathbb{R}^n$ is a task which arises in many situations. There are many characteristics we can use to solve this classification problem. The Persistent Homology Transform allows us to look at shapes in $\mathbb{R}^n$ from all directions in $S^{n-1}$ simultaneously, and is a useful tool for surface classification. Using the Julia package DiscretePersistentHomologyTransform, we will look at some example curves in $\mathbb{R}^2$ and examine distinguishing features.
Part of UK virtual operator algebras seminar: https://sites.google.com/view/uk-operator-algebras-seminar/home
The Gelfand correspondence between compact Hausdorff spaces and unital C*-algebras justifies the slogan that C*-algebras are to be thought of as "non-commutative topological spaces", and Rieffel's theory of compact quantum metric spaces provides, in the same vein, a non-commutative counterpart to the theory of compact metric spaces. The aim of my talk is to introduce the basics of the theory and explain how the classical Gromov-Hausdorff distance between compact metric spaces can be generalized to the quantum setting. If time permits, I will touch upon some recent results obtained in joint work with Jens Kaad and Thomas Gotfredsen.
Part of UK virtual operator algebras seminar: https://sites.google.com/view/uk-operator-algebras-seminar/home
C*-algebras associated to etale groupoids appear as a versatile construction in many contexts. For instance, groupoid C*-algebras allow for implementation of natural one-parameter groups of automorphisms obtained from continuous cocycles. This provides a path to quantum statistical mechanical systems, where one studies equilibrium states and ground states. The early characterisations of ground states and equilibrium states for groupoid C*-algebras due to Renault have seen remarkable refinements. It is possible to characterise in great generality all ground states of etale groupoid C*-algebras in terms of a boundary groupoid of the cocycle (joint work with Laca and Neshveyev). The steps in the proof employ important constructions for groupoid C*-algebras due to Renault.
We study the minimum Wasserstein distance from the empirical measure to a space of probability measures satisfying linear constraints. This statistic can naturally be used in a wide range of applications, for example, optimally choosing uncertainty sizes in distributionally robust optimization, optimal regularization, and testing statistical properties such as fairness and martingality. We will discuss duality results which recover the celebrated Kantorovich-Rubinstein duality when the manifold is sufficiently rich, as well as the asymptotics of the associated test statistics as the sample size increases. We illustrate how this relaxation can beat the statistical curse of dimensionality often associated with empirical Wasserstein distances.
The talk builds on joint work with S. Ghosh, Y. Kang, K. Murthy, M. Squillante, and N. Si.
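For readers unfamiliar with the base object of the abstract above, here is a minimal sketch (not the projection statistic studied in the talk) of the empirical Wasserstein distance in the simplest setting: in one dimension, the 1-Wasserstein distance between two empirical measures with the same number of atoms is attained by matching sorted samples.

```python
def w1_empirical(xs, ys):
    # 1-Wasserstein distance between two empirical measures with equally many
    # atoms: in 1D the optimal coupling pairs the sorted samples.
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Toy example: move mass at 0 and 2 onto the point 1.
print(w1_empirical([0.0, 1.0, 2.0], [1.0, 1.0, 1.0]))  # -> 0.6666...
```

In dimension $d > 1$ this empirical distance typically converges at the slow rate $n^{-1/d}$; the projection-based relaxation discussed in the talk is one way around that curse of dimensionality.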