Tue, 24 Oct 2023
13:00
L1

Duality defects, anomalies and RG flows

Christian Copetti
(Oxford)
Abstract

We review the construction of non-invertible duality defects in various dimensions. We explain how they can be preserved along RG flows and how their realization on gapped phases encodes their 't Hooft anomalies. We finally give a presentation of the anomalies from the Symmetry TFT. Time permitting, I will discuss some possible future applications.

Tue, 24 Oct 2023
11:00
Lecture Room 4, Mathematical Institute

DPhil Presentations

Akshay Hegde, Julius Villar, Csaba Toth
(Mathematical Institute, University of Oxford)
Abstract

As part of the internal seminar schedule for Stochastic Analysis for this coming term, DPhil students have been invited to present their work to date. Student talks are 20 minutes each, including time for questions.

Students presenting are:

Akshay Hegde, supervisor Dmitry Belyaev

Julius Villar, supervisor Dmitry Belyaev

Csaba Toth, supervisor Harald Oberhauser

Mon, 23 Oct 2023

16:30 - 17:30
L3

Graph Limit for Interacting Particle Systems on Weighted Random Graphs

Nastassia Pouradier Duteil
(Sorbonne Université)
Abstract

We study the large-population limit of interacting particle systems posed on weighted random graphs. To this end, we introduce a general framework for the construction of weighted random graphs, generalizing the concept of graphons. We prove that as the number of particles tends to infinity, the finite-dimensional particle system converges in probability to the solution of a deterministic graph-limit equation, in which the graphon prescribing the interaction is given by the first moment of the weighted random graph law. We also study interacting particle systems posed on switching weighted random graphs, which are obtained by resetting the weighted random graph at regular time intervals. We show that these systems converge to the same graph-limit equation, in which the interaction is prescribed by a constant-in-time graphon.
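As a purely illustrative toy version of this setup (not taken from the talk — the dynamics, dimensions and Bernoulli weight law are all assumptions), one can simulate a linear interacting particle system on a weighted random graph and watch it contract towards the behaviour of the constant-graphon limit equation:

```python
import numpy as np

# Hypothetical sketch: Euler scheme for dx_i/dt = (1/N) sum_j w_ij (x_j - x_i)
# with i.i.d. Bernoulli(p) weights. The first moment of the weight law is the
# constant graphon p, which governs the large-N graph-limit equation.
rng = np.random.default_rng(0)

def simulate(N=200, T=5.0, dt=0.01, p=0.5):
    W = rng.binomial(1, p, size=(N, N)).astype(float)  # weighted random graph
    x = rng.normal(size=N)                             # initial particle states
    for _ in range(int(T / dt)):
        x = x + dt * (W @ x - W.sum(axis=1) * x) / N   # interaction step
    return x

x = simulate()
# The Laplacian-type interaction drives the particles towards consensus,
# so the empirical spread of the states shrinks over time.
print(float(x.std()))
```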

Mon, 23 Oct 2023
15:30
Lecture Theatre 3, Mathematical Institute, Radcliffe Observatory Quarter, Woodstock Road, OX2 6GG

PCF-GAN: generating sequential data via the characteristic function of measures on the path space

Prof Hao Ni
(Department of Mathematics, UCL)
Further Information

Please join us from 15:00 to 15:30 for tea and coffee outside the lecture theatre before the talk.

Abstract

Generating high-fidelity time series data using generative adversarial networks (GANs) remains a challenging task, as it is difficult to capture the temporal dependence of joint probability distributions induced by time-series data. To this end, a key step is the development of an effective discriminator to distinguish between time series distributions. In this talk, I will introduce the so-called PCF-GAN, a novel GAN that incorporates the path characteristic function (PCF) as the principled representation of time series distributions into the discriminator to enhance its generative performance. On the one hand, we establish theoretical foundations of the PCF distance by proving its characteristicity, boundedness, differentiability with respect to generator parameters, and weak continuity, which ensure the stability and feasibility of training the PCF-GAN. On the other hand, we design efficient initialisation and optimisation schemes for PCFs to strengthen the discriminative power and accelerate training efficiency. To further boost the capabilities of complex time series generation, we integrate the auto-encoder structure via sequential embedding into the PCF-GAN, which provides additional reconstruction functionality. Extensive numerical experiments on various datasets demonstrate the consistently superior performance of PCF-GAN over state-of-the-art baselines, in both generation and reconstruction quality. Joint work with Dr Siran Li (Shanghai Jiao Tong University) and Hang Lou (UCL). Paper: [https://arxiv.org/pdf/2305.12511.pdf].

Mon, 23 Oct 2023
15:30
L4

Khovanov homology and the Fukaya category of the three-punctured sphere

Claudius Zibrowius
(Durham University)
Abstract

About 20 years ago, Dror Bar-Natan described an elegant generalisation
of Khovanov homology to tangles with any number of endpoints, by
considering certain quotients of two-dimensional relative cobordism
categories.  I claim that these categories are in general not
well-understood (not by me in any case).  However, if we restrict to
tangles with four endpoints, things simplify and Bar-Natan's category
turns out to be closely related to the wrapped Fukaya category of the
four-punctured sphere.  This relationship gives rise to a symplectic
interpretation of Khovanov homology that is useful both for doing
calculations and for proving theorems.  I will discuss joint work in
progress with Artem Kotelskiy and Liam Watson where we investigate what
happens when we fill in one of the punctures.
 

Mon, 23 Oct 2023
14:15
L4

Einstein metrics on the Ten-Sphere

Matthias Wink
(Münster)
Abstract

In this talk we give an introduction to the topic of Einstein metrics on spheres. In particular, we prove the existence of three non-round Einstein metrics with positive scalar curvature on $S^{10}$. Previously, the only even-dimensional spheres known to admit non-round Einstein metrics were $S^6$ and $S^8$. This talk is based on joint work with Jan Nienhaus.

Mon, 23 Oct 2023

14:00 - 15:00
Lecture Room 6

Tractable Riemannian Optimization via Randomized Preconditioning and Manifold Learning

Boris Shustin
(Mathematical Institute, University of Oxford)
Abstract

Optimization problems constrained on manifolds are prevalent across science and engineering. For example, they arise in (generalized) eigenvalue problems, principal component analysis, and low-rank matrix completion, to name a few. Riemannian optimization is a principled framework for solving optimization problems in which the desired optimum is constrained to a (Riemannian) manifold. Algorithms designed in this framework usually require some geometric description of the manifold, i.e., tangent spaces, retractions, Riemannian gradients, and Riemannian Hessians of the cost function. However, in some cases, some of these geometric components cannot be accessed due to intractability or lack of information.


 

In this talk, we present methods that allow for overcoming cases of intractability and lack of information. We demonstrate the case of intractability on canonical correlation analysis (CCA) and on Fisher linear discriminant analysis (FDA). Using Riemannian optimization to solve CCA or FDA with the standard geometric components is as expensive as solving them via a direct solver. We address this shortcoming using a technique called Riemannian preconditioning, which amounts to changing the Riemannian metric on the constraining manifold. We use randomized numerical linear algebra to form efficient preconditioners that balance the computational costs of the geometric components and the asymptotic convergence of the iterative methods. If time permits, we also show the case of lack of information, e.g., the constraining manifold can be accessed only via samples of it. We propose a novel approach that allows approximate Riemannian optimization using a manifold learning technique.
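To make the geometric components concrete, here is a minimal hypothetical sketch (not the speaker's preconditioned method; the diagonal test matrix and step size are assumptions for illustration) of Riemannian gradient ascent on the unit sphere for the Rayleigh quotient:

```python
import numpy as np

# Riemannian gradient ascent on the sphere S^{n-1} for f(x) = x^T A x.
# Geometric components: the Riemannian gradient is the projection of the
# Euclidean gradient onto the tangent space, and the retraction is
# renormalisation back onto the sphere.
rng = np.random.default_rng(1)
n = 50
A = np.diag(np.arange(1.0, n + 1))   # known spectrum 1, ..., n

x = rng.normal(size=n)
x /= np.linalg.norm(x)
step = 0.005

for _ in range(3000):
    egrad = 2 * A @ x                  # Euclidean gradient of f
    rgrad = egrad - (x @ egrad) * x    # project onto the tangent space at x
    x = x + step * rgrad               # ascend along the Riemannian gradient
    x /= np.linalg.norm(x)             # retraction: renormalise onto the sphere

# The iterates converge to the leading eigenvector, so f(x) approaches n = 50.
print(float(x @ A @ x))
```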

 

Mon, 23 Oct 2023

13:00 - 14:00
N3.12

Mathematrix: Careers Panel

Abstract

We will have a Q&A with a panel of academics and industry experts on applying to jobs both in and out of academia.

Fri, 20 Oct 2023

16:00 - 17:00
L1

Generalized Tensor Decomposition: Utility for Data Analysis and Mathematical Challenges

Tamara Kolda
(MathSci.ai)
Further Information

Tamara Kolda is an independent mathematical consultant under the auspices of her company MathSci.ai based in California. From 1999 to 2021, she was a researcher at Sandia National Laboratories in Livermore, California. She specializes in mathematical algorithms and computational methods for tensor decompositions, tensor eigenvalues, graph algorithms, randomized algorithms, machine learning, network science, numerical optimization, and distributed and parallel computing.

From the website: https://www.mathsci.ai/

Abstract

Tensor decomposition is an unsupervised learning methodology that has applications in a wide variety of domains, including chemometrics, criminology, and neuroscience. We focus on low-rank tensor decomposition using the canonical polyadic or CANDECOMP/PARAFAC format. A low-rank tensor decomposition is the minimizer of some nonlinear program. The usual objective function is the sum of squares error (SSE) comparing the data tensor and the low-rank model tensor. This leads to a nicely-structured problem whose subproblems are linear least squares problems that can be solved efficiently in closed form. However, the SSE metric is not always ideal. Thus, we consider using other objective functions. For instance, KL divergence is an alternative metric that is useful for count data and results in a nonnegative factorization. In the context of nonnegative matrix factorization, for instance, KL divergence was popularized by Lee and Seung (1999). We can also consider various objectives such as logistic odds for binary data, beta-divergence for nonnegative data, and so on. We show the benefits of alternative objective functions on real-world data sets. We consider the computation of generalized tensor decompositions based on other objective functions, summarize the work that has been done thus far, and illuminate open problems and challenges. This talk includes joint work with David Hong and Jed Duersch.
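The two objectives mentioned above can be written down directly for a small CP model. The sketch below is a hypothetical illustration (shapes, rank and the Poisson data are all assumptions, not the speaker's code):

```python
import numpy as np

# CP model M = sum_r a_r (outer) b_r (outer) c_r, evaluated under two losses.
rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 3
A = rng.random((I, R))
B = rng.random((J, R))
C = rng.random((K, R))

# Low-rank model tensor: sum over R rank-one outer products.
M = np.einsum('ir,jr,kr->ijk', A, B, C)
X = rng.poisson(M)   # synthetic count data, for which KL divergence is natural

sse = np.sum((X - M) ** 2)       # Gaussian / sum-of-squares objective
kl = np.sum(M - X * np.log(M))   # KL objective, up to terms constant in M

print(sse, kl)
```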

Fri, 20 Oct 2023

15:00 - 16:00
L5

Euler characteristic in topological persistence

Vadim Lebovici
(Mathematical Institute, University of Oxford)
Further Information

Vadim Lebovici is a post-doc in the Centre for Topological Data Analysis. His research interests include:

  • Multi-parameter persistent homology
  • Constructible functions and Euler calculus
  • Sheaf theory
  • Persistent magnitude
Abstract

In topological data analysis, persistence barcodes record the
persistence of homological generators in a one-parameter filtration
built on the data at hand. In contrast, computing the pointwise Euler
characteristic (EC) of the filtration merely records the alternating sum
of the dimensions of each homology vector space.

In this talk, we will show that despite losing the classical
"signal/noise" dichotomy, EC tools are powerful descriptors, especially
when combined with new integral transforms mixing EC techniques with
Lebesgue integration. Our motivation is fourfold: their applicability to
multi-parameter filtrations and time-varying data, their remarkable
performance in supervised and unsupervised tasks at a low computational
cost, their satisfactory properties as integral transforms (e.g.,
regularity and invertibility properties) and the expectation results on
the EC in random settings. Along the way, we will give an insight into
the information these descriptors record.

This talk is based on the work [https://arxiv.org/abs/2111.07829] and
the joint work with Olympio Hacquard [https://arxiv.org/abs/2303.14040].
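For intuition, the pointwise alternating sum described above can be computed directly on a toy filtered simplicial complex. The filtration values below are made up for illustration:

```python
# Euler characteristic curve of a filtered triangle: chi(t) is the
# alternating sum #vertices - #edges + #triangles over simplices with
# filtration value <= t.
filtration = [
    ((0,), 0.0), ((1,), 0.0), ((2,), 0.5),        # vertices
    ((0, 1), 1.0), ((1, 2), 1.5), ((0, 2), 2.0),  # edges
    ((0, 1, 2), 3.0),                             # the 2-simplex
]

def ec_curve(filtration, ts):
    return [sum((-1) ** (len(s) - 1) for s, v in filtration if v <= t)
            for t in ts]

ts = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]
print(ec_curve(filtration, ts))  # → [2, 3, 2, 1, 0, 1]
```

Note how, unlike a barcode, the curve records only the alternating sum at each level, yet its shape still reflects components merging and the final cycle being filled in.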

 

 

Fri, 20 Oct 2023

15:00 - 16:00
Virtual

Machine learning for identifying translatable biomarkers and targets

Professor Daphne Koller
(Department of Computer Science, Stanford University)
Abstract

Modern medicine has given us effective tools to treat some of the most significant and burdensome diseases. At the same time, it is becoming consistently more challenging and more expensive to develop new therapeutics. A key factor in this trend is that we simply don't understand the underlying biology of disease, or which interventions might meaningfully modulate clinical outcomes and in which patients. To address this, we are bringing together large amounts of high-content data, taken both from humans and from human-derived cellular systems generated in our own lab. These data are then used to learn a meaningful representation of biological states via cutting-edge machine learning methods, which enable us to make predictions about novel targets, coherent patient segments, and the clinical effect of molecules. Our ultimate goal is to develop a new approach to drug development that uses high-quality data and ML models to design novel, safe, and effective therapies that help more people, faster, and at a lower cost.

Fri, 20 Oct 2023

12:00 - 13:00

The Artin-Schreier Theorem

James Taylor
(University of Oxford)
Abstract

Typically, the algebraic closure of a non-algebraically closed field $F$ is an infinite extension of $F$. However, this doesn't always have to happen: for example, consider $\mathbb{R}$ inside $\mathbb{C}$. Are there any other examples? Yes: for example, you can consider the index two subfield of the algebraic numbers, defined by intersecting with $\mathbb{R}$. However, this is still similar to the first example: the degree of the extension is two, and we extract a square root of $-1$ to obtain the algebraic closure. The Artin-Schreier Theorem tells us that, amazingly, this is always the case: if $F$ is a field whose algebraic closure is a nontrivial finite extension $L$, then $F$ must have characteristic $0$, $L$ has degree two over $F$, and $L = F(i)$ for some $i$ with $i^2 = -1$. That is, all such extensions "look like" $\mathbb{C} / \mathbb{R}$. In this expository talk we will give an overview of the proof of this theorem, and try to get some feeling for why this result is true.

 

Thu, 19 Oct 2023
16:00
Lecture Room 4, Mathematical Institute

Detecting Lead-Lag Relationships in Stock Returns and Portfolio Strategies

Qi Jin
Abstract

We propose a method to detect linear and nonlinear lead-lag relationships in stock returns. Our approach uses the pairwise Lévy area and cross-correlation of returns to rank the assets from leaders to followers. We use the rankings to construct a portfolio that longs or shorts the followers based on the previous returns of the leaders, and the stocks are ranked every time the portfolio is rebalanced. The portfolio also takes an offsetting position in the SPY ETF so that the initial value of the portfolio is zero. Our data span from 1963 to 2022 and we use an average of over 500 stocks to construct portfolios for each trading day. The annualized returns of our lead-lag portfolios are over 20%, and the returns outperform all lead-lag benchmarks in the literature. There is little overlap between the leaders and the followers we find and those reported in previous studies based on market capitalization, volume traded, and intra-industry relationships. Our findings support the slow information diffusion hypothesis; i.e., portfolios rebalanced once a day consistently outperform the bidiurnal, weekly, bi-weekly, tri-weekly, and monthly rebalanced portfolios.
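The discrete Lévy area underlying the ranking can be illustrated on a synthetic pair of paths (this is an illustrative sketch only, not the authors' pipeline; the sinusoidal leader/follower pair is an assumption chosen so the answer is known in closed form):

```python
import numpy as np

# Discrete Levy area of a pair of paths. When y is the same oscillation as x,
# lagging it by a quarter cycle, the path (x_t, y_t) traces the unit circle
# once and the Levy area equals the enclosed area, pi.
def levy_area(x, y):
    dx, dy = np.diff(x), np.diff(y)
    return 0.5 * np.sum(x[:-1] * dy - y[:-1] * dx)

t = np.linspace(0.0, 2.0 * np.pi, 1001)
x = np.cos(t)   # "leader"
y = np.sin(t)   # "follower": the same signal, a quarter cycle behind
print(levy_area(x, y))  # close to pi
```

The quantity is antisymmetric, `levy_area(y, x) == -levy_area(x, y)`, which is what makes its sign usable for ordering assets from leaders to followers.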

Thu, 19 Oct 2023
16:00
L5

Siegel modular forms and algebraic cycles

Aleksander Horawa
(Oxford University)
Abstract

(Joint work with Kartik Prasanna)

Siegel modular forms are higher-dimensional analogues of modular forms. While each rational elliptic curve corresponds to a single holomorphic modular form, each abelian surface is expected to correspond to a pair of Siegel modular forms: a holomorphic and a generic one. We propose a conjecture that explains the appearance of these two forms (in the cohomology of vector bundles on Siegel modular threefolds) in terms of certain higher algebraic cycles on the self-product of the abelian surface. We then prove three results:
(1) The conjecture is implied by Beilinson's conjecture on special values of L-functions. Amongst others, this uses a recent analytic result of Radziwiłł-Yang about non-vanishing of twists of L-functions for GL(4).
(2) The conjecture holds for abelian surfaces associated with elliptic curves over real quadratic fields.
(3) The conjecture implies a conjecture of Prasanna-Venkatesh for abelian surfaces associated with elliptic curves over imaginary quadratic fields.

Thu, 19 Oct 2023

14:00 - 15:00
Lecture Room 3

Randomized Least Squares Optimization and its Incredible Utility for Large-Scale Tensor Decomposition

Tamara Kolda
(MathSci.ai)
Abstract

Randomized least squares is a promising method but not yet widely used in practice. We show an example of its use for finding low-rank canonical polyadic (CP) tensor decompositions for large sparse tensors. This involves solving a sequence of overdetermined least squares problems with special (Khatri-Rao product) structure.

In this work, we present an application of randomized algorithms to fitting the CP decomposition of sparse tensors, solving a significantly smaller sampled least squares problem at each iteration with probabilistic guarantees on the approximation errors. We perform sketching through leverage score sampling, crucially relying on the fact that the problem structure enables efficient sampling from overestimates of the leverage scores with much less work. We discuss what it took to make the algorithm practical, including general-purpose improvements.
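For readers unfamiliar with the technique, here is a generic textbook sketch of leverage-score sampled least squares on a dense system, computing the exact scores via a thin QR; sizes are illustrative assumptions, and the point of the work above is precisely to avoid this exact computation for Khatri-Rao structured systems:

```python
import numpy as np

# Generic leverage-score sampled least squares (exact scores via thin QR).
rng = np.random.default_rng(0)
m, n, s = 5000, 20, 400   # tall system; sample s of the m rows

A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true + 0.01 * rng.normal(size=m)

Q, _ = np.linalg.qr(A)               # thin QR of A
lev = np.sum(Q ** 2, axis=1)         # leverage scores (they sum to n)
p = lev / lev.sum()

idx = rng.choice(m, size=s, p=p)     # sample rows proportionally to leverage
w = 1.0 / np.sqrt(s * p[idx])        # importance-sampling reweighting
x_hat = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)[0]

# The sketched solution is close to the true coefficients with high probability.
print(float(np.linalg.norm(x_hat - x_true)))
```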

Numerical results on real-world large-scale tensors show the method is faster than competing methods without sacrificing accuracy.

*This is joint work with Brett Larsen, Stanford University.

Thu, 19 Oct 2023

12:00 - 13:00
L3

Extrinsic flows on convex hypersurfaces of graph type

Hyunsuk Kang
(Gwangju Institute of Science and Technology and University of Oxford)
Abstract

Extrinsic flows are evolution equations whose speeds are determined by the extrinsic curvature of submanifolds in ambient spaces.  Some of the well-known ones are mean curvature flow, Gauss curvature flow, and Lagrangian mean curvature flow.

We focus on the special case in which the speed of a flow is given by powers of the mean curvature for smooth convex hypersurfaces of graph type, i.e., ones that can be represented as the graph of a function.  Convergence and long-time existence of such flows will be discussed. Furthermore, $C^2$ estimates which are independent of the height of the graph will be derived, showing that the boundary of the domain of the graph is also a smooth solution of the same flow, as a submanifold of codimension two in the classical sense.  Some of the main ideas, notably a priori estimates via the maximum principle, come from the work of Ecker and Huisken on mean curvature evolution of entire graphs in 1989.  This is joint work with Ki-ahm Lee and Taehun Lee.

Thu, 19 Oct 2023

12:00 - 13:00
L1

Does Maxwell’s hypothesis of air saturation near the surface of evaporating liquid hold at all spatial scales?

Eugene Benilov
(University of Limerick)
Abstract

The classical model of evaporation of liquids hinges on Maxwell’s assumption that the air near the liquid’s surface is saturated. It allows one to find the evaporative flux without considering the interface separating liquid and air. Maxwell’s hypothesis is based on an implicit assumption that the vapour-emission capacity of the interface exceeds the throughput of air (i.e., its ability to pass the vapour on to infinity). If indeed so, the air adjacent to the liquid would get quickly saturated, justifying Maxwell’s hypothesis.

 

In the present paper, the so-called diffuse-interface model is used to account for the interfacial physics and, thus, derive a generalised version of Maxwell's boundary condition for the near-interface vapour density. It is then applied to a spherical drop floating in air. It turns out that the vapour-emission capacity of the interface exceeds the throughput of air only if the drop's radius is $r_d \gtrsim 10\,\mu$m, whereas for $r_d \approx 2\,\mu$m the two are comparable. For $r_d \lesssim 1\,\mu$m, evaporation is interface-driven, and the resulting evaporation rate is noticeably smaller than that predicted by the classical model.

Thu, 19 Oct 2023

11:00 - 12:00
C6

New ideas in Arakelov intersection theory

Michał Szachniewicz
(Mathematical Institute, Oxford)
Abstract

I will give an overview of new ideas showing up in arithmetic intersection theory based on some exciting talks that appeared at the very recent conference "Global invariants of arithmetic varieties". I will also outline connections to globally valued fields and some classical problems.

Wed, 18 Oct 2023

16:00 - 17:00
L6

Fibring in manifolds and groups

Monika Kudlinska
(University of Oxford)
Abstract

Algebraic fibring is the group-theoretic analogue of fibration over the circle for manifolds. Generalising the work of Agol on hyperbolic 3-manifolds, Kielak showed that many groups virtually fibre. In this talk we will discuss the geometry of groups which fibre, with some fun applications to Poincaré duality groups - groups whose homology and cohomology invariants satisfy a Poincaré-Lefschetz type duality, like those of manifolds - as well as to exotic subgroups of Gromov hyperbolic groups. No prior knowledge of these topics will be assumed.

Disclaimer: This talk will contain many manifolds.

Tue, 17 Oct 2023

16:00 - 17:00
C3

Compactness and related properties for weighted composition operators on BMOA

David Norrbo
(Åbo Akademi University)
Abstract

A previously known function-theoretic characterisation of compactness for a weighted composition operator on BMOA is improved. Moreover, the same function-theoretic condition also characterises weak compactness and complete continuity. In order to close the circle of implications, the operator-theoretic property of fixing a copy of $c_0$ comes in useful.

Tue, 17 Oct 2023

16:00 - 17:00
L6

Limiting spectral distributions of random matrices arising in neural networks

Ouns El Harzli
Abstract

We study the distribution of eigenvalues of kernel random matrices where each element is the empirical covariance between the feature map evaluations of a random fully-connected neural network. We show that, under mild assumptions on the non-linear activation function, namely Lipschitz continuity and measurability, the limiting spectral distribution can be written as successive free multiplicative convolutions between the Marchenko-Pastur law and a nonrandom measure specific to the neural network. The latter has no known analytical expression but can be simulated empirically, separately from the random matrices of interest.
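The random matrices in question are easy to simulate empirically, as the abstract notes. The toy sketch below is illustrative only (the sizes and the choice of ReLU — which is Lipschitz and measurable, as required — are assumptions):

```python
import numpy as np

# Empirical covariance of post-activation features of a one-layer random
# network; its eigenvalue distribution is the object studied in the talk.
rng = np.random.default_rng(0)
n, d, p = 400, 500, 300   # samples, input dimension, hidden width

X = rng.normal(size=(d, n)) / np.sqrt(d)   # random inputs, normalised columns
W = rng.normal(size=(p, d))                # random fully-connected weights
F = np.maximum(W @ X, 0.0)                 # ReLU feature map evaluations
M = F @ F.T / n                            # p x p empirical covariance

eigs = np.linalg.eigvalsh(M)               # spectrum to compare with the limit
print(float(eigs.min()), float(eigs.max()))
```

Plotting a histogram of `eigs` for growing sizes (at fixed ratios p/n, d/n) is the standard way to visualise the limiting spectral distribution described in the abstract.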

Tue, 17 Oct 2023

15:30 - 16:30
Online

Critical core percolation on random graphs

Alice Contat
(Université Paris-Saclay)
Further Information

Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.

Abstract

Motivated by the desire to construct large independent sets in random graphs, Karp and Sipser modified the usual greedy construction to yield an algorithm that outputs an independent set of large cardinality, called the Karp-Sipser core. When run on the Erdős-Rényi $G(n,c/n)$ random graph, this algorithm is optimal as long as $c < e$. We will present the proof of a physics conjecture of Bauer and Golinelli (2002) stating that at criticality, the size of the Karp-Sipser core is of order $n^{3/5}$. Along the way we shall highlight the similarities and differences with the usual greedy algorithm and the $k$-core algorithm.
Based on a joint work with Nicolas Curien and Thomas Budzinski.
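The greedy mechanism can be sketched as follows — a minimal hypothetical implementation of the leaf-removal phase, in which the vertices left over when no degree-one vertex remains play the role of the residual "core":

```python
from collections import defaultdict

# Leaf-removal phase of the Karp-Sipser algorithm: repeatedly put a
# degree-one vertex into the independent set and delete it together with
# its unique neighbour; vertices that become isolated also join the set.
def karp_sipser_leaf_phase(n, edges):
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    alive = set(range(n))
    indep = {v for v in alive if not adj[v]}   # isolated vertices join first
    alive -= indep
    stack = [v for v in alive if len(adj[v]) == 1]
    while stack:
        v = stack.pop()
        if v not in alive or len(adj[v]) != 1:
            continue                           # stale stack entry, skip
        u = next(iter(adj[v]))                 # the unique neighbour of leaf v
        indep.add(v)
        alive.discard(v)
        alive.discard(u)
        for w in (v, u):                       # delete both endpoints
            for x in list(adj[w]):
                adj[x].discard(w)
                if x in alive:
                    if len(adj[x]) == 1:
                        stack.append(x)        # a new leaf appeared
                    elif len(adj[x]) == 0:
                        indep.add(x)           # x became isolated
                        alive.discard(x)
            adj[w].clear()
    return indep, alive                        # alive = leftover "core"

# On the path 0-1-2-3-4 the leaf phase exhausts the graph:
ind, core = karp_sipser_leaf_phase(5, [(0, 1), (1, 2), (2, 3), (3, 4)])
print(sorted(ind), sorted(core))  # → [0, 2, 4] []
```

On a graph with no leaves at all (e.g. a triangle) the leaf phase does nothing and every vertex stays in the core, which is where the critical behaviour discussed in the talk lives.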

Tue, 17 Oct 2023
15:00

Dehn functions of central products of nilpotent groups

Claudio Llosa Isenrich
(KIT)
Abstract

The Dehn function of a finitely presented group provides a quantitative measure for the difficulty of detecting if a word in its generators represents the trivial element of the group. By work of Gersten, Holt and Riley the Dehn function of a nilpotent group of class $c$ is bounded above by $n^{c+1}$. However, we are still far from determining the precise Dehn functions of all nilpotent groups. In this talk, I will explain recent results that allow us to determine the Dehn functions of large classes of nilpotent groups arising as central products. As a consequence, for every $k>2$, we obtain many pairs of finitely presented $k$-nilpotent groups with bilipschitz asymptotic cones, but with different Dehn functions. This shows that Dehn functions can distinguish between nilpotent groups with the same asymptotic cone, making them interesting in the context of the conjectural quasi-isometry classification of nilpotent groups.  This talk is based on joint works with García-Mejía, Pallier and Tessera.