Thu, 02 Feb 2023
15:00
L6

Higher Geometry by Examples

Chenjing Bu
Abstract

We give an introduction to the subject of higher geometry through many examples of higher geometric objects and their properties, including 2-rings, 2-vector spaces, and 2-vector bundles. We show how these concepts help solve problems in ordinary geometry, which is one of the many motivations for the subject. We assume no prior knowledge of the subject, and the talk should be relevant to both differential and algebraic geometry.

Mon, 06 Feb 2023
16:00
L6

TBD

TBD

Mon, 30 Jan 2023
16:00
L6

Collisions in supersingular isogeny graphs

Wissam Ghantous
(University of Oxford)
Abstract

In this talk we will study the graph structure of supersingular isogeny graphs. These graphs are known to have very few loops and multi-edges. We formalize this idea by finding bounds on their number of loops and multi-edges, and we find conditions under which these graphs are simple. To do so, we introduce a method for counting the total number of collisions (which are special endomorphisms), based on a trace formula of Gross and a known formula of Kronecker, Gierster and Hurwitz.
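
For context, two standard facts about these graphs (background, not results of the talk): for primes p and l with l != p, the supersingular l-isogeny graph has the supersingular j-invariants in characteristic p as vertices and the l-isogenies between the corresponding curves as edges, so that

```latex
% Regularity and size of the supersingular \ell-isogeny graph G(p, \ell):
% every vertex has \ell + 1 edges (counted with multiplicity), and the
% number of vertices grows linearly in p.
\deg(v) = \ell + 1 \quad \text{for every vertex } v,
\qquad
\#V\bigl(G(p,\ell)\bigr) = \frac{p}{12} + O(1).
```

Loops and multi-edges are thus rare but not automatically absent, which is what the bounds above quantify.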

The method presented in this talk can be used to study many kinds of collisions in supersingular isogeny graphs. As an application, we will see how this method was used to estimate a certain number of collisions and then show that isogeny graphs do not satisfy a certain cryptographic property that was falsely believed (and proven!) to hold.

Mon, 06 Feb 2023

14:00 - 15:00
L6

Constrained and Multirate Training of Neural Networks

Tiffany Vlaar
(McGill University)
Abstract

I will describe algorithms for regularizing and training deep neural networks. Soft constraints, which add a penalty term to the loss, are typically used as a form of explicit regularization for neural network training. In this talk I describe a method for efficiently incorporating constraints into a stochastic gradient Langevin framework for the training of deep neural networks. In contrast to soft constraints, our constraints offer direct control of the parameter space, which allows us to study their effect on generalization. In the second part of the talk, I illustrate the presence of multiple latent time scales in deep learning applications.
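
To illustrate the contrast between the two approaches (a minimal sketch under made-up choices, not the speaker's implementation; the sphere constraint ||theta|| = r and all parameter values are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld_step(theta, grad_loss, lr, temperature=1.0):
    """One stochastic gradient Langevin step: a gradient step plus Gaussian noise."""
    noise = rng.normal(size=theta.shape)
    return theta - lr * grad_loss(theta) + np.sqrt(2.0 * lr * temperature) * noise

def soft_constraint_grad(grad_loss, r=1.0, penalty=10.0):
    """Soft constraint: add to the loss gradient a penalty term pulling ||theta|| towards r."""
    def grad(theta):
        n = np.linalg.norm(theta)
        return grad_loss(theta) + penalty * (n - r) * theta / n
    return grad

def hard_constraint_sgld_step(theta, grad_loss, lr, r=1.0):
    """Hard constraint: take the unconstrained SGLD step, then project back onto
    the sphere ||theta|| = r, so the parameter space is controlled directly."""
    theta = sgld_step(theta, grad_loss, lr)
    return r * theta / np.linalg.norm(theta)
```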

Different features present in the data can be learned by training a neural network on different time scales simultaneously. By choosing appropriate partitionings of the network parameters into fast and slow parts, I show that our multirate techniques can be used to train deep neural networks for transfer learning applications in vision and natural language processing in half the time, without reducing the generalization performance of the model.
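
A minimal sketch of the multirate idea (the partition and step counts are illustrative, not the scheme from the talk): slow parameters, e.g. pretrained feature layers in a transfer learning setting, move once per outer step, while fast parameters, e.g. a newly attached head, take several smaller inner steps.

```python
def multirate_sgd_step(theta_slow, theta_fast, grad_slow, grad_fast,
                       lr_slow=1e-3, lr_fast=1e-3, inner_steps=4):
    """One multirate step: the slow parameters are updated once, while the fast
    parameters take several inner steps on their own, finer time scale."""
    theta_slow = theta_slow - lr_slow * grad_slow(theta_slow, theta_fast)
    for _ in range(inner_steps):
        theta_fast = theta_fast - (lr_fast / inner_steps) * grad_fast(theta_slow, theta_fast)
    return theta_slow, theta_fast
```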

Mon, 23 Jan 2023
16:00
L6

Sums of arithmetic functions over F_q[T] and non-unitary distributions (Joint junior/senior number theory seminar)

Vivian Kuperberg
(Tel Aviv University)
Abstract

In 2018, Keating, Rodgers, Roditty-Gershon and Rudnick conjectured that the variance of sums of the divisor function in short intervals is described by a certain piecewise polynomial coming from a unitary matrix integral. That is to say, this conjecture ties a straightforward arithmetic problem to random matrix theory. They supported their conjecture by analogous results in the setting of polynomials over a finite field rather than in the integer setting. In this talk, we'll discuss arithmetic problems over F_q[T] and their connections to matrix integrals, focusing on variations on the divisor function problem with symplectic and orthogonal distributions. Joint work with Matilde Lalín.
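
For orientation, the standard definitions in the function-field setting (background, not results of the talk) are:

```latex
% Divisor function of a monic polynomial f in F_q[T]:
d(f) = \#\{(a, b) : a, b \text{ monic},\ ab = f\},
% and the short interval of "radius" q^h around a monic f_0 of degree n:
I(f_0; h) = \{ f \text{ monic} : \deg(f - f_0) \le h \},
\qquad
\#I(f_0; h) = q^{h+1}.
```

The conjecture and its variants concern the variance of the sums of d(f) over f in I(f_0; h), as f_0 ranges over monic polynomials of degree n, expressed in the large-q limit through a matrix integral over the relevant classical group: unitary in the original problem, symplectic or orthogonal in the variants discussed in the talk.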

Wed, 22 Mar 2023

10:00 - 12:00
L6

Gradient flows in metric spaces: overview and recent advances

Dr Antonio Esposito
(University of Oxford)
Further Information

Sessions led by Dr Antonio Esposito will take place on

14 March 2023 10:00 - 12:00 L4

16 March 2023 10:00 - 12:00 L4

21 March 2023 10:00 - 12:00 L6

22 March 2023 10:00 - 12:00 L6

Should you be interested in taking part in the course, please send an email to @email.

Abstract

This course will serve as an introduction to the theory of gradient flows, with an emphasis on recent advances in metric spaces. More precisely, we will start with an overview of gradient flows, from the Euclidean theory to its generalisation to metric spaces, in particular Wasserstein spaces. This includes a short introduction to optimal transport theory, with a focus on the concepts and tools used subsequently. We will then analyse the time-discretisation scheme à la Jordan-Kinderlehrer-Otto (JKO), also known as the minimising movement scheme, and discuss the role of convexity in proving stability, uniqueness, and long-time behaviour for the PDE under study. Finally, we will comment on recent advances, e.g., in the study of PDEs on graphs and/or particle approximations of diffusion equations.
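
For reference, one step of the JKO/minimising movement scheme mentioned above: given a time step tau > 0 and an energy functional E on the Wasserstein space, one iterates

```latex
% One minimising movement step of size tau in (P_2(R^d), W_2):
\rho^{k+1}_{\tau} \in \operatorname*{arg\,min}_{\rho \in \mathcal{P}_2(\mathbb{R}^d)}
  \left\{ \frac{1}{2\tau} W_2^2\bigl(\rho, \rho^{k}_{\tau}\bigr) + E(\rho) \right\},
```

and, under suitable assumptions, the piecewise-constant interpolation of the iterates converges as tau goes to 0 to the gradient flow of E.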

PhD_course_Esposito_1.pdf

Tue, 21 Mar 2023

10:00 - 12:00
L6

Gradient flows in metric spaces: overview and recent advances

Dr Antonio Esposito
(University of Oxford)
Further Information

Sessions led by Dr Antonio Esposito will take place on

14 March 2023 10:00 - 12:00 L4

16 March 2023 10:00 - 12:00 L4

21 March 2023 10:00 - 12:00 L6

22 March 2023 10:00 - 12:00 L6

Should you be interested in taking part in the course, please send an email to @email.

Abstract

This course will serve as an introduction to the theory of gradient flows, with an emphasis on recent advances in metric spaces. More precisely, we will start with an overview of gradient flows, from the Euclidean theory to its generalisation to metric spaces, in particular Wasserstein spaces. This includes a short introduction to optimal transport theory, with a focus on the concepts and tools used subsequently. We will then analyse the time-discretisation scheme à la Jordan-Kinderlehrer-Otto (JKO), also known as the minimising movement scheme, and discuss the role of convexity in proving stability, uniqueness, and long-time behaviour for the PDE under study. Finally, we will comment on recent advances, e.g., in the study of PDEs on graphs and/or particle approximations of diffusion equations.

PhD_course_Esposito_0.pdf

Mon, 20 Feb 2023

14:00 - 15:00
L6

Gradient flows and randomised thresholding: sparse inversion and classification

Jonas Latz
(Heriot Watt University Edinburgh)
Abstract

Sparse inversion and classification problems are ubiquitous in modern data science and imaging. They are often formulated as non-smooth minimisation problems. In sparse inversion, we minimise, e.g., the sum of a data fidelity term and an L1/LASSO regulariser. In classification, we consider, e.g., the sum of a data fidelity term and a non-smooth Ginzburg--Landau energy. Standard (sub)gradient descent methods have been shown to be inefficient when approaching such problems. Splitting techniques are much more useful: here, the target function is partitioned into a sum of two subtarget functions -- each of which can be efficiently optimised. Splitting proceeds by performing optimisation steps alternately with respect to each of the two subtarget functions.
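
As a concrete instance of such a splitting (a standard proximal-gradient/ISTA sketch for the L1-regularised problem, not necessarily the scheme analysed in the talk): the smooth fidelity term is handled by a gradient step and the non-smooth L1 term by its proximal map, the soft-thresholding operator.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal map of t * ||.||_1: shrink every entry towards zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, lr, n_iter=500):
    """Minimise 0.5 * ||A x - b||^2 + lam * ||x||_1 by splitting: a gradient
    step on the smooth fidelity term, then the prox of the L1 term.
    For convergence, lr should be at most 1 / ||A||_2^2."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - lr * A.T @ (A @ x - b), lr * lam)
    return x
```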

In this work, we study splitting from a stochastic continuous-time perspective. Indeed, we define a differential inclusion that follows the negative subdifferential of one of the two subtarget functions at each point in time. The choice of the subtarget function is controlled by a binary continuous-time Markov process. The resulting dynamical system is a stochastic approximation of the underlying subgradient flow. We investigate this stochastic approximation for an L1-regularised sparse inversion flow and for a discrete Allen-Cahn equation minimising a Ginzburg--Landau energy. In both cases, we study the long-time behaviour of the stochastic dynamical system and its ability to approximate the underlying subgradient flow to any accuracy. We illustrate our theoretical findings in a simple sparse estimation problem and also in low- and high-dimensional classification problems.
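
A minimal sketch of the switching mechanism (an explicit Euler discretisation with made-up rates and functions, not the construction from the talk): a binary Markov process decides which subtarget function's (sub)gradient the flow follows at each instant.

```python
import numpy as np

rng = np.random.default_rng(1)

def randomised_splitting_flow(x0, grad_f, grad_g, rate=5.0, dt=1e-3, T=10.0):
    """Euler discretisation of a flow following -grad_f or -grad_g, switching
    at the jump times of a two-state continuous-time Markov process."""
    x, state, t = np.array(x0, dtype=float), 0, 0.0
    while t < T:
        x = x - dt * (grad_f(x) if state == 0 else grad_g(x))
        # With probability ~ rate * dt the Markov process jumps to the other state.
        if rng.random() < rate * dt:
            state = 1 - state
        t += dt
    return x
```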

Thu, 19 Jan 2023

12:00 - 13:00
L6

On the Incompressible Limit for a Tumour Growth Model Incorporating Convective Effects

Markus Schmidtchen
(TU Dresden)
Abstract

In this seminar, we study a tissue growth model with applications to tumour growth. The model is based on the one proposed by Perthame, Quirós, and Vázquez in 2014, but incorporates advective effects caused, for instance, by the presence of nutrients or oxygen, or possibly by self-propulsion. The main result of this work is the incompressible limit of this model, which builds a bridge between the density-based model and a geometric free-boundary problem by passing to a singular limit in the pressure law. The limiting objects are then proven to be unique.
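
For orientation, a schematic form of this model class (my reading of the setup; the notation and the placement of the drift term are illustrative): a density n evolves under a pressure-driven flux, a convective drift v (e.g. from nutrients or self-propulsion), and a pressure-dependent growth term, with a power-law pressure whose exponent is sent to infinity in the incompressible limit.

```latex
% Density n, pressure p = n^m, drift field v, growth rate G(p);
% the incompressible limit sends the exponent m to infinity:
\partial_t n \;=\; \nabla \cdot \bigl( n \nabla p \bigr)
  \;-\; \nabla \cdot \bigl( n\, v \bigr) \;+\; n\, G(p),
\qquad p = n^m .
```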
