Tue, 21 Nov 2023

16:00 - 17:00
L6

Beyond i.i.d. weights: sparse and low-rank deep Neural Networks are also Gaussian Processes

Thiziri Nait Saada
(Mathematical Institute (University of Oxford))
Abstract

The infinitely wide neural network has proven to be a useful and tractable mathematical model that enables the understanding of many phenomena appearing in deep learning. One example is the convergence of random deep networks to Gaussian processes, which enables a rigorous analysis of how the choice of activation function and network weights impacts the training dynamics. In this paper, we extend the seminal proof of Matthews (2018) to a larger class of initial weight distributions (which we call "pseudo i.i.d."), including the established cases of i.i.d. and orthogonal weights, as well as the emerging low-rank and structured sparse settings celebrated for their computational speed-up benefits. We show that fully-connected and convolutional networks initialized with pseudo i.i.d. distributions are all effectively equivalent up to their variance. Using our results, one can identify the Edge of Chaos for a broader class of neural networks and tune them at criticality in order to enhance their training.
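A minimal numerical sketch of the phenomenon the abstract describes (not the authors' code, and only for the simplest case): the output of a wide, randomly initialised network looks Gaussian with essentially the same variance whether the weights are i.i.d. or orthogonal, two of the settings covered by the pseudo-i.i.d. class.

```python
import numpy as np

rng = np.random.default_rng(0)
width = 300       # hidden-layer width; "infinite" in the theory
n_nets = 200      # number of independently initialised networks

x = rng.normal(size=width)   # a fixed input, normalised
x /= np.linalg.norm(x)

def iid_weights(n):
    # standard i.i.d. Gaussian initialisation, variance 1/n
    return rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))

def orthogonal_weights(n):
    # Haar-distributed orthogonal matrix via QR decomposition
    q, r = np.linalg.qr(rng.normal(size=(n, n)))
    return q * np.sign(np.diag(r))

def network_output(weight_fn):
    # one tanh hidden layer, scaled so pre-activations are O(1), then a
    # random linear read-out of a single output unit
    h = np.tanh(weight_fn(width) @ x * np.sqrt(width))
    w_out = rng.normal(scale=1.0 / np.sqrt(width), size=width)
    return w_out @ h

samples_iid = np.array([network_output(iid_weights) for _ in range(n_nets)])
samples_orth = np.array([network_output(orthogonal_weights) for _ in range(n_nets)])

# Both empirical output distributions are close to mean-zero Gaussians
# with matching variance, consistent with "equivalent up to variance".
print(samples_iid.mean(), samples_iid.std())
print(samples_orth.mean(), samples_orth.std())
```

The sparse and low-rank pseudo-i.i.d. cases from the talk would slot in as further `weight_fn` choices; the point of the result is that the same Gaussian-process limit covers them all.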

Tue, 14 Nov 2023
11:00
Lecture Room 4

DPhil Presentations

Sarah-Jean Meyer, Satoshi Hayakawa
(Mathematical Institute (University of Oxford))
Abstract

As part of the internal seminar schedule for Stochastic Analysis this coming term, DPhil students have been invited to present their work to date. Student talks are 20 minutes, including time for questions and answers.


Students presenting are:

Sarah-Jean Meyer, supervisor Massimiliano Gubinelli

Satoshi Hayakawa, supervisor Harald Oberhauser 

Mon, 20 Nov 2023
16:30
L3

Recent developments on evolution PDEs on graphs

Antonio Esposito
(Mathematical Institute (University of Oxford))
Abstract

The seminar concerns the study of evolution equations on graphs, motivated by applications in data science and opinion dynamics. We will discuss graph analogues of the continuum nonlocal-interaction equation and interpret them as gradient flows with respect to a graph Wasserstein distance, using the Benamou–Brenier formulation. The underlying geometry of the problem leads to a Finslerian, rather than Riemannian, gradient-flow structure, since the resulting distance on graphs is actually a quasi-metric. We will address the existence of suitably defined solutions, as well as their asymptotic behaviour as the number of vertices tends to infinity and the graph structure localises. The two limits lead to different dynamics. From a slightly different perspective, by means of a classical fixed-point argument, we can show existence and uniqueness of solutions to a larger class of nonlocal continuity equations on graphs. In this context, we consider general interpolation functions of the mass on the edges, which give rise to a variety of different dynamics. Our analysis reveals structural differences from the more standard Euclidean setting, as some analogous properties depend on the interpolation chosen. The latter study can be extended to equations on co-evolving graphs. The talk is based on works in collaboration with G. Heinze (Augsburg), L. Mikolas (Oxford), F. S. Patacchini (IFP Energies Nouvelles), A. Schlichting (University of Münster), and D. Slepcev (Carnegie Mellon University).
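A hedged toy illustration of a continuity equation on a graph (not the authors' formulation): with an upwind interpolation of the mass on the edges, one of the interpolation choices of the kind discussed above, the discrete dynamics conserve total mass and keep it nonnegative. The graph, potential, and velocity field below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# a symmetric weighted graph on n vertices
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)

# antisymmetric edge "velocity" induced by a potential V on the vertices
V = rng.random(n)
v = V[:, None] - V[None, :]        # v[i, j] = V[i] - V[j]

def rhs(m):
    # upwind interpolation: the flux on edge (i, j) takes the mass from
    # the node the velocity points away from
    flux = W * (np.maximum(v, 0) * m[:, None] - np.maximum(-v, 0) * m[None, :])
    # flux is antisymmetric, so dm_i/dt = -sum_j flux[i, j] conserves mass
    return -flux.sum(axis=1)

m = np.ones(n) / n                 # uniform initial probability measure
dt = 1e-3
for _ in range(5000):
    m = m + dt * rhs(m)            # explicit Euler time stepping

print(m.sum())   # total mass is conserved
print(m.min())   # upwinding preserves nonnegativity for small dt
```

Swapping `np.maximum` upwinding for another interpolation function of `m[i]` and `m[j]` on each edge changes the dynamics, which is precisely the structural sensitivity the abstract points to.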

Tue, 24 Oct 2023
11:00
Lecture Room 4, Mathematical Institute

DPhil Presentations

Akshay Hegde, Julius Villar, Csaba Toth
(Mathematical Institute (University of Oxford))
Abstract

As part of the internal seminar schedule for Stochastic Analysis this coming term, DPhil students have been invited to present their work to date. Student talks are 20 minutes, including time for questions and answers.

Students presenting are:

Akshay Hegde, supervisor Dmitry Belyaev

Julius Villar, supervisor Dmitry Belyaev

Csaba Toth, supervisor Harald Oberhauser 

Tue, 07 Nov 2023

14:30 - 15:00
VC

A Finite-Volume Scheme for Fractional Diffusion on Bounded Domains

Stefano Fronzoni
(Mathematical Institute (University of Oxford))
Abstract

Diffusion is one of the most common phenomena in the natural sciences, and a large part of applied mathematics is concerned with the tools to model it. Studying the different types of diffusion, the mathematical frameworks that describe them, and the numerical methods that simulate them is an appealing challenge with a wide range of applications. The aim of our work is to design a finite-volume numerical scheme for the non-local diffusion given by the fractional Laplacian, and to build numerical solutions for the Lévy–Fokker–Planck equation that involves it. Numerical methods for fractional diffusion have indeed been developed over the last few years, but a large part of the literature focuses on finite element methods; comparatively few results exist for other techniques such as finite volumes.

We propose a new fractional Laplacian for bounded domains, expressed as a conservation law. This new approach is therefore particularly suitable for a finite-volume scheme and also allows us to prescribe no-flux boundary conditions explicitly. We complement our new definition with a well-posedness theory in some cases, and then use our numerical scheme to capture, with a good level of approximation, the action of the fractional Laplacian and its anomalous diffusion effect. The numerical solutions we obtain for the Lévy–Fokker–Planck equation indeed match known analytical predictions, and allow us to explore properties of this equation numerically and to compute stationary states and long-time asymptotics.

Tue, 23 Jan 2024

14:30 - 15:00
L6

Manifold-Free Riemannian Optimization

Boris Shustin
(Mathematical Institute (University of Oxford))
Abstract

Optimization problems constrained to a smooth manifold can be solved via the framework of Riemannian optimization. To that end, a geometrical description of the constraining manifold, e.g., tangent spaces and retractions, together with Riemannian gradients of the cost function, is required. In this talk, we present a novel approach that allows performing approximate Riemannian optimization based on a manifold learning technique, in cases where only a noiseless sample set of the cost function and the manifold's intrinsic dimension are available.
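For context, a minimal sketch of the classical machinery the talk's manifold-free approach replaces (a hypothetical toy problem, not the speaker's method): Riemannian gradient descent on the unit sphere, with an explicit tangent-space projection and a normalisation retraction, minimising a quadratic form whose constrained minimum is the smallest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5)); A = A + A.T    # symmetric cost matrix

# minimise f(x) = x^T A x over the unit sphere
x = rng.normal(size=5); x /= np.linalg.norm(x)
for _ in range(5000):
    egrad = 2 * A @ x                       # Euclidean gradient of f
    rgrad = egrad - (x @ egrad) * x         # project onto the tangent space
    x = x - 0.01 * rgrad                    # descent step in the tangent space
    x /= np.linalg.norm(x)                  # retraction back onto the sphere

lam_min = np.linalg.eigvalsh(A)[0]
print(x @ A @ x, lam_min)   # f(x) approaches the smallest eigenvalue of A
```

The projection and retraction above require knowing the sphere analytically; the talk's point is to approximate such ingredients when only samples of the cost function and the manifold's intrinsic dimension are at hand.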

Mon, 16 Oct 2023
15:30
Lecture Theatre 3, Mathematical Institute, Radcliffe Observatory Quarter, Woodstock Road, OX2 6GG

Non-adversarial training of Neural SDEs with signature kernel scores

Dr Maud Lemercier
(Mathematical Institute (University of Oxford))
Further Information

Please join us from 15:00–15:30 for tea and coffee outside the lecture theatre before the talk.

Abstract

Neural SDEs are continuous-time generative models for sequential data. State-of-the-art performance for irregular time series generation has previously been obtained by training these models adversarially as GANs. However, as is typical for GAN architectures, training is notoriously unstable, often suffers from mode collapse, and requires specialised techniques such as weight clipping and gradient penalty to mitigate these issues. In this talk, I will introduce a novel class of scoring rules on path space based on signature kernels and use them as an objective for training Neural SDEs non-adversarially. The strict properness of such kernel scores and the consistency of the corresponding estimators provide existence and uniqueness guarantees for the minimiser. With this formulation, evaluating the generator-discriminator pair amounts to solving a system of linear path-dependent PDEs, which allows for memory-efficient adjoint-based backpropagation. Moreover, because the proposed kernel scores are well-defined for paths with values in infinite-dimensional spaces of functions, this framework can be easily extended to generate spatiotemporal data. This procedure permits conditioning on a rich variety of market conditions and significantly outperforms alternative ways of training Neural SDEs on a variety of tasks, including the simulation of rough volatility models, the conditional probabilistic forecasting of real-world forex pairs where the conditioning variable is an observed past trajectory, and the mesh-free generation of limit order book dynamics.
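A hedged sketch of the central idea, with a strong simplification: the talk's method evaluates the untruncated signature kernel by solving path-dependent PDEs, whereas below we use an explicit depth-2 truncated signature feature map, just to illustrate how a kernel score on path space ranks generative models of paths non-adversarially. The models and data are hypothetical (Brownian paths with and without drift).

```python
import numpy as np

def sig2(path):
    """Depth-2 truncated signature of a piecewise-linear path, shape (n, d)."""
    dx = np.diff(path, axis=0)
    lvl1 = dx.sum(axis=0)                   # first level: total increment
    csum = np.cumsum(dx, axis=0) - dx       # increments accumulated before each piece
    lvl2 = csum.T @ dx + 0.5 * dx.T @ dx    # iterated integrals, closed form
    return np.concatenate(([1.0], lvl1, lvl2.ravel()))

def kernel_score(feats, fy):
    """Kernel score S(P, y) = E k(X, X')/2 - E k(X, y); the (truncated)
    signature kernel k is linear in the features. Lower is better."""
    m = len(feats)
    g = feats @ feats.T
    exx = (g.sum() - np.trace(g)) / (m * (m - 1))   # unbiased over pairs
    return 0.5 * exx - feats.mean(axis=0) @ fy

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 50)

def path(drift=0.0):
    """Brownian motion with drift, augmented with a time channel."""
    dw = rng.normal(scale=np.sqrt(np.diff(t)))
    w = np.concatenate(([0.0], np.cumsum(dw))) + drift * t
    return np.stack([t, w], axis=1)

true_model = np.array([sig2(path(0.0)) for _ in range(50)])
wrong_model = np.array([sig2(path(2.0)) for _ in range(50)])   # wrong drift
obs = [sig2(path(0.0)) for _ in range(30)]                     # "data" paths

s_true = np.mean([kernel_score(true_model, fy) for fy in obs])
s_wrong = np.mean([kernel_score(wrong_model, fy) for fy in obs])
print(s_true < s_wrong)
```

Strict properness means the expected score is uniquely minimised by the data-generating law, which is why the mismatched model scores worse on average; in the paper this objective is differentiated through the Neural SDE generator.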

Fri, 28 Apr 2023
16:00
L1

Pathways to independent research: fellowships and grants.

Professor Jason Lotay and a panel including ECRs from the North and South Wings and the Department of Statistics
(Mathematical Institute (University of Oxford))
Abstract

Join us for our first Fridays@4 session of Trinity about the different academic routes people take post-PhD, with a particular focus on fellowships and grants. We’ll hear from Jason Lotay about his experiences on both sides of the application process, as well as about the experiences of ECRs in the South Wing, North Wing, and Statistics. Towards the end of the hour we’ll have a Q&A session with the whole panel, where you can ask any questions you have on this topic!

Fri, 27 Jan 2023
15:00
L2

TDA Centre Meeting

Various Speakers
(Mathematical Institute (University of Oxford))
Fri, 20 Jan 2023
16:00
L1

Departmental Colloquium

Professor James Maynard
(Mathematical Institute (University of Oxford))
Further Information

Title: “Prime numbers: Techniques, results and questions”

Abstract

The basic question in prime number theory is to understand the number of primes in some interesting set of integers. Unfortunately, many of the most basic and natural examples are famous open problems that are over 100 years old!

We aim to give an accessible survey of (a selection of) the main results and techniques in prime number theory. In particular, we highlight progress on some of these famous problems, as well as a selection of our favourite problems for future progress.
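A small numerical illustration of the basic question (not taken from the talk): counting primes up to x with a sieve and comparing against the prime number theorem's first-order prediction x / log x.

```python
import numpy as np

def prime_count(x):
    """pi(x), the number of primes up to x, via the sieve of Eratosthenes."""
    sieve = np.ones(x + 1, dtype=bool)
    sieve[:2] = False                       # 0 and 1 are not prime
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = False       # cross off multiples of p
    return int(sieve.sum())

x = 10 ** 6
print(prime_count(x), round(x / np.log(x)))   # 78498 vs 72382
```

The roughly 8% gap at this range is why sharper approximations, like the logarithmic integral, and the error terms they leave behind are central objects in the subject.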
