Mon, 18 Mar 2024 12:30 -
Fri, 22 Mar 2024 13:00
Lecture Room 2, Mathematical Institute

National PDE Network Meeting: Nonlinear PDEs of Mixed Type in Geometry and Mechanics / Joint with the 13th Oxbridge PDE Conference

Abstract

Meeting Theme:      

Analysis of Nonlinear PDEs of Mixed-Type (esp. Elliptic-Hyperbolic and Hyperbolic-Parabolic Mixed PDEs) and Related Topics

Meeting Place:    

Lecture Theatre 2, Mathematical Institute, University of Oxford

For more information and to view the programme, see the meeting webpage.

Registration is now closed.

Mon, 27 May 2024

14:00 - 15:00
Lecture Room 3

Dynamic Sparsity: Routing Information through Neural Pathways

Edoardo Ponti
(University of Edinburgh)
Abstract
Recent advancements in machine learning have caused a shift from traditional sparse modelling, which focuses on static feature selection in neural representations, to a paradigm based on selecting input- or task-dependent pathways within neural networks. 
In fact, the ability to selectively (de)activate portions of neural computation graphs provides several advantages, including conditional computation, efficient parameter scaling, and compositional generalisation. 
 
In this talk, I will explore how sparse subnetworks can be identified dynamically and how parametric routing functions allow for recombining and locally adapting them in Large Language Models.
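As a rough illustration of this routing paradigm, here is a minimal sketch in the spirit of sparse mixture-of-experts layers (not Ponti's own method; the function names, shapes, and toy experts are all illustrative assumptions): a parametric routing function scores the available experts per input and activates only the top-k of them.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def route_top_k(x, W_router, experts, k=2):
    """Sparsely route each input row to its top-k experts.

    x        : (batch, d) input activations
    W_router : (d, n_experts) parameters of the routing function
    experts  : list of callables, each mapping (m, d) -> (m, d)
    k        : number of experts activated per input
    """
    scores = x @ W_router                       # (batch, n_experts) routing logits
    top_k = np.argsort(-scores, axis=1)[:, :k]  # indices of the k largest logits
    out = np.zeros_like(x)
    for i, row in enumerate(x):
        idx = top_k[i]
        gates = softmax(scores[i, idx])         # renormalise over selected experts
        # Only the selected sub-networks are evaluated: conditional computation.
        for j, g in zip(idx, gates):
            out[i] += g * experts[j](row[None, :])[0]
    return out

# Toy usage: 4 random linear "experts", 2 activated per input.
rng = np.random.default_rng(0)
d, n_experts = 8, 4
experts = [lambda z, W=rng.standard_normal((d, d)) / np.sqrt(d): z @ W
           for _ in range(n_experts)]
x = rng.standard_normal((5, d))
W_router = rng.standard_normal((d, n_experts))
print(route_top_k(x, W_router, experts, k=2).shape)  # (5, 8)
```

Because only k of the experts run per input, compute grows with k rather than with the total parameter count, which is the source of the efficient parameter scaling mentioned above.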


 

Mon, 20 May 2024

14:00 - 15:00
Lecture Room 3

Low rank approximation for faster optimization

Madeleine Udell
(Stanford University, USA)
Abstract

Low rank structure is pervasive in real-world datasets.

This talk shows how to accelerate the solution of fundamental computational problems, including eigenvalue decomposition, linear system solves, composite convex optimization, and stochastic optimization (including deep learning), by exploiting this low rank structure.

We present a simple method based on randomized numerical linear algebra for efficiently computing approximate top eigendecompositions, which can be used to replace large matrices (such as Hessians and constraint matrices) with low rank surrogates that are faster to apply and invert.
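As one concrete instance of this idea, here is a minimal sketch of the textbook stabilised Nyström approximation in plain numpy (the exact routine behind the talk may differ): a single randomized sketch of a psd matrix yields an approximate top eigendecomposition.

```python
import numpy as np

def nystrom_eig(A, k, seed=0):
    """Rank-k approximate eigendecomposition of a symmetric psd matrix A,
    A ~ U diag(lam) U^T, from a single randomized sketch Y = A @ Omega."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Omega, _ = np.linalg.qr(rng.standard_normal((n, k)))  # orthonormal test matrix
    Y = A @ Omega                                          # the only pass over A
    nu = np.finfo(float).eps * np.linalg.norm(Y)           # tiny shift for stability
    Y_nu = Y + nu * Omega
    L = np.linalg.cholesky(Omega.T @ Y_nu)                 # k x k, cheap
    B = np.linalg.solve(L, Y_nu.T).T                       # B = Y_nu L^{-T}
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    lam = np.maximum(s**2 - nu, 0.0)                       # remove the shift
    return U, lam

# Toy check on a psd matrix with fast-decaying spectrum.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
A = (Q * (2.0 ** -np.arange(200))) @ Q.T                   # psd, rapid eigendecay
U, lam = nystrom_eig(A, k=20)
print(np.linalg.norm(A - (U * lam) @ U.T))                 # small residual
```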

The resulting solvers for linear systems (NystromPCG), composite convex optimization (NysADMM), and stochastic optimization (SketchySGD and PROMISE) come with strong theoretical guarantees and numerical support, outperforming state-of-the-art methods in speed and robustness to hyperparameters.
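Continuing the toy example above, here is a sketch of how such a surrogate can precondition conjugate gradients for a regularised solve, in the spirit of NystromPCG (the preconditioner in the actual papers may differ in details; nystrom_precond and the scaling choice here are illustrative): deflate the captured eigenspace and act as the identity on its complement.

```python
from scipy.sparse.linalg import LinearOperator, cg

def nystrom_precond(U, lam, mu):
    """Apply an approximate inverse of A + mu*I built from the rank-k
    eigensketch: rescale the captured eigenspace, leave the rest alone."""
    tail = lam[-1] + mu                       # smallest captured eigenvalue + shift
    def apply(v):
        v = np.ravel(v)
        w = U.T @ v                           # coordinates in the captured space
        return U @ (tail * w / (lam + mu)) + (v - U @ w)
    n = U.shape[0]
    return LinearOperator((n, n), matvec=apply)

# Continuing the toy example above: solve (A + mu*I) x = b with the
# low-rank preconditioner.
mu = 1e-6
M = nystrom_precond(U, lam, mu)
b = rng.standard_normal(A.shape[0])
x, info = cg(A + mu * np.eye(A.shape[0]), b, M=M)
print(info)                                   # 0 indicates convergence
```

With this choice the preconditioned matrix has eigenvalue lam[-1] + mu on the captured eigenspace, so the conditioning is governed by the (small) residual spectrum rather than the full one.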

Mon, 10 Jun 2024

14:00 - 15:00
Lecture Room 3

Randomly pivoted Cholesky

Prof. Joel Tropp
(California Institute of Technology, USA)
Abstract
André-Louis Cholesky entered École Polytechnique as a student in 1895. Before 1910, during his work as a surveyor for the French army, Cholesky invented a technique for solving positive-definite systems of linear equations. Cholesky's method can also be used to approximate a positive-semidefinite (psd) matrix using a small number of columns, called "pivots". A longstanding question is how to choose the pivot columns to achieve the best possible approximation.

This talk describes a simple but powerful randomized procedure for adaptively picking the pivot columns. This algorithm, randomly pivoted Cholesky (RPC), provably achieves near-optimal approximation guarantees. Moreover, in experiments, RPC matches or improves on the performance of alternative algorithms for low-rank psd approximation.
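The procedure itself fits in a few lines. The following is a plain-numpy sketch of randomly pivoted Cholesky, following the description above and the paper (arXiv:2207.06503): each pivot is sampled with probability proportional to the current residual diagonal. Variable names and the toy kernel matrix are illustrative.

```python
import numpy as np

def rp_cholesky(A, k, seed=0):
    """Randomly pivoted Cholesky: rank-k psd approximation A ~ F @ F.T.
    Pivots are drawn proportionally to the residual diagonal, so the
    sampling adapts to the matrix as the approximation improves."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    F = np.zeros((n, k))
    d = np.array(np.diag(A), dtype=float)      # residual diagonal
    for i in range(k):
        s = rng.choice(n, p=d / d.sum())       # adaptive random pivot
        g = A[:, s] - F[:, :i] @ F[s, :i]      # residual column at the pivot
        F[:, i] = g / np.sqrt(g[s])
        d = np.maximum(d - F[:, i] ** 2, 0.0)  # update residual diagonal
    return F

# Toy check: kernel-style psd matrix from random points.
rng = np.random.default_rng(2)
X = rng.standard_normal((300, 5))
A = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # RBF Gram matrix
F = rp_cholesky(A, k=40)
print(np.linalg.norm(A - F @ F.T) / np.linalg.norm(A))       # relative error
```

Note that the algorithm only ever reads k columns and the diagonal of A, which is what makes it attractive for large kernel matrices.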

Cholesky died in 1918 from wounds suffered in battle. In 1924, Cholesky's colleague, Commandant Benoit, published his manuscript. One century later, a modern adaptation of Cholesky's method still yields state-of-the-art performance for problems in scientific machine learning.
 
Joint work with Yifan Chen, Ethan Epperly, and Rob Webber. Available at arXiv:2207.06503.


 

Mon, 13 May 2024

14:00 - 15:00
Lecture Room 3

Compression of Graphical Data

Mihai Badiu
(Department of Engineering Science, University of Oxford)
Abstract

Data that have an intrinsic network structure can be found in various contexts, including social networks, biological systems (e.g., protein-protein interactions, neuronal networks), information networks (computer networks, wireless sensor networks), economic networks, etc. As the amount of graphical data being generated grows ever larger, compressing such data for storage, transmission, or efficient processing has become a topic of interest. In this talk, I will give an information-theoretic perspective on graph compression. 

The focus will be on compression limits and their scaling with the size of the graph. For lossless compression, the Shannon entropy gives the fundamental lower limit on the expected length of any compressed representation. I will discuss the entropy of some common random graph models, with a particular emphasis on our results on the random geometric graph model. 
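To make the lossless limit concrete in the simplest case: in the Erdős–Rényi model G(n, p), each of the C(n, 2) potential edges is an independent Bernoulli(p) variable, so the entropy is C(n, 2)·h(p) bits, with h the binary entropy function. A quick illustrative computation (this standard example is not taken from the talk):

```python
import math

def er_entropy_bits(n, p):
    """Shannon entropy of an Erdos-Renyi graph G(n, p) in bits: each of the
    C(n, 2) possible edges is an independent Bernoulli(p) variable."""
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # binary entropy h(p)
    return math.comb(n, 2) * h

n = 1000
print(math.comb(n, 2))            # 499500 bits: naive adjacency-matrix encoding
print(er_entropy_bits(n, 0.5))    # 499500.0: p = 1/2 is incompressible
print(er_entropy_bits(n, 0.01))   # ~40356: a sparse graph compresses ~12x
```

At p = 1/2 the graph is incompressible, while sparse graphs admit codes far shorter than the naive adjacency-matrix encoding.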

Then, I will talk about the problem of compressing a graph with side information, i.e., when an additional correlated graph is available at the decoder. Turning to lossy compression, where one accepts a certain amount of distortion between the original and reconstructed graphs, I will present the theoretical limits we obtained for the Erdős–Rényi and stochastic block models using rate-distortion theory.
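For reference, such lossy limits are phrased through the standard rate-distortion function: for a graph source G, reconstruction Ĝ, and a distortion measure d, schematically

```latex
R(D) = \min_{\,p(\hat g \mid g)\;:\;\mathbb{E}[d(G,\hat G)] \le D\,} I(G;\hat G),
```

the least rate (bits per graph) achievable at expected distortion D.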

Tue, 20 Feb 2024
11:00
Lecture Room 5

The flow equation approach to singular SPDEs

Massimiliano Gubinelli
(Mathematical Institute)
Abstract

I will give an overview of a recent method introduced by P. Duch to solve some subcritical singular SPDEs, in particular the stochastic quantisation equation for scalar fields. 
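For orientation (the precise setting of the talk may differ), the stochastic quantisation equation for a scalar field is, schematically, the dynamical φ⁴ equation

```latex
\partial_t \varphi = \Delta \varphi - \varphi^3 + \xi,
```

with ξ space-time white noise; this is singular but subcritical in spatial dimensions d < 4, which is the regime the abstract refers to.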

Mon, 29 Apr 2024
15:30
Lecture Room 3

Sharp interface limit of 1D stochastic Allen-Cahn equation in full small noise regime

Prof. Weijun Xu
(Beijing International Center for Mathematical Research)
Abstract

We consider the sharp interface limit problem for the 1D stochastic Allen-Cahn equation, and extend a classical result of Funaki to the full small noise regime. One interesting point is that the notion of "small noise" turns out to depend on the topology one uses. The main new idea in the proof is the construction of a series of functional correctors, designed to recursively cancel out potential divergences. At a technical level, in order to show that these correctors are well behaved, we also develop a systematic decomposition of the functional derivatives of the deterministic Allen-Cahn flow to all orders, which might be of independent interest.
Based on a joint work with Wenhao Zhao (EPFL) and Shuhan Zhou (PKU).
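For orientation, the equation in question is, schematically, the 1D Allen-Cahn equation with a small-noise perturbation (the precise noise scaling is the subject of the talk and is left schematic here):

```latex
\partial_t u^{\varepsilon} = \partial_x^2 u^{\varepsilon} + \varepsilon^{-2}\big(u^{\varepsilon} - (u^{\varepsilon})^3\big) + \sigma_{\varepsilon}\,\xi,
```

where ξ is a space-time noise; as ε → 0, solutions approach the pure phases ±1 separated by a sharply defined moving interface.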
