Thu, 04 Apr 2024

16:00 - 17:00
Virtual

Differential Equation-inspired Deep Learning for Node Classification and Spatiotemporal Forecasting

Noseong Park
Abstract

Scientific knowledge, written in the form of differential equations, plays a vital role in various deep learning fields. In this talk, I will present a graph neural network (GNN) design based on reaction-diffusion equations, which addresses the notorious oversmoothing problem of GNNs. Since the self-attention of Transformers can also be viewed as a special case of graph processing, I will present how we can enhance Transformers in a similar way. I will also introduce a spatiotemporal forecasting model based on neural controlled differential equations (NCDEs). NCDEs were designed to process irregular time series in a continuous manner; for spatiotemporal processing, they need to be combined with a spatial processing module, i.e., a GNN. I will show how this can be done.
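To make the reaction-diffusion idea concrete, here is a minimal PyTorch sketch (not the speaker's implementation) of one explicit-Euler step of such a GNN layer: the diffusion term smooths features over neighbours, while a learned reaction term counteracts oversmoothing. The normalised adjacency A_hat, the step size dt, and all names are illustrative assumptions.

import torch
import torch.nn as nn

class ReactionDiffusionLayer(nn.Module):
    """One Euler step of dx/dt = alpha * (A_hat - I) x + beta * f(x)."""
    def __init__(self, dim, dt=0.1):
        super().__init__()
        self.reaction = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())  # learned reaction f
        self.alpha = nn.Parameter(torch.tensor(1.0))  # diffusion strength
        self.beta = nn.Parameter(torch.tensor(1.0))   # reaction strength
        self.dt = dt

    def forward(self, x, A_hat):
        # x: (n, dim) node features; A_hat: (n, n) normalised adjacency
        diffusion = A_hat @ x - x        # smooths features (alone, this oversmooths)
        reaction = self.reaction(x)      # pushes node features apart again
        return x + self.dt * (self.alpha * diffusion + self.beta * reaction)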

Thu, 21 Mar 2024

16:00 - 17:00
Virtual

Data-driven surrogate modelling for astrophysical simulations: from stellar winds to supernovae

Jeremy Yates and Frederik De Ceuster
(University College London)
Abstract

The feedback loop between simulations and observations is the driving force behind almost all discoveries in astronomy. However, as technological innovations allow us to create ever more complex simulations and make ever more detailed observations, it becomes increasingly difficult to combine the two: since we cannot do controlled experiments, we need to simulate whatever we can observe. This requires efficient simulation pipelines, including (general-relativistic-)(magneto-)hydrodynamics, particle physics, chemistry, and radiation transport. In this talk, we explore the challenges associated with these modelling efforts and discuss how adopting data-driven surrogate modelling, with proper control over model uncertainties, promises to unlock a gold mine of future discoveries. For instance, the application to stellar wind simulations can teach us about the origin of chemistry in our Universe and the building blocks for life, while supernova simulations can reveal exotic states of matter and elucidate the formation of black holes.
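As a hedged illustration of what a data-driven surrogate with uncertainty control could look like in practice (not the speakers' pipeline), the sketch below fits a Gaussian-process emulator in place of an expensive simulator; the inputs, outputs, and toy data are purely illustrative assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy stand-ins: X holds simulation input parameters (e.g. mass-loss rate,
# wind velocity), y a scalar simulation output we wish to emulate.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = np.sin(6 * X[:, 0]) + X[:, 1] ** 2

# The GP surrogate replaces expensive simulator calls and, crucially,
# reports a posterior standard deviation alongside each prediction.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)
mean, std = gp.predict(rng.uniform(0, 1, size=(5, 2)), return_std=True)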

Mon, 18 Mar 2024 12:30 -
Fri, 22 Mar 2024 13:00
Lecture Room 2, Mathematical Institute

National PDE Network Meeting: Nonlinear PDEs of Mixed Type in Geometry and Mechanics / Joint with the 13th Oxbridge PDE Conference

Abstract

Meeting Theme:      

Analysis of Nonlinear PDEs of Mixed-Type (esp. Elliptic-Hyperbolic and Hyperbolic-Parabolic Mixed PDEs) and Related Topics

Meeting Place:    

Lecture Theatre 2, Mathematical Institute, University of Oxford


Registration is now closed.

Mon, 27 May 2024

14:00 - 15:00
Lecture Room 3

Dynamic Sparsity: Routing Information through Neural Pathways

Edoardo Ponti
(University of Edinburgh)
Abstract
Recent advancements in machine learning have caused a shift from traditional sparse modelling, which focuses on static feature selection in neural representations, to a paradigm based on selecting input- or task-dependent pathways within neural networks.
In fact, the ability to selectively (de)activate portions of neural computation graphs provides several advantages, including conditional computation, efficient parameter scaling, and compositional generalisation. 
 
In this talk, I will explore how sparse subnetworks can be identified dynamically and how parametric routing functions allow for recombining and locally adapting them in Large Language Models.
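As a concrete, hedged illustration of such routing (a generic top-k mixture-of-experts sketch, not the speaker's method), each token below activates only k of n expert subnetworks, so most parameters stay dormant on any given input; all names and sizes are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKRouter(nn.Module):
    """Input-dependent routing: a gate scores experts per token and only
    the top-k selected subnetworks are evaluated and recombined."""
    def __init__(self, dim, n_experts=8, k=2):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                        # x: (tokens, dim)
        weights, idx = self.gate(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalise over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():                   # run each expert only on its tokens
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out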


 

On Saturday 17th February, the Mathematrix and Mirzakhani societies held their inaugural joint conference, with over 100 attendees from across the UK. The theme of the day, ‘Beyond the Pipeline’, focused on the issues behind the metaphor of the leaky pipeline and on ways to prevent women and other gender minorities from leaving mathematics.

Mon, 20 May 2024

14:00 - 15:00
Lecture Room 3

Low rank approximation for faster optimization

Madeleine Udell
(Stanford University, USA)
Abstract

Low rank structure is pervasive in real-world datasets.

This talk shows how to accelerate the solution of fundamental computational problems, including eigenvalue decomposition, linear system solves, composite convex optimization, and stochastic optimization (including deep learning), by exploiting this low rank structure.

We present a simple method based on randomized numerical linear algebra for efficiently computing approximate top eigendecompositions, which can be used to replace large matrices (such as Hessians and constraint matrices) with low-rank surrogates that are faster to apply and invert.

The resulting solvers for linear systems (NystromPCG), composite convex optimization (NysADMM), and stochastic optimization (SketchySGD and PROMISE) come with strong theoretical and numerical support, outperforming state-of-the-art methods in speed and robustness to hyperparameters.
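As a rough sketch of the randomised linear algebra behind such solvers (a textbook single-pass Nyström approximation, not the authors' NystromPCG code), the following computes a rank-r factor L with A ≈ L Lᵀ from one Gaussian sketch; the names and the crude stabilisation are assumptions.

import numpy as np

def randomized_nystrom(A, rank, rng=np.random.default_rng(0)):
    # Nystrom approximation of a symmetric psd A: A ≈ (A Ω)(Ωᵀ A Ω)⁺(A Ω)ᵀ.
    n = A.shape[0]
    Omega = rng.standard_normal((n, rank))   # random test matrix
    Y = A @ Omega                            # sketch of the range of A
    core = Omega.T @ Y
    w, V = np.linalg.eigh(core)              # pseudo-invert the small core
    w = np.maximum(w, 1e-12 * w.max())       # crude guard against round-off
    L = Y @ (V / np.sqrt(w))                 # now A ≈ L @ L.T
    return L

# A low-rank surrogate L @ L.T for a large Hessian or constraint matrix is
# far cheaper to apply and (via the Woodbury identity) to invert than A itself.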

Mon, 10 Jun 2024

14:00 - 15:00
Lecture Room 3

Randomly pivoted Cholesky

Prof. Joel Tropp
(California Institute of Technology, USA)
Abstract
André-Louis Cholesky entered École Polytechnique as a student in 1895. Before 1910, during his work as a surveyor for the French army, Cholesky invented a technique for solving positive-definite systems of linear equations. Cholesky's method can also be used to approximate a positive-semidefinite (psd) matrix using a small number of columns, called "pivots". A longstanding question is how to choose the pivot columns to achieve the best possible approximation.

This talk describes a simple but powerful randomized procedure for adaptively picking the pivot columns. This algorithm, randomly pivoted Cholesky (RPC), provably achieves near-optimal approximation guarantees. Moreover, in experiments, RPC matches or improves on the performance of alternative algorithms for low-rank psd approximation.
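Following the paper's description (arXiv:2207.06503), here is a compact Python transcription of the idea: sample each pivot with probability proportional to the diagonal of the current residual, then update a partial Cholesky factor. This is an illustrative sketch, not the authors' reference code.

import numpy as np

def rp_cholesky(A, k, rng=np.random.default_rng(0)):
    # Returns F (n x k) with A ≈ F @ F.T for symmetric psd A.
    n = A.shape[0]
    F = np.zeros((n, k))
    d = np.array(np.diag(A), dtype=float)         # diagonal of the residual
    for i in range(k):
        s = rng.choice(n, p=d / d.sum())          # pivot sampled ∝ residual diagonal
        g = A[:, s] - F[:, :i] @ F[s, :i]         # residual column at the pivot
        F[:, i] = g / np.sqrt(g[s])               # g[s] = d[s] > 0 by construction
        d = np.clip(d - F[:, i] ** 2, 0.0, None)  # update diagonal, guard round-off
    return F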

Cholesky died in 1918 from wounds suffered in battle. In 1924, Cholesky's colleague, Commandant Benoit, published his manuscript. One century later, a modern adaptation of Cholesky's method still yields state-of-the-art performance for problems in scientific machine learning.
 
Joint work with Yifan Chen, Ethan Epperly, and Rob Webber. Available at arXiv:2207.06503.


 

Mon, 13 May 2024

14:00 - 15:00
Lecture Room 3

Compression of Graphical Data

Mihai Badiu
(Department of Engineering Science, University of Oxford)
Abstract

Data that have an intrinsic network structure can be found in various contexts, including social networks, biological systems (e.g., protein-protein interactions, neuronal networks), information networks (computer networks, wireless sensor networks), economic networks, etc. As the amount of graphical data being generated grows ever larger, compressing such data for storage, transmission, or efficient processing has become a topic of interest. In this talk, I will give an information-theoretic perspective on graph compression.

The focus will be on compression limits and their scaling with the size of the graph. For lossless compression, the Shannon entropy gives the fundamental lower limit on the expected length of any compressed representation. I will discuss the entropy of some common random graph models, with a particular emphasis on our results on the random geometric graph model. 

Then, I will talk about the problem of compressing a graph with side information, i.e., when an additional correlated graph is available at the decoder. Turning to lossy compression, where one accepts a certain amount of distortion between the original and reconstructed graphs, I will present theoretical limits to lossy compression that we obtained for the Erdős–Rényi and stochastic block models by using rate-distortion theory.
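As a concrete instance of the lossless limit: in the Erdős–Rényi model G(n, p), the C(n, 2) potential edges are independent Bernoulli(p) variables, so the Shannon entropy is simply C(n, 2)·h(p) bits, with h the binary entropy function. A quick illustrative check (the model choice and numbers are mine, not the talk's):

from math import comb, log2

def binary_entropy(p):
    return -p * log2(p) - (1 - p) * log2(1 - p)  # h(p) in bits, 0 < p < 1

def er_entropy_bits(n, p):
    # One independent Bernoulli(p) bit per potential edge.
    return comb(n, 2) * binary_entropy(p)

# er_entropy_bits(1000, 0.01) ≈ 40,357 bits, versus the C(1000, 2) = 499,500
# bits needed to store the adjacency matrix uncompressed.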
