Mon, 10 Jun 2024

14:00 - 15:00
Lecture Room 3

Randomly pivoted Cholesky

Prof. Joel Tropp
(California Institute of Technology, USA)
Abstract
André-Louis Cholesky entered École Polytechnique as a student in 1895. Before 1910, during his work as a surveyor for the French army, Cholesky invented a technique for solving positive-definite systems of linear equations. Cholesky's method can also be used to approximate a positive-semidefinite (psd) matrix using a small number of columns, called "pivots". A longstanding question is how to choose the pivot columns to achieve the best possible approximation.

This talk describes a simple but powerful randomized procedure for adaptively picking the pivot columns. This algorithm, randomly pivoted Cholesky (RPC), provably achieves near-optimal approximation guarantees. Moreover, in experiments, RPC matches or improves on the performance of alternative algorithms for low-rank psd approximation.
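The adaptive sampling rule at the heart of RPC is short enough to sketch. The following is a minimal NumPy sketch based on the public description of the algorithm (the function name and interface are illustrative, not the authors' reference implementation): at each step, a pivot is drawn with probability proportional to the current residual diagonal, and the approximation factor is updated by one Cholesky step.

```python
import numpy as np

def rp_cholesky(A, k, seed=None):
    """Rank-k psd approximation A ≈ F @ F.T via randomly pivoted Cholesky."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    F = np.zeros((n, k))
    d = np.array(np.diag(A), dtype=float)  # residual diagonal
    for i in range(k):
        # Adaptive rule: sample the pivot with probability
        # proportional to the residual diagonal entries.
        p = rng.choice(n, p=d / d.sum())
        # Residual column at the chosen pivot, then one Cholesky step.
        g = A[:, p] - F[:, :i] @ F[p, :i]
        F[:, i] = g / np.sqrt(g[p])
        d = np.maximum(d - F[:, i] ** 2, 0.0)  # clip round-off negatives
    return F
```

Once a pivot is chosen, its residual diagonal entry drops to zero, so it is (almost surely) never chosen again; large residual entries are preferentially sampled, which is what drives the near-optimal approximation guarantees.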

Cholesky died in 1918 from wounds suffered in battle. In 1924, Cholesky's colleague, Commandant Benoit, published his manuscript. One century later, a modern adaptation of Cholesky's method still yields state-of-the-art performance for problems in scientific machine learning.
 
Joint work with Yifan Chen, Ethan Epperly, and Rob Webber. Available at arXiv:2207.06503.


 

Mon, 13 May 2024

14:00 - 15:00
Lecture Room 3

Compression of Graphical Data

Mihai Badiu
(Department of Engineering Science, University of Oxford)
Abstract

Data that have an intrinsic network structure can be found in various contexts, including social networks, biological systems (e.g., protein-protein interactions, neuronal networks), information networks (computer networks, wireless sensor networks), and economic networks. As the amount of graphical data being generated grows ever larger, compressing such data for storage, transmission, or efficient processing has become a topic of interest. In this talk, I will give an information-theoretic perspective on graph compression.

The focus will be on compression limits and their scaling with the size of the graph. For lossless compression, the Shannon entropy gives the fundamental lower limit on the expected length of any compressed representation. I will discuss the entropy of some common random graph models, with a particular emphasis on our results on the random geometric graph model. 
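As a concrete illustration of the labelled case (my own sketch, not taken from the talk): in the Erdős–Rényi model G(n, p), the C(n, 2) potential edges are independent Bernoulli(p) variables, so the Shannon entropy, and hence the lossless compression limit in bits, is C(n, 2) times the binary entropy h(p).

```python
import math

def binary_entropy(p):
    # h(p) = -p*log2(p) - (1-p)*log2(1-p), in bits per edge slot
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def erdos_renyi_entropy_bits(n, p):
    # Labelled G(n, p): the C(n, 2) potential edges are independent
    # Bernoulli(p), so the graph entropy is simply C(n, 2) * h(p).
    m = n * (n - 1) // 2
    return m * binary_entropy(p)
```

For example, G(4, 0.5) has 6 potential edges at 1 bit each, so no lossless code can use fewer than 6 bits on average. Compressing the graph structure only (up to isomorphism) can do better than this labelled bound.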

Then, I will talk about the problem of compressing a graph with side information, i.e., when an additional correlated graph is available at the decoder. Turning to lossy compression, where one accepts a certain amount of distortion between the original and reconstructed graphs, I will present theoretical limits to lossy compression that we obtained for the Erdős–Rényi and stochastic block models by using rate-distortion theory.

Tue, 20 Feb 2024
11:00
Lecture room 5

The flow equation approach to singular SPDEs

Massimiliano Gubinelli
(Mathematical Institute)
Abstract

I will give an overview of a recent method introduced by P. Duch to solve some subcritical singular SPDEs, in particular the stochastic quantisation equation for scalar fields. 

Mon, 29 Apr 2024
15:30
Lecture Room 3

Sharp interface limit of 1D stochastic Allen-Cahn equation in full small noise regime

Prof. Weijun Xu
(Beijing International Center for Mathematical Research)
Abstract

We consider the sharp interface limit problem for the 1D stochastic Allen-Cahn equation, and extend a classic result by Funaki to the full small noise regime. One interesting point is that the notion of "small noise" turns out to depend on the topology one uses. The main new idea in the proof is the construction of a series of functional correctors, designed to recursively cancel out potential divergences. At a technical level, in order to show that these correctors are well behaved, we also develop a systematic decomposition of the functional derivatives of the deterministic Allen-Cahn flow to all orders, which might be of independent interest.
Based on a joint work with Wenhao Zhao (EPFL) and Shuhan Zhou (PKU).
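For readers unfamiliar with the setting, the 1D stochastic Allen-Cahn equation with a bistable nonlinearity is commonly written in the following schematic form (my own sketch; the precise interface and noise scalings studied in the talk are not specified in the abstract):

```latex
\partial_t u^{\varepsilon} \;=\; \partial_x^2 u^{\varepsilon}
\;+\; \frac{1}{\varepsilon^{2}}\bigl(u^{\varepsilon} - (u^{\varepsilon})^{3}\bigr)
\;+\; \sigma(\varepsilon)\,\dot{W},
```

where $\dot{W}$ is a space-time noise and $\sigma(\varepsilon) \to 0$ encodes the small-noise regime. The sharp interface limit describes how, as $\varepsilon \to 0$, the solution develops transitions between the stable states $\pm 1$ whose locations move according to an effective interface dynamics.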

Still from the lecture with Sarah

Which of the 93 student lectures on our YouTube Channel is most watched? Perhaps the 'Introduction to Mathematics' or a spot of 'Linear Algebra'? Or possibly 'Probability'? 

Well, they're all popular, but the most watched lecture on the channel is 'Introductory Calculus'. YouTubers (and the algorithm) love it.

So we're showing 4 lectures from its follow-up, 'Multivariable Calculus', starring Sarah Waters. It's a first-year lecture course taken in the second term of the year.

 
