Wed, 14 Jan 2026

14:00 - 15:00
Lecture Room 3

Deep Learning is Not So Mysterious or Different

Andrew Gordon Wilson
Abstract

Deep neural networks are often seen as different from other model classes by defying conventional notions of generalization. Popular examples of anomalous generalization behaviour include benign overfitting, double descent, and the success of overparametrization. We argue that these phenomena are not unique to neural networks, or particularly mysterious. Moreover, this generalization behaviour can be intuitively understood, and rigorously characterized, using long-standing generalization frameworks such as PAC-Bayes and countable hypothesis bounds. We present soft inductive biases as a key unifying principle in explaining these phenomena: rather than restricting the hypothesis space to avoid overfitting, embrace a flexible hypothesis space with a soft preference for simpler solutions that are consistent with the data. This principle can be encoded in many model classes, and thus deep learning is not as mysterious or different from other model classes as it might seem. However, we also highlight how deep learning is relatively distinct in other ways, such as its capacity for representation learning, phenomena such as mode connectivity, and its relative universality.
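
For concreteness, one long-standing result of the kind referenced above is the countable hypothesis (Occam) bound: for a countable hypothesis class \(\mathcal{H}\), prior weights \(P(h)\) summing to at most one, and a loss bounded in \([0,1]\), with probability at least \(1-\delta\) over \(n\) i.i.d. samples,

\[
R(h) \;\le\; \widehat{R}(h) \;+\; \sqrt{\frac{\ln\!\big(1/P(h)\big) + \ln\!\big(1/\delta\big)}{2n}}
\qquad \text{simultaneously for all } h \in \mathcal{H}.
\]

A very flexible hypothesis space therefore carries no penalty for hypotheses with large prior mass \(P(h)\): the prior encodes a soft preference for simpler solutions rather than a hard restriction of the space.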


Bio: Andrew Gordon Wilson is a Professor at the Courant Institute of Mathematical Sciences and Center for Data Science at New York
University. He is interested in developing a prescriptive foundation for building intelligent systems. His work includes loss landscapes,
optimization, Bayesian model selection, equivariances, generalization theory, and scientific applications. His website is
https://cims.nyu.edu/~andrewgw.

Tue, 25 Nov 2025
15:00
L6

Non-Definability of Free Independence

William Boulanger, Emma Harvey, Yizhi Li
(Oxford University)
Abstract
Definability of a property, in the context of operator algebras, can be thought of as invariance under ultraproducts. William Boulanger, Emma Harvey, and Yizhi Li will show that free independence of elements, a concept from Voiculescu's free probability theory, does not lift from ultrapowers, and is thus not definable, either over C*-probability spaces or tracial von Neumann algebras. This fits into the general interest in lifting n-independent operators.
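
For reference, Voiculescu's definition: in a noncommutative probability space \((A, \tau)\), unital subalgebras \(A_1, \dots, A_n \subseteq A\) are freely independent if

\[
\tau(a_1 a_2 \cdots a_k) = 0
\]

whenever \(a_j \in A_{i(j)}\) with \(\tau(a_j) = 0\) for every \(j\), and consecutive indices differ, \(i(1) \neq i(2),\ \dots,\ i(k-1) \neq i(k)\). Elements are freely independent when the unital subalgebras they generate are.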
 
This talk comes from a summer research project supervised by J. Pi and J. Curda.
Wed, 19 Nov 2025

16:00 - 17:00
L6

QI groups and QI rigidity

Paula Heim
(Max Planck Institute for Mathematics in the Sciences, Leipzig)
Abstract
When studying a metric space, it can be interesting to consider the group of maps preserving its large-scale geometry. These maps are called quasi-isometries, and the associated group is called the QI group. Determining the QI group of a metric space is, in general, a hard problem. Few QI groups are known explicitly, and most of these results arise from a phenomenon called QI rigidity, which essentially says that QI(X) = Isom(X). In this talk we will explore these concepts and give a partial answer to the question of which groups can arise as QI groups of metric spaces. This talk is based on joint work with Joe MacManus and Lawk Mineh.
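
For reference, a map \(f \colon X \to Y\) between metric spaces is a \((\lambda, C)\)-quasi-isometry, with \(\lambda \ge 1\) and \(C \ge 0\), if

\[
\tfrac{1}{\lambda}\, d_X(x, x') - C \;\le\; d_Y\big(f(x), f(x')\big) \;\le\; \lambda\, d_X(x, x') + C
\]

for all \(x, x' \in X\), and every point of \(Y\) lies within distance \(C\) of the image \(f(X)\). The group \(\mathrm{QI}(X)\) consists of quasi-isometries \(X \to X\) modulo bounded distance; passing to this quotient is what makes composition a group operation, with quasi-inverses providing inverses.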

 
Thu, 26 Feb 2026
16:00
Lecture Room 4

TBA

Ana Caraiani
(Imperial College London)