Please note that the list below only shows forthcoming events, which may not include regular events that have not yet been entered for the forthcoming term. Please see the past events page for a list of all seminar series that the department has on offer.




Wed, 14 Jan 2026

14:00 - 15:00
Lecture Room 3

Deep Learning is Not So Mysterious or Different

Andrew Gordon Wilson
Abstract

Deep neural networks are often seen as different from other model classes by defying conventional notions of generalization. Popular examples of anomalous generalization behaviour include benign overfitting, double descent, and the success of overparametrization. We argue that these phenomena are not distinct to neural networks, or particularly mysterious. Moreover, this generalization behaviour can be intuitively understood, and rigorously characterized using long-standing generalization frameworks such as PAC-Bayes and countable hypothesis bounds. We present soft inductive biases as a key unifying principle in explaining these phenomena: rather than restricting the hypothesis space to avoid overfitting, embrace a flexible hypothesis space, with a soft preference for simpler solutions that are consistent with the data. This principle can be encoded in many model classes, and thus deep learning is not as mysterious or different from other model classes as it might seem. However, we also highlight how deep learning is relatively distinct in other ways, such as its ability for representation learning, phenomena such as mode connectivity, and its relative universality.


Bio: Andrew Gordon Wilson is a Professor at the Courant Institute of Mathematical Sciences and Center for Data Science at New York University. He is interested in developing a prescriptive foundation for building intelligent systems. His work includes loss landscapes, optimization, Bayesian model selection, equivariances, generalization theory, and scientific applications. 
His website is https://cims.nyu.edu/~andrewgw.

Mon, 19 Jan 2026

14:00 - 15:00
Lecture Room 3

TBA

Professor Olivier Bokanowski
(Université Paris Cité)
Abstract

TBA

Mon, 09 Feb 2026

14:00 - 15:00
Lecture Room 3

What makes an image realistic?

Lucas Theis
Abstract

The last decade has seen tremendous progress in our ability to generate realistic-looking data, be it images, text, audio, or video. In this presentation, we will look at the closely related problem of quantifying realism, that is, designing functions that can reliably tell realistic data from unrealistic data. This problem turns out to be significantly harder to solve and remains poorly understood, despite its prevalence in machine learning and recent breakthroughs in generative AI. Drawing on insights from algorithmic information theory, we discuss why this problem is challenging, why a good generative model alone is insufficient to solve it, and what a good solution would look like. In particular, we introduce the notion of a universal critic, which unlike adversarial critics does not require adversarial training. While universal critics are not immediately practical, they can serve both as a North Star for guiding practical implementations and as a tool for analyzing existing attempts to capture realism.
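As a rough illustration of the algorithmic-information-theory angle mentioned in the abstract, the quantity underlying such critics, randomness deficiency, pairs a model's negative log-likelihood with a description length for the data. The sketch below is a toy assumption on two counts: it takes a uniform i.i.d. model over bytes, and it uses zlib as a crude stand-in for Kolmogorov complexity; it is not the talk's construction.

```python
import os
import zlib

def randomness_deficiency_bits(data: bytes) -> int:
    """Toy realism critic for a uniform i.i.d. model over bytes.

    Deficiency ~ NLL under the model minus the data's description
    length (upper-bounded here by zlib). Large positive values flag
    strings that are far more structured than a typical sample from
    the model, i.e. 'unrealistic' as draws from it.
    """
    nll_bits = 8 * len(data)  # -log2 p(x) for uniform bytes
    description_bits = 8 * len(zlib.compress(data, 9))
    return nll_bits - description_bits

typical = os.urandom(10_000)   # incompressible: a typical uniform sample
atypical = b"a" * 10_000       # equally likely pointwise, but highly structured

# The structured string has the same likelihood as the random one under
# the uniform model, yet a much larger deficiency, so the critic rejects
# it; likelihood alone could not make this distinction.
```

In a real setting the NLL term would come from a generative model and the compressor would be replaced by a universal code, and no adversarial training of a discriminator is needed, which is the appeal of the universal-critic viewpoint sketched in the abstract.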