Past events in this series


Mon, 02 Feb 2026

14:00 - 15:00
Lecture Room 3

Convex Analysis of Non-Convex Neural Networks

Aaron Mishkin
(Stanford University, USA)
Abstract

Speaker Aaron Mishkin will talk about: 'Convex Analysis of Non-Convex Neural Networks'.

One of the key themes in modern optimization is the boundary between convex and non-convex problems. While convex problems can often be solved efficiently, many non-convex programs are NP-hard and formally intractable.

In this talk, we show how to break the barrier between convex and non-convex optimization by reformulating, or "lifting", neural networks into high-dimensional spaces where they become convex. These convex reformulations serve two purposes: as algorithmic tools to enable fast, global optimization for two-layer ReLU networks; and as a convex proxy to study variational properties of the original non-convex problem. In particular, we show that shallow ReLU networks are equivalent to models with simple "gated ReLU" activations, derive the set of all critical points for two-layer ReLU networks, and give the first polynomial-time algorithm for optimal neuron pruning. We conclude with extensions to ReLU networks of arbitrary depth using a novel layer-elimination argument.

 

Mon, 09 Feb 2026

14:00 - 15:00
Lecture Room 3

What makes an image realistic?

Lucas Theis
Abstract

Speaker Lucas Theis will talk about: 'What makes an image realistic?'

The last decade has seen tremendous progress in our ability to generate realistic-looking data, be it images, text, audio, or video.

In this presentation, we will look at the closely related problem of quantifying realism, that is, designing functions that can reliably tell realistic data from unrealistic data. This problem turns out to be significantly harder to solve and remains poorly understood, despite its prevalence in machine learning and recent breakthroughs in generative AI. Drawing on insights from algorithmic information theory, we discuss why this problem is challenging, why a good generative model alone is insufficient to solve it, and what a good solution would look like. In particular, we introduce the notion of a universal critic, which unlike adversarial critics does not require adversarial training. While universal critics are not immediately practical, they can serve both as a North Star for guiding practical implementations and as a tool for analyzing existing attempts to capture realism.