Please note that the list below only shows forthcoming events, which may not include regular events that have not yet been entered for the forthcoming term. Please see the past events page for a list of all seminar series that the department has on offer.

 



Mon, 11 May 2026

14:00 - 15:00
Lecture Room 3

Smooth, globally Polyak-Łojasiewicz functions are nonlinear least-squares

Associate Professor Nicolas Boumal
(École Polytechnique Fédérale de Lausanne - EPFL)
Abstract

Polyak-Łojasiewicz (PŁ) functions abound in the literature, especially in nonconvex optimization. When they are also smooth, they become surprisingly simple---with an exotic twist. The plan is for us to discover the structure of those functions and of their sets of minimizers via gradient flow and fiber bundles.

Joint work with Christopher Criscitiello and Quentin Rebjock.
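The structure described in the abstract can be illustrated numerically (an illustrative sketch, not material from the talk). The function f(x) = x^2 + 3 sin^2(x) is a standard example that is nonconvex yet satisfies the PŁ inequality |f'(x)|^2 >= 2·mu·(f(x) - f*) globally (with mu = 1/32), so gradient descent still converges to the global minimizer at a linear rate:

```python
import numpy as np

# Standard example of a nonconvex function satisfying the PL inequality
# globally: f(x) = x^2 + 3*sin(x)^2, with global minimum f* = 0 at x = 0.
def f(x):
    return x**2 + 3.0 * np.sin(x)**2

def fprime(x):
    return 2.0 * x + 3.0 * np.sin(2.0 * x)

# Estimate the PL constant mu = inf |f'(x)|^2 / (2 (f(x) - f*)) on a grid,
# excluding the minimizer itself to avoid 0/0.
xs = np.linspace(-10.0, 10.0, 20001)
xs = xs[np.abs(xs) > 1e-6]
mu_est = np.min(fprime(xs)**2 / (2.0 * f(xs)))

# f is L-smooth with L = sup |f''| = 8; PL plus smoothness gives linear
# convergence of gradient descent to the global minimum, despite nonconvexity.
L = 8.0
x = 3.0
for _ in range(1000):
    x -= fprime(x) / L
```

Here mu_est comes out well above the analytic bound 1/32, and gradient descent drives f to its global minimum even though f has regions of negative curvature.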

Mon, 25 May 2026

14:00 - 15:00
Lecture Room 3

Acceleration of first order methods in convex optimization

Professor Juan Peypouquet
(University of Groningen, The Netherlands)
Abstract

The dynamic nature of first-order methods can be interpreted by means of continuous-time models. In this survey talk, we explain how physical concepts like acceleration, inertia, and momentum have been used to improve the performance of convex optimization algorithms.

We give special attention to the historical evolution of complexity results, especially in the form of convergence rates, in the light of this connection. We also discuss different ways in which acceleration schemes can be applied when the smoothness or strong convexity parameters are unknown, and how these ideas extend to saddle point and constrained problems.
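As a concrete instance of the speed-up that momentum provides (an illustrative sketch, not material from the talk): on an ill-conditioned quadratic, plain gradient descent contracts the objective at roughly (1 - mu/L) per step, whereas Nesterov's accelerated method achieves roughly (1 - sqrt(mu/L)):

```python
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 * sum_i lam_i * x_i^2 with
# eigenvalues spread over [1, 1000] (condition number kappa = 1000).
lam = np.logspace(0.0, 3.0, 50)
L, mu = lam.max(), lam.min()

def fval(x):
    return 0.5 * np.sum(lam * x**2)

def grad(x):
    return lam * x

x0 = np.ones_like(lam)
n_iters = 200

# Plain gradient descent with step 1/L: slowest mode contracts like (1 - mu/L).
x = x0.copy()
for _ in range(n_iters):
    x -= grad(x) / L
f_gd = fval(x)

# Nesterov's method for mu-strongly convex f: constant momentum coefficient
# beta = (sqrt(kappa) - 1) / (sqrt(kappa) + 1), rate roughly (1 - sqrt(mu/L)).
beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
x, x_prev = x0.copy(), x0.copy()
for _ in range(n_iters):
    y = x + beta * (x - x_prev)   # extrapolate with momentum
    x_prev = x
    x = y - grad(y) / L           # gradient step at the extrapolated point
f_nag = fval(x)
```

With condition number 1000 and the same iteration budget, the accelerated iterate reaches a far smaller objective value than plain gradient descent.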

 

 

Mon, 01 Jun 2026

14:00 - 15:00
Lecture Room 3

Extragradient Methods for Modern Machine Learning: New Convergence Guarantees, Step-Size Rules, and Stochastic Variants

Assistant Professor Nicolas Loizou
(Johns Hopkins University, Baltimore, USA)
Abstract

Extragradient methods are a fundamental class of algorithms for solving min-max optimization problems and variational inequalities. While the classical theory is largely developed under smoothness and other relatively restrictive assumptions, many problems arising in modern machine learning call for analysis in weaker regularity regimes and in stochastic large-scale settings. In this talk, we present new convergence results for deterministic and stochastic extragradient methods beyond the classical framework. In particular, we establish convergence guarantees under the (L0, L1)-Lipschitz condition and derive new step-size rules that expand the range of provably convergent regimes. We also introduce Polyak-type step sizes for deterministic and stochastic extragradient methods, leading to adaptive variants with favourable theoretical properties and practical performance. Our results focus primarily on monotone problems, with extensions to selected structured non-monotone settings. We conclude with numerical experiments that illustrate the theory and the empirical behaviour of the proposed methods.
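A standard toy problem (illustrative, not from the talk) shows why the extrapolation step matters: on the bilinear saddle problem min_x max_y xy, simultaneous gradient descent-ascent spirals away from the saddle point, while Korpelevich's classical extragradient method converges toward it:

```python
import numpy as np

# Bilinear saddle problem min_x max_y x*y; unique saddle point at (0, 0).
# The associated monotone operator is F(x, y) = (y, -x).
def F(z):
    x, y = z
    return np.array([y, -x])

eta = 0.1
z_gda = np.array([1.0, 1.0])   # simultaneous gradient descent-ascent
z_eg = np.array([1.0, 1.0])    # extragradient (Korpelevich)

for _ in range(100):
    # GDA: one step with the current gradient; each step multiplies the
    # distance to (0, 0) by sqrt(1 + eta^2), so the iterates spiral outward.
    z_gda = z_gda - eta * F(z_gda)

    # Extragradient: first extrapolate to a half-step point, then update
    # using the gradient evaluated there; the distance to (0, 0) now shrinks.
    z_half = z_eg - eta * F(z_eg)
    z_eg = z_eg - eta * F(z_half)
```

After 100 steps the GDA iterate has drifted further from the saddle point, while the extragradient iterate is strictly closer to it than where it started.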

 

 

Further Information

Bio
Nicolas Loizou is an Assistant Professor in the Department of Applied Mathematics and Statistics and the Mathematical Institute for Data Science (MINDS) at Johns Hopkins University, where he leads the Optimization and Machine Learning Lab. He holds secondary appointments in the Departments of Computer Science and Electrical and Computer Engineering and is a member of the Johns Hopkins Data Science Institute and the Ralph O’Connor Sustainable Energy Institute (ROSEI).

Prior to this, he was a Postdoctoral Research Fellow at Mila - Quebec Artificial Intelligence Institute and the University of Montreal. He holds a Ph.D. in Optimization and Operational Research from the School of Mathematics, University of Edinburgh, an M.Sc. in Computing from Imperial College London, and a B.Sc. in Mathematics from the National and Kapodistrian University of Athens.

His research interests include large-scale optimization, machine learning, randomized numerical linear algebra, distributed and decentralized algorithms, algorithmic game theory, and federated learning. He currently serves as an action editor for Information and Inference: A Journal of the IMA, Optimization Methods and Software, and Transactions on Machine Learning Research. He has received several awards and fellowships, including the OR Society's 2019 Doctoral Award (runner-up) for the 'Most Distinguished Body of Research leading to the Award of a Doctorate in the field of Operational Research', the IVADO Fellowship, the COAP 2020 Best Paper Award, the CISCO 2023 Research Award, and the Catalyst 2025 Award.

 

Mon, 15 Jun 2026

14:00 - 15:00
Lecture Room 3

TBA

Jian-Qing Zheng
(CAMS-Oxford Institute, University of Oxford)
Abstract

TBA

Further Information

Bio: 
Jian-Qing Zheng is a Postdoctoral Researcher at the University of Oxford (2024–present), specialising in artificial intelligence for biomedicine. He obtained his DPhil from Oxford as a Kennedy Trust Scholar. His research develops machine learning frameworks for biomedical and immunological applications, with a focus on robust modelling and real-world impact. He serves on the editorial boards of PLOS Digital Health and MedScience (Springer). He has published over 20 papers in leading venues, including Medical Image Analysis, Cell Research, and IEEE Transactions on Signal Processing.

Mon, 30 Nov 2026

14:00 - 15:00
Lecture Room 3

Physics-informed deep generative models: Applications to computational sensing

Professor Marcelo Pereyra
(Heriot-Watt University, Edinburgh)
Abstract

This talk introduces a novel mathematical and computational framework for constructing high-dimensional Bayesian inversion methods that leverage state-of-the-art generative denoising diffusion models as highly informative priors. A central innovation is the construction of physics-informed generative models using Langevin diffusion processes and Markov chain Monte Carlo (MCMC) sampling techniques to develop stochastic neural network architectures capable of near-exact sampling. The obtained networks are modular and composed of interpretable layers that are directly related to statistical image priors and data likelihoods derived from forward observation models. The layers encoding the data likelihood function are designed for flexibility, enabling scene and instrument model parameters to be specified at inference time and seamlessly integrated with pre-trained foundational generative priors. To achieve high computational efficiency, we employ adversarial model distillation, which yields excellent sampling performance with as few as four Markov chain Monte Carlo steps, even in problems exceeding one million dimensions. Our approach is validated through non-asymptotic convergence analysis and extensive numerical experiments in computational image and video restoration. We conclude by discussing unsupervised training strategies that allow the models to be fine-tuned directly from measurement data, thereby bypassing the need for clean reference data.
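The sampling machinery underlying such methods can be sketched on a toy problem (a minimal illustration with a hand-written score function, not the architecture from the talk): for a linear-Gaussian inverse problem the posterior is tractable, so the unadjusted Langevin algorithm (ULA), x_{k+1} = x_k + tau * grad log p(x_k | y) + sqrt(2 tau) * xi_k, can be checked against the exact posterior mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem y = A x + noise, with Gaussian prior x ~ N(0, I).
# The posterior is Gaussian, so Langevin samples can be checked exactly.
d = 2
A = np.array([[1.0, 0.5], [0.0, 1.0]])
sigma = 0.5                      # observation noise std
x_true = np.array([1.0, -1.0])
y = A @ x_true + sigma * rng.standard_normal(d)

def score_posterior(x):
    # grad log p(x | y) = A^T (y - A x) / sigma^2 - x  (likelihood + prior scores)
    return A.T @ (y - A @ x) / sigma**2 - x

# Unadjusted Langevin algorithm: x += tau * score + sqrt(2 tau) * noise.
tau = 0.01
x = np.zeros(d)
samples = []
for k in range(20000):
    x = x + tau * score_posterior(x) + np.sqrt(2.0 * tau) * rng.standard_normal(d)
    if k >= 5000:                # discard burn-in
        samples.append(x.copy())
samples = np.array(samples)

# Exact posterior mean for comparison:
# (A^T A / sigma^2 + I)^{-1} A^T y / sigma^2
post_cov = np.linalg.inv(A.T @ A / sigma**2 + np.eye(d))
post_mean = post_cov @ (A.T @ y) / sigma**2
```

ULA carries an O(tau) discretisation bias; the near-exact samplers described in the abstract aim to avoid this kind of error while using only a handful of steps.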

The talk is based on recent work in physics-informed generative AI for Bayesian imaging: https://arxiv.org/abs/2503.12615 (ICCV 2025), which uses a distilled latent Stable Diffusion XL model trained on five billion clean images as a zero-shot prior, and https://arxiv.org/pdf/2507.02686, which integrates pixel-based diffusion models with deep unfolding and diffusion distillation (TMLR 2025). The extension to video restoration is presented in https://arxiv.org/abs/2510.01339 (ICLR 2025). Our approach to unsupervised training of diffusion models is introduced in https://arxiv.org/abs/2510.11964.

 

 

Further Information

Biosketch:
Marcelo Pereyra is a Professor in Statistics and UKRI EPSRC Open Research Fellow at the School of Mathematical and Computer Sciences of Heriot-Watt University & Maxwell Institute for Mathematical Sciences. He leads pioneering research advancing the statistical foundations of quantitative and scientific imaging, shaping how image data are used as rigorous quantitative evidence, and forging deep connections between statistical, variational, and machine learning approaches to imaging. His leadership and contributions have been recognized through multiple prestigious awards, most recently a five-year full-time EPSRC Open Fellowship to drive the next generation of breakthroughs in statistical imaging sciences based on physics-informed generative artificial intelligence. Prof. Pereyra will join Imperial College London in 2027 as Chair in Statistical Machine Learning in the Department of Mathematics.

Prof. Pereyra received the SIAM SIGEST Award in Imaging Sciences for his contributions to Bayesian imaging in 2022. He has held Invited Professor positions at the Institut Henri Poincaré (Paris, 2019), Université Paris Cité (2022), École Normale Supérieure de Lyon (2023), Université Paris Cité (2024) and Centrale Lille (2025). He is also the recipient of a UKRI EPSRC Open Research Fellowship (2025), a Marie Curie Intra-European Fellowship for Career Development (2013), a Brunel Postdoctoral Research Fellowship in Statistics (2012), a Postdoctoral Research Fellowship from the French Ministry of Defence (2012), and a Léopold Escande PhD Thesis award from the University of Toulouse (2012).