Fri, 08 Mar 2024

15:00 - 16:00
L6

Topological Perspectives to Characterizing Generalization in Deep Neural Networks

Tolga Birdal
(Imperial College)
Further Information


Dr. Tolga Birdal is an Assistant Professor in the Department of Computing at Imperial College London, with prior experience as a Senior Postdoctoral Research Fellow at Stanford University in Prof. Leonidas Guibas's Geometric Computing Group. Tolga defended his master's and Ph.D. theses at the Computer Vision Group of the Chair for Computer Aided Medical Procedures at the Technical University of Munich, led by Prof. Nassir Navab. He was also a Doktorand at Siemens AG under the supervision of Dr. Slobodan Ilic, working on “Geometric Methods for 3D Reconstruction from Large Point Clouds”. His research interests center on geometric machine learning and 3D computer vision, with a theoretical focus on exploring the boundaries of geometric computing, non-Euclidean inference, and the foundations of deep learning. Dr. Birdal has published extensively in leading academic journals and conference proceedings, including NeurIPS, CVPR, ICLR, ICCV, ECCV, T-PAMI, and IJCV. Aside from his academic life, Tolga has co-founded multiple companies, including BeFunky, a widely used web-based image-editing platform.

Abstract


Training deep learning models involves searching for a good model over the space of possible architectures and their parameters. Discovering models that exhibit robust generalization to unseen data and tasks is of paramount importance for accurate and reliable machine learning. Generalization, a hallmark of model efficacy, is conventionally gauged by a model's performance on data beyond its training set. Yet the reliance on vast training datasets raises a pivotal question: how can deep learning models transcend the notorious hurdle of 'memorization' to generalize effectively? Is it feasible to assess and guarantee the generalization prowess of deep neural networks in advance of empirical testing, and notably, without any recourse to test data? This inquiry is not merely theoretical; it underpins the practical utility of deep learning across myriad applications. In this talk, I will show that scrutinizing the training dynamics of neural networks through the lens of topology, specifically using the 'persistent-homology dimension', leads to novel bounds on the generalization gap and can help demystify the inner workings of neural networks. Our work bridges deep learning with the abstract realms of topology and learning theory, while relating to information theory through compression.
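For readers wanting a concrete handle on the 'persistent-homology dimension' mentioned above, below is a minimal sketch in Python of the standard estimator used in this line of work: for 0-dimensional persistent homology, the total persistence of n sampled points equals the sum of the minimum-spanning-tree edge lengths, and the growth rate of that sum in n determines the dimension. The function names (ph_dimension, total_persistence) and the stand-in data are illustrative assumptions, not the speaker's released code.

# Sketch of a persistent-homology (PH) dimension estimator.
# For 0-dim PH of a finite point cloud, total persistence equals the
# sum of minimum-spanning-tree (MST) edge lengths; its alpha-weighted
# version E_alpha(n) scales like n^((d - alpha)/d), where d = dim_PH.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def total_persistence(points, alpha=1.0):
    """Sum of MST edge lengths raised to alpha (0-dim total persistence)."""
    dists = squareform(pdist(points))   # pairwise distance matrix
    mst = minimum_spanning_tree(dists)  # sparse MST of the point cloud
    return float((mst.data ** alpha).sum())

def ph_dimension(trajectory, alpha=1.0, sizes=(200, 400, 800, 1600), seed=0):
    """Estimate dim_PH of a point cloud, e.g. flattened optimizer iterates.

    Fits the slope m of log E_alpha(n) against log n over random
    subsamples and returns alpha / (1 - m).
    """
    rng = np.random.default_rng(seed)
    log_n, log_e = [], []
    for n in sizes:
        idx = rng.choice(len(trajectory), size=n, replace=False)
        log_n.append(np.log(n))
        log_e.append(np.log(total_persistence(trajectory[idx], alpha)))
    slope, _ = np.polyfit(log_n, log_e, deg=1)
    return alpha / (1.0 - slope)

# Usage with hypothetical data: 2000 random points standing in for the
# real training trajectory of a 10-parameter model.
weights = np.random.randn(2000, 10)
print(f"estimated dim_PH ~ {ph_dimension(weights):.2f}")

In this framework, a lower estimated dimension corresponds to a tighter bound on the generalization gap.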


Fri, 09 Jun 2017

13:00 - 14:00
L6

Structure of martingale transports in finite dimensions

Pietro Siorpaes
(Imperial College)
Abstract


Martingale optimal transport is a variant of the classical optimal transport problem where a martingale constraint is imposed on the coupling. In a recent paper, Beiglböck, Nutz and Touzi show that in dimension one there is no duality gap and that the dual problem admits an optimizer. A key step towards this achievement is the characterization of the polar sets of the family of all martingale couplings. Here we aim to extend this characterization to arbitrary finite dimension through a deeper study of the convex order.
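As background for the abstract above, the two objects at play admit standard definitions (stated here for context, not drawn from the paper itself), written in LaTeX as:

A coupling $\pi$ of $(\mu,\nu)$ on $\mathbb{R}^d\times\mathbb{R}^d$, disintegrated as
$\pi(\mathrm{d}x,\mathrm{d}y)=\mu(\mathrm{d}x)\,\kappa(x,\mathrm{d}y)$, is a
\emph{martingale coupling} if
\[
  \int y \,\kappa(x,\mathrm{d}y) = x \qquad \text{for } \mu\text{-a.e. } x,
\]
and by Strassen's theorem such a coupling exists if and only if $\mu$ precedes
$\nu$ in the \emph{convex order} $\mu \le_c \nu$, i.e.
\[
  \int \varphi \,\mathrm{d}\mu \;\le\; \int \varphi \,\mathrm{d}\nu
  \qquad \text{for every convex } \varphi:\mathbb{R}^d\to\mathbb{R}.
\]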


Mon, 08 Oct 2012

17:00 - 18:00
Gibson 1st Floor SR

Blow-up & Stationary States

José Antonio Carrillo de la Plata
(Imperial College)
Abstract
We will discuss how optimal transport tools can be used to analyse the qualitative behavior of continuum systems of interacting particles driven by potentials that are either fully attractive or short-range repulsive and long-range attractive.
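For context, the continuum systems in question are commonly modelled by the aggregation equation; a plausible form (an assumption here, since the abstract does not state the model) for a density $\rho$ with interaction potential $W$ is, in LaTeX:

\[
  \partial_t \rho \;=\; \nabla \cdot \big( \rho \, \nabla (W \ast \rho) \big),
\]
where a fully attractive $W$ can drive finite-time blow-up of $\rho$, while a
$W$ that is repulsive at short range and attractive at long range typically
selects nontrivial stationary states.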