Research group
Topology
Fri, 08 Mar 2024

15:00 - 16:00
L6

Topological Perspectives to Characterizing Generalization in Deep Neural Networks

Tolga Birdal
(Imperial College London)
Further Information

 

Dr. Tolga Birdal is an Assistant Professor in the Department of Computing at Imperial College London, with prior experience as a Senior Postdoctoral Research Fellow in Prof. Leonidas Guibas's Geometric Computing Group at Stanford University. Tolga defended his master's and Ph.D. theses in the Computer Vision Group at the Chair for Computer Aided Medical Procedures, Technical University of Munich, led by Prof. Nassir Navab. He was also a Doktorand at Siemens AG under the supervision of Dr. Slobodan Ilic, working on “Geometric Methods for 3D Reconstruction from Large Point Clouds”. His research interests center on geometric machine learning and 3D computer vision, with a theoretical focus on exploring the boundaries of geometric computing, non-Euclidean inference, and the foundations of deep learning. Dr. Birdal has published extensively in leading academic journals and conference proceedings, including NeurIPS, CVPR, ICLR, ICCV, ECCV, T-PAMI, and IJCV. Outside his academic life, Tolga has co-founded multiple companies, including BeFunky, a widely used web-based image-editing platform.

Abstract

 

Training deep learning models involves searching for a good model over the space of possible architectures and their parameters. Discovering models that exhibit robust generalization to unseen data and tasks is paramount for accurate and reliable machine learning. Generalization, a hallmark of model efficacy, is conventionally gauged by a model's performance on data beyond its training set. Yet the reliance on vast training datasets raises a pivotal question: how can deep learning models transcend the notorious hurdle of 'memorization' to generalize effectively? Is it feasible to assess and guarantee the generalization prowess of deep neural networks in advance of empirical testing, and notably, without any recourse to test data? This inquiry is not merely theoretical; it underpins the practical utility of deep learning across myriad applications. In this talk, I will show that scrutinizing the training dynamics of neural networks through the lens of topology, specifically using the 'persistent-homology dimension', leads to novel bounds on the generalization gap and can help demystify the inner workings of neural networks. Our work bridges deep learning with the abstract realms of topology and learning theory, while relating to information theory through compression.
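A rough sense of the 'persistent-homology dimension' mentioned in the abstract can be conveyed with a toy computation. The sketch below (plain Python; not the speaker's method, and all names are illustrative) estimates the PH0 dimension of a point cloud from how the total minimum-spanning-tree length scales with sample size, using the fact that degree-0 persistence values of a Rips filtration are exactly the MST edge lengths:

```python
import math
import random

def mst_total_length(points, alpha=1.0):
    # Prim's algorithm; returns the sum of MST edge lengths raised to alpha.
    # For a Rips filtration, these edge lengths are the H0 death times.
    n = len(points)
    in_tree = [False] * n
    dist = [float("inf")] * n
    dist[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: dist[i])
        in_tree[u] = True
        if dist[u] > 0.0:
            total += dist[u] ** alpha
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < dist[v]:
                    dist[v] = d
    return total

def ph0_dim_estimate(points, sizes, alpha=1.0):
    # Fit the slope s of log E_alpha(n) vs log n by least squares;
    # if E_alpha(n) ~ n^((d - alpha)/d), then d = alpha / (1 - s).
    xs = [math.log(n) for n in sizes]
    ys = [math.log(mst_total_length(points[:n], alpha)) for n in sizes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    s = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return alpha / (1.0 - s)

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(800)]
print(round(ph0_dim_estimate(pts, [100, 200, 400, 800]), 2))
# estimated intrinsic dimension; near 2 for uniform planar data
```

For uniform samples from a square the fitted slope is close to 1/2, so the estimate lands near 2. The talk's actual contribution relates such dimensions of the training trajectory to generalization bounds, which this sketch does not attempt.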

 

Fri, 01 Mar 2024

15:00 - 16:00
L6

Applied Topology TBC

Zoe Cooperband
(University of Pennsylvania)
Further Information

Dr Zoe Cooperband is a member of the Penn Engineering GRASP Laboratory. Her recent preprint, Towards Homological Methods in Graphic Statics, can be found here.

Fri, 19 Jan 2024

15:00 - 16:00
L4

The Function-Rips Multifiltration as an Estimator

Steve Oudot
(INRIA - Ecole Normale Supérieure)
Abstract

Say we want to view the function-Rips multifiltration as an estimator. Then what is the target? And what kind of consistency, bias, or convergence rate should we expect? In this talk I will present ongoing joint work with Ethan André (Ecole Normale Supérieure) that aims at laying the algebro-topological groundwork to start answering these questions.

Tue, 21 Nov 2023
11:00
L1

Singularity Detection from a Data "Manifold"

Uzu Lim
(Mathematical Institute)

Note: we recommend joining the meeting using the Teams client for the best user experience.

Abstract

High-dimensional data is often assumed to be distributed near a smooth manifold. But should we really believe that? In this talk I will introduce HADES, an algorithm that quickly detects singularities where the data distribution fails to be a manifold.

By using hypothesis testing, rather than persistent homology, HADES achieves great speed and a strong statistical foundation. We also have a precise mathematical theorem for correctness, proven using optimal transport theory and differential geometry. In computational experiments, HADES recovers singularities in synthetic data, road networks, molecular conformation space, and images.

Paper link: https://arxiv.org/abs/2311.04171
Github link: https://github.com/uzulim/hades
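HADES itself rests on kernel-based hypothesis testing (see the paper and code linked above). As a much cruder illustration of the underlying premise, that neighbourhoods of singular points look higher-dimensional than neighbourhoods of manifold points, here is a small local-PCA sketch in plain Python; this is not the HADES algorithm, and all names are illustrative:

```python
import math

def local_spread_ratio(points, center, radius):
    # Ratio of the covariance eigenvalues (smallest / largest) over a
    # metric neighbourhood: near 0 on a smooth curve, bounded away
    # from 0 at a crossing, where the neighbourhood is genuinely 2-D.
    nbhd = [p for p in points if math.dist(p, center) <= radius]
    mx = sum(p[0] for p in nbhd) / len(nbhd)
    my = sum(p[1] for p in nbhd) / len(nbhd)
    sxx = sum((p[0] - mx) ** 2 for p in nbhd) / len(nbhd)
    syy = sum((p[1] - my) ** 2 for p in nbhd) / len(nbhd)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in nbhd) / len(nbhd)
    # closed-form eigenvalues of the 2x2 covariance matrix
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc
    return lam2 / lam1

# Data: an X-shape (two crossing segments), a 1-manifold except at the origin
ts = [i / 100 - 0.5 for i in range(101)]
pts = [(t, t) for t in ts] + [(t, -t) for t in ts]
print(local_spread_ratio(pts, (0.0, 0.0), 0.1))   # large: crossing point
print(local_spread_ratio(pts, (0.4, 0.4), 0.1))   # near 0: smooth part
```

On this "X", the covariance of a neighbourhood of the crossing has two comparable eigenvalues (ratio near 1), while on a smooth arc the second eigenvalue vanishes; thresholding this ratio flags the singularity. HADES replaces this heuristic with a principled statistical test.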
 

Fri, 27 Oct 2023

15:00 - 16:00
L5

Universality in Persistence Diagrams and Applications

Primoz Skraba
(Queen Mary University, Mathematical Sciences)
Further Information

 

Primoz Skraba is a Senior Lecturer in Applied and Computational Topology. His research is broadly related to data analysis, with an emphasis on topological data analysis. The problems he considers span both theory and applications. On the theory side, his areas of interest include stability and approximation of algebraic invariants, stochastic topology (the topology of random spaces), and algorithmic research. On the applications side, he focuses on combining topological ideas with machine learning, optimization, and other statistical tools. Other application areas of interest include visualization and geometry processing.

He received a PhD in Electrical Engineering from Stanford University in 2009 and has held positions at INRIA in France and the Jozef Stefan Institute, the University of Primorska, and the University of Nova Gorica in Slovenia, before joining Queen Mary University of London in 2018. He is also currently a Fellow at the Alan Turing Institute.

Abstract

In this talk, I will present joint work with Omer Bobrowski: a series of statements regarding the behaviour of persistence diagrams arising from random point clouds. I will present evidence that, viewed in the right way, persistence values obey a universal probability law that depends on neither the underlying space nor the original distribution of the point cloud. I will present two versions of this universality, “weak” and “strong”, along with the progress made in proving these statements. Finally, I will also discuss some applications of this phenomenon based on detecting structure in data.

Fri, 01 Dec 2023

15:00 - 16:00
L5

Computing algebraic distances and associated invariants for persistence

Martina Scolamiero
(KTH Stockholm)
Further Information

Martina Scolamiero is an Assistant Professor in Mathematics with specialization in Geometry and Mathematical Statistics in Artificial Intelligence.

Her research is in Applied and Computational Topology, mainly working on defining topological invariants which are suitable for data analysis, understanding their statistical properties and their applicability in Machine Learning. Martina is also interested in applications of topological methods to Neuroscience and Psychiatry.

Abstract

Pseudo-metrics between persistence modules can be defined starting from Noise Systems [1]. Such metrics are used to compare the modules directly or to extract stable vectorisations. While the stability property follows directly from the axioms of Noise Systems, finding algorithms or closed formulas to compute the distances or associated vectorisations is often a difficult problem, especially in the multi-parameter setting. In this seminar I will show how extra properties of Noise Systems can be used to define algorithms. In particular, I will describe how to compute stable vectorisations with respect to Wasserstein distances [2]. Lastly, I will discuss ongoing work (with D. Lundin and R. Corbet) on the computation of a geometric distance (the Volume Noise distance) and associated invariants on interval modules.

[1] M. Scolamiero, W. Chachólski, A. Lundman, R. Ramanujam, S. Oberg. Multidimensional Persistence and Noise, (2016) Foundations of Computational Mathematics, Vol 17, Issue 6, pages 1367-1406. doi:10.1007/s10208-016-9323-y.

[2] J. Agerberg, A. Guidolin, I. Ren and M. Scolamiero. Algebraic Wasserstein distances and stable homological invariants of data. (2023) arXiv: 2301.06484.

Fri, 24 Nov 2023

15:00 - 16:00
L5

Indecomposables in multiparameter persistence

Ulrich Bauer
(TU Munich)
Further Information

Ulrich Bauer is an associate professor (W3) in the department of mathematics at the Technical University of Munich (TUM), leading the Applied & Computational Topology group. His research revolves around application-motivated concepts and computational methods in topology and geometry, popularized by application areas such as topological data analysis. Some of his key research areas are persistent homology, discrete Morse theory, and geometric complexes.

Abstract

I will discuss various aspects of multi-parameter persistence related to representation theory and decompositions into indecomposable summands, based on joint work with Magnus Botnan, Steffen Oppermann, Johan Steen, Luis Scoccola, and Benedikt Fluhr.

A classification of indecomposables is infeasible; the category of two-parameter persistence modules has wild representation type. We show [1] that this is still the case if the structure maps in one parameter direction are epimorphisms, a property that is commonly satisfied by degree 0 persistent homology and related to filtered hierarchical clustering. Furthermore, we show [2] that indecomposable persistence modules are dense in the interleaving distance, and that being nearly-indecomposable is a generic property of persistence modules. On the other hand, the two-parameter persistence modules arising from interleaved sets (relative interleaved set cohomology) have a very well-behaved structure [3] that is encoded as a complete invariant in the extended persistence diagram. This perspective reveals some important but largely overlooked insights about persistent homology; in particular, it highlights a strong reason for working at the level of chain complexes, in a derived category [4].

 

[1] Ulrich Bauer, Magnus B. Botnan, Steffen Oppermann, and Johan Steen, Cotorsion torsion triples and the representation theory of filtered hierarchical clustering, Adv. Math. 369 (2020), 107171, 51. MR4091895

[2] Ulrich Bauer and Luis Scoccola, Generic multi-parameter persistence modules are nearly indecomposable, 2022.

[3] Ulrich Bauer, Magnus Bakke Botnan, and Benedikt Fluhr, Structure and interleavings of relative interlevel set cohomology, 2022.

[4] Ulrich Bauer and Benedikt Fluhr, Relative interlevel set cohomology categorifies extended persistence diagrams, 2022.

 

Fri, 10 Nov 2023

15:00 - 16:00
L5

Topological Data Analysis (TDA) for Geographical Information Science (GIS)

Padraig Corcoran
(Cardiff University)
Further Information

Dr Padraig Corcoran is a Senior Lecturer and the Director of Research in the School of Computer Science and Informatics (COMSC) at Cardiff University.

Dr Corcoran has much experience and expertise in the fields of graph theory and applied topology. He is particularly interested in applications to the domains of geographical information science and robotics.

Abstract

Topological data analysis (TDA) is an emerging field of research which considers the application of topology to data analysis. Recently, these methods have been successfully applied to research problems in the field of geographical information science (GIS), including the analysis of Points of Interest (PoI), street networks, and weather. In this talk I will describe how TDA can be used to provide solutions to these problems, and how these solutions compare to those traditionally used by GIS practitioners. I will also describe some of the challenges of performing interdisciplinary research when applying TDA methods to different types of data.

Fri, 03 Nov 2023

15:00 - 16:00
L5

The Expected Betti Numbers of Preferential Attachment Clique Complexes

Chunyin Siu
(Cornell)
Further Information

Chunyin Siu (Alex) is a PhD candidate at Cornell University at the Center for Applied Mathematics, and is a Croucher scholar (2019) and a Youde scholar (2018).

His primary research interests lie at the intersection of topological data analysis, network analysis, topological statistics, and computational geometry. He is advised by Prof. Gennady Samorodnitsky. Before coming to Cornell University, he was an MPhil student advised by Prof. Ronald (Lokming) Lui at the Chinese University of Hong Kong.

Abstract

The preferential attachment model is a natural and popular random graph model for a growing network that contains very well-connected “hubs”. Despite intense interest in the higher-order connectivity of these networks, their Betti numbers in higher dimensions have been largely unexplored.

In this talk, after a brief survey on random topology, we study the clique complexes of preferential attachment graphs, and we prove the asymptotics of the expected Betti numbers. If time allows, we will briefly discuss their homotopy connectedness as well. This is joint work with Gennady Samorodnitsky, Christina Lee Yu and Rongyi He, and it is based on the preprint https://arxiv.org/abs/2305.11259
