Fri, 05 May 2023
15:30
Large Lecture Theatre, Department of Statistics, University of Oxford

Joint Maths and Stats Colloquium: Understanding neural networks and quantification of their uncertainty via exactly solvable models

Lenka Zdeborová, Professor of Physics and Computer Science
(École Polytechnique Fédérale de Lausanne, Switzerland)
Further Information

The Lecture will be followed by a Drinks Reception in the ground floor social area. To help with catering arrangements, please book your place here: https://forms.office.com/e/Nw3qSZtzCs.

Lenka Zdeborová is a Professor of Physics and Computer Science at École Polytechnique Fédérale de Lausanne, where she leads the Statistical Physics of Computation Laboratory. She received a PhD in physics from Université Paris-Sud and Charles University in Prague in 2008. She spent two years at Los Alamos National Laboratory as the Director's Postdoctoral Fellow. Between 2010 and 2020, she was a researcher at CNRS, working in the Institute of Theoretical Physics in CEA Saclay, France. In 2014, she was awarded the CNRS bronze medal; in 2016, the Philippe Meyer prize in theoretical physics and an ERC Starting Grant; in 2018, the Irène Joliot-Curie prize; in 2021, the Gibbs lectureship of the AMS and the Neuron Fund award. Lenka's expertise is in applications of concepts from statistical physics, such as advanced mean-field methods, the replica method and related message-passing algorithms, to problems in machine learning, signal processing, inference and optimization. She enjoys erasing the boundaries between theoretical physics, mathematics and computer science.

Abstract

The affinity between statistical physics and machine learning has a long history. Theoretical physics often proceeds in terms of solvable synthetic models; I will describe the related line of work on solvable models of simple feed-forward neural networks. I will then discuss how this approach allows us to analyze uncertainty quantification in neural networks, a topic that has gained urgency with the advent of widely deployed artificial intelligence. I will conclude with what I perceive as important specific open questions in the field.

Fri, 28 Apr 2023
16:00
L1

Pathways to independent research: fellowships and grants.

Professor Jason Lotay and a panel including ECRs from the North and South Wings and the Department of Statistics.
(Mathematical Institute (University of Oxford))
Abstract

Join us for our first Fridays@4 session of Trinity Term about the different academic routes people take post-PhD, with a particular focus on fellowships and grants. We'll hear from Jason Lotay about his experiences on both sides of the application process, as well as about the experiences of ECRs in the South Wing, North Wing, and Statistics. Towards the end of the hour we'll have a Q&A session with the whole panel, where you can ask any questions you have about this topic!

Entanglement and topology in RG flows across dimensions: caps, bridges and corners
Deddo, E., Pando Zayas, L., Uhlemann, C. Journal of High Energy Physics, volume 2023, issue 4 (04 Apr 2023)
Generalized quotients and holographic duals for 5d S-fold SCFTs
Apruzzi, F., Bergman, O., Kim, H., Uhlemann, C. Journal of High Energy Physics, volume 2023, issue 4 (05 Apr 2023)
Mon, 12 Jun 2023
15:30
L5

On the Dualizability of Fusion 2-Categories

Thibault Decoppet
Abstract

Fusion 2-categories were introduced by Douglas and Reutter so as to define a state-sum invariant of 4-manifolds. Categorifying a result of Douglas, Schommer-Pries and Snyder, it was conjectured that, over an algebraically closed field of characteristic zero, every fusion 2-category is a fully dualizable object in an appropriate symmetric monoidal 4-category. I will sketch a proof of this conjecture, which proceeds by studying, and in fact classifying, the Morita equivalence classes of fusion 2-categories. In particular, by appealing to the cobordism hypothesis, we find that every fusion 2-category yields a fully extended framed 4D TQFT. I will explain how these theories are related to those constructed from braided fusion 1-categories by Brochier, Jordan, and Snyder.

Mon, 22 May 2023
15:30
L5

Combining the minimal-separating-set trick with simplicial volume

Hannah Alpert
Abstract

In 1983 Gromov proved the systolic inequality: if M is a closed, essential n-dimensional Riemannian manifold where every loop of length 2 is null-homotopic, then the volume of M is at least a constant depending only on n.  He also proved a version that depends on the simplicial volume of M, a topological invariant generalizing the hyperbolic volume of a closed hyperbolic manifold.  If the simplicial volume is large, then the lower bound on volume becomes proportional to the simplicial volume divided by the n-th power of its logarithm.  Nabutovsky showed in 2019 that Papasoglu's method of area-minimizing separating sets recovers the systolic inequality and improves its dependence on n.  We introduce simplicial volume to the proof, recovering the statement that the volume is at least proportional to the square root of the simplicial volume.
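In the notation of the abstract, the two volume bounds can be summarized as follows (a sketch only; the constants depend only on the dimension n, and normalizations vary across the literature):

```latex
% Hedged summary of the bounds mentioned above: Gromov's
% simplicial-volume refinement and the square-root bound recovered
% in the talk. c(n), c'(n) are dimensional constants and ||M||
% denotes the simplicial volume of M.
\[
  \operatorname{vol}(M) \;\ge\; c(n)\,\frac{\|M\|}{\bigl(\log(2 + \|M\|)\bigr)^{n}},
  \qquad
  \operatorname{vol}(M) \;\ge\; c'(n)\,\sqrt{\|M\|}.
\]
```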

Mon, 15 May 2023
14:00
C6

Ext in functor categories and stable cohomology of Aut(F_n)

Greg Arone
Abstract

We present a homotopy-theoretic method for calculating Ext groups between polynomial functors from the category of (finitely generated, free) groups to abelian groups. It enables us to substantially extend the range of what can be calculated. In particular, we can calculate torsion in the Ext groups, about which very little has been known. We will discuss some applications to the stable cohomology of Aut(F_n), based on a theorem of Djament.

Fri, 16 Jun 2023

15:00 - 16:00
Lecture room 5

Topology of Artificial Neuron Activations in Deep Learning

Bei Wang
Abstract

Deep convolutional neural networks such as GoogLeNet and ResNet have become ubiquitous in image classification tasks, whereas transformer-based language models such as BERT and its variants have found widespread use in natural language processing. In this talk, I will discuss recent efforts in exploring the topology of artificial neuron activations in deep learning, from images to word embeddings. First, I will discuss the topology of convolutional neural network activations, which provides semantic insight into how these models organize hierarchical class knowledge at each layer. Second, I will discuss the topology of word embeddings from transformer-based models. I will explore the topological changes of word embeddings during the fine-tuning of various models and discover model confusions in the embedding spaces. If time permits, I will discuss ongoing work on the topology of neural activations under adversarial attacks.

Fri, 02 Jun 2023

15:00 - 16:00
Lecture room 5

Projected barcodes and distances for multi-parameter persistence modules

Francois Petit
Abstract

In this talk, I will present the notion of projected barcodes and projected distances for multi-parameter persistence modules. Projected barcodes are defined as derived pushforward of persistence modules onto R. Projected distances come in two flavors: the integral sheaf metrics (ISM) and the sliced convolution distances (SCD). I will explain how the fibered barcode is a particular instance of projected barcodes and how the ISM and the SCD provide lower bounds for the convolution distance. 

Furthermore, in the case where the persistence module considered is the sublevel-set persistence module of a function f : X -> R^n, we will explain how, under mild conditions, the projected barcode of this module by a linear map u : R^n -> R is the collection of sublevel-set barcodes of the composition u∘f. In particular, it can be computed using software dedicated to one-parameter persistence modules. This is joint work with Nicolas Berkouk.
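As the abstract notes, projecting by a linear map u reduces the computation to ordinary one-parameter persistence. The sketch below (all names are hypothetical, not from the talk) illustrates that pipeline in the simplest setting: f : X -> R^2 is sampled on a path graph, u is a coordinate projection, and the degree-0 sublevel-set barcode of g = u∘f is read off with a union-find sweep using the elder rule.

```python
def sublevel_barcode_0d(values):
    """Degree-0 sublevel-set barcode of a function sampled on a path graph,
    computed by a union-find sweep (elder rule: the older component survives
    a merge; the younger one dies and contributes a bar)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])  # sweep by sublevel
    parent = list(range(n))
    birth = {}                 # vertex -> birth value of the component it creates
    alive = [False] * n
    bars = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in order:
        v = values[i]
        alive[i] = True
        birth[i] = v
        for j in (i - 1, i + 1):           # neighbours on the path graph
            if 0 <= j < n and alive[j]:
                ri, rj = find(i), find(j)
                if ri != rj:
                    if birth[ri] <= birth[rj]:
                        ri, rj = rj, ri    # make ri the younger root: it dies
                    bars.append((birth[ri], v))
                    parent[ri] = rj
    bars = [(b, d) for (b, d) in bars if b < d]      # drop zero-length bars
    root = find(order[0])
    bars.append((birth[root], float("inf")))         # essential class never dies
    return sorted(bars)

# Hypothetical data: f : X -> R^2 sampled at five points of a path, and a
# linear projection u (here: the first coordinate), giving g = u o f on R.
f_samples = [(0.0, 1.0), (2.0, 0.0), (1.0, 3.0), (3.0, 1.0), (0.5, 0.5)]
u = lambda y: 1.0 * y[0] + 0.0 * y[1]
g = [u(p) for p in f_samples]
print(sublevel_barcode_0d(g))   # -> [(0.0, inf), (0.5, 3.0), (1.0, 2.0)]
```

In practice one would hand g to a dedicated one-parameter persistence package rather than this toy sweep; the point is only that, after projection, no multi-parameter machinery is needed.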
