A new weekly feature to entice you to have a look at what the café has to offer. 

Lots to come, but next Wednesday there is a special vegan dish to celebrate Green Action Week and also, on the same day, vegan Eton Mess for dessert. For those of you who didn't go to Eton, 'Mess' is a traditional dessert consisting of a mixture of strawberries (or other berries), meringue, and (vegan) whipped cream.

And you consumed 400 of the free chocolates on Valentine's Day.

Link Me Baby One More Time: Social Music Discovery on Spotify.
Babul, S., Hristova, D., Lima, A., Lambiotte, R., Beguerisse-Díaz, M. CoRR abs/2401.08818 (01 Jan 2024).
Fri, 01 Mar 2024
16:00
L1

Departmental Colloquium: The role of depth in neural networks: function space geometry and learnability

Professor Rebecca Willett (University of Chicago)
Further Information

Rebecca Willett is a Professor of Statistics and Computer Science & the Faculty Director of AI at the Data Science Institute, with a courtesy appointment at the Toyota Technological Institute at Chicago. Her research is focused on machine learning foundations, scientific machine learning, and signal processing. She is the Deputy Director for Research at the NSF-Simons Foundation National Institute for Theory and Mathematics in Biology and a member of the Executive Committee for the NSF Institute for the Foundations of Data Science. She is the Faculty Director of the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship and helps direct the Air Force Research Lab University Center of Excellence on Machine Learning.

Abstract

Neural network architectures play a key role in determining which functions are fit to training data and the resulting generalization properties of learned predictors. For instance, imagine training an overparameterized neural network to interpolate a set of training samples using weight decay; the network architecture will influence which interpolating function is learned. 
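
As a purely illustrative sketch of this setup (all architecture and hyperparameter choices are ours, not the speaker's), one can fit overparameterized MLPs of different depths to the same small training set with weight decay and compare the interpolants they learn:

    # Illustrative sketch only: fit overparameterized MLPs of two different
    # depths (counting linear layers) to the same tiny 1-D training set using
    # weight decay; both reach near-zero training error, but the interpolating
    # functions they learn differ with depth.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    x = torch.linspace(-1, 1, 8).unsqueeze(1)   # 8 training inputs
    y = torch.sin(3 * x)                        # training targets to interpolate

    def mlp(depth, width=256):
        layers, d_in = [], 1
        for _ in range(depth - 1):
            layers += [nn.Linear(d_in, width), nn.ReLU()]
            d_in = width
        layers.append(nn.Linear(d_in, 1))       # final linear layer
        return nn.Sequential(*layers)

    for depth in (2, 3):                        # compare depth-2 vs depth-3
        net = mlp(depth)
        opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4)
        for _ in range(5000):
            opt.zero_grad()
            loss = ((net(x) - y) ** 2).mean()
            loss.backward()
            opt.step()
        print(f"depth {depth}: training loss {loss.item():.2e}")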

In this talk, I will describe new insights into the role of network depth in machine learning using the notion of representation costs – i.e., how much it “costs” for a neural network to represent some function f. Understanding representation costs helps reveal the role of network depth in machine learning. First, we will see that there is a family of functions that can be learned with depth-3 networks when the number of samples is polynomial in the input dimension d, but which cannot be learned with depth-2 networks unless the number of samples is exponential in d. Furthermore, no functions can easily be learned with depth-2 networks while being difficult to learn with depth-3 networks. 
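
For readers unfamiliar with the term, a standard way representation cost is formalised in the function-space literature (our gloss; the talk's precise definition may differ) is as the smallest weight-decay penalty over all depth-$L$ networks realising $f$:

$$\mathcal{R}_L(f) \;=\; \min_{\theta}\, \|\theta\|_2^2 \quad \text{subject to} \quad N^{(L)}_{\theta} = f,$$

where $N^{(L)}_{\theta}$ denotes a depth-$L$ network with parameters $\theta$.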

Together, these results mean deeper networks have an unambiguous advantage over shallower networks in terms of sample complexity. Second, I will show that adding linear layers to a ReLU network yields a representation cost that favors functions with latent low-dimensional structure, such as single- and multi-index models. Together, these results highlight the role of network depth from a function space perspective and yield new tools for understanding neural network generalization.
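
For reference (our wording, not the abstract's), single- and multi-index models are functions that depend on the input only through one or a few linear projections:

$$f(x) = g(\langle w, x\rangle) \quad \text{(single-index)}, \qquad f(x) = g(Wx),\ \ W\in\mathbb{R}^{r\times d},\ r\ll d \quad \text{(multi-index)},$$

so the relevant structure lives in an $r$-dimensional subspace of the $d$-dimensional input space.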

Fri, 16 Feb 2024
16:00
L1

Conferences and networking

Naomi Andrew, Jane Coons, Antonio Esposito, Romain Ruzziconi
(Mathematical Institute (University of Oxford))
Abstract

Conferences and networking are important parts of academic life, particularly early in your academic career.  But how do you make the most out of conferences?  And what are the dos and don'ts of networking?  Learn about the answers to these questions and more in this panel discussion with postdocs from across the Mathematical Institute.

Tue, 20 Feb 2024

14:00 - 15:00
L4

Hamiltonicity of expanders: optimal bounds and applications

Nemanja Draganić
(University of Oxford)
Abstract

An $n$-vertex graph $G$ is a $C$-expander if $|N(X)|\geq C|X|$ for every $X\subseteq V(G)$ with $|X|< n/(2C)$, and there is an edge between every two disjoint sets of at least $n/(2C)$ vertices.
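
To make the definition concrete, here is a small illustrative checker (ours, not part of the abstract); it tests both conditions by brute force and is exponential in $n$, so it is only usable on tiny graphs:

    # Brute-force check of the C-expander definition on a tiny graph
    # (illustrative only). N(X) is taken to be the external neighbourhood of X.
    import math
    from itertools import combinations
    import networkx as nx

    def is_c_expander(G, C):
        n = G.number_of_nodes()
        V = list(G.nodes)
        small = n / (2 * C)
        # Condition 1: |N(X)| >= C|X| for every X with |X| < n/(2C).
        for k in range(1, math.ceil(small)):
            for X in map(set, combinations(V, k)):
                N = set().union(*(G.neighbors(v) for v in X)) - X
                if len(N) < C * k:
                    return False
        # Condition 2: an edge between every two disjoint sets of at least
        # n/(2C) vertices; checking sets of size exactly ceil(n/(2C)) suffices,
        # since larger disjoint sets contain such subsets.
        m = math.ceil(small)
        for A in map(set, combinations(V, m)):
            for B in map(set, combinations([v for v in V if v not in A], m)):
                if not any(G.has_edge(a, b) for a in A for b in B):
                    return False
        return True

    print(is_c_expander(nx.complete_graph(8), 2))   # True: K_8 expands strongly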

We show that there is some constant $C>0$ for which every $C$-expander is Hamiltonian. In particular, this implies the well-known conjecture of Krivelevich and Sudakov from 2003 on Hamilton cycles in $(n,d,\lambda)$-graphs, i.e., $d$-regular graphs on $n$ vertices in which every eigenvalue of the adjacency matrix other than the largest is at most $\lambda$ in absolute value. This completes a long line of research on the Hamiltonicity of sparse graphs, and has many applications.

Joint work with R. Montgomery, D. Munhá Correia, A. Pokrovskiy and B. Sudakov.

Fri, 23 Feb 2024
14:30
C6

Flat from anti de Sitter - a Carrollian perspective

Prof Marios Petropoulos
(Ecole Polytechnique, Paris)
Abstract

In recent years, the theme of asymptotically flat spacetimes has come back to the fore, fueled by the discovery of gravitational waves and the growing interest in what flat holography could be. In this quest, the standard tools pertaining to asymptotically anti-de Sitter spacetimes have been insufficiently exploited. I will show how Ricci-flat spacetimes are generally reached as a limit of Einstein geometries and how they are in fact constructed by means of data defined on the conformal Carrollian boundary that is null infinity. These data, infinite in number, are obtained as the coefficients of the Laurent expansion of the energy-momentum tensor in powers of the cosmological constant. This approach puts this tensor back at the heart of the analysis, and at the same time reveals the versatile role of the boundary Cotton tensor. Both appear in the infinite hierarchy of flux-balance equations governing the gravitational dynamics.  
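
Schematically (our notation; the talk will make this precise), the boundary data arise as coefficients of a Laurent expansion of the boundary energy-momentum tensor in the cosmological constant $\Lambda$:

$$T_{\mu\nu}(\Lambda) \;=\; \sum_{k} T^{(k)}_{\mu\nu}\, \Lambda^{k},$$

with finitely many negative powers, and the Ricci-flat spacetime recovered in the limit $\Lambda \to 0$.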

Learning to adapt: rational personalization of adaptive therapy using deep reinforcement learning
Gallagher, K., Strobl, M., Park, D., Spendlin, F., Gatenby, R., Maini, P., Anderson, A. Cancer Research.

Starring Naomi Andrew, Jane Coons, Antonio Esposito, and Romain Ruzziconi

A panel discussion on making the most of conferences and the dos and don'ts of networking, with postdocs from across the Mathematical Institute.

L1, 4pm today

When we started putting our short research films on social media, we thought the applied maths films would be more popular than the pure maths films.

We were wrong.

Here's Nathan.
