Wed, 31 May 2023
17:00
Lecture Theatre 1, Mathematical Institute, Radcliffe Observatory Quarter, Woodstock Road, OX2 6GG

A world from a sheet of paper - Tadashi Tokieda

Tadashi Tokieda
(Stanford University)
Further Information

Starting from just a sheet of paper, by folding, stacking, crumpling, sometimes tearing, Tadashi will explore a diversity of phenomena, from magic tricks and geometry through elasticity and the traditional Japanese art of origami to medical devices and an ‘h-principle’. Much of the show consists of table-top demonstrations, which you can try later with friends and family.

So, take a sheet of paper...

Tadashi Tokieda is a professor of mathematics at Stanford.  He grew up as a painter in Japan, became a classical philologist (not to be confused with philosopher) in France and, having earned a PhD in pure mathematics from Princeton, has been an applied mathematician in England and the US; all in all, he has lived in eight countries so far.  Tadashi is very active in mathematical outreach, notably with the African Institute for Mathematical Sciences. You'll find him on Numberphile's YouTube channel.

Please email @email to register.

The Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Tue, 07 Jun 2022

16:30 - 17:30
Virtual

Thresholds

Jinyoung Park
(Stanford University)
Further Information

Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.

Abstract

Thresholds for increasing properties of random structures are a central concern in probabilistic combinatorics and related areas. In 2006, Kahn and Kalai conjectured that for any nontrivial increasing property on a finite set, its threshold is never far from its "expectation-threshold," which is a natural (and often easy to calculate) lower bound on the threshold. In this talk, I will present recent progress on this topic. Based on joint work with Huy Tuan Pham.
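
For orientation (a standard formulation, not quoted from the talk): writing $p_c(\mathcal{F})$ for the threshold of a nontrivial increasing family $\mathcal{F}$ of subsets of a finite set, $q(\mathcal{F})$ for its expectation-threshold, and $\ell(\mathcal{F})$ for the maximum size of a minimal element of $\mathcal{F}$, the Kahn-Kalai conjecture asserts that

\[
  p_c(\mathcal{F}) \;\le\; K \, q(\mathcal{F}) \, \log \ell(\mathcal{F})
\]

for a universal constant $K$; the recent progress mentioned above concerns a proof of this bound.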

Fri, 04 Feb 2022
16:00
N4.01

Gravity factorized

Jorrit Kruthoff
(Stanford University)
Further Information

It is also possible to join virtually via Teams.

Abstract

There are various aspects of the AdS/CFT correspondence that are rather mysterious. For example, how does the gravitational theory know about a discrete boundary spectrum, or how does it know that moments of the partition function factorize, given the existence of connected (wormhole) geometries? In this talk I will discuss some recent efforts with Andreas Blommaert and Luca Iliesiu on these two puzzles in two-dimensional dilaton gravities. These gravity theories are simple enough that we can understand and propose a resolution to the discreteness and factorization puzzles. I will show that a tiny but universal bilocal spacetime interaction in the bulk is enough to ensure factorization, whereas modifying the dilaton potential with tiny corrections gives a discrete boundary spectrum. We will discuss the meaning of these corrections and how they could be related to resolutions of the same puzzles in higher dimensions.
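
To make the factorization puzzle concrete (a schematic statement in standard notation, not quoted from the talk): for a single boundary theory with a discrete spectrum, the two-boundary partition function is simply a product,

\[
  Z(\beta_1, \beta_2) \;=\; Z(\beta_1)\, Z(\beta_2),
\]

whereas the bulk path integral naively also receives connected contributions,

\[
  Z_{\mathrm{grav}}(\beta_1, \beta_2) \;=\; Z(\beta_1)\, Z(\beta_2) \;+\; Z_{\mathrm{wormhole}}(\beta_1, \beta_2) \;+\; \dots ,
\]

and the puzzle is how the wormhole term can be compatible with factorization.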

Fri, 14 May 2021
16:00
Virtual

Leaps and bounds towards scale separation

Bruno De Luca
(Stanford University)
Abstract

In a broad class of gravity theories, the equations of motion for vacuum compactifications give a curvature bound on the Ricci tensor minus a multiple of the Hessian of the warping function. Using results in so-called Bakry-Émery geometry, I will show how to put rigorous general bounds on the KK scale in gravity compactifications in terms of the reduced Planck mass or the internal diameter.
If time permits, I will reexamine in this light the local behavior in type IIA for the class of supersymmetric solutions most promising for scale separation. It turns out that the local O6-plane behavior cannot be smoothed out as in other local examples; it generically turns into a formal partially smeared O4.
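
For orientation (standard definitions, with conventions that may differ from those used in the talk): with $f$ a suitable multiple of the warping function, the combination mentioned above is a Bakry-Émery-type Ricci tensor, which comes with a weighted Laplacian,

\[
  \mathrm{Ric}_f \;=\; \mathrm{Ric} \;+\; \nabla^2 f ,
  \qquad
  \Delta_f \;=\; \Delta \;-\; \nabla f \cdot \nabla .
\]

Roughly speaking, a positive lower bound $\mathrm{Ric}_f \ge \lambda\, g$ (or its $N$-parameter refinement) gives Myers-type diameter bounds and Lichnerowicz-type lower bounds on the first nonzero eigenvalue of $\Delta_f$, which is what controls the KK scale.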

Fri, 19 Feb 2021
16:00
Virtual

The statistical mechanics of near-extremal and near-BPS black holes

Luca Iliesiu
(Stanford University)
Abstract

An important open question in black hole thermodynamics concerns the existence of a "mass gap" between an extremal black hole and the lightest near-extremal state within a sector of fixed charge. In this talk, I will discuss how to reliably compute the partition function of 4d Reissner-Nordstrom near-extremal black holes at temperature scales comparable to the conjectured gap. I will show that the density of states at fixed charge does not exhibit a gap in the simplest non-supersymmetric gravitational theories; rather, at the expected gap energy scale, we see a continuum of states whose meaning we will extensively discuss. Finally, I will present a similar computation for nearly-BPS black holes in 4d N=2 supergravity. As opposed to their non-supersymmetric counterparts, such black holes do in fact exhibit a gap consistent with various string theory predictions.
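
For context, a standard formula from this line of work (a sketch; conventions and normalizations may differ from those in the talk): at low temperatures the near-extremal throat is governed by the Schwarzian theory, whose density of states above extremality at fixed charge is

\[
  \rho(E) \;\propto\; \sinh\!\big( 2\pi \sqrt{2 C E} \,\big) ,
\]

where $C$ sets the linear-in-temperature specific heat. This is a continuum extending down to $E = 0$, rather than a spectrum with a gap at the conjectured scale $E_{\mathrm{gap}} \sim 1/C$; the supersymmetric case discussed above behaves differently.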

Thu, 18 Feb 2021

17:00 - 18:00
Virtual

Quantitative inviscid limits and universal shock formation in scalar conservation laws

Cole Graham
(Stanford University)
Further Information

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact Benjamin Fehrman.

Abstract

We explore one facet of an old problem: the approximation of hyperbolic conservation laws by viscous counterparts. While qualitative convergence results are well-known, quantitative rates for the inviscid limit are less common. In this talk, we consider the simplest case: a one-dimensional scalar strictly-convex conservation law started from "generic" smooth initial data. Using a matched asymptotic expansion, we quantitatively control the inviscid limit up to the time of first shock. We conclude that the inviscid limit has a universal character near the first shock. This is joint work with Sanchit Chaturvedi.
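
To fix notation (a standard setup, not quoted from the talk), the comparison is between the viscous and inviscid problems

\[
  \partial_t u^{\nu} + \partial_x f(u^{\nu}) \;=\; \nu\, \partial_x^2 u^{\nu},
  \qquad
  \partial_t u + \partial_x f(u) \;=\; 0,
\]

with the same smooth initial data $u_0$ and $f$ strictly convex. Characteristics of the inviscid equation first cross, and the first shock forms, at time

\[
  T_* \;=\; -\,\frac{1}{\min_x \partial_x \big( f'(u_0(x)) \big)}
\]

(assuming the minimum is negative), and the quantitative question is the rate at which $u^{\nu} \to u$ as $\nu \to 0$ up to this time.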

Thu, 21 Jan 2021
14:00
Virtual

Domain specific languages for convex optimization

Stephen Boyd
(Stanford University)
Abstract

Specialized languages for describing convex optimization problems, and associated parsers that automatically transform them to canonical form, have greatly increased the use of convex optimization in applications. These systems allow users to rapidly prototype applications based on solving convex optimization problems, as well as generate code suitable for embedded applications. In this talk I will describe the general methods used in such systems.
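
As a concrete illustration of the kind of system described above, here is a small example in CVXPY, one such domain-specific language co-developed by the speaker. The problem and data are made up for illustration; they are not drawn from the talk.

# Write a convex problem in natural mathematical form; the DSL's parser
# canonicalizes it to a cone program and hands it to a numerical solver.
import cvxpy as cp
import numpy as np

np.random.seed(0)
m, n = 30, 10
A = np.random.randn(m, n)          # made-up data for a least-squares fit
b = np.random.randn(m)

x = cp.Variable(n)                 # decision variable
objective = cp.Minimize(cp.sum_squares(A @ x - b))
constraints = [x >= 0, cp.sum(x) == 1]
problem = cp.Problem(objective, constraints)

problem.solve()                    # canonicalization + solver call happen here
print("status:", problem.status)
print("optimal value:", problem.value)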

 


A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.

Thu, 17 Sep 2020

16:00 - 17:00
Virtual

On Wasserstein projections

Jose Blanchet
(Stanford University)
Abstract

We study the minimum Wasserstein distance from the empirical measure to a space of probability measures satisfying linear constraints. This statistic can naturally be used in a wide range of applications, for example optimally choosing uncertainty sizes in distributionally robust optimization, optimal regularization, and testing statistical properties such as fairness and martingality. We will discuss duality results, which recover the celebrated Kantorovich-Rubinstein duality when the manifold is sufficiently rich, and the behavior of the associated test statistics as the sample size increases. We illustrate how this relaxation can beat the statistical curse of dimensionality often associated with empirical Wasserstein distances.

The talk builds on joint work with S. Ghosh, Y. Kang, K. Murthy, M. Squillante, and N. Si.
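
As a hedged sketch of the statistic described above (the notation is mine, not the speakers'): given a sample with empirical measure $\mu_n$ and a constraint set cut out by conditions that are linear in the measure, the projection distance is

\[
  R_n \;=\; \min\Big\{\, W(\mu_n, \mu) \;:\; \mu \in \mathcal{P}(\mathcal{X}),\ \int f_i \, d\mu \le c_i,\ i = 1, \dots, k \,\Big\},
\]

where $W$ is a Wasserstein distance and the constraints encode the property being tested, for example moment, fairness or martingale-type conditions.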

Tue, 08 Oct 2019
14:00
L2

Traces of Class/Cross-Class Structure Pervade Deep Learning Spectra

Vardan Papyan
(Stanford University)
Abstract


Numerous researchers recently applied empirical spectral analysis to the study of modern deep learning classifiers. We identify and discuss an important formal class/cross-class structure and show how it lies at the origin of the many visually striking features observed in deepnet spectra, some of which were reported in recent articles and others unveiled here for the first time. These include spectral outliers and small but distinct bumps often seen beyond the edge of a "main bulk". The structure we identify organizes the coordinates of deepnet features and back-propagated errors, indexing them as an NxC or NxCxC array. Such arrays can be indexed by a two-tuple (i,c) or a three-tuple (i,c,c'), where i runs across the indices of the train set; c runs across the class indices and c' runs across the cross-class indices. This indexing naturally induces C class means, each obtained by averaging over the indices i and c' for a fixed class c. The same indexing also naturally defines C^2 cross-class means, each obtained by averaging over the index i for a fixed class c and a cross-class c'. We develop a formal process of spectral attribution, which is used to show the outliers are attributable to the C class means; the small bump next to the "main bulk" is attributable to between-cross-class covariance; and the "main bulk" is attributable to within-cross-class covariance. Formal theoretical results validate our attribution methodology.
We show how the effects of the class/cross-class structure permeate not only the spectra of deepnet features and backpropagated errors, but also the gradients, Fisher Information matrix and Hessian, whether these are considered in the context of an individual layer or the concatenation of them all. The Kronecker or Khatri-Rao product of the class means in the features and the class/cross-class means in the backpropagated errors approximates the class/cross-class means in the gradients. These means of gradients then create C and C^2 outliers in the spectrum of the Fisher Information matrix, which is the second moment of these gradients. The outliers in the Fisher Information matrix spectrum then create outliers in the Hessian spectrum. We explain the significance of this insight by proposing a correction to KFAC, a well known second-order optimization algorithm for training deepnets.
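
A small NumPy sketch of the bookkeeping behind the class means and cross-class means described above (array shapes follow the abstract's indexing; the feature dimension D and the random data are placeholders, and this is not the authors' code):

# Back-propagated errors indexed as (i, c, c') with a feature vector of
# dimension D attached to each entry, i.e. an array of shape (N, C, C, D).
import numpy as np

N, C, D = 1000, 10, 64                      # examples, classes, feature dim (illustrative)
errors = np.random.randn(N, C, C, D)        # placeholder for back-propagated errors

# C class means: average over the example index i and the cross-class index c'
# for each fixed class c.
class_means = errors.mean(axis=(0, 2))      # shape (C, D)

# C^2 cross-class means: average over i for each fixed pair (c, c').
cross_class_means = errors.mean(axis=0)     # shape (C, C, D)

print(class_means.shape, cross_class_means.shape)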

Wed, 05 Sep 2018

17:00 - 18:00
L1

Persi Diaconis - Chance and Evidence

Persi Diaconis
(Stanford University)
Abstract

In this lecture Persi Diaconis will take a look at some of our most primitive images of chance - flipping a coin, rolling a roulette wheel and shuffling cards - and via a little bit of mathematics (and a smidgen of physics) show that sometimes things are not very random at all. Indeed, chance is sometimes confused with frequency, and this confusion carries over to a confusion between chance and evidence. All of which explains our wild misuse of probability and statistical models.

Persi Diaconis is world-renowned for his study of mathematical problems involving randomness and randomisation. He is the co-author of 'Ten Great Ideas about Chance' (2017) and is the Mary V. Sunseri Professor of Statistics and Mathematics at Stanford University.

Please email @email to register.

Watch live:

https://www.facebook.com/OxfordMathematics
https://livestream.com/oxuni/PersiDiaconis

The Oxford Mathematics Public Lectures are generously supported by XTX Markets.
