From Twistor Actions to MHV Diagrams
Boels, R., Mason, L., Skinner, D. (05 Feb 2007)
Lie Polynomials and a Twistorial Correspondence for Amplitudes
Frost, H., Mason, L. (09 Dec 2019)
Recursion and worldsheet formulae for 6d superamplitudes
Albonico, G., Geyer, Y., Mason, L. (16 Jan 2020)
Gluon scattering on self-dual radiative gauge fields
Adamo, T., Mason, L., Sharma, A. (28 Oct 2020)
A Lie bracket for the momentum kernel
Frost, H., Mafra, C., Mason, L. (01 Dec 2020)
Ambitwistor Strings in Six and Five Dimensions
Geyer, Y., Mason, L., Skinner, D. (30 Dec 2020)
Twistor sigma models for quaternionic geometry and graviton scattering
Adamo, T., Mason, L., Sharma, A. (31 Mar 2021)
Tue, 07 May 2024
15:00
L6

Oka manifolds and their role in complex analysis and geometry

Franc Forstnerič
Abstract

Oka theory is about the validity of the h-principle in complex analysis and geometry. In this expository lecture, I will trace its main developments, from the classical results of Kiyoshi Oka (1939) and Hans Grauert (1958), through the seminal work of Mikhail Gromov (1989), to the introduction of Oka manifolds (2009) and the present state of knowledge. The lecture does not assume any prior exposure to this theory.

Model Integration in Computational Biology: The Role of Reproducibility, Credibility and Utility
Karr, J., Malik-Sheriff, R., Osborne, J., Gonzalez-Parra, G., Forgoston, E., Bowness, R., Liu, Y., Thompson, R., Garira, W., Barhak, J., Rice, J., Torres, M., Dobrovolny, H., Tang, T., Waites, W., Glazier, J., Faeder, J., Kulesza, A. Frontiers in Systems Biology, vol. 2, 822606 (07 Jan 2022)
Mon, 06 May 2024

14:00 - 15:00
Lecture Room 3

Bayesian Interpolation with Linear and Shaped Neural Networks

Boris Hanin
(Princeton University)
Abstract

This talk, based on joint work with Alexander Zlokapa, concerns Bayesian inference with neural networks. 

I will begin by presenting a result giving exact non-asymptotic formulas for Bayesian posteriors in deep linear networks. A key takeaway is the appearance of a novel scaling parameter, # data × depth / width, which controls the effective depth of the posterior in the limit of large model and dataset size.

I will then explain recent results on the role of this effective depth parameter in Bayesian inference with deep non-linear neural networks that have shaped activations.
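Under one natural reading of the scaling parameter mentioned in the abstract (notation is mine, not from the talk), with n training samples, network depth L, and layer width w, the effective-depth parameter is

```latex
\lambda \;=\; \frac{n\,L}{w}
```

so, roughly, taking width to infinity at fixed depth and data sends λ → 0 and recovers the familiar Gaussian-process-like infinite-width regime, while larger λ signals genuinely deep posterior behaviour.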
