The second in the series of Student Lectures that we are making publicly available this Autumn is from Vicky Neale. Vicky is one of our most popular lecturers, and this lecture is taken from her First Year Analysis course.

The course introduces students to a rigorous definition of convergence, allowing them to build on their previous understanding of sequences and series and to prove key results about convergence. This leads on to subsequent Analysis courses addressing the continuity, differentiability and integrability of functions.
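For readers meeting the material for the first time, the rigorous definition at the heart of the course is the standard epsilon-N criterion for convergence of a real sequence, which in LaTeX notation reads as follows.

    % A real sequence (a_n) converges to the limit L when every
    % tolerance epsilon > 0 is eventually respected by all terms.
    \[
      \lim_{n \to \infty} a_n = L
      \quad \Longleftrightarrow \quad
      \forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\;
      \forall n \ge N : \; |a_n - L| < \varepsilon .
    \]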

Fri, 20 Nov 2020
16:00
Virtual

Polarizations and Symmetries of T[M] theories

Du Pei
(Harvard)
Abstract

I will lead an informal discussion centered on discrete data that need to be specified when reducing 6d relative theories on an internal manifold M and how they determine symmetries of the resulting theory T[M].

Fernando Alday has been appointed Rouse Ball Professor of Mathematics in the University of Oxford. The Rouse Ball Professorship of Mathematics is one of the senior chairs in the Mathematics Department in Oxford; a sister chair exists in Cambridge. The two professorships were founded in 1927 by a bequest from the mathematician W. W. Rouse Ball.

Mon, 18 Jan 2021

16:00 - 17:00

Machine Learning for Mean Field Games

MATHIEU LAURIERE
(Princeton University)
Abstract

Mean field games (MFG) and mean field control problems (MFC) are frameworks to study Nash equilibria or social optima in games with a continuum of agents. These problems can be used to approximate competitive or cooperative situations with a large finite number of agents. They have found a broad range of applications, from economics to crowd motion, energy production and risk management. Scalable numerical methods are a key step towards concrete applications. In this talk, we propose several numerical methods for MFG and MFC. These methods are based on machine learning tools such as function approximation via neural networks and stochastic optimization. We provide numerical results and investigate the numerical analysis of these methods, proving bounds for the approximation scheme. If time permits, we will also discuss model-free methods based on extensions of the traditional reinforcement learning setting to the mean-field regime.
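To give a flavour of this style of method, here is a minimal sketch (illustrative only: the dynamics, cost function and network below are assumptions made for the example, not the speaker's actual scheme) in which a neural network parametrises a feedback control, a system of particles approximates the mean field, and the Monte Carlo cost is minimised by stochastic gradient descent:

    # Illustrative mean-field control sketch: a neural network gives a
    # feedback control alpha(t, x); particles approximate the mean field.
    # All modelling choices here are assumptions made for the example.
    import torch

    torch.manual_seed(0)

    control = torch.nn.Sequential(          # alpha_theta(t, x)
        torch.nn.Linear(2, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(control.parameters(), lr=1e-3)
    N, T, dt, sigma = 512, 20, 0.05, 0.3    # particles, steps, step size, noise

    for _ in range(200):
        x = torch.randn(N, 1)               # initial positions of the particles
        cost = torch.zeros(N, 1)
        for k in range(T):
            t = torch.full((N, 1), k * dt)
            a = control(torch.cat([t, x], dim=1))
            # Running cost: control effort plus attraction to the population mean.
            cost = cost + (0.5 * a**2 + 0.5 * (x - x.mean())**2) * dt
            # Euler-Maruyama step for dX = a dt + sigma dW.
            x = x + a * dt + sigma * dt**0.5 * torch.randn(N, 1)
        loss = cost.mean()                  # Monte Carlo estimate of the social cost
        opt.zero_grad()
        loss.backward()
        opt.step()

Here the empirical mean of the particles stands in for the mean-field term, and gradients flow through it, so the scheme optimises a cooperative (mean field control) objective.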

Mon, 25 Jan 2021

16:00 - 17:00

Open markets

DONGHAN KIM
(Columbia University)
Abstract

An open market is a subset of a larger equity market, composed of a fixed number of top-capitalization stocks. Though the number of stocks in the open market is fixed, their composition changes over time, as each company's rank by market capitalization fluctuates. When investment in a money market is also allowed, an open market resembles the entire “closed” equity market, in the sense that market viability (lack of arbitrage) is equivalent to the existence of a numéraire portfolio (one which cannot be outperformed). When access to the money market is prohibited, the class of admissible portfolios shrinks significantly in open markets; in such a setting, we discuss how to construct functionally generated stock portfolios and the concept of the universal portfolio.

This talk is based on joint work with Ioannis Karatzas.
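As a concrete illustration of a functionally generated portfolio in an open market, here is the classical entropy-weighted portfolio restricted to the top-n stocks by capitalization. This is a textbook example from stochastic portfolio theory, not necessarily the construction of the talk; the market caps below are made up for the example.

    # Entropy-weighted portfolio on an open market of the top-n stocks.
    # The generating function G(m) = -sum_i m_i log m_i is the classical
    # entropy example; weights are zero outside the open market.
    import numpy as np

    def entropy_weighted_portfolio(caps: np.ndarray, n: int) -> np.ndarray:
        """Portfolio weights over the top-n stocks by capitalization."""
        top = np.argsort(caps)[::-1][:n]      # indices of the top-n stocks
        m = caps[top] / caps[top].sum()       # market weights within the open market
        g = -m * np.log(m)                    # pointwise entropy contributions
        pi = np.zeros_like(caps, dtype=float)
        pi[top] = g / g.sum()                 # weights generated by G
        return pi

    caps = np.array([500.0, 300.0, 120.0, 60.0, 20.0])  # hypothetical market caps
    print(entropy_weighted_portfolio(caps, n=3))

The open-market constraint enters only through the restriction and renormalisation of the market weights to the top-n stocks before the generating function is applied.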


Mon, 07 Dec 2020

16:00 - 17:00

"Efficient approximation of high-dimensional functions with neural networks”

PATRICK CHERIDITO
(ETH Zurich)
Abstract

We develop a framework for showing that neural networks can overcome the curse of dimensionality in different high-dimensional approximation problems. Our approach is based on the notion of a catalog network, which is a generalization of a standard neural network in which the nonlinear activation functions can vary from layer to layer as long as they are chosen from a predefined catalog of functions. As such, catalog networks constitute a rich family of continuous functions. We show that under appropriate conditions on the catalog, catalog networks can efficiently be approximated with ReLU-type networks and provide precise estimates on the number of parameters needed for a given approximation accuracy. As special cases of the general results, we obtain different classes of functions that can be approximated with ReLU networks without the curse of dimensionality. 
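To make the notion concrete, here is a minimal sketch of a catalog network: a feedforward network in which each hidden layer's activation is drawn from a predefined catalog of functions. The catalog and the layer widths below are illustrative assumptions, not those of the paper.

    # Minimal "catalog network" sketch: activations may differ from layer
    # to layer, each chosen from a fixed catalog of functions.
    import torch

    CATALOG = {"relu": torch.relu, "tanh": torch.tanh, "sigmoid": torch.sigmoid}

    class CatalogNetwork(torch.nn.Module):
        def __init__(self, widths, activations):
            super().__init__()
            assert len(activations) == len(widths) - 2  # one per hidden layer
            self.layers = torch.nn.ModuleList(
                torch.nn.Linear(a, b) for a, b in zip(widths[:-1], widths[1:])
            )
            self.activations = [CATALOG[name] for name in activations]

        def forward(self, x):
            for layer, act in zip(self.layers[:-1], self.activations):
                x = act(layer(x))        # hidden layer with its catalog activation
            return self.layers[-1](x)    # affine output layer

    # A network on R^10 with two hidden layers using different activations.
    net = CatalogNetwork([10, 64, 64, 1], ["relu", "tanh"])
    print(net(torch.randn(4, 10)).shape)   # torch.Size([4, 1])

A standard ReLU network is recovered as the special case in which every hidden layer draws the same ReLU activation from the catalog.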

A preprint is here: https://arxiv.org/abs/1912.04310
