17:00
Can we truly understand by counting? - Hugo Duminil-Copin
Hugo will illustrate how counting can shed light on the behaviour of complex physical systems, while simultaneously revealing the need to sometimes go beyond what numbers tell us in order to unveil all the mysteries of the world around us.
Hugo Duminil-Copin is a French mathematician recognised for his groundbreaking work in probability theory and mathematical physics. He was appointed full professor at the University of Geneva in 2014 and since 2016 has also been a permanent professor at the Institut des Hautes Études Scientifiques (IHES) in France. In 2022 he was awarded the Fields Medal, the highest distinction in mathematics.
Please email @email to register to attend in person.
The lecture will be broadcast on the Oxford Mathematics YouTube Channel on Thursday 20 February at 5-6pm and any time after (no need to register for the online version).
The Oxford Mathematics Public Lectures are generously supported by XTX Markets.
15:30
Stochastic wave equations with constraints: well-posedness and Smoluchowski-Kramers diffusion approximation
Abstract
I will discuss the well-posedness of a class of stochastic damped second-order-in-time evolution equations in Hilbert spaces, subject to the constraint that the solution lies on the unit sphere. A specific example is provided by the stochastic damped wave equation in a bounded domain of a $d$-dimensional Euclidean space, endowed with Dirichlet boundary conditions, with the added constraint that the $L^2$-norm of the solution is equal to one. We introduce a small mass $\mu>0$ in front of the second-order derivative in time and examine the validity of the Smoluchowski-Kramers diffusion approximation. We demonstrate that, in the small mass limit, the solution converges to the solution of a stochastic parabolic equation subject to the same constraint. We further show that an extra noise-induced drift emerges which, notably, does not coincide with the Stratonovich-to-It\^{o} correction term. This talk is based on joint research with S. Cerrai (Maryland), hopefully to be published in Communications in Mathematical Physics.
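As a notational sketch of the setting described above (the symbols $u_\mu$, $\lambda$, and $\dot{W}$ are assumptions for illustration, not taken from the talk), the small-mass constrained problem can be written as
$$
\mu\,\partial_t^2 u_\mu = \Delta u_\mu - \partial_t u_\mu + \lambda(u_\mu)\,u_\mu + \dot{W}, \qquad \|u_\mu(t)\|_{L^2} = 1,
$$
with Dirichlet boundary conditions, where $\lambda(u_\mu)$ is a Lagrange multiplier enforcing the sphere constraint. In the Smoluchowski-Kramers limit $\mu \to 0$ the inertial term $\mu\,\partial_t^2 u_\mu$ formally drops out, leaving a constrained stochastic parabolic equation, together with the extra noise-induced drift mentioned in the abstract.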
16:00
Rank-based models with listings and delistings: theory and calibration
Abstract
Rank-based models for equity markets are reduced-form models where the asset dynamics depend on the rank that the asset occupies in the investment universe. Such models are able to capture certain stylized macroscopic properties of equity markets, such as stability of the capital distribution curve and collision rates of stock rank switches. However, when calibrated to real equity data the models possess undesirable features such as an "Atlas stock" effect; namely, the smallest security has an unrealistically large drift. Recently, Campbell and Wong (2024) identified that listings and delistings (i.e. entrances and exits) of securities in the market are important drivers for the stability of the capital distribution curve. In this work we develop a framework for rank-based models with listings and delistings and calibrate them to data. By incorporating listings and delistings, the calibration procedure no longer leads to "Atlas stock" behaviour. Moreover, by studying an appropriate "local model", focusing on a specific target rank, we are able to connect collision rates with a notion of particle density, which is more stable and easier to estimate from data than the collision rates. The calibration results are supported by novel theoretical developments such as a new master formula for functional generation of portfolios in this setting. This talk is based on joint work in progress with Martin Larsson and Licheng Zhang.
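For readers unfamiliar with the class of models above, a standard first-order (Atlas-type) rank-based diffusion takes the form
$$
d\log X_i(t) = g_{r_t(i)}\,dt + \sigma_{r_t(i)}\,dW_i(t), \qquad i = 1,\dots,n,
$$
where $r_t(i)$ denotes the rank of asset $i$ by capitalization at time $t$, and the drift and volatility coefficients $g_k$, $\sigma_k$ depend only on that rank. This is the textbook form of such models, sketched here for orientation; the talk's framework with listings and delistings extends beyond it.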
16:00
First-best implementation in dynamic adverse selection models with news
Abstract
This paper shows that a simple sale contract with a collection of options implements the full-information first-best allocation in a variety of continuous-time dynamic adverse selection settings with news. Our model includes as special cases most models in the literature. The implementation result holds regardless of whether news is public (i.e., contractible) or privately observed by the buyer, and it does not require deep pockets on either side of the market. An implication of our implementation result is that, irrespective of the assumptions on the game played, no agent waits for news before trading in such models. The options here do not play a hedging role and are, thus, not priced using a no-arbitrage argument. Rather, they are priced using a game-theoretic approach.
14:15
Tame fundamental groups of rigid spaces
Abstract
The fundamental group of a complex variety is finitely presented. The talk will survey algebraic variants (in fact, distant corollaries) of this fact, in the context of variants of the étale fundamental group. We will then zoom in on "tame" étale fundamental groups of p-adic analytic spaces. Our main result is that this group is (topologically) finitely generated (for a quasi-compact and quasi-separated rigid space over an algebraically closed field). The proof uses logarithmic geometry beyond its usual scope of finitely generated monoids to (eventually) reduce the problem to the more classical one of finite generation of tame fundamental groups of algebraic varieties over the residue field. This is joint work with Katharina Hübner, Marcin Lara, and Jakob Stix.
How to warm-start your unfolding network
Abstract
We present a new ensemble framework for boosting the performance of overparameterized unfolding networks solving the compressed sensing problem. We combine a state-of-the-art overparameterized unfolding network with a continuation technique to warm-start a crucial quantity of the said network's architecture; we coin the resulting continued network C-DEC. Moreover, for training and evaluating C-DEC, we incorporate the log-cosh loss function, which behaves quadratically near zero and linearly for large arguments. Finally, we numerically assess C-DEC's performance on real-world images. Results showcase that the combination of continuation with the overparameterized unfolded architecture, trained and evaluated with the chosen loss function, yields smoother loss landscapes and improved reconstruction and generalization performance of C-DEC, consistently across all datasets.
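The log-cosh loss mentioned above is a standard objective; as a self-contained illustration (not the authors' code), the sketch below shows a numerically stable implementation and its two regimes: quadratic near zero, linear for large residuals.

```python
import math

def log_cosh(x: float) -> float:
    """Numerically stable log(cosh(x)).

    cosh(x) overflows for large |x|, so we use the identity
    log(cosh(x)) = |x| + log1p(exp(-2|x|)) - log(2).
    """
    ax = abs(x)
    return ax + math.log1p(math.exp(-2.0 * ax)) - math.log(2.0)

# Quadratic regime: for small x, log(cosh(x)) ~ x**2 / 2 (like MSE).
# Linear regime: for large |x|, log(cosh(x)) ~ |x| - log(2) (like MAE).
```

This combination is why log-cosh is often preferred over plain MSE: it penalizes small residuals smoothly while remaining robust to large outliers.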
On Objective-Free High Order Methods
Abstract
An adaptive regularization algorithm for unconstrained nonconvex optimization is presented in which the objective function is never evaluated: only derivatives are used, and no prior knowledge of the Lipschitz constant is required. This algorithm belongs to the class of adaptive regularization methods, for which optimal worst-case complexity results are known in the standard framework where the objective function is evaluated. It is shown in this paper that these excellent complexity bounds are also valid for the new algorithm. Theoretical analyses of both the exact and stochastic cases are discussed, and new probabilistic conditions on tensor derivatives are proposed. Initial experiments on large-scale binary classification problems highlight the merits of our method.
14:15