Schauder estimates at nearly linear growth
Abstract
Schauder estimates are a classical tool in linear and nonlinear elliptic and parabolic PDEs, describing how the regularity of coefficients is reflected in the regularity of solutions. They are essentially perturbative in nature: they can be obtained by perturbing the estimates available for problems without coefficients. This paradigm works as long as one deals with uniformly elliptic equations. The nonuniformly elliptic case is a different story, and Schauder theory is no longer perturbative, as shown by counterexamples. In my talk, I will present a method that bypasses perturbative schemes and leads to Schauder estimates in the nonuniformly elliptic regime. I will concentrate on the case of nonuniformly elliptic functionals with nearly linear growth, also covering a borderline case of so-called double phase energies. Based on recent joint work with Cristiana De Filippis (Parma).
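For orientation, a standard model functional with nearly linear growth is the $L\log L$-type energy, and a double phase variant modulates a second phase by a nonnegative coefficient $a(x)$; the following display is an illustrative example of this class, not necessarily the exact functional treated in the talk:

```latex
% Model functional with nearly linear growth (illustrative):
\[
  \mathcal{F}(u) \;=\; \int_{\Omega} |Du|\log(1+|Du|)\,dx,
\]
% and a double phase energy, where a(x) >= 0 switches between the two phases:
\[
  \mathcal{F}_a(u) \;=\; \int_{\Omega} \Bigl[\, |Du|\log(1+|Du|) \;+\; a(x)\,|Du|^{q} \,\Bigr]\,dx,
  \qquad q > 1.
\]
```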
Fractional Sobolev Isometric Immersions of Planar Domains
Abstract
We discuss $C^1$-regularity and developability of isometric immersions of flat domains into $\mathbb{R}^3$ enjoying a local fractional Sobolev $W^{1+s;2/s}$-regularity for $2/3 \leq s < 1$, generalising the known results on Sobolev (by Pakzad) and H\"{o}lder (by De Lellis--Pakzad) regimes. Ingredients of the proof include analysis of the weak Codazzi equations of isometric immersions, the study of $W^{1+s;2/s}$-gradient deformations with symmetric derivative and vanishing distributional Jacobian determinant, and the theory of compensated compactness. Joint work with M. Reza Pakzad and Armin Schikorra.
13:30
CDT in Mathematics of Random Systems June Workshop 2023
Abstract
1:30 Milena Vuletic
Simulation of Arbitrage-Free Implied Volatility Surfaces
We present a computationally tractable method for simulating arbitrage-free implied volatility surfaces. We illustrate how our method may be combined with a factor model based on historical SPX implied volatility data to generate dynamic scenarios for arbitrage-free implied volatility surfaces. Our approach conciliates static arbitrage constraints with a realistic representation of statistical properties of implied volatility co-movements.
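On a discrete grid of call prices, the static arbitrage constraints mentioned above reduce to standard monotonicity and convexity conditions (vertical, butterfly, and calendar spreads). The following is a minimal illustrative check of those textbook conditions, not the authors' method; the grid layout and function name are assumptions:

```python
import numpy as np

def is_statically_arbitrage_free(C, strikes, tol=1e-10):
    """Check standard static no-arbitrage conditions on a call-price grid.

    C[i, j]: price of a call with maturity T_i and strike K_j
    (maturities increasing down the rows, strikes increasing across columns).
    Conditions checked (illustrative, fixed rates/forward assumed):
      - vertical spread: prices non-increasing in strike,
      - butterfly spread: prices convex in strike,
      - calendar spread: prices non-decreasing in maturity.
    """
    C = np.asarray(C, dtype=float)
    K = np.asarray(strikes, dtype=float)

    # Vertical spreads: C must be non-increasing in strike.
    if np.any(np.diff(C, axis=1) > tol):
        return False
    # Butterfly spreads: second divided differences in strike must be >= 0.
    slopes = np.diff(C, axis=1) / np.diff(K)
    if np.any(np.diff(slopes, axis=1) < -tol):
        return False
    # Calendar spreads: C must be non-decreasing in maturity.
    if np.any(np.diff(C, axis=0) < -tol):
        return False
    return True
```

A simulated surface passing these grid checks is (statically) arbitrage-free up to discretization; a violated butterfly condition, for instance, corresponds to a negative implied risk-neutral density.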
2:00 Nicola Muca Cirone
Neural Signature Kernels
Motivated by the paradigm of reservoir computing, we consider randomly initialized controlled ResNets defined as Euler-discretizations of neural controlled differential equations (Neural CDEs), a unified architecture which encompasses both RNNs and ResNets. We show that in the infinite-width-depth limit and under proper scaling, these architectures converge weakly to Gaussian processes indexed on some spaces of continuous paths and with kernels satisfying certain partial differential equations (PDEs) varying according to the choice of activation function, extending the results of Hayou (2022); Hayou & Yang (2023) to the controlled and homogeneous case. In the special homogeneous case where the activation is the identity, we show that the equation reduces to a linear PDE and the limiting kernel agrees with the signature kernel of Salvi et al. (2021a). We name this new family of limiting kernels neural signature kernels. Finally, we show that in the infinite-depth regime, finite-width controlled ResNets converge in distribution to Neural CDEs with random vector fields which, depending on whether the weights are shared across layers, are either time-independent and Gaussian or behave like a matrix-valued Brownian motion.
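An Euler-discretized controlled ResNet of the kind described above can be sketched in a few lines; this is an illustrative toy implementation under assumed shapes, scalings, and weight sharing, not the authors' code:

```python
import numpy as np

def controlled_resnet(x_path, width=64, activation=np.tanh, seed=0):
    """Euler discretization of a neural controlled differential equation.

    x_path: array of shape (L+1, d) -- a discretized control path.
    Hidden-state update, with one random weight matrix W_j per path
    channel (shared across layers in this sketch):
        h_{k+1} = h_k + sum_j activation(W_j @ h_k) * (x_{k+1, j} - x_{k, j})
    Returns the final hidden state, shape (width,).
    """
    rng = np.random.default_rng(seed)
    num_steps_plus_1, d = x_path.shape
    # Random initialization scaled by 1/sqrt(width), as in mean-field setups.
    W = rng.normal(0.0, 1.0 / np.sqrt(width), size=(d, width, width))
    h = np.ones(width) / np.sqrt(width)  # arbitrary fixed initial state
    for k in range(num_steps_plus_1 - 1):
        dx = x_path[k + 1] - x_path[k]  # path increment, shape (d,)
        h = h + sum(activation(W[j] @ h) * dx[j] for j in range(d))
    return h
```

Driving the network with a time-augmented path (so one channel is time itself) recovers a plain ResNet; weights drawn independently per layer instead of shared would correspond to the Brownian-vector-field regime mentioned in the abstract.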
2:30 Break
2:50-3:50 Renyuan Xu, Assistant Professor, University of Southern California
Reversible and Irreversible Decisions under Costly Information Acquisition
Many real-world analytics problems involve two significant challenges: estimation and optimization. Due to the typically complex nature of each challenge, the standard paradigm is estimate-then-optimize. By and large, machine learning or human learning tools are intended to minimize estimation error and do not account for how the estimations will be used in the downstream optimization problem (such as decision-making problems). In contrast, there is a line of literature in economics focusing on exploring the optimal way to acquire information and learn dynamically to facilitate decision-making. However, most of the decision-making problems considered in this line of work are static (i.e., one-shot) problems which over-simplify the structures of many real-world problems that require dynamic or sequential decisions.
As a preliminary attempt to introduce more complex downstream decision-making problems after learning and to investigate how downstream tasks affect the learning behavior, we consider a simple example where a decision maker (DM) chooses between two products: an established Product A with known return and a newly introduced Product B with an unknown return. The DM will make an initial choice between A and B after learning about Product B for some time. Importantly, our framework allows the DM to switch to Product A later on at a cost if Product B is selected as the initial choice. We establish the general theory and investigate the analytical structure of the problem through the lens of the Hamilton-Jacobi-Bellman equation and viscosity solutions. We then discuss how model parameters and the opportunity to reverse affect the learning behavior of the DM.
This is based on joint work with Thaleia Zariphopoulou and Luhao Zhang from UT Austin.
17:00
Beyond the Fontaine-Wintenberger theorem
Abstract
Given a perfectoid field, we find an elementary extension and a henselian defectless valuation on it, whose value group is divisible and whose residue field is an elementary extension of the tilt. This specializes to the almost purity theorem over perfectoid valuation rings and Fontaine-Wintenberger. Along the way, we prove an Ax-Kochen/Ershov principle for certain deeply ramified fields, which also uncovers some new model-theoretic phenomena in positive characteristic. Notably, we get that the perfect hull of $\mathbb{F}_p(t)^h$ is an elementary substructure of the perfect hull of $\mathbb{F}_p((t))$. Joint work with Franziska Jahnke.