Four participants at It All Adds Up working on maths problems
It All Adds Up: three separate one-day conferences for female and non-binary pupils interested in maths, with workshops and talks to inspire students.
The Mean-Field Ensemble Kalman Filter: Near-Gaussian Setting
Carrillo, J; Hoffmann, F; Stuart, A; Vaes, U. SIAM Journal on Numerical Analysis, volume 62, issue 6, 2549-2587 (31 Dec 2024)

Hello all, we are the Mirzakhani Society, a group for all the women and non-binary mathematicians at Oxford. We run lots and lots of relaxed events, from pizza nights to career conferences.

Thu, 05 Dec 2024
17:00

Model-theoretic havens for extremal and additive combinatorics

Mervyn Tong
(Leeds University)
Abstract

Model-theoretic dividing lines have long been a source of tameness for various areas of mathematics, with combinatorics jumping on the bandwagon over the last decade or so. Szemerédi’s regularity lemma saw improvements in the realm of NIP, which were further refined in the subrealms of stability and distality. We show how relations satisfying the distal regularity lemma enjoy improved bounds for Zarankiewicz’s problem. We then pivot to arithmetic regularity lemmas as pioneered by Green, for which NIP and stability also imply improvements. Unsettled by the absence of distality in this picture, we discuss the role of distality in additive combinatorics, appealing to our result connecting distality with arithmetic tameness.

Undergraduate students are invited to register their interest to join the Vice-Chancellor’s Colloquium on Climate in Hilary Term. Find out more and complete the expression of interest form before the deadline of midnight on Friday 22 November:

https://ox.ac.uk/vc-colloquium

 

SPARK 2024 is back, running from 29 November to 19 December 2024. As a reminder, this coding challenge is especially suited to first-year undergraduates from all STEM disciplines. There are lots of daily prizes up for grabs, and £1000 for the overall winner.

Thu, 28 Nov 2024
16:00
L4

Regurgitative Training in Finance: Generative Models for Portfolios

Adil Rengim Cetingoz
(Centre d'Economie de la Sorbonne)
Further Information

Please join us for refreshments outside the lecture room from 15:30.

Abstract
Simulation methods have always been instrumental in finance, but data-driven methods with minimal model specification, commonly referred to as generative models, have attracted increasing attention, especially after the success of deep learning in a broad range of fields. However, the adoption of these models in practice has not kept pace with the growing interest, probably due to the unique complexities and challenges of financial markets. This paper aims to contribute to a deeper understanding of the development, use and evaluation of generative models, particularly in portfolio and risk management. To this end, we begin by presenting theoretical results on the importance of initial sample size, and point out the potential pitfalls of generating far more data than originally available. We then highlight the inseparable nature of model development and the desired use case by touching on a very interesting paradox: that generic generative models inherently care less about what is important for constructing portfolios (at least the interesting ones, i.e. long-short). Based on these findings, we propose a pipeline for the generation of multivariate returns that meets conventional evaluation standards on a large universe of US equities while providing interesting insights into the stylized facts observed in asset returns and how a few statistical factors are responsible for their existence. Recognizing the need for more delicate evaluation methods, we suggest, through an example of mean-reversion strategies, a method designed to identify bad models for a given application based on regurgitative training, retraining the model using the data it has itself generated.
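The evaluation idea in the final sentence, regurgitative training, can be illustrated with a toy loop: fit a generative model, sample from it, refit on its own samples, and track a statistic the application cares about. This is only a hedged sketch of that loop, not the paper's method: the "model" here is a plain univariate Gaussian fit to heavy-tailed synthetic returns, and excess kurtosis is the stand-in statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(data):
    """Fit a toy Gaussian 'generative model': just (mean, std)."""
    return data.mean(), data.std()

def sample(params, n, rng):
    mu, sigma = params
    return rng.normal(mu, sigma, size=n)

def excess_kurtosis(x):
    """Sample excess kurtosis; zero for a Gaussian."""
    return ((x - x.mean()) ** 4).mean() / x.var() ** 2 - 3

# Stand-in for asset returns: Student-t draws, which have fat tails.
returns = rng.standard_t(df=5, size=5000)
orig_k = excess_kurtosis(returns)

# Regurgitative loop: repeatedly retrain the model on its own output.
params = fit(returns)
kurtoses = []
for generation in range(5):
    synthetic = sample(params, len(returns), rng)
    kurtoses.append(excess_kurtosis(synthetic))
    params = fit(synthetic)  # retrain on self-generated data
```

A tail-sensitive application would flag this model immediately: the original data has clearly positive excess kurtosis, while every regurgitated generation is near zero, i.e. the stylized fact is lost in one step and never recovered.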
 

 