Going All Round the Houses: Mathematics, Horoscopes and History before 1600
Abstract
To be a mathematicus in 15th- and 16th-century Europe often meant practising as an astrologer. Far from being an unwelcome obligation, or simply a means of paying the rent, astrology frequently represented a genuine form of mathematical engagement. This is most clearly seen by examining changing definitions of one of the key elements of horoscope construction: the astrological houses. These twelve houses are divisions of the zodiac circle, and their character fundamentally affects the significance of the planets which occupy them at any particular moment. While there were a number of competing systems for defining the houses, one system was standard throughout medieval Europe. However, the 16th century witnessed what John North referred to as a "minor revolution", as a different technique, first developed in the Islamic world but adopted and promoted by Johannes Regiomontanus, became increasingly prevalent. My paper reviews this shift in astrological practice and investigates the mathematical values it represents – from aesthetics and geometrical representation to efficiency and computational convenience.
Decentralised Finance and Automated Market Making: Optimal Execution and Liquidity Provision
Abstract
Automated Market Makers (AMMs) are a new type of trading venue that is revolutionising the way market participants interact. At present, the majority of AMMs are Constant Function Market Makers (CFMMs), where a deterministic trading function determines how markets are cleared. A distinctive characteristic of CFMMs is that execution costs for liquidity takers, and revenue for liquidity providers, are given by closed-form functions of price, liquidity, and transaction size. This gives rise to a new class of trading problems. We focus on Constant Product Market Makers with Concentrated Liquidity and show how to optimally take and make liquidity. We use Uniswap v3 data to study price and liquidity dynamics and to motivate the models.
For liquidity taking, we describe how to optimally trade a large position in an asset and how to execute statistical arbitrages based on market signals. For liquidity provision, we show how the wealth decomposes into a fee component and an asset component. Finally, we perform consecutive runs of in-sample estimation of model parameters and out-of-sample trading to showcase the performance of the strategies.
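The closed-form execution cost mentioned in the abstract can be sketched for a plain constant product pool (without concentrated liquidity); the reserves, fee, and trade size below are hypothetical and purely illustrative:

```python
# Minimal sketch of a Constant Product Market Maker (x * y = k), showing that
# the execution cost for a liquidity taker is a closed-form function of pool
# reserves and trade size. All numbers are hypothetical.

def swap_x_for_y(x, y, dx, fee=0.003):
    """Amount of Y received for dx units of X in an x*y=k pool with a fee."""
    k = x * y
    dx_after_fee = dx * (1 - fee)
    return y - k / (x + dx_after_fee)

x, y = 1_000.0, 2_000_000.0        # hypothetical reserves
dx = 10.0                          # trade size
dy = swap_x_for_y(x, y, dx)

marginal_price = y / x             # price of an infinitesimal trade
execution_price = dy / dx          # average price actually obtained
slippage = 1 - execution_price / marginal_price

print(f"received {dy:.2f} Y, execution price {execution_price:.2f}, "
      f"slippage {slippage:.4%}")
```

The gap between the marginal and execution price, i.e. the slippage, is exactly the closed-form execution cost referred to above; it grows with the trade size `dx` and shrinks with pool liquidity.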
Merton's optimal investment problem with jump signals
Abstract
This talk presents a new framework for Merton’s optimal investment problem which uses the theory of Meyer $\sigma$-fields to allow for signals that possibly warn the investor about impending jumps. With strategies no longer predictable, some care has to be taken to properly define wealth dynamics through stochastic integration. By means of dynamic programming, we solve the problem explicitly for power utilities. In a case study with Gaussian jumps, we find, for instance, that an investor may prefer to disinvest even after a mildly positive signal. Our setting also allows us to investigate whether, given the chance, it is better to improve signal quality or quantity and how much extra value can be generated from either choice.
This talk is based on joint work with Peter Bank.
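For context (a standard fact, not specific to the talk), in the classical continuous-time Merton problem without jumps, a power-utility investor with $U(x) = x^{1-\gamma}/(1-\gamma)$ holds a constant fraction of wealth in the risky asset:
\[
  \pi^* = \frac{\mu - r}{\gamma \sigma^2},
\]
where $\mu$ and $\sigma$ are the drift and volatility of the risky asset and $r$ is the riskless rate. The jump signals studied in the talk make the optimal investment rule depend on the signal, in contrast to this constant baseline.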
MF-OMO: An Optimization Formulation of Mean-Field Games
Abstract
The theory of mean-field games (MFGs) has recently experienced exponential growth. Existing analytical approaches for finding Nash equilibrium (NE) solutions of MFGs are, however, by and large restricted to contractive or monotone settings, or rely on the uniqueness of the NE. We propose a new mathematical paradigm for analyzing discrete-time MFGs without any of these restrictions. The key idea is to reformulate the problem of finding NE solutions in MFGs as an equivalent optimization problem, called MF-OMO (Mean-Field Occupation Measure Optimization), with bounded variables and simple convex constraints. The formulation builds on the classical reformulation of a Markov decision process as a linear program, adds a consistency constraint for MFGs in terms of occupation measures, and exploits the complementarity structure of the linear program. This equivalence framework enables finding multiple (and possibly all) NE solutions of MFGs with standard algorithms such as projected gradient descent, with convergence guarantees under appropriate conditions. In particular, analyzing MFGs with linear rewards and mean-field-independent dynamics reduces to solving a finite number of linear programs, and is hence solvable in finite time. This optimization reformulation of MFGs can be extended to variants of MFGs such as personalized MFGs.
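A toy illustration (not taken from the talk) of the classical building block that MF-OMO extends: in a discounted Markov decision process, a policy induces an occupation measure $d(s,a)$ satisfying linear flow constraints, and its value is the linear functional $\langle r, d\rangle$. All numbers below are hypothetical.

```python
# Occupation-measure view of a tiny discounted MDP (2 states, 2 actions).
S, A, gamma = 2, 2, 0.9
mu0 = [1.0, 0.0]                          # initial state distribution
P = [[[0.8, 0.2], [0.3, 0.7]],            # P[s][a][s']: transition kernel
     [[0.5, 0.5], [0.1, 0.9]]]
r = [[1.0, 0.0], [0.0, 2.0]]              # r[s][a]: rewards
pi = [[0.5, 0.5], [0.2, 0.8]]             # a fixed stochastic policy

# Discounted state mass under pi: fixed point of m = mu0 + gamma * P_pi^T m.
m = mu0[:]
for _ in range(1000):
    m = [mu0[sp] + gamma * sum(m[s] * pi[s][a] * P[s][a][sp]
                               for s in range(S) for a in range(A))
         for sp in range(S)]

# Occupation measure d(s, a) and the linear flow constraints it satisfies:
# sum_a d(s', a) = mu0(s') + gamma * sum_{s,a} d(s, a) P(s' | s, a).
d = [[m[s] * pi[s][a] for a in range(A)] for s in range(S)]
for sp in range(S):
    lhs = sum(d[sp][a] for a in range(A))
    rhs = mu0[sp] + gamma * sum(d[s][a] * P[s][a][sp]
                                for s in range(S) for a in range(A))
    assert abs(lhs - rhs) < 1e-8

# The value of pi is linear in d, which is what makes the LP view possible.
value = sum(d[s][a] * r[s][a] for s in range(S) for a in range(A))
print(f"discounted value of pi: {value:.4f}")
```

MF-OMO, as described in the abstract, adds the mean-field consistency constraint on top of constraints of this kind and exploits the complementarity structure of the resulting linear program.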
Scalable Second-order Tensor-Based Methods for Unconstrained Non-convex Optimization
Regularization by inexact Krylov methods with applications to blind deblurring
Abstract
In this talk I will present a new class of algorithms for separable nonlinear inverse problems based on inexact Krylov methods. In particular, I will focus on semi-blind deblurring applications. In this setting, inexactness stems from the uncertainty in the parameters defining the blur, which are computed throughout the iterations. After giving a brief overview of the theoretical properties of these methods, as well as strategies to monitor the amount of inexactness that can be tolerated, the performance of the algorithms will be shown through numerical examples. This is joint work with Silvia Gazzola (University of Bath).
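As background, a minimal sketch of ordinary (exact) Krylov regularization for deblurring: CGLS (conjugate gradients on the normal equations $A^TA x = A^Tb$) applied to a small 1-D problem, where early stopping acts as the regularizer. The talk's inexact variants additionally update the uncertain blur parameters during the iterations; that refinement is not reproduced here, and the blur model and problem sizes below are made up for illustration.

```python
import math
import random

n, sigma = 40, 2.0
# Row-normalized Gaussian blur matrix (hypothetical test problem).
A = [[math.exp(-(i - j) ** 2 / (2 * sigma ** 2)) for j in range(n)]
     for i in range(n)]
A = [[v / sum(row) for v in row] for row in A]

x_true = [1.0 if n // 4 <= i < n // 2 else 0.0 for i in range(n)]
random.seed(0)
b = [sum(A[i][j] * x_true[j] for j in range(n)) + 1e-3 * random.gauss(0, 1)
     for i in range(n)]                      # blurred, noisy data

def mv(M, v):       # M v
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def mvT(M, v):      # M^T v
    return [sum(M[i][j] * v[i] for i in range(len(M))) for j in range(len(M[0]))]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# CGLS iteration; the iteration count is the regularization parameter.
x = [0.0] * n
res_vec = b[:]                               # residual b - A x
s = mvT(A, res_vec)
p = s[:]
gamma_old = dot(s, s)
for _ in range(15):
    q = mv(A, p)
    alpha = gamma_old / dot(q, q)
    x = [xi + alpha * pi for xi, pi in zip(x, p)]
    res_vec = [ri - alpha * qi for ri, qi in zip(res_vec, q)]
    s = mvT(A, res_vec)
    gamma_new = dot(s, s)
    p = [si + (gamma_new / gamma_old) * pi for si, pi in zip(s, p)]
    gamma_old = gamma_new

res = math.sqrt(dot(res_vec, res_vec))
err = math.sqrt(sum((xi - ti) ** 2 for xi, ti in zip(x, x_true)) / n)
print(f"residual norm {res:.2e}, RMS error vs truth {err:.3f}")
```

In the semi-blind setting of the talk, the matrix $A$ itself is only known inexactly, which is why monitoring the tolerable amount of inexactness per iteration matters.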
Some recent developments in high order finite element methods for incompressible flow
Abstract
Computing functions of matrices via composite rational functions
Abstract
Most algorithms for computing a matrix function $f(A)$ are based on finding a rational (or polynomial) approximant $r(A) \approx f(A)$ to the scalar function on the spectrum of $A$. These functions are often in composite form, that is, $f(z) \approx r(z) = r_k(\cdots r_2(r_1(z)))$, where $k$ is the number of compositions, which is often the iteration count and is proportional to the computational cost; this way $r$ is a rational function whose degree grows exponentially in $k$. I will review algorithms that fall into this category and highlight the remarkable power of composite (rational) functions.
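A standard scalar example of this composite structure (a well-known iteration, offered here only as an illustration) is the Newton iteration for the sign function, $r_i(z) = (z + 1/z)/2$: each composition doubles the degree, so $k$ cheap steps evaluate a rational function of degree $2^k$. Applied to a matrix via $X_{i+1} = (X_i + X_i^{-1})/2$, the same recursion computes $\operatorname{sign}(A)$; below we evaluate it only on scalars (i.e. on eigenvalues).

```python
# k-fold composite rational approximation to sign(z): the Newton iteration
# z <- (z + 1/z) / 2, a degree-2^k rational function evaluated in k steps.

def composite_sign(z, k):
    """Evaluate r_k(...r_1(z)) with r_i(z) = (z + 1/z)/2, approximating sign(z)."""
    for _ in range(k):
        z = 0.5 * (z + 1.0 / z)
    return z

for z0 in (0.1, 3.0, -7.5):
    print(f"sign({z0}) ~ {composite_sign(z0, 8):+.12f}")
```

The rapid (quadratic) convergence for a modest number of compositions is exactly the "remarkable power" of composite rational functions referred to in the abstract.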