Merton's optimal investment problem with jump signals
Abstract
This talk presents a new framework for Merton's optimal investment problem which uses the theory of Meyer $\sigma$-fields to allow for signals that may warn the investor of impending jumps. With strategies no longer predictable, some care has to be taken to properly define wealth dynamics through stochastic integration. By means of dynamic programming, we solve the problem explicitly for power utilities. In a case study with Gaussian jumps, we find, for instance, that an investor may prefer to disinvest even after a mildly positive signal. Our setting also allows us to investigate whether, given the chance, it is better to improve signal quality or quantity, and how much extra value can be generated from either choice.
This talk is based on joint work with Peter Bank.
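For orientation, the classical frictionless Merton problem with power utility, which the framework above extends with Meyer $\sigma$-fields and jump signals, can be sketched as follows (standard background notation $\mu$, $\sigma$, $r$, $\gamma$, $W$; not taken from the talk itself):
\[
\sup_{\pi}\;\mathbb{E}\!\left[\frac{X_T^{\,1-\gamma}}{1-\gamma}\right],
\qquad
dX_t = X_t\,\pi_t\,(\mu\,dt + \sigma\,dW_t) + X_t\,(1-\pi_t)\,r\,dt,
\]
and in this diffusion-only case the optimal fraction of wealth invested in the risky asset is the constant Merton ratio $\pi^{*} = (\mu - r)/(\gamma\sigma^{2})$. The talk's setting with jump signals and non-predictable strategies modifies both the wealth dynamics and this solution.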
MF-OMO: An Optimization Formulation of Mean-Field Games
Abstract
The theory of mean-field games (MFGs) has recently experienced exponential growth. Existing analytical approaches to finding Nash equilibrium (NE) solutions for MFGs are, however, by and large restricted to contractive or monotone settings, or rely on the uniqueness of the NE. We propose a new mathematical paradigm to analyze discrete-time MFGs without any of these restrictions. The key idea is to reformulate the problem of finding NE solutions in MFGs as an equivalent optimization problem, called MF-OMO (Mean-Field Occupation Measure Optimization), with bounded variables and trivial convex constraints. It builds on the classical work of reformulating a Markov decision process as a linear program, adding the consistency constraint for MFGs in terms of occupation measures, and exploiting the complementarity structure of the linear program. This equivalence framework enables finding multiple (and possibly all) NE solutions of MFGs by standard algorithms such as projected gradient descent, with convergence guarantees under appropriate conditions. In particular, analyzing MFGs with linear rewards and with mean-field-independent dynamics is reduced to solving a finite number of linear programs, hence solvable in finite time. This optimization reformulation of MFGs can be extended to variants of MFGs such as personalized MFGs.
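As a hedged illustration of the classical linear-programming reformulation this builds on (standard background, with notation not taken from the talk): for a discounted Markov decision process with transition kernel $P$, reward $r$, discount factor $\gamma$ and initial distribution $\mu_0$, an optimal policy can be recovered from the occupation-measure linear program
\[
\max_{d \ge 0}\; \sum_{s,a} d(s,a)\, r(s,a;L)
\quad \text{s.t.} \quad
\sum_{a} d(s',a) \;=\; (1-\gamma)\,\mu_0(s') + \gamma \sum_{s,a} P(s' \mid s,a;L)\, d(s,a) \quad \forall s',
\]
where, in the mean-field game version, the reward and transitions also depend on a population measure $L$, and the additional consistency constraint requires $L$ to coincide with the (normalized) occupation measure $d$ that it generates.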
A dynamical system perspective of optimization in data science
Abstract
In this talk, I will discuss how the dynamical system perspective yields deep insight into the convergence guarantees of first-order algorithms with inertial features for convex optimization in a Hilbert space setting.
Such algorithms are widely popular in various areas of data science (data processing, machine learning, inverse problems, etc.).
They can be viewed as discrete-time versions of an inertial second-order dynamical system involving different types of damping (viscous damping, Hessian-driven geometric damping).
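A prototypical system of this kind, given here only as a standard example from the literature (the talk may consider more general dampings and time scalings), is
\[
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \beta\,\nabla^2 f\bigl(x(t)\bigr)\,\dot{x}(t) + \nabla f\bigl(x(t)\bigr) = 0,
\]
where the coefficient $\alpha/t$ is the viscous damping (with $\beta = 0$ and $\alpha = 3$ one recovers the continuous-time limit of Nesterov's accelerated gradient method) and the term $\beta\,\nabla^2 f(x(t))\,\dot{x}(t)$ is the Hessian-driven geometric damping.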
The dynamical system perspective offers not only a powerful way to understand the geometry underlying the dynamics, but also a versatile framework for obtaining fast, scalable and new algorithms with nice convergence guarantees (including fast rates). In addition, this framework encompasses known algorithms and dynamics such as Nesterov-type accelerated gradient methods, and the introduction of time scale factors makes it possible to further accelerate these algorithms. The framework is versatile enough to handle non-smooth and non-convex objectives that are ubiquitous in various applications.