Thu, 27 Feb 2025
16:00
L5

Rank-based models with listings and delistings: theory and calibration

David Itkin
(LSE)
Abstract

Rank-based models for equity markets are reduced-form models where the asset dynamics depend on the rank the asset occupies in the investment universe. Such models are able to capture certain stylized macroscopic properties of equity markets, such as the stability of the capital distribution curve and the collision rates of stock rank switches. However, when calibrated to real equity data the models possess undesirable features such as an "Atlas stock" effect; namely, the smallest security has an unrealistically large drift. Recently, Campbell and Wong (2024) identified that listings and delistings (i.e. entrances and exits) of securities in the market are important drivers of the stability of the capital distribution curve. In this work we develop a framework for rank-based models with listings and delistings and calibrate them to data. By incorporating listings and delistings, the calibration procedure no longer leads to "Atlas stock" behaviour. Moreover, by studying an appropriate "local model", focusing on a specific target rank, we are able to connect collision rates with a notion of particle density, which is more stable and easier to estimate from data than the collision rates themselves. The calibration results are supported by novel theoretical developments, such as a new master formula for functional generation of portfolios in this setting. This talk is based on joint work in progress with Martin Larsson and Licheng Zhang.
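
As a toy illustration of the model class (not the calibration studied in the talk), the following Python sketch simulates a first-order rank-based model of Atlas type with an Euler scheme. All parameter values are invented for illustration; concentrating the positive drift on the bottom rank is precisely the "Atlas stock" feature described above.

import numpy as np

# A minimal sketch of a first-order rank-based (Atlas-type) model: each
# log-capitalization X_i receives a drift and volatility that depend only
# on its current rank.  Parameter values are illustrative, not calibrated.

rng = np.random.default_rng(0)
n, T, dt = 50, 1.0, 1e-3
steps = int(T / dt)

# Rank-dependent coefficients: in the classical Atlas model only the
# bottom-ranked stock has a positive drift, the "Atlas stock" effect.
g = np.zeros(n); g[-1] = n * 0.05         # drift by rank (rank 0 = largest)
sigma = np.linspace(0.1, 0.3, n)          # volatility increasing down the ranks

X = rng.normal(0.0, 1.0, n)               # initial log-capitalizations
for _ in range(steps):
    order = np.argsort(-X)                # order[k] = index of rank-k particle
    drift = np.empty(n); vol = np.empty(n)
    drift[order] = g
    vol[order] = sigma
    X += drift * dt + vol * np.sqrt(dt) * rng.normal(size=n)

# Capital distribution curve: ranked market weights.
w = np.exp(X) / np.exp(X).sum()
print(np.sort(w)[::-1][:5])

On such simulated paths one can inspect the capital distribution curve and count rank switches, the two macroscopic quantities the abstract refers to.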

Thu, 07 Nov 2024
16:00
L4

Continuous-time persuasion by filtering

Dr Ofelia Bonesini
(LSE)
Further Information

Please join us for refreshments outside the lecture room from 15:30.

Abstract

We frame dynamic persuasion as a partial-observation stochastic control game with an ergodic criterion. The receiver controls the dynamics of a multidimensional unobserved state process. Information is provided to the receiver through a device designed by the sender that generates the observation process.

The sender's commitment is enforced, and an exogenous information process outside the sender's control is allowed. We develop this approach in the case where all dynamics are linear and the receiver's preferences are linear-quadratic.

We prove a verification theorem for the existence and uniqueness of the solution of the HJB equation satisfied by the receiver’s value function. An extension to the case of persuasion of a mean field of interacting receivers is also provided. We illustrate this approach in two applications: the provision of information to electricity consumers with a smart meter designed by an electricity producer; the information provided by carbon footprint accounting rules to companies engaged in a best-in-class emissions reduction effort. In the first application, we link the benefits of information provision to the mispricing of electricity production. In the latter, we show that when firms declare a high level of best-in-class target, the information provided by stringent accounting rules offsets the Nash equilibrium effect that leads firms to increase pollution to make their target easier to achieve.

This is a joint work with Prof. René Aïd, Prof. Giorgia Callegaro and Prof. Luciano Campi.
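
For intuition about the filtering mechanism, here is a minimal sketch of a discretized Kalman-Bucy filter in a scalar linear setting; the observation process plays the role of the signal generated by the sender's device. This is a generic illustration with invented coefficients, not the paper's multidimensional model, and the sender's design problem is not addressed.

import numpy as np

# Scalar linear state, noisy linear observation; the receiver's estimate
# is the (discretized) Kalman-Bucy filter.  All coefficients illustrative.

rng = np.random.default_rng(1)
dt, steps = 1e-2, 1000
A, C = -0.5, 1.0             # state mean-reversion; observation sensitivity
q, r = 0.2, 0.1              # state and observation noise variances

x, xhat, P = 1.0, 0.0, 1.0   # true state, filter estimate, error variance
for _ in range(steps):
    x += A * x * dt + np.sqrt(q * dt) * rng.normal()
    dy = C * x * dt + np.sqrt(r * dt) * rng.normal()   # the "device" output
    K = P * C / r                                      # Kalman gain
    xhat += A * xhat * dt + K * (dy - C * xhat * dt)
    P += (2 * A * P + q - P**2 * C**2 / r) * dt        # Riccati equation
print(x, xhat, P)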

Mon, 14 Oct 2024
15:30
L3

A Mean Field Game approach for pollution regulation of competitive firms

Dr Giulia Livieri
(LSE)
Abstract

We develop a mean-field game model of competitive firms producing similar goods according to a standard AK model with capital depreciation, in which production generates pollution as a byproduct. Our analysis focuses on the widely used cap-and-trade pollution regulation. Under this regulation, firms have the flexibility to respond by implementing pollution abatement, reducing output, and participating in emission trading, while a regulator dynamically allocates emission allowances to each firm. The resulting mean-field game is of linear-quadratic type and equivalent to a mean-field type control problem, i.e., it is a potential game. We find explicit solutions to this problem through the solutions to differential equations of Riccati type. Further, we investigate the carbon emission equilibrium price that satisfies the market clearing condition and derive an FBSDE of McKean-Vlasov type with common noise whose solution provides an approximate equilibrium price. Additionally, we demonstrate that the degree of competition is vital in determining the economic consequences of pollution regulation.

This is based on joint work with Gianmarco Del Sarto and Marta Leocata.

https://arxiv.org/pdf/2407.12754
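
Since the explicit solutions come from differential equations of Riccati type, a generic numerical sketch may be useful. The LQ data below are arbitrary illustrative matrices, not the paper's model.

import numpy as np
from scipy.integrate import solve_ivp

# Solving the Riccati ODE that characterizes linear-quadratic control and
# linear-quadratic (mean-field) games:
#   P'(t) = -A^T P - P A + P B R^{-1} B^T P - Q,   P(T) = G.

A = np.array([[0.0, 1.0], [0.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[1.0]]); G = np.eye(2)
T = 1.0

def riccati_rhs(t, p):
    P = p.reshape(2, 2)
    dP = -A.T @ P - P @ A + P @ B @ np.linalg.solve(R, B.T) @ P - Q
    return dP.ravel()

# Integrate backwards from the terminal condition P(T) = G.
sol = solve_ivp(riccati_rhs, [T, 0.0], G.ravel())
P0 = sol.y[:, -1].reshape(2, 2)
print(P0)   # P(0); the optimal feedback is u(t) = -R^{-1} B^T P(t) x(t)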

Thu, 01 Feb 2024
14:00
Lecture Room 3

A strongly polynomial algorithm for the minimum-cost generalized flow problem

Laszlo Vegh
(LSE)
Abstract

We give a strongly polynomial algorithm for minimum-cost generalized flow and, as a consequence, for all linear programs with at most two nonzero entries per row, or at most two nonzero entries per column. While strongly polynomial algorithms for the primal and dual feasibility problems have been known for a long time, the various combinatorial approaches used for those problems did not seem to carry over to the minimum-cost variant.

Our approach is to show that the ‘subspace layered least squares’ interior point method, introduced in earlier joint work with Allamigeon, Dadush, Loho and Natura, requires only a strongly polynomial number of iterations for minimum-cost generalized flow. We achieve this by bounding the straight-line complexity introduced in the same paper. The talk will give an overview of the interior point method as well as the combinatorial straight-line complexity analysis for this particular setting. This is joint work with Daniel Dadush, Zhuan Khye Koh, Bento Natura, and Neil Olver.
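
To make the problem class concrete, here is a toy minimum-cost generalized flow instance written as a linear program; the network, gains, costs and capacities are invented for illustration, and a generic LP solver stands in for the strongly polynomial algorithm of the talk.

import numpy as np
from scipy.optimize import linprog

# In generalized flow, flow x_a on arc a = (u, v) leaves the tail u and
# arrives at the head v scaled by a gain factor gamma_a.  Each column of
# the conservation matrix therefore has at most two nonzero entries
# (a -1 at the tail, a +gamma at the head): the LP class of the result.

# Arcs: (tail, head, gain, cost, capacity) on nodes 0, 1, 2.
arcs = [(0, 1, 0.9, 1.0, 10.0),
        (0, 2, 1.0, 3.0, 10.0),
        (1, 2, 1.1, 1.0, 10.0)]
supply = np.array([5.0, 0.0, -5.0])   # node 0 supplies 5, node 2 demands 5

n_nodes = 3
Aeq = np.zeros((n_nodes, len(arcs)))
for j, (u, v, gamma, _, _) in enumerate(arcs):
    Aeq[u, j] -= 1.0        # flow leaves the tail
    Aeq[v, j] += gamma      # scaled flow arrives at the head

c = [a[3] for a in arcs]
bounds = [(0.0, a[4]) for a in arcs]
res = linprog(c, A_eq=Aeq, b_eq=-supply, bounds=bounds)
print(res.x, res.fun)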

Tue, 18 May 2021
15:15
Virtual

Factors in randomly perturbed graphs

Amedeo Sgueglia
(LSE)
Further Information

Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.

Abstract

We study the model of randomly perturbed dense graphs, which is the union of any $n$-vertex graph $G_\alpha$ with minimum degree at least $\alpha n$ and the binomial random graph $G(n,p)$. In this talk, we shall examine the central question in this area: for which $p$ does $G_\alpha \cup G(n,p)$ contain $H$-factors, i.e. spanning subgraphs consisting of vertex-disjoint copies of the graph $H$? We offer several new sharp and stability results.
This is joint work with Julia Böttcher, Olaf Parczyk, and Jozef Skokan.
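
A minimal computational illustration of the model, with invented parameters and the simplest case $H = K_2$, where an $H$-factor is a perfect matching:

import networkx as nx

# Randomly perturbed model: a dense graph G_alpha with minimum degree at
# least alpha*n, unioned with the binomial random graph G(n, p).

n, alpha, p = 100, 0.2, 0.05
a = int(alpha * n)

# A concrete G_alpha: an unbalanced complete bipartite graph has minimum
# degree alpha*n but, having unequal sides, no perfect matching of its own.
G_alpha = nx.complete_bipartite_graph(a, n - a)
G_union = nx.compose(G_alpha, nx.gnp_random_graph(n, p, seed=0))

matching = nx.max_weight_matching(G_union, maxcardinality=True)
print("perfect matching found:", 2 * len(matching) == n)

The choice of $G_\alpha$ here means any perfect matching in the union genuinely uses the random edges.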

Tue, 05 May 2020
14:00
Virtual

Ryser's conjecture and more

Liana Yepremyan
(LSE)
Further Information

Part of the Oxford Discrete Maths and Probability Seminar, held via Zoom. Please see the seminar website for details.

Abstract

A Latin square of order $n$ is an $n \times n$ array filled with $n$ symbols such that each symbol appears exactly once in every row and every column, and a transversal is a collection of cells which share no row, column or symbol. The study of Latin squares goes back more than 200 years to the work of Euler. One of the most famous open problems in this area is a conjecture of Ryser, Brualdi and Stein from the 1960s, which says that every Latin square of order $n$ contains a transversal of size $n-1$. A closely related problem is a 40-year-old conjecture of Brouwer that every Steiner triple system of order $n$ contains a matching of size $\frac{n-4}{3}$. The third problem we'd like to mention asks how many distinct symbols in a Latin array suffice to guarantee a full transversal. In this talk we discuss a novel approach to attacking these problems. Joint work with Peter Keevash, Alexey Pokrovskiy and Benny Sudakov.
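
To make the objects concrete, here is a tiny brute-force illustration, feasible only for very small $n$ (the conjectures concern all $n$). It computes the largest partial transversal of the cyclic Latin square of order 4, which has no full transversal but does achieve size $n-1 = 3$, consistent with the Ryser-Brualdi-Stein conjecture.

from itertools import permutations

# A partial transversal picks cells with pairwise-distinct rows, columns
# and symbols.  Fixing a permutation (one column per row) and keeping one
# cell per distinct symbol along it realizes every partial transversal,
# so maximizing distinct symbols over all permutations gives the answer.

def largest_partial_transversal(square):
    n = len(square)
    best = 0
    for perm in permutations(range(n)):      # column choice for each row
        symbols = {square[i][perm[i]] for i in range(n)}
        best = max(best, len(symbols))
    return best

# The cyclic Latin square of order 4: entry (i, j) is (i + j) mod 4.
Z4 = [[(i + j) % 4 for j in range(4)] for i in range(4)]
print(largest_partial_transversal(Z4))   # -> 3, i.e. n - 1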

Thu, 03 May 2018

16:00 - 17:30
L4

Generalized McKean-Vlasov stochastic control problems

Beatrice Acciaio
(LSE)
Abstract

I will consider McKean-Vlasov stochastic control problems where the cost functions and the state dynamics depend upon the joint distribution of the controlled state and the control process. First, I will provide a suitable version of the Pontryagin stochastic maximum principle, showing that, in the present general framework, pointwise minimization of the Hamiltonian with respect to the control is not a necessary optimality condition. Then I will take a different perspective, and present a variational approach to study a weak formulation of such control problems, thereby establishing a new connection between those and optimal transport problems on path space.

The talk is based on a joint project with J. Backhoff-Veraguas and R. Carmona.
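
As a generic illustration of the kind of dynamics involved (not the paper's framework or its optimality analysis), the following sketch approximates a controlled McKean-Vlasov equation by an interacting particle system, with the law of the state entering through the empirical mean and an arbitrary, non-optimized feedback control.

import numpy as np

# Particle approximation of controlled McKean-Vlasov dynamics: each
# particle's drift depends on its own state, the empirical mean (a
# stand-in for the law of the state), and a feedback control.

rng = np.random.default_rng(2)
N, dt, steps = 1000, 1e-2, 200

def control(x, mean):
    return -0.5 * (x - mean)      # illustrative feedback, not optimized

X = rng.normal(0.0, 1.0, N)
for _ in range(steps):
    m = X.mean()                  # the measure enters via its mean
    u = control(X, m)
    X += (-(X - m) + u) * dt + 0.3 * np.sqrt(dt) * rng.normal(size=N)

print(X.mean(), X.std())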

Thu, 01 Dec 2016

16:00 - 17:30
L4

A Bayesian Methodology for Systemic Risk Assessment in Financial Networks

Luitgard A. M. Veraart
(LSE)
Abstract

We develop a Bayesian methodology for systemic risk assessment in financial networks such as the interbank market. Nodes represent participants in the network and weighted directed edges represent liabilities. Often, for every participant, only the total liabilities and total assets within this network are observable. However, systemic risk assessment needs the individual liabilities. We propose a model for the individual liabilities, which, following a Bayesian approach, we then condition on the observed total liabilities and assets and, potentially, on certain observed individual liabilities. We construct a Gibbs sampler to generate samples from this conditional distribution. These samples can be used in stress testing, giving probabilities for the outcomes of interest. As one application we derive default probabilities of individual banks and discuss their sensitivity with respect to prior information included to model the network. An R-package implementing the methodology is provided. (This is joint work with Axel Gandy (Imperial College London).)
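
The R-package mentioned above implements the actual methodology; as a simplified stand-in, the sketch below performs Gibbs-type "cycle" updates that preserve the observed row sums (total liabilities) and column sums (total assets). Under an i.i.d. exponential prior the total of all entries is fixed by the margins, so the conditional law of the cycle perturbation is uniform on its feasible interval; zero entries, self-liabilities and observed individual liabilities, which the paper's model handles, are ignored here.

import numpy as np

# One move perturbs a 2x2 "cycle":
#   L[i1,j1]+e, L[i1,j2]-e, L[i2,j1]-e, L[i2,j2]+e
# which leaves every row and column sum unchanged.

def cycle_update(L, rng):
    n = L.shape[0]
    i1, i2 = rng.choice(n, 2, replace=False)
    j1, j2 = rng.choice(n, 2, replace=False)
    # e must keep all four affected entries nonnegative.
    lo = max(-L[i1, j1], -L[i2, j2])
    hi = min(L[i1, j2], L[i2, j1])
    if lo < hi:
        e = rng.uniform(lo, hi)   # uniform under the exponential prior
        L[i1, j1] += e; L[i2, j2] += e
        L[i1, j2] -= e; L[i2, j1] -= e

rng = np.random.default_rng(3)
L = rng.exponential(1.0, (5, 5))          # a matrix consistent with margins
row, col = L.sum(1).copy(), L.sum(0).copy()
for _ in range(10_000):
    cycle_update(L, rng)
print(np.allclose(L.sum(1), row), np.allclose(L.sum(0), col))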

Thu, 23 Oct 2014

16:00 - 17:30
L4

The Use of Randomness in Time Series Analysis (Joint Nomura-OMI Seminar)

Professor Piotr Fryzlewicz
(LSE)
Abstract

This is an exploratory talk in which we describe different potential uses of randomness in time series analysis.

In the first part, we talk about Wild Binary Segmentation for change-point detection, where randomness is used as a device for sampling from the space of all possible contrasts (change-point detection statistics) in order to reduce the computational complexity from cubic to just over linear in the number of observations, without compromising on the accuracy of change-point estimates. We also discuss an interesting related measure of change-point certainty/importance, and extensions to more general nonparametric problems.
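
A minimal sketch of the random-interval idea for a single change point in the mean (the full method recurses on the segments to detect multiple change points; the data and the number of intervals are illustrative):

import numpy as np

# Draw M random subintervals, compute the CUSUM contrast on each, and
# locate the change point at the overall maximizer.

rng = np.random.default_rng(4)
n = 300
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(1.5, 1, 150)])

def cusum(x, s, e):
    # CUSUM contrasts of x[s:e] at every internal split point b.
    seg = x[s:e]; m = len(seg)
    b = np.arange(1, m)
    left = np.cumsum(seg)[:-1]
    total = seg.sum()
    stat = (np.sqrt((m - b) / (m * b)) * left
            - np.sqrt(b / (m * (m - b))) * (total - left))
    return np.abs(stat)

best, bhat = -np.inf, None
for _ in range(100):                      # M = 100 random intervals
    s, e = sorted(rng.choice(n + 1, 2, replace=False))
    if e - s < 2:
        continue
    stats = cusum(x, s, e)
    k = stats.argmax()
    if stats[k] > best:
        best, bhat = stats[k], s + k + 1  # estimated change location
print(bhat)                               # expected to be near 150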

In the second part, we use random contemporaneous linear combinations of time series panel data coming from high-dimensional factor models and argue that this gives the effect of "compressively sensing" the components of the multivariate time series, often with not much loss of information but with reduction in the dimensionality of the model.
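
A small simulation illustrating the point (one AR(1) common factor; all dimensions invented): after projecting a 200-series panel onto just a few random directions, the leading principal component of the compressed panel still tracks the factor.

import numpy as np

# Panel data from a one-factor model, "compressively sensed" through a
# few random contemporaneous linear combinations of the series.

rng = np.random.default_rng(5)
T, p, k = 500, 200, 5                 # time points, series, projections

f = np.zeros(T)                       # single AR(1) common factor
for t in range(1, T):
    f[t] = 0.8 * f[t - 1] + rng.normal()
loadings = rng.normal(size=p)
panel = np.outer(f, loadings) + rng.normal(size=(T, p))   # T x p panel

R = rng.normal(size=(p, k)) / np.sqrt(p)   # random projection directions
proj = panel @ R                           # T x k compressed panel

# The leading principal component of the projected panel still tracks f.
u, s, vt = np.linalg.svd(proj - proj.mean(0), full_matrices=False)
pc1 = u[:, 0] * s[0]
print(abs(np.corrcoef(pc1, f)[0, 1]))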

In the final part, we speculate on the use of random filtering in time series analysis. As an illustration, we show how the appropriate use of this device can reduce the problem of estimating changes in the autocovariance structure of the process to the problem of estimating changes in variance, the latter typically being an easier task.
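
A speculative toy example in the spirit of this last part (invented data): the series below changes its autocovariance structure at the midpoint while keeping its marginal variance, yet a randomly filtered version of it shows a clear change in variance.

import numpy as np

# The AR coefficient flips sign at the midpoint, so the lag-1
# autocovariance changes while the marginal variance does not; the
# variance of the filtered series picks up the change.

rng = np.random.default_rng(6)

def ar1(phi, n):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

x = np.concatenate([ar1(0.7, 500), ar1(-0.7, 500)])
a = rng.normal(size=3)                       # random filter coefficients
y = np.convolve(x, a, mode="valid")          # randomly filtered series
print(np.var(x[:490]), np.var(x[510:]))      # roughly equal
print(np.var(y[:490]), np.var(y[510:]))      # clearly different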
 