Past Nomura Seminar

29 October 2015
16:00 to 17:30
Abstract

The talk is concerned with the adapted solution of a multi-dimensional BSDE with a "diagonally" quadratic generator, i.e. one in which the quadratic part of the $i$-th component depends only on the $i$-th row of the second unknown variable. Local and global solutions are given. In our proofs, it is natural and crucial to apply both the John-Nirenberg and reverse Hölder inequalities for BMO martingales.
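
Schematically (our notation, included only for orientation; the precise assumptions are in the paper), such a system reads

$dY_t^i = -f^i(t, Y_t, Z_t)\,dt + Z_t^i\,dW_t, \qquad Y_T^i = \xi^i, \qquad i = 1, \dots, n,$

where $Z_t^i$ denotes the $i$-th row of the matrix $Z_t$ and the generator splits as $f^i(t, y, z) = \frac{\gamma_i}{2}|z^i|^2 + g^i(t, y, z)$ with $g^i$ of subquadratic growth in $z$, so that the quadratic part of the $i$-th equation involves only $z^i$.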

20 October 2015
12:30
Sebastian Ebert
Abstract

We provide a result on prospect theory decision makers who are naïve about the time inconsistency induced by probability weighting. If a market offers a sufficiently rich set of investment strategies, investors postpone their trading decisions indefinitely due to a strong preference for skewness. We conclude that probability weighting in combination with naïveté leads to unrealistic predictions for a wide range of dynamic setups. Finally, I discuss recent work on the topic that invokes different assumptions on the dynamic modeling of prospect theory.
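
For orientation, a standard one-parameter probability weighting function is that of Tversky and Kahneman (1992),

$w(p) = \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}}, \qquad 0 < \gamma < 1,$

which overweights small probabilities and thereby generates the strong preference for skewness mentioned above; the talk's exact specification may differ.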

15 October 2015
16:00 to 17:30
Abstract

We provide a new algorithm for approximating the law of a one-dimensional diffusion M solving a stochastic differential equation with possibly irregular coefficients. The algorithm is based on the construction of Markov chains whose laws can be embedded into the diffusion M via a sequence of stopping times. The algorithm does not require any regularity or growth assumption; in particular, it applies to SDEs with coefficients that are nowhere continuous and that grow superlinearly. We show that if the diffusion coefficient is bounded and bounded away from 0, then our algorithm has a weak convergence rate of order 1/4. Finally, we illustrate the algorithm's performance with several examples.
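
As a rough illustration (hypothetical code, not the authors' scheme): the sketch below simulates a standard coin-flip Markov chain of weak Euler type, which does require well-behaved coefficients; the construction in the talk instead builds chains whose laws embed into the diffusion via stopping times, precisely so that no regularity is needed.

    import numpy as np

    def weak_coinflip_chain(mu, sigma, x0, T, n_steps, n_paths, seed=0):
        # Simulate X_{k+1} = X_k + mu(X_k)*dt + xi_k*sigma(X_k)*sqrt(dt),
        # where xi_k = +/-1 with probability 1/2 each. This is a generic
        # weak scheme shown only to illustrate the coin-flip Markov-chain
        # idea; it is NOT the embedding construction of the talk.
        dt = T / n_steps
        rng = np.random.default_rng(seed)
        x = np.full(n_paths, float(x0))
        for _ in range(n_steps):
            xi = rng.choice([-1.0, 1.0], size=n_paths)
            x = x + mu(x) * dt + xi * sigma(x) * np.sqrt(dt)
        return x

    # Example: Ornstein-Uhlenbeck dX = -X dt + dW started at 0; at T = 1 the
    # sample mean should be near 0 and the variance near (1 - e^{-2})/2 ~ 0.432.
    samples = weak_coinflip_chain(lambda x: -x, lambda x: np.ones_like(x),
                                  x0=0.0, T=1.0, n_steps=200, n_paths=100_000)
    print(samples.mean(), samples.var())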

18 June 2015
16:00
Prof. Stephane Villeneuve
Abstract

We revisit the optimal exit problem by adding a moral hazard problem where a firm owner contracts out with an agent to run a project. We analyse the optimal contracting problem between the owner and the agent in a Brownian framework, when the latter modifies the project cash flows with a hidden action. The analysis leads to the resolution of a constrained optimal stopping problem that we solve explicitly.

9 June 2015
12:30
Philip Maymin
Abstract

I prove that if markets are weak-form efficient, meaning current prices fully reflect all information available in past prices, then P = NP, meaning every computational problem whose solution can be verified in polynomial time can also be solved in polynomial time. I also prove the converse by showing how we can "program" the market to solve NP-complete problems. Since P probably does not equal NP, markets are probably not efficient. Specifically, markets become increasingly inefficient as the time series lengthens or becomes more frequent. An illustration by way of partitioning the excess returns to momentum strategies based on data availability confirms this prediction.

For more info please visit: http://philipmaymin.com/academic-papers#pnp

4 June 2015
16:00
Yu-Jui Huang
Abstract

We present a dynamic theory for time-inconsistent stopping problems. The theory is developed under the paradigm of expected discounted payoff, where the process to stop is continuous and Markovian. We introduce equilibrium stopping policies, which are implementable stopping rules that take into account the change of preferences over time. When the discount function induces decreasing impatience, we establish a constructive method to find equilibrium policies. A new class of stopping problems, involving equilibrium policies, is introduced, as opposed to classical optimal stopping. By studying the stopping of a one-dimensional Bessel process under hyperbolic discounting, we illustrate our theory in an explicit manner.
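
For concreteness (standard definitions, not specific to the talk): hyperbolic discounting corresponds to the discount function

$\delta(t) = \frac{1}{1 + \beta t}, \qquad \beta > 0,$

and a discount function induces decreasing impatience when $\delta(t+s)/\delta(t)$ is increasing in $t$ for every $s > 0$, i.e. a fixed delay $s$ is discounted less heavily the further in the future it occurs; hyperbolic discounting satisfies this.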

28 May 2015
16:00
Gianluca Fusai
Abstract

In this talk, we aim to provide a valuation framework for counterparty credit risk based on a structural default model which incorporates jumps and dependence between the assets of interest. In this framework, default is caused by the firm value falling below a prespecified threshold following unforeseeable shocks, which deteriorate its liquidity and ability to meet its liabilities. The presence of dependence between names captures wrong-way risk and right-way risk effects. The structural model traces back to Merton (1974), who considered only the possibility of default occurring at the maturity of the contract; first passage time models, starting from the seminal contribution of Black and Cox (1976), extend the original framework to incorporate default events at any time during the lifetime of the contract. However, as the driving risk process is Brownian motion, all these models suffer from vanishing credit spreads at short maturities, a feature not observed in reality. As a consequence, the Credit Value Adjustment (CVA) would be underestimated for short-term deals, as would the so-called gap risk, i.e. the unpredictable loss due to a jump event in the market. Improvements aimed at resolving this issue include, for example, random default barriers, time-dependent volatilities, and jumps. In this contribution, we adopt Lévy processes and capture dependence via a linear combination of two independent Lévy processes representing, respectively, the systematic risk factor and the idiosyncratic shock. We then apply this framework to the valuation of CVA and DVA related to equity contracts such as forwards and swaps. The main focus is on the impact of correlation between entities on the value of CVA and DVA, with particular attention to wrong-way risk and right-way risk, the inclusion of mitigating clauses such as netting and collateral, and finally the impact of gap risk. Particular attention is also devoted to model calibration to market data, and to the development of numerical methods adequate to the complexity of the model considered.
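
Schematically, and in our own notation, such a dependence structure can be written as

$X_i(t) = Y_i(t) + a_i\,Z(t), \qquad i = 1, \dots, n,$

where $Z$ is the systematic Lévy risk factor, the $Y_i$ are idiosyncratic Lévy processes independent of $Z$ and of each other, and the loadings $a_i$ control the sign and strength of the dependence between names, hence whether an exposure carries wrong-way or right-way risk.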

This is joint work with Laura Ballotta (Cass Business School, City University of London) and Daniele Marazzina (Department of Mathematics, Politecnico di Milano).

21 May 2015
16:00
Prof Stephane Gaiffas
Abstract

We consider the problem of unveiling the implicit network structure of user interactions in a social network, based only on high-frequency timestamps. Our inference is based on the minimization of the least-squares loss associated with a multivariate Hawkes model, penalized by $\ell_1$ and trace norms. We provide a first theoretical analysis of the generalization error for this problem, which includes sparsity- and low-rank-inducing priors. This result involves a new data-driven concentration inequality for matrix martingales in continuous time with observable variance, which is of independent interest. The analysis is based on a new supermartingale property of the trace exponential, proved using tools from stochastic calculus. A consequence of our analysis is the construction of sharply tuned $\ell_1$ and trace-norm penalizations, which lead to a data-driven scaling of the variability of the information available for each user. Numerical experiments illustrate the strong improvements achieved by the use of such data-driven penalizations.
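
For reference, in a multivariate Hawkes model the intensity of node $i$ takes the form (a standard parametrization; the talk's kernel choice may differ)

$\lambda_i(t) = \mu_i + \sum_{j=1}^{d} \int_0^{t} a_{ij}\,\varphi(t - s)\,dN_j(s),$

where the matrix $A = (a_{ij})$ encodes the implicit network. The estimator minimizes the least-squares loss in $(\mu, A)$ plus a penalty of the form $\tau_1 \|A\|_1 + \tau_* \|A\|_*$, sparsity of $A$ corresponding to few interactions and low rank to an underlying community structure.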

14 May 2015
16:00
Professor Warren Powell
Abstract

Stochastic optimization for sequential decision problems under uncertainty arises in many settings, and as a result has evolved under several canonical frameworks with names such as dynamic programming, stochastic programming, optimal control, robust optimization, and simulation optimization (to name a few). This is in sharp contrast with the universally accepted canonical frameworks for deterministic math programming (or deterministic optimal control). We have found that these competing frameworks are actually hiding different classes of policies for solving a single problem which encompasses all of these fields. In this talk, I provide a canonical framework which, while familiar to some, is not universally used, but should be. The framework involves optimizing an objective function that requires searching over a class of policies, a step that can seem like mathematical hand-waving. We then identify four fundamental classes of policies, called policy function approximations (PFAs), cost function approximations (CFAs), policies based on value function approximations (VFAs), and lookahead policies (which themselves come in different flavors). With the exception of CFAs, these policies have been widely studied under names that make it seem as if they are fundamentally different approaches (policy search, approximate dynamic programming or reinforcement learning, model predictive control, stochastic programming and robust optimization). We use a simple energy storage problem to demonstrate that minor changes in the nature of the data can produce problems where each of the four classes might work best, or a hybrid. This exercise supports our claim that any formulation of a sequential decision problem should start with a recognition that we need to search over a space of policies.
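
In its simplest form (notation ours), the canonical objective is a search over policies:

$\min_{\pi \in \Pi} \; \mathbb{E}\Big[ \sum_{t=0}^{T} C\big(S_t, X^{\pi}_t(S_t)\big) \,\Big|\, S_0 \Big],$

where $S_t$ is the state, $X^{\pi}_t$ maps states to decisions, and $C$ is a cost (or negative contribution); the four policy classes are then simply different ways of constructing the map $X^{\pi}_t$.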

7 May 2015
16:00
Sara Biagini
Abstract

We derive a closed form portfolio optimization rule for an investor who is diffident about mean return and volatility estimates, and has a CRRA utility. The novelty is that confidence is here represented using ellipsoidal uncertainty sets for the drift, given a volatility realization. This specification affords a simple and concise analysis, as the optimal portfolio allocation policy is shaped by a rescaled market Sharpe ratio, computed under the worst case volatility. The result is based on a max-min Hamilton-Jacobi-Bellman-Isaacs PDE, which extends the classical Merton problem and reverts to it for an ambiguity-neutral investor.
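
Schematically (our notation; the paper's exact specification may differ), given a volatility realization $\sigma$, the drift $\mu$ is only assumed to lie in an ellipsoid

$\mathcal{U}(\sigma) = \left\{ \mu : (\mu - \hat{\mu})^{\top} (\sigma\sigma^{\top})^{-1} (\mu - \hat{\mu}) \le \kappa^2 \right\},$

and the investor maximizes worst-case expected CRRA utility over $\mu \in \mathcal{U}(\sigma)$; the ambiguity radius $\kappa$ then enters the solution only through the rescaled market Sharpe ratio mentioned above.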
