Thu, 19 Jan 2017

14:00 - 15:00
L5

On the worst-case performance of the optimization method of Cauchy for smooth, strongly convex functions

Prof. Etienne de Klerk
(Tilburg University)
Abstract

We consider the Cauchy (or steepest descent) method with exact line search applied to a strongly convex function with Lipschitz continuous gradient. We establish the exact worst-case rate of convergence of this scheme, and show that this worst-case behavior is exhibited by a certain convex quadratic function. We also give a worst-case complexity bound for a noisy variant of the gradient descent method. Finally, we show that these results may be applied to study the worst-case performance of Newton's method for the minimization of self-concordant functions.

The proofs are computer-assisted, and rely on the resolution of semidefinite programming performance estimation problems as introduced in the paper [Y. Drori and M. Teboulle.  Performance of first-order methods for smooth convex minimization: a novel approach. Mathematical Programming, 145(1-2):451-482, 2014].

Joint work with F. Glineur and A.B. Taylor.
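
The quadratic worst case is easy to reproduce numerically. Below is a minimal sketch in Python (ours, not the paper's SDP machinery): steepest descent on $f(x) = \tfrac12 x^\top A x$, where exact line search has the closed form $\alpha = g^\top g / g^\top A g$ and the classical per-step contraction in function value is $((L-\mu)/(L+\mu))^2$.

```python
import numpy as np

# Minimal sketch (ours, not the paper's SDP machinery): steepest descent
# with exact line search on the quadratic f(x) = 0.5 * x^T A x. For this
# class the per-step worst case in function value is ((L - mu)/(L + mu))^2,
# with mu, L the extreme eigenvalues of A; the starting point below attains
# it exactly.

mu, L = 1.0, 10.0                     # strong convexity and smoothness constants
A = np.diag([mu, L])                  # worst-case convex quadratic
x = np.array([1.0 / mu, 1.0 / L])     # classical worst-case starting point

f = lambda z: 0.5 * z @ A @ z
bound = ((L - mu) / (L + mu)) ** 2

for k in range(5):
    g = A @ x                         # gradient
    alpha = (g @ g) / (g @ A @ g)     # exact line search, closed form on quadratics
    x_next = x - alpha * g
    print(f"step {k}: ratio {f(x_next) / f(x):.6f} vs bound {bound:.6f}")
    x = x_next
```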

Tue, 20 Oct 2015

12:30 - 13:30
Oxford-Man Institute

On prospect theory in a dynamic context

Sebastian Ebert
(Tilburg University)
Abstract

We provide a result on prospect theory decision makers who are naïve about the time inconsistency induced by probability weighting. If a market offers a sufficiently rich set of investment strategies, investors postpone their trading decisions indefinitely due to a strong preference for skewness. We conclude that probability weighting in combination with naïveté leads to unrealistic predictions for a wide range of dynamic setups. Finally, I discuss recent work on the topic that invokes different assumptions on the dynamic modeling of prospect theory.
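
To see the skewness channel concretely, here is a small illustration using the standard Tversky-Kahneman weighting function; the functional form, the parameter $\gamma = 0.65$, and the gamble are our assumptions for illustration, and the value function and loss aversion are deliberately ignored.

```python
# Illustration only: Tversky-Kahneman probability weighting overweights
# small probabilities, so a right-skewed gamble can look attractive even
# when its expected value is near zero. All parameters are assumptions.

def w(p, gamma=0.65):
    """Tversky-Kahneman (1992) probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Right-skewed gamble: win 100 with probability 0.01, otherwise lose 1.
p_win, gain, loss = 0.01, 100.0, -1.0
ev = p_win * gain + (1 - p_win) * loss               # plain expected value
pt = w(p_win) * gain + (1 - w(p_win)) * loss         # probability-weighted value

print(f"w(0.01) = {w(p_win):.4f}  (the 1% chance is overweighted)")
print(f"expected value = {ev:.3f}, weighted value = {pt:.3f}")
```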

Thu, 08 May 2014

16:00 - 17:30
L4

Time-Consistent and Market-Consistent Evaluations

Mitja Stadje
(Tilburg University)
Abstract

We consider evaluation methods for payoffs with an inherent financial risk, as encountered for instance in portfolios held by pension funds and insurance companies. Pricing such payoffs in a way consistent with market prices typically involves combining actuarial techniques with methods from mathematical finance. We propose to extend standard actuarial principles by a new market-consistent evaluation procedure which we call a 'two step market evaluation.' This procedure preserves the structure of standard evaluation techniques and has many other appealing properties. We give a complete axiomatic characterization of two step market evaluations. We show further that in a dynamic setting with continuous stock prices every evaluation which is time-consistent and market-consistent is a two step market evaluation. We also give characterization results and examples in terms of $g$-expectations in a Brownian-Poisson setting.
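
As a rough numerical reading of the procedure's two stages (the scenario models, the standard-deviation principle, and all parameters below are our assumptions, not the talk's): condition on each financial scenario, apply an actuarial principle to the remaining insurance risk, then take a market expectation of the result.

```python
import numpy as np

# Monte Carlo sketch of a 'two step market evaluation' (illustrative
# assumptions throughout). Inner step: an actuarial standard-deviation
# principle applied to insurance risk conditional on the financial
# scenario. Outer step: expectation over financial scenarios, standing
# in for market (risk-neutral) pricing.

rng = np.random.default_rng(0)
n_fin, n_act, theta = 2_000, 1_000, 0.5

S = rng.lognormal(mean=0.0, sigma=0.2, size=n_fin)   # financial scenarios

def actuarial_step(s):
    claims = s * rng.gamma(shape=2.0, scale=1.0, size=n_act)  # claims scale with s
    return claims.mean() + theta * claims.std()      # standard-deviation principle

inner = np.array([actuarial_step(s) for s in S])     # step one, per scenario
print(f"two step market evaluation: {inner.mean():.3f}")  # step two
```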

Thu, 04 Oct 2007

14:00 - 15:00
Comlab

On the computational complexity of optimization over a simplex, hypercube or sphere

Prof Etienne de Klerk
(Tilburg University)
Abstract

We consider the computational complexity of optimizing various classes of continuous functions over a simplex, hypercube or sphere. These relatively simple optimization problems arise naturally from diverse applications. We review known approximation results as well as negative (inapproximability) results from the recent literature.
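
One of the positive results in this area can be sketched directly: the minimum of a quadratic form over the simplex can be approximated by evaluating it on the regular grid of points whose coordinates are multiples of $1/k$, with guarantees that improve as $k$ grows. The brute-force enumeration below is for illustration only and does not scale.

```python
import itertools
import numpy as np

# Illustrative sketch: approximate the minimum of x^T A x over the standard
# simplex by enumerating the grid {x : k*x has nonnegative integer entries
# summing to k} via stars and bars. Finer grids (larger k) give better bounds.

def grid_minimize(A, k):
    n = A.shape[0]
    best = np.inf
    for cuts in itertools.combinations(range(k + n - 1), n - 1):
        parts = np.diff((-1,) + cuts + (k + n - 1,)) - 1  # n integers summing to k
        x = parts / k                                     # grid point on the simplex
        best = min(best, x @ A @ x)
    return best

A = np.array([[1.0, 0.0, -1.0],
              [0.0, 2.0, 0.5],
              [-1.0, 0.5, 3.0]])
for k in (1, 2, 5, 10):
    print(f"k = {k:2d}: grid minimum {grid_minimize(A, k):.4f}")
```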

Thu, 24 Feb 2011

14:00 - 15:00
Gibson Grd floor SR

Iterative Valid Polynomial Inequalities Generation for Polynomial Programming

Dr Juan Vera
(Tilburg University)
Abstract

Polynomial programs are usually solved using hierarchies of convex relaxations. This scheme rapidly becomes computationally expensive and is often tractable only for problems of small size. We propose an iterative scheme that improves an initial relaxation without incurring exponential growth in size. The key ingredient is a dynamic scheme for generating valid polynomial inequalities for general polynomial programs. These valid inequalities are then used to construct better approximations of the original problem. As a result, the proposed scheme is in principle scalable to large general combinatorial optimization problems.

Joint work with Bissan Ghaddar and Miguel Anjos.
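
A deliberately simplified illustration of the underlying principle (the talk's scheme generates inequalities dynamically via optimization; this is only the seed idea): products of constraints that are nonnegative on the feasible set are again valid polynomial inequalities, and can be appended to tighten a relaxation without moving a whole level up the hierarchy.

```python
import sympy as sp

# Seed idea only: if g1 >= 0 and g2 >= 0 hold on the feasible set, then so
# do their products, giving new valid polynomial inequalities that can be
# added to a relaxation.

x, y = sp.symbols("x y")
g1 = 1 - x**2 - y**2      # ball constraint, g1 >= 0
g2 = x + y                # halfspace constraint, g2 >= 0

for cut in (sp.expand(g1 * g2), sp.expand(g2 * g2)):
    print(f"{cut} >= 0 is valid on the feasible set")
```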

Thu, 15 Nov 2007

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

On the estimation of a large sparse Bayesian system: the Snaer program

Prof Jan Magnus
(Tilburg University)
Abstract

The Snaer program calculates the posterior mean and variance of variables, on some of which we have data (with precisions), on some prior information (with precisions), and on some prior indicator ratios (with precisions). The variables must satisfy a number of exact restrictions. The system is both large and sparse. Two aspects of the statistical and computational development are a practical procedure for solving a linear integer system, and a stable linearization routine for ratios. We test our numerical method for solving large sparse linear least-squares estimation problems, and find that it performs well even when the $n \times k$ design matrix is large ($nk = O(10^{8})$).
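
The computational core of that test, large sparse least squares, can be sketched with standard tooling; the sizes below are toy values, and the solver choice (LSQR via scipy) is our assumption, not necessarily what Snaer uses.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# Toy-sized sketch of a large sparse least-squares solve min ||Ax - b||_2.
# The talk reports design matrices with n*k on the order of 1e8; here we
# stay small so the script runs in seconds.

rng = np.random.default_rng(0)
n, k = 20_000, 500
A = sparse_random(n, k, density=1e-3, random_state=0, format="csr")
x_true = rng.standard_normal(k)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

x_hat, istop, itn = lsqr(A, b, atol=1e-10, btol=1e-10)[:3]
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"stop flag {istop}, {itn} iterations, relative error {rel_err:.2e}")
```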
