Date: Fri, 08 May 2015
Time: 12:30 - 14:00
Location: L5
Speaker: Paul Goulart

This talk will describe methods for computing sharp upper bounds on the probability of a random vector falling outside a convex set, or on the expected value of a convex loss function, in situations where only limited information is available about the probability distribution. Such bounds are of interest across many application areas in control theory, mathematical finance, machine learning and signal processing. If only the first two moments of the distribution are available, then Chebyshev-like worst-case bounds can be computed via the solution of a single semidefinite program. However, the results can be very conservative, since they are typically achieved by a discrete worst-case distribution. The talk will show that considerable improvement is possible if the probability distribution can be assumed unimodal, in which case less pessimistic Gauss-like bounds can be computed instead. Additionally, both the Chebyshev- and Gauss-like bounds for such problems can be derived as special cases of a bound based on a generalised definition of unimodality.
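
For readers unfamiliar with the semidefinite-programming step mentioned above, the sketch below shows the classical moment-based (generalised Chebyshev) bound for a polytope: the worst-case probability that x falls outside {x : Ax <= b} given only its mean and covariance. This is a minimal illustration of the standard formulation, not the speaker's own code; the function name, arguments and use of CVXPY are assumptions made for the example.

```python
# Minimal sketch of the classical moment-SDP Chebyshev bound (illustrative only).
import cvxpy as cp
import numpy as np

def cheb_bound_outside_polytope(mu, Sigma, A, b):
    """Worst-case P(x outside {x : A x <= b}) over all distributions
    with mean mu and covariance Sigma, via a single SDP."""
    mu, Sigma = np.asarray(mu, float), np.asarray(Sigma, float)
    A, b = np.asarray(A, float), np.asarray(b, float)
    n, m = len(mu), A.shape[0]

    # Second-order moment matrix Omega = E[(x,1)(x,1)^T].
    S = Sigma + np.outer(mu, mu)
    Omega = np.block([[S, mu.reshape(-1, 1)],
                      [mu.reshape(1, -1), np.ones((1, 1))]])

    # Quadratic certificate f(x) = (x,1)^T Z (x,1), required to dominate the
    # indicator function of the complement of the polytope.
    Z = cp.Variable((n + 1, n + 1), symmetric=True)
    tau = cp.Variable(m, nonneg=True)

    E = np.zeros((n + 1, n + 1)); E[n, n] = 1.0   # represents the constant 1
    constraints = [Z >> 0]                        # f(x) >= 0 for all x
    for i in range(m):
        # S-procedure: f(x) >= 1 whenever a_i^T x >= b_i
        G = np.zeros((n + 1, n + 1))
        G[:n, n] = A[i] / 2.0
        G[n, :n] = A[i] / 2.0
        G[n, n] = -b[i]
        constraints.append(Z - E - tau[i] * G >> 0)

    # E[f(x)] = trace(Omega Z) upper-bounds the probability for every
    # distribution matching (mu, Sigma); minimising it gives the sharp bound.
    prob = cp.Problem(cp.Minimize(cp.trace(Omega @ Z)), constraints)
    prob.solve()
    return prob.value
```

As a sanity check, with a scalar x of zero mean and unit variance and a single half-space a = [1], b = k, this construction should recover the one-sided Chebyshev (Cantelli) bound 1/(1 + k^2); the worst-case distribution attaining it is discrete, which is the source of the conservatism the talk addresses.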
