Past Special Lectures

6 September 2016
Volkan Cevher

Bayesian optimization (BO) is a powerful tool for sequentially optimizing black-box functions that are expensive to evaluate, and has extensive applications including automatic hyperparameter tuning, environmental monitoring, and robotics. The problem of level-set estimation (LSE) with Gaussian processes is closely related; instead of performing optimization, one seeks to classify the whole domain according to whether the function lies above or below a given threshold, which is also of direct interest in applications.

In this talk, we present a new algorithm, truncated variance reduction (TruVaR), that addresses Bayesian optimization and level-set estimation in a unified fashion. The algorithm greedily shrinks a sum of truncated variances within a set of potential maximizers (BO) or unclassified points (LSE), which is updated based on confidence bounds. TruVaR is effective in several important settings that are typically non-trivial to incorporate into myopic algorithms, including pointwise costs, non-uniform noise, and multi-task settings. We provide a general theoretical guarantee for TruVaR covering these phenomena, and use it to obtain regret bounds for several specific settings. We demonstrate the effectiveness of the algorithm on both synthetic and real-world data sets.
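The greedy step described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a Gaussian process with a squared-exponential kernel, a fixed truncation level `eta`, and uses the standard GP property that the posterior variance depends only on where observations are made, not on their values. The function names (`truvar_select`, `posterior_cov`) and all parameter values are illustrative choices.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5):
    # Squared-exponential kernel between point sets A (n,d) and B (m,d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def posterior_cov(X_obs, X_query, noise_var=0.01, lengthscale=0.5):
    # GP posterior covariance at X_query given noisy observations at X_obs.
    K_qq = rbf_kernel(X_query, X_query, lengthscale)
    if len(X_obs) == 0:
        return K_qq
    K_oo = rbf_kernel(X_obs, X_obs, lengthscale) + noise_var * np.eye(len(X_obs))
    K_qo = rbf_kernel(X_query, X_obs, lengthscale)
    return K_qq - K_qo @ np.linalg.solve(K_oo, K_qo.T)

def truvar_select(X_obs, candidates, target_set, eta=0.05):
    # Greedy step: pick the candidate whose hypothetical observation most
    # shrinks the sum of truncated posterior standard deviations over the
    # target set, i.e. sum_x max(sigma(x), eta) after the update.
    best_i, best_score = None, np.inf
    for i, x in enumerate(candidates):
        X_new = np.vstack([X_obs, x[None, :]]) if len(X_obs) else x[None, :]
        var = np.diag(posterior_cov(X_new, target_set))
        score = np.maximum(np.sqrt(np.clip(var, 0.0, None)), eta).sum()
        if score < best_score:
            best_i, best_score = i, score
    return best_i, best_score
```

In a full algorithm, `target_set` would be the current set of potential maximizers (BO) or unclassified points (LSE), pruned after each batch using confidence bounds; truncating at `eta` stops the algorithm from over-exploring points whose uncertainty is already below the accuracy needed.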

4 March 2016
Professor Kerry Emanuel

In his talk, Kerry will explore the pressing practical problem of how hurricane activity will respond to global warming, and how hurricanes could in turn be influencing the atmosphere and ocean.

1 December 2015
Professor Philippe Toint

Weather prediction and, more generally, data assimilation in the earth sciences pose a significant computing challenge because of the very large problem sizes involved. The talk discusses algorithmic aspects of the numerical solution of such problems and, in particular, focusses on how the lower dimensionality of the (dual) observation space may be used to advantage for computing a primal solution. This is achieved both by adapting the preconditioned conjugate gradient and trust-region algorithms to the dual space and by reducing the dimensionality of the latter as much as possible using observation hierarchies.
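The dual-space idea above can be illustrated with a small sketch. This is not the speaker's algorithm, only a standard textbook form of the observation-space ("dual") analysis: assuming a linear observation operator H, background covariance B, and observation-error covariance R, the m-dimensional system (H B Hᵀ + R)λ = d is solved by plain (unpreconditioned) conjugate gradient, and the n-dimensional primal increment is recovered as δx = B Hᵀ λ. All names and sizes here are illustrative.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    # Plain CG for a symmetric positive-definite system A x = b.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def dual_space_analysis(H, B, R, d):
    # Dual formulation: solve the m x m system (H B H^T + R) lam = d in
    # observation space (m unknowns instead of n), then recover the primal
    # increment dx = B H^T lam -- advantageous when m << n.
    S = H @ B @ H.T + R
    lam = conjugate_gradient(S, d)
    return B @ H.T @ lam
```

The point of working in the dual space is that the CG iteration runs on an m-by-m system; in operational data assimilation m (the number of observations) is typically far smaller than n (the size of the model state).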
27 October 2015
Professor Peter McCullagh, FRS

In 1943 Fisher, together with Corbet and Williams, published a study on the relation between the number of species and the number of individuals, which has since been recognized as one of the most influential papers in 20th century ecology. It was a combination of empirical work backed up by a simple theoretical argument, describing a sort of universal law governing random partitions, such as the celebrated Ewens partition, whose original derivation flows from the Fisher-Wright model. This talk will discuss several empirical studies of a similar sort, including Taylor's law and recent results related to Fairfield Smith's work on the variance of spatial averages.

15 May 2015
Lukasz Grabowski

Two subsets A and B of R^n are equidecomposable if it is possible to partition A into pieces and rearrange them via isometries to form a partition of B. Motivated by what is nowadays known as the Banach-Tarski paradox, Tarski asked whether the unit square and the disc of unit area in R^2 are equidecomposable. Sixty-five years later, Laczkovich showed that they are, at least when the pieces are allowed to be non-measurable sets. I will talk about joint work with A. Mathe and O. Pikhurko which implies, in particular, the existence of a measurable equidecomposition of the disc and the square in R^2.