Forthcoming events in this series


Thu, 14 Nov 2013

14:00 - 15:00
L5

Range space Krylov methods for data assimilation in meteorology and oceanography

Professor Philippe Toint
(University of Namur)
Abstract

The context of data assimilation in oceanography will be described, as well as the computational challenges associated with it. A class of numerical linear algebra methods will then be described whose purpose is to exploit the problem structure in order to reduce the computational burden and to provide provable convergence results for what remains a (very large) nonlinear problem. This class belongs to the Krylov-space family of methods, and the special structure used is the imbalance between the dimensions of the state space and the observation space. It is also shown how inexact matrix-vector products can be exploited. Finally, preconditioning issues and resulting adaptations of the trust-region methodology for nonlinear minimization will also be outlined.
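
For orientation (standard background, not taken verbatim from the talk): four-dimensional variational data assimilation leads to a weighted nonlinear least-squares problem of the form

$\min_{x_0}\ \tfrac{1}{2}(x_0-x_b)^T B^{-1}(x_0-x_b) + \tfrac{1}{2}\sum_{i=0}^{N}\big(\mathcal{H}_i(x_i)-y_i\big)^T R_i^{-1}\big(\mathcal{H}_i(x_i)-y_i\big), \qquad x_{i+1}=\mathcal{M}_i(x_i),$

where $x_b$ is the background state with error covariance $B$, the $y_i$ are observations with error covariances $R_i$, the $\mathcal{H}_i$ are observation operators and the $\mathcal{M}_i$ the model dynamics. The structural imbalance mentioned in the abstract is that the total number of observations is typically far smaller than the dimension of the state, so Krylov iterations can be carried out in the much smaller observation space.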

By Serge Gratton, Selime Gurol, Philippe Toint, Jean Tshimanga and Anthony Weaver.

Thu, 07 Nov 2013

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Sparse multifrontal QR factorization on the GPU

Professor Tim Davis
(University of Florida)
Abstract

Sparse matrix factorization involves a mix of regular and irregular computation, which is a particular challenge when trying to obtain high performance on the highly parallel general-purpose computing cores available on graphics processing units (GPUs). We present a sparse multifrontal QR factorization method that meets this challenge, and is up to ten times faster than a highly optimized method on a multicore CPU. Our method is unique compared with prior methods, since it factorizes many frontal matrices in parallel, and keeps all the data transmitted between frontal matrices on the GPU. A novel bucket scheduler algorithm extends the communication-avoiding QR factorization for dense matrices, by exploiting more parallelism and by exploiting the staircase form present in the frontal matrices of a sparse multifrontal method.

This is joint work with Nuri Yeralan and Sanjay Ranka.

Thu, 31 Oct 2013

14:00 - 15:00
L5

Don't be afraid of the 1001st (numerical) derivative

Professor Folkmar Bornemann
(Technical University Munich)
Abstract

The accurate and stable numerical calculation of higher-order derivatives of holomorphic functions (as required, e.g., in random matrix theory to extract probabilities from a generating function) turns out to be a surprisingly rich topic: there are connections to asymptotic analysis, the theory of entire functions, and to algorithmic graph theory.
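
As a rough, self-contained illustration of the underlying mechanism (a sketch, not material from the talk; the function and parameters are invented), the trapezoid rule applied to Cauchy's integral formula computes Taylor coefficients $f^{(n)}(0)/n!$, and the choice of contour radius decides whether the computation is stable:

    import math
    import numpy as np

    def taylor_coeff(f, n, r, N=1024):
        # n-th Taylor coefficient f^(n)(0)/n! via the trapezoid rule applied
        # to Cauchy's integral formula on the circle |z| = r; for periodic
        # analytic integrands the rule converges geometrically in N
        k = np.arange(N)
        z = r * np.exp(2j * np.pi * k / N)
        return (np.mean(f(z) * np.exp(-2j * np.pi * n * k / N)) / r**n).real

    n = 20
    exact = 1.0 / math.factorial(n)    # for f = exp, the coefficient is 1/n!
    for r in (1.0, float(n)):          # naive radius vs. a radius scaled with n
        c = taylor_coeff(np.exp, n, r)
        print(f"r = {r:4.1f}   relative error = {abs(c - exact) / exact:.1e}")

With r = 1 the answer is tiny compared to the integrand and cancellation destroys most digits; scaling the radius with n restores close to full relative accuracy, and choosing it well in general is part of what makes the topic rich.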

Thu, 24 Oct 2013

14:00 - 15:00
L5

A geometric theory of phase transitions in convex optimization

Dr Martin Lotz
(University of Manchester)
Abstract

Convex regularization has become a popular approach to solve large scale inverse or data separation problems. A prominent example is the problem of identifying a sparse signal from linear samples by minimizing the $\ell_1$ norm under linear constraints. Recent empirical research indicates that many convex regularization problems on random data exhibit a phase transition phenomenon: the probability of successfully recovering a signal changes abruptly from zero to one as the number of constraints increases past a certain threshold. We present a rigorous analysis that explains why phase transitions are ubiquitous in convex optimization. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems, to demixing problems, and to cone programs with random affine constraints. These applications depend on a new summary parameter, the statistical dimension of cones, that canonically extends the dimension of a linear subspace to the class of convex cones.
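
As a quick empirical illustration of such a phase transition (a sketch with arbitrary problem sizes, not the talk's experiments), one can pose $\ell_1$ recovery as a linear program and record how often a sparse signal is recovered as the number of random measurements grows:

    import numpy as np
    from scipy.optimize import linprog

    def l1_recover(A, b):
        # min ||x||_1 subject to Ax = b, via the split x = u - v with u, v >= 0
        m, n = A.shape
        res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                      bounds=[(0, None)] * (2 * n), method="highs")
        return res.x[:n] - res.x[n:]

    rng = np.random.default_rng(0)
    n, s, trials = 60, 6, 20
    x0 = np.zeros(n)
    x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    for m in (10, 20, 30, 40):
        ok = 0
        for _ in range(trials):
            A = rng.standard_normal((m, n))
            ok += np.linalg.norm(l1_recover(A, A @ x0) - x0) < 1e-6
        print(f"m = {m:2d}   success rate = {ok / trials:.2f}")

The success rate jumps from near 0 to near 1 as m passes a threshold; the statistical dimension of the descent cone of the $\ell_1$ norm at the signal predicts where.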

Joint work with Dennis Amelunxen, Mike McCoy and Joel Tropp.

Thu, 17 Oct 2013

14:00 - 15:00
L5

Model Selection with Piecewise Regular Gauges

Dr Gabriel Peyre
(Université Paris Dauphine)
Abstract

In this talk, we investigate in a unified way the structural properties of a large class of convex regularizers for linear inverse problems. We consider regularizations with convex positively 1-homogeneous functionals (so-called gauges) which are piecewise smooth. Singularities of such functionals are crucial to force the solution of the regularization to belong to a union of linear spaces of low dimension. These spaces (the so-called "models") allow one to encode many priors on the data to be recovered, conforming to some notion of simplicity/low complexity. This family of priors encompasses many special instances routinely used in regularized inverse problems, such as L^1, L^1-L^2 (group sparsity), the nuclear norm, or the L^infty norm. The piecewise-regular requirement is flexible enough to cope with analysis-type priors that include a pre-composition with a linear operator, such as the total variation and polyhedral gauges. This notion is also stable under summation of regularizers, thus making it possible to handle mixed regularizations.

The main set of contributions of this talk is dedicated to assessing the theoretical recovery performance of this class of regularizers. We provide sufficient conditions that allow us to provably control the deviation of the recovered solution from the true underlying object, as a function of the noise level. More precisely, we establish two main results. The first one ensures that the solution to the inverse problem is unique and lives on the same low-dimensional subspace as the true vector to recover, with the proviso that the minimal signal-to-noise ratio is large enough. This extends previous results well known for the L^1 norm [1], the analysis L^1 semi-norm [2], and the nuclear norm [3] to the general class of piecewise smooth gauges. In the second result, we establish L^2 stability by showing that the L^2 distance between the recovered and true vectors is within a factor of the noise level, thus extending results that hold for coercive convex positively 1-homogeneous functionals [4].

This is a joint work with S. Vaiter, C. Deledalle, M. Golbabaee and J. Fadili. For more details, see [5].

Bibliography:
[1] J.J. Fuchs, On sparse representations in arbitrary redundant bases. IEEE Transactions on Information Theory, 50(6):1341-1344, 2004.
[2] S. Vaiter, G. Peyré, C. Dossal, J. Fadili, Robust Sparse Analysis Regularization, to appear in IEEE Transactions on Information Theory, 2013.
[3] F. Bach, Consistency of trace norm minimization, Journal of Machine Learning Research, 9, 1019-1048, 2008.
[4] M. Grasmair, Linear convergence rates for Tikhonov regularization with positively homogeneous functionals. Inverse Problems, 27(7):075014, 2011.
[5] S. Vaiter, M. Golbabaee, J. Fadili, G. Peyré, Model Selection with Piecewise Regular Gauges, Preprint hal-00842603, 2013.

Thu, 13 Jun 2013

14:00 - 15:00
Gibson Grd floor SR

Lattice rules in a nutshell

Dr Dirk Nuyens
(KU Leuven)
Abstract

Lattice rules are equal-weight quadrature/cubature rules for the approximation of multivariate integrals which use lattice points as the cubature nodes. The quality of such cubature rules is directly related to the discrepancy between the uniform distribution and the discrete distribution of these points in the unit cube, and so they form a kind of low-discrepancy point set. As low-discrepancy based cubature rules look like Monte Carlo rules, except that they use cleverly chosen deterministic points, they are sometimes called quasi-Monte Carlo rules.


The talk starts by motivating the usage of Monte Carlo and then quasi-Monte Carlo methods after which some more recent developments are discussed. Topics include: worst-case errors in reproducing kernel Hilbert spaces, weighted spaces and the construction of lattice rules and sequences.


In the minds of many, quasi-Monte Carlo methods seem to share the bad reputation of the Monte Carlo method: a brute force method of last resort with slow order of convergence, i.e., $O(N^{-1/2})$. This is not so. While the standard rate of convergence for quasi-Monte Carlo is rather slow, being $O(N^{-1})$, the theory shows that these methods achieve the optimal rate of convergence in many interesting function spaces. E.g., in function spaces with higher smoothness one can have $O(N^{-\alpha})$, $\alpha > 1$. This will be illustrated by numerical examples.
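
As a minimal illustration (a sketch, not from the talk; the Fibonacci generating vectors below are a classical 2D choice), a rank-1 lattice rule evaluates the integrand at the points $\{k\,\mathbf{z}/N\}$, $k = 0,\dots,N-1$:

    import numpy as np

    def lattice_rule(f, z, N):
        # equal-weight rank-1 lattice rule with generating vector z
        k = np.arange(N)[:, None]
        return np.mean(f((k * z[None, :] / N) % 1.0))

    # smooth periodic test integrand on [0,1]^2 with exact integral 1
    f = lambda x: np.prod(1.0 + (x**2 - x + 1.0 / 6.0), axis=1)

    fib = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987]
    for i in (6, 9, 12, 14):                   # N = F_m, z = (1, F_{m-1})
        N, z = fib[i], np.array([1, fib[i - 1]])
        print(f"N = {N:4d}   error = {abs(lattice_rule(f, z, N) - 1.0):.2e}")

The observed errors decay roughly like $N^{-2}$ (up to logarithmic factors), far faster than the Monte Carlo rate $N^{-1/2}$, consistent with the higher-smoothness rates mentioned above.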

Thu, 06 Jun 2013

14:00 - 15:00
Gibson Grd floor SR

Discontinuous Galerkin Methods for Modeling the Coastal Ocean

Professor Clint Dawson
(University of Texas at Austin)
Abstract

The coastal ocean contains a diversity of physical and biological

processes, often occurring at vastly different scales. In this talk,

we will outline some of these processes and their mathematical

description. We will then discuss how finite element methods are used

in coastal ocean modeling and recent research into

improvements to these algorithms. We will also highlight some of the

successes of these methods in simulating complex events, such as

hurricane storm surges. Finally, we will outline several interesting

challenges which are ripe for future research.

Thu, 30 May 2013

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

The FEAST eigenvalue algorithm and solver with new perspectives for first-principle electronic structure calculations

Professor Eric Polizzi
(University of Massachusetts)
Abstract

FEAST is a new general purpose eigenvalue algorithm that takes its inspiration from the density-matrix representation and contour integration technique in quantum mechanics [Phys. Rev. B 79, 115112, (2009)], and it can be understood as a subspace iteration algorithm using approximate spectral projection [http://arxiv.org/abs/1302.0432 (2013)]. The algorithm combines simplicity and efficiency and offers many important capabilities for achieving high performance, robustness, accuracy, and multi-level parallelism on modern computing platforms. FEAST is also the name of a comprehensive numerical library package which currently (v2.1) focuses on solving symmetric eigenvalue problems on both shared-memory architectures (i.e. the FEAST-SMP version -- also integrated into Intel MKL since Feb 2013) and distributed architectures (i.e. the FEAST-MPI version), including three levels of parallelism (MPI-MPI-OpenMP).
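
To convey the flavour of the contour-integration idea (a toy sketch for the standard problem with B = I and invented sizes, not the actual library):

    import numpy as np

    def feast_sketch(A, center, radius, m0=8, nquad=8, iters=4, seed=0):
        # toy contour-based subspace iteration: a quadrature rule applied to
        # the spectral projector P = (1/(2*pi*i)) \oint (zI - A)^{-1} dz
        # filters a random block onto the eigenspace inside the contour;
        # Rayleigh-Ritz then extracts the eigenpairs (A real symmetric here)
        n = A.shape[0]
        Y = np.random.default_rng(seed).standard_normal((n, m0))
        theta = 2 * np.pi * (np.arange(nquad) + 0.5) / nquad
        for _ in range(iters):
            Q = np.zeros((n, m0), dtype=complex)
            for t in theta:          # trapezoid rule; dz = i r e^{it} dt
                z = center + radius * np.exp(1j * t)
                Q += np.exp(1j * t) * np.linalg.solve(z * np.eye(n) - A, Y)
            # conjugate-symmetric nodes make the sum real up to rounding
            Q = np.linalg.qr((radius / nquad) * Q.real)[0]
            w, V = np.linalg.eigh(Q.T @ A @ Q)   # Rayleigh-Ritz step
            Y = Q @ V
        return np.sort(w[np.abs(w - center) < radius])

    A = np.diag(np.arange(1.0, 21.0))               # spectrum {1, 2, ..., 20}
    print(feast_sketch(A, center=5.0, radius=2.5))  # recovers {3, 4, 5, 6, 7}

Each quadrature node costs one (independent, hence parallel) linear solve, which is where the multiple levels of parallelism in the real library come from.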


In this presentation, we aim at expanding the current capabilities of the FEAST eigenvalue algorithm and developing a unified numerical approach for solving linear, non-linear, symmetric and non-symmetric eigenvalue problems. The resulting algorithms retain many of the properties of the symmetric FEAST, including the multi-level parallelism. It will also be outlined that the development strategy roadmap for FEAST is closely tied to the need to address the variety of eigenvalue problems arising in computational nanosciences. Consequently, FEAST will also be presented beyond the "black-box" solver as a fundamental modeling framework for electronic structure calculations.


Three problems will be presented and discussed: (i) a highly efficient and robust FEAST-based alternative to the traditional self-consistent field (SCF) procedure for solving the non-linear eigenvector problem [J. Chem. Phys. 138, p194101 (2013)]; (ii) a fundamental and practical solution of the exact muffin-tin problem for performing both accurate and scalable all-electron electronic structure calculations using FEAST on parallel architectures [Comp. Phys. Comm. 183, p2370 (2012)]; (iii) a FEAST-spectral-based time-domain propagation technique for performing real-time TDDFT simulations. In order to illustrate the efficiency of the FEAST framework, numerical examples are provided for various molecules and carbon-based materials using our in-house all-electron real-space FEM implementation and both the DFT/Kohn-Sham/LDA and TDDFT/ALDA approaches.

Thu, 23 May 2013

14:00 - 15:00
Gibson Grd floor SR

Compressive Imaging: Stable Sampling Strategies using Shearlets

Professor Gitta Kutyniok
(TU Berlin)
Abstract

In imaging science, efficient acquisition of images by few samples, with the possibility to precisely recover the complete image, is a topic of significant interest. The area of compressed sensing, which in particular advocates random sampling strategies, has already had a tremendous impact on both theory and applications. The necessary requirement for such techniques to be applicable is the sparsity of the original data within some transform domain. Recovery is then achieved by, for instance, $\ell_1$ minimization. Various applications however do not allow complete freedom in the choice of the samples. Take Magnetic Resonance Imaging (MRI) for example, which only provides access to Fourier samples. For this particular application, empirical results still showed superior performance of compressed sensing techniques.


In this talk, we focus on sparse sampling strategies under the constraint that only Fourier samples can be accessed. Since images -- and in particular images from MRI -- are governed by anisotropic features and shearlets do provide optimally sparse approximations of those, compactly supported shearlet systems will be our choice for the reconstruction procedure. Our sampling strategy then exploits a careful variable density sampling of the Fourier samples with $\ell_1$-analysis based reconstruction using shearlets. Our main result provides success guarantees and shows that this sampling and reconstruction strategy is optimal.


This is joint work with Wang-Q Lim (Technische Universität Berlin).

Thu, 16 May 2013

14:00 - 15:00
Gibson Grd floor SR

Numerical Modeling of Vesicles: Inertial Flow and Electric Fields

Dr David Salac
(University at Buffalo)
Abstract

The behavior of lipid vesicles is due to a complex interplay between the mechanics of the vesicle membrane, the surrounding fluids, and any external fields which may be present. In this presentation two aspects of vesicle behavior are explored: vesicles in inertial flows and vesicles exposed to electric fields.

The first half of the talk will present work done investigating the behavior of two-dimensional vesicles in inertial flows. A level set representation of the interface is coupled to a Navier-Stokes projection solver. The standard projection method is modified to take into account not only the volume incompressibility of the fluids but also the surface incompressibility of the vesicle membrane. This surface incompressibility is enforced by using the closest point level set method to calculate the surface tension needed to maintain it. Results indicate that as inertial effects increase, vesicles change from a tumbling behavior to a stable tank-treading configuration. The numerical results bear a striking similarity to rigid particles in inertial flows. Using rigid particles as a guide, scaling laws are determined for vesicle behavior in inertial flows.

The second half of the talk will move to immersed interface solvers for three-dimensional vesicles exposed to electric fields. The jump conditions for pressure and fluid velocity will be developed for the three-dimensional Stokes flow or constant density Navier-Stokes equations assuming a piecewise continuous viscosity and an inextensible interface. An immersed interface solver is implemented to determine the fluid and membrane state. Analytic test cases have been developed to examine the convergence of the fluids solver.

Time permitting, an immersed interface solver developed to calculate the electric field for a vesicle exposed to an electric field will be discussed. Unlike in other multiphase systems, vesicle membranes have a time-varying potential which must be taken into account. This potential is implicitly determined along with the overall electric potential field.

Thu, 09 May 2013

14:00 - 15:00
Gibson Grd floor SR

Superconvergence for Discontinuous Galerkin solutions: Making it Useful

Dr Jennifer Ryan
(University of East Anglia)
Abstract

The discontinuous Galerkin (DG) method has recently become one of the most widely researched and utilized discretization methodologies for solving problems in science and engineering. It is fundamentally based upon the mathematical framework of variational methods, and provides hope that computationally fast, efficient and robust methods can be constructed for solving real-world problems. By not requiring the solution to be continuous across element boundaries, DG provides a flexibility that can be exploited both for geometric and solution adaptivity and for parallelization. This freedom comes at a cost. Lack of smoothness across elements can hamper simulation post-processing like feature extraction and visualization. However, these limitations can be overcome by taking advantage of an additional property of DG - that of superconvergence. Superconvergence can aid in addressing the lack of continuity through the development of Smoothness-Increasing Accuracy-Conserving (SIAC) filters. These filters respect the mathematical properties of the data while providing levels of smoothness so that commonly used visualization tools can be used appropriately, accurately, and efficiently. In this talk, the importance of superconvergence in applications such as visualization will be discussed as well as overcoming the mathematical barriers in making superconvergence useful for applications.

Thu, 02 May 2013

14:00 - 15:00
Gibson Grd floor SR

Barycentric Interpolation

Dr Kai Hormann
(University of Lugano)
Abstract

In this talk I will focus on the method of barycentric interpolation, which goes back to the ideas that August Ferdinand Möbius published in his seminal work "Der barycentrische Calcül" in 1827. For univariate data, this gives a special kind of rational interpolant which is guaranteed to have no poles and favourable approximation properties.
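
For concreteness, here is the simplest pole-free member of this family of rational interpolants: Berrut's barycentric interpolant with weights $w_j = (-1)^j$, the d = 0 case of the Floater-Hormann construction (an illustrative sketch, not the talk's code):

    import numpy as np

    def berrut_interp(xj, fj, x):
        # barycentric rational interpolation with weights w_j = (-1)^j;
        # evaluation points are assumed to lie off the nodes
        w = (-1.0) ** np.arange(len(xj))
        C = w / (x[:, None] - xj[None, :])
        return (C @ fj) / C.sum(axis=1)

    xj = np.linspace(-1.0, 1.0, 21)               # equispaced nodes
    fj = 1.0 / (1.0 + 25.0 * xj**2)               # Runge's function
    x = np.linspace(-0.9995, 0.9995, 800)
    err = np.abs(berrut_interp(xj, fj, x) - 1.0 / (1.0 + 25.0 * x**2))
    print(err.max())   # bounded: no spurious poles, no Runge-type blow-up

Unlike polynomial interpolation at the same equispaced nodes, the error stays under control; higher members of the family (d = 1, 2, ...) raise the approximation order.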

I further discuss how to extend this idea to bivariate data, where it leads to the concept of generalized barycentric coordinates and to an efficient method for interpolating data given at the vertices of an arbitrary polygon.

Thu, 25 Apr 2013

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Scalable Data Analytics

Dr Tobias Berka
(University of Cambridge)
Abstract

Very large-scale data analytics are an alleged golden goose for efforts in parallel and distributed computing, and yet contemporary statistics remain somewhat of a dark art for the uninitiated. In this presentation, we are going to take a mathematical and algorithmic look beyond the veil of Big Data by studying the structure of the algorithms and data, and by analyzing the fit to existing and proposed computer systems and programming models. Towards highly scalable kernels, we will also discuss some of the promises and challenges of approximation algorithms using randomization, sampling, and decoupled processing, touching some contemporary topics in parallel numerics.

Thu, 18 Apr 2013

14:00 - 15:00
Gibson Grd floor SR

The exponentially convergent trapezoid rule

Professor Nick Trefethen
(University of Oxford)
Abstract

It is well known that the trapezoid rule converges geometrically when applied to analytic functions on periodic intervals or the real line. The mathematics and history of this phenomenon are reviewed and it is shown that far from being a curiosity, it is linked with powerful algorithms all across scientific computing, including double exponential and Gauss quadrature, computation of inverse Laplace transforms, special functions, computational complex analysis, the computation of functions of matrices and operators, rational approximation, and the solution of partial differential equations.
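
A three-line demonstration of the phenomenon (illustrative, not from the talk): for the periodic analytic integrand $1/(2+\cos t)$, whose mean over $[0,2\pi]$ is exactly $1/\sqrt{3}$, the $N$-point trapezoid rule error falls geometrically:

    import numpy as np

    exact = 1.0 / np.sqrt(3.0)
    for N in (4, 8, 16, 32):
        t = 2.0 * np.pi * np.arange(N) / N
        approx = np.mean(1.0 / (2.0 + np.cos(t)))   # trapezoid rule, periodic case
        print(f"N = {N:2d}   error = {abs(approx - exact):.2e}")

Each doubling of N roughly squares the error until rounding takes over: the geometric convergence whose consequences the talk traces through scientific computing.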

This talk represents joint work with Andre Weideman of the University of Stellenbosch.

Thu, 07 Mar 2013

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

The How and Why of Balancing

Dr Philip Knight
(University of Strathclyde)
Abstract

We consider the problem of taking a matrix A and finding diagonal matrices D and E such that the rows and columns of B = DAE satisfy some specific constraints. Examples of constraints are that

* the row and column sums of B should all equal one;
* the norms of the rows and columns of B should all be equal;
* the row and column sums of B should take values specified by vectors p and q.

Simple iterative algorithms for solving these problems have been known for nearly a century. We provide a simple framework for describing these algorithms that allows us to develop robust convergence results and describe a straightforward approach to accelerate the rate of convergence.
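
The best known of these is the Sinkhorn-Knopp iteration for the doubly stochastic case (the first constraint in the list above); a minimal sketch with a random positive matrix:

    import numpy as np

    def sinkhorn(A, tol=1e-12, max_iter=1000):
        # alternately rescale rows and columns of a nonnegative matrix A
        # until all row and column sums of B = diag(r) A diag(c) equal one
        r = np.ones(A.shape[0])
        for _ in range(max_iter):
            c = 1.0 / (A.T @ r)                   # fix the column sums
            r = 1.0 / (A @ c)                     # fix the row sums
            B = r[:, None] * A * c[None, :]
            if max(abs(B.sum(0) - 1).max(), abs(B.sum(1) - 1).max()) < tol:
                break
        return B

    B = sinkhorn(np.random.default_rng(0).random((5, 5)))
    print(B.sum(axis=0), B.sum(axis=1))           # all ones, up to tol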

We describe some of the diverse applications of balancing with examples from preconditioning, clustering, network analysis and psephology.

This is joint work with Kerem Akartunali (Strathclyde), Daniel Ruiz (ENSEEIHT, Toulouse) and Bora Ucar (ENS, Lyon).

Thu, 28 Feb 2013

14:00 - 15:00
Gibson Grd floor SR

Introduction to tensor numerical methods in higher dimensions

Dr Boris Khoromskij
(MPI Leipzig)
Abstract

Tensor numerical methods provide efficient separable representations of multivariate functions and operators discretized on large $n^{\otimes d}$-grids, providing a basis for the solution of $d$-dimensional PDEs with linear complexity scaling in the dimension, $O(d n)$. Modern methods of separable approximation combine the canonical, Tucker, matrix product states (MPS) and tensor train (TT) low-parametric data formats.


The recent quantized-TT (QTT) approximation method is proven to provide logarithmic data compression on a wide class of functions and operators. Furthermore, QTT approximation makes it possible to represent multi-dimensional steady-state and dynamical equations in quantized tensor spaces with log-volume complexity scaling in the full-grid size, $O(d \log n)$, instead of $O(n^d)$.
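
As a toy illustration of quantization plus tensor-train compression (a sketch of the standard TT-SVD with invented data, not the talk's software): a function sampled on a grid of size $2^{10}$ is reshaped into a 10-dimensional binary tensor, and for a separable function such as the exponential every TT rank collapses to one, so 1024 values are stored in about 20 numbers:

    import numpy as np

    def tt_svd(T, tol=1e-10):
        # tensor-train (TT/MPS) decomposition by successive truncated SVDs
        d, shape = T.ndim, T.shape
        cores, r, C = [], 1, T
        for k in range(d - 1):
            C = C.reshape(r * shape[k], -1)
            U, s, Vt = np.linalg.svd(C, full_matrices=False)
            rk = max(1, int((s > tol * s[0]).sum()))   # truncated TT rank
            cores.append(U[:, :rk].reshape(r, shape[k], rk))
            C = s[:rk, None] * Vt[:rk]
            r = rk
        cores.append(C.reshape(r, shape[-1], 1))
        return cores

    x = np.exp(np.linspace(0.0, 1.0, 2**10))   # exp is separable in the bits
    cores = tt_svd(x.reshape([2] * 10))
    print([c.shape for c in cores])            # all TT ranks equal 1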


We show how grid-based tensor approximation in quantized tensor spaces applies to the super-compressed representation of functions and operators (super-fast convolution and FFT, spectrally close preconditioners) as well as to hard problems arising in electronic structure calculations, such as multi-dimensional convolution and two-electron integrals factorization in the framework of Hartree-Fock calculations. The QTT method also applies to the time-dependent molecular Schrödinger, Fokker-Planck and chemical master equations.


Numerical tests are presented indicating the efficiency of tensor methods in approximation of functions, operators and PDEs in many dimensions.


http://personal-homepages.mis.mpg.de/bokh

Thu, 21 Feb 2013

14:00 - 15:00
Gibson Grd floor SR

Optimization meets Statistics: Fast global convergence for high-dimensional statistical recovery

Professor Martin Wainwright
(UC Berkeley)
Abstract

Many methods for solving high-dimensional statistical inverse problems are based on convex optimization problems formed by the weighted sum of a loss function and a norm-based regularizer.


Particular examples include $\ell_1$-based methods for sparse vectors and matrices, nuclear norm for low-rank matrices, and various combinations thereof for matrix decomposition and robust PCA. In this talk, we describe an interesting connection between computational and statistical efficiency, in particular showing that the same conditions that guarantee that an estimator has good statistical error can also be used to certify fast convergence of first-order optimization methods up to statistical precision.
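
To make the connection concrete, here is a representative first-order method, proximal gradient descent (ISTA) for $\ell_1$-regularized least squares, on a synthetic sparse instance (a sketch with invented parameters, not the talk's experiments):

    import numpy as np

    def ista(A, b, lam, steps=400):
        # proximal gradient for  min (1/2)||Ax - b||^2 + lam ||x||_1
        L = np.linalg.norm(A, 2) ** 2             # gradient Lipschitz constant
        x, hist = np.zeros(A.shape[1]), []
        for _ in range(steps):
            z = x - A.T @ (A @ x - b) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            hist.append(x.copy())
        return x, hist

    rng = np.random.default_rng(0)
    n, p, s = 100, 200, 5                          # high-dimensional: p > n
    x0 = np.zeros(p); x0[:s] = 1.0
    A = rng.standard_normal((n, p)) / np.sqrt(n)
    b = A @ x0 + 0.01 * rng.standard_normal(n)
    xh, hist = ista(A, b, lam=0.1 * np.abs(A.T @ b).max())
    for k in (10, 50, 100, 200):
        print(k, np.linalg.norm(hist[k] - xh))     # rapid contraction

The optimization error contracts geometrically until it reaches the statistical precision of the estimator, which is the phenomenon the talk quantifies.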


Joint work with Alekh Agarwal and Sahand Negahban. Preprint (to appear in Annals of Statistics):


http://www.eecs.berkeley.edu/~wainwrig/Papers/AgaNegWai12b_SparseOptFul…

Thu, 14 Feb 2013

14:00 - 15:00
Gibson Grd floor SR

High frequency acoustic scattering by screens: computation and analysis

Professor Simon Chandler-Wilde
(University of Reading)
Abstract

We address, in the study of acoustic scattering by 2D and 3D planar screens, three inter-related and challenging questions. Each of these questions focuses particularly on the formulation of these problems as boundary integral equations. The first question is, roughly, does it make sense to consider scattering by screens which occupy arbitrary open sets in the plane, and do different screens cause the same scattering if the open sets they occupy have the same closure? This question is intimately related to rather deep properties of fractional Sobolev spaces on general open sets, and the capacity and Hausdorff dimension of their boundary. The second question is, roughly: in answering the first question, can we understand explicitly and quantitatively the influence of the frequency of the incident time harmonic wave? It turns out that we can: the problems have variational formulations with sesquilinear forms which are bounded and coercive on fractional Sobolev spaces, and we can determine explicitly how continuity and coercivity constants depend on the frequency. The third question is: can we design computational methods, adapted to the highly oscillatory solution behaviour at high frequency, which have computational cost essentially independent of the frequency? The answer here is that in 2D we can provably achieve solutions to any desired accuracy using a number of degrees of freedom which provably grows only logarithmically with the frequency, and that it looks promising that some extension to 3D is possible.


This is joint work with Dave Hewett, Steve Langdon, and Ashley Twigger, all at Reading.

Thu, 07 Feb 2013

14:00 - 15:00
Gibson Grd floor SR

Pointwise convergence of the feasibility violation for Moreau-Yosida regularized optimal control problems

Dr Winnifried Wollner
(University of Hamburg)
Abstract

Subtitle: And applications to problems involving pointwise constraints on the gradient of the state on non-smooth polygonal domains


In this talk we are concerned with an analysis of Moreau-Yosida regularization of pointwise state constrained optimal control problems. As recent analysis has revealed, the convergence of the primal variables is dominated by the reduction of the feasibility violation in the maximum norm.


We will use a new method to derive convergence of the feasibility violation in the maximum norm, improving the known convergence rates.


Finally we will employ these techniques to analyze optimal control problems governed by elliptic PDEs with pointwise constraints on the gradient of the state on non-smooth polygonal domains. For these problems, standard analysis fails because the control-to-state mapping does not yield sufficient regularity for the states to be continuously differentiable on the closure of the domain. Nonetheless, these problems are well posed. In particular, the results of the first part will be used to derive convergence rates for the primal variables of the regularized problem.

Thu, 31 Jan 2013

14:00 - 15:00
Gibson Grd floor SR

On the Origins of Domain Decomposition Methods

Professor Martin Gander
(University of Geneva)
Abstract

Domain decomposition methods have been developed in various contexts, and with very different goals in mind. I will start my presentation with the historical inventions of the Schwarz method, the Schur methods and Waveform Relaxation. I will show for a simple model problem how all these domain decomposition methods function, give precise results for the model problem, and also explain the most general convergence results available currently for these methods. I will conclude with the parareal algorithm as a new variant for parallelization of evolution problems in the time direction.
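
To fix ideas, a sketch of the classical alternating Schwarz method on the simplest possible model problem, $-u'' = 1$ on $(0,1)$ with two overlapping subdomains (an illustration with arbitrary parameters, not the talk's material):

    import numpy as np

    def poisson_dirichlet(f, a, b, ua, ub, m):
        # second-order finite differences for -u'' = f on (a,b), u(a)=ua, u(b)=ub
        h = (b - a) / (m + 1)
        x = np.linspace(a, b, m + 2)
        T = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
        rhs = h**2 * f(x[1:-1])
        rhs[0] += ua; rhs[-1] += ub
        u = np.empty(m + 2); u[0], u[-1] = ua, ub
        u[1:-1] = np.linalg.solve(T, rhs)
        return x, u

    f = lambda x: np.ones_like(x)
    exact = lambda x: 0.5 * x * (1.0 - x)          # true solution
    alpha, beta, m = 0.4, 0.6, 59                  # subdomains (0,beta), (alpha,1)
    g_beta = 0.0                                   # initial guess at x = beta
    for it in range(1, 7):
        xl, ul = poisson_dirichlet(f, 0.0, beta, 0.0, g_beta, m)
        g_alpha = np.interp(alpha, xl, ul)         # pass the trace to the right
        xr, ur = poisson_dirichlet(f, alpha, 1.0, g_alpha, 0.0, m)
        g_beta = np.interp(beta, xr, ur)           # pass the trace back to the left
        print(f"sweep {it}:  interface error = {abs(g_beta - exact(beta)):.2e}")

The interface error contracts by a fixed factor each sweep, here $\alpha(1-\beta)/(\beta(1-\alpha)) = 4/9$, which is the kind of precise model-problem result the talk presents.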

Thu, 24 Jan 2013

14:00 - 15:00
Gibson Grd floor SR

A hybrid finite element-Lagrangian marker technique for geodynamics: Spatial discretisations, implicit solvers and numerics

Dr David May
(ETH Zurich)
Abstract

Over million-year time scales, the evolution and deformation of rocks on Earth can be described by the equations governing the motion of a very viscous, incompressible fluid. In this regime, the rocks within the crust and mantle lithosphere exhibit both brittle and ductile behaviour. Collectively, these rheologies result in an effective viscosity which is non-linear and may exhibit extremely large variations in space. In the context of geodynamics applications, we are interested in studying large deformation processes both prior and subsequent to the onset of material failure.


Here I introduce a hybrid finite element (FE) - Lagrangian marker discretisation which has been specifically designed to enable the numerical simulation of geodynamic processes. In this approach, a mixed FE formulation is used to discretise the incompressible Stokes equations, whilst the markers are used to discretise the material lithology.


First I will show the a priori error estimates associated with this hybrid discretisation and demonstrate the convergence characteristics via several numerical examples. Then I will discuss several multi-level preconditioning strategies for the saddle point problem which are robust with respect to both large variations in viscosity and the underlying topological structure of the viscosity field.


Finally, I will describe an extension of the multi-level preconditioning strategy that enables high-resolution, three-dimensional simulations to be performed with a small memory footprint and which is performant on multi-core, parallel architectures.

Thu, 17 Jan 2013

14:00 - 15:00
Gibson Grd floor SR

Multi-task Learning and Structured Sparsity

Professor Massimiliano Pontil
(University College London)
Abstract

We discuss the problem of estimating a structured matrix with a large number of elements. A key motivation for this problem occurs in multi-task learning. In this case, the columns of the matrix correspond to the parameters of different regression or classification tasks, and there is structure due to relations between the tasks. We present a general method to learn the tasks' parameters as well as their structure. Our approach is based on solving a convex optimization problem, involving a data term and a penalty term. We highlight different types of penalty terms which are of practical and theoretical importance. They implement structural relations between the tasks and achieve sparse representations of parameters. We address computational issues as well as the predictive performance of the method. Finally we discuss how these ideas can be extended to learn non-linear task functions by means of reproducing kernels.

Thu, 10 Jan 2013

14:00 - 15:00
Gibson Grd floor SR

Packing Ellipsoids with Overlap

Professor Stephen Wright
(University of Wisconsin-Madison)
Abstract

Problems of packing shapes with maximal density, sometimes into a container of restricted size, are classical in discrete mathematics. We describe here the problem of packing a given set of ellipsoids of different sizes into a finite container, in a way that allows overlap but that minimizes the maximum overlap between adjacent ellipsoids. We describe a bilevel optimization algorithm for finding local solutions of this problem, both the general case and the simpler special case in which the ellipsoids are spheres. Tools from conic optimization, especially semidefinite programming, are key to the algorithm. Finally, we describe the motivating application - chromosome arrangement in cell nuclei - and compare the computational results obtained with this approach to experimental observations.

This talk represents joint work with Caroline Uhler (IST Austria).

Thu, 29 Nov 2012

14:00 - 15:00
Gibson Grd floor SR

A locally adaptive Cartesian finite-volume framework for solving PDEs on surfaces

Dr Donna Calhoun
(Boise State University)
Abstract

We describe our current efforts to develop finite volume schemes for solving PDEs on logically Cartesian, locally adapted surface meshes. Our methods require an underlying smooth or piecewise smooth grid transformation from a Cartesian computational space to 3d surface meshes, but do not rely on analytic metric terms to obtain second order accuracy. Our hyperbolic solvers are based on Clawpack (R. J. LeVeque) and the parabolic solvers are based on a diamond-cell approach (Y. Coudière, T. Gallouët, R. Herbin et al). If time permits, I will also discuss Discrete Duality Finite Volume methods for solving elliptic PDEs on surfaces.

To do local adaptation and time subcycling in regions requiring high spatial resolution, we are developing ForestClaw, a hybrid adaptive mesh refinement (AMR) code in which non-overlapping fixed-size Cartesian grids are stored as leaves in a forest of quad- or oct-trees. The tree-based code p4est (C. Burstedde) manages the multi-block connectivity and is highly scalable in realistic applications.

I will present results from reaction-diffusion systems on surface meshes, and test problems from the atmospheric sciences community.

Thu, 22 Nov 2012

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Domain decomposition for total variation regularisation and applications

Dr Carola-Bibiane Schönlieb
(DAMTP, University of Cambridge)
Abstract

Domain decomposition methods were introduced as techniques for solving partial differential equations based on a decomposition of the spatial domain of the problem into several subdomains. The initial equation restricted to the subdomains defines a sequence of new local problems. The main goal is to solve the initial equation via the solution of the local problems. This procedure induces a dimension reduction which is chiefly responsible for the success of such a method. Indeed, one of the principal motivations is the formulation of solvers which can be easily parallelized.

In this presentation we shall develop a domain decomposition algorithm for the minimization of functionals with total variation constraints. In this case the interesting solutions may be discontinuous, e.g., along curves in 2D. These discontinuities may cross the interfaces of the domain decomposition patches. Hence, the crucial difficulty is the correct treatment of interfaces, with the preservation of crossing discontinuities and the correct matching where the solution is continuous instead. I will present our domain decomposition strategy, including convergence results for the algorithm and numerical examples for its application in image inpainting and magnetic resonance imaging.

Thu, 15 Nov 2012

14:00 - 15:00
Gibson Grd floor SR

Optimally Blended Spectral-Finite Element Scheme for Wave Propagation and Non-Standard Reduced Integration

Professor Mark Ainsworth
(Brown University)
Abstract

We study the dispersion and dissipation of the numerical scheme obtained by taking a weighted averaging of the consistent (finite element) mass matrix and lumped (spectral element) mass matrix in the small wave number limit. We find and prove that for the optimum blending the resulting scheme

(a) provides $2p+4$ order accuracy for the $p$th order method (two orders more accurate compared with finite and spectral element schemes);

(b) has an absolute accuracy which is $\mathcal{O}(p^{-3})$ and $\mathcal{O}(p^{-2})$ times better than that of the pure finite and spectral element schemes, respectively;

(c) tends to exhibit phase lag.

Moreover, we show that the optimally blended scheme can be efficiently implemented merely by replacing the usual Gaussian quadrature rule used to assemble the mass and stiffness matrices by novel nonstandard quadrature rules which are also derived.

Thu, 08 Nov 2012

14:00 - 15:00
Gibson Grd floor SR

On the design and error control of higher order in time ALE formulations

Dr Irene Kyza
(IACM-FORTH)
Abstract

ALE formulations are useful when approximating solutions of problems in deformable domains, such as fluid-structure interactions. For realistic simulations involving fluids in 3d, it is important that the ALE method is at least second-order accurate. Second order ALE methods in time, without any constraint on the time step, do not exist in the literature, and the role of the so-called geometric conservation law (GCL) for stability and accuracy is not clear. We propose discontinuous Galerkin (dG) methods of any order in time for a time dependent advection-diffusion model problem in moving domains. We prove that our proposed schemes are unconditionally stable and that the conservative and non-conservative formulations are equivalent. The same results remain true when appropriate quadrature is used for the approximation of the involved integrals in time. The analysis hinges on the validity of a discrete Reynolds' identity and generalises the GCL to higher order methods. We also prove that the computationally less intensive Runge-Kutta-Radau (RKR) methods of any order are stable, subject to a mild ALE constraint. A priori and a posteriori error analysis is provided. The final estimates are of optimal order of accuracy. Numerical experiments confirm and complement our theoretical results.

This is joint work with Andrea Bonito and Ricardo H. Nochetto.

Thu, 01 Nov 2012

14:00 - 15:00
Gibson Grd floor SR

Discontinuous Galerkin Methods for Surface PDEs

Dr Andreas Dedner
(University of Warwick)
Abstract

The Discontinuous Galerkin (DG) method has been used to solve a wide range of partial differential equations. Especially for advection dominated problems it has proven very reliable and accurate. But even for elliptic problems it has advantages over continuous finite element methods, especially when parallelization and local adaptivity are considered.

In this talk we will first present a variation of the compact DG method for elliptic problems with varying coefficients. For this method we can prove stability on general grids providing a computable bound for all free parameters. We developed this method to solve the compressible Navier-Stokes equations and demonstrated its efficiency in the case of meteorological problems using our implementation within the DUNE software framework, comparing it to the operational code COSMO used by the German weather service.

After introducing the notation and analysis for DG methods in Euclidean spaces, we will present a-priori error estimates for the DG method on surfaces. The surface finite-element method with continuous ansatz functions was analysed a few years ago by Dziuk/Elliott; we extend their results to the interior penalty DG method, where the non-smooth approximation of the surface introduces some additional challenges.

Thu, 25 Oct 2012

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Numerical Methods for PDEs with Random Coefficients

Dr Elisabeth Ullmann
(University of Bath)
Abstract

Partial differential equations (PDEs) with random coefficients are used in computer simulations of physical processes in science, engineering and industry applications with uncertain data. The goal is to obtain quantitative statements on the effect of input data uncertainties for a comprehensive evaluation of simulation results. However, these equations are formulated in a physical domain coupled with a sample space generated by random parameters, and are thus very computationally intensive.

We outline the key computational challenges by discussing a model elliptic PDE of single phase subsurface flow in random media. In this application the coefficients are often rough, highly variable and require a large number of random parameters which puts a limit on all existing discretisation methods. To overcome these limits we employ multilevel Monte Carlo (MLMC), a novel variance reduction technique which uses samples computed on a hierarchy of physical grids. In particular, we combine MLMC with mixed finite element discretisations to calculate travel times of particles in groundwater flows.
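
To fix ideas, here is a toy multilevel Monte Carlo estimator for a scalar SDE (a textbook-style sketch with invented parameters, not the talk's subsurface-flow application): the telescoping sum $E[P_L] = E[P_0] + \sum_{l=1}^{L} E[P_l - P_{l-1}]$ is sampled with coupled coarse and fine paths, so most samples live on the cheap levels:

    import numpy as np

    rng = np.random.default_rng(0)

    def level_sample(l, M=2):
        # one coupled sample of P_l - P_{l-1}: Euler-Maruyama for
        # dS = r S dt + sig S dW with M**l steps; the coarse path reuses
        # sums of the fine path's Brownian increments
        r, sig, T, S0, K = 0.05, 0.2, 1.0, 1.0, 1.0
        nf = M ** l; hf = T / nf
        dW = np.sqrt(hf) * rng.standard_normal(nf)
        Sf = S0
        for w in dW:
            Sf += r * Sf * hf + sig * Sf * w
        Pf = np.exp(-r * T) * max(Sf - K, 0.0)
        if l == 0:
            return Pf
        Sc, hc = S0, M * hf
        for j in range(0, nf, M):
            Sc += r * Sc * hc + sig * Sc * dW[j:j + M].sum()
        return Pf - np.exp(-r * T) * max(Sc - K, 0.0)

    N = [40000, 10000, 2500, 600, 150]         # fixed allocation per level
    est = sum(np.mean([level_sample(l) for _ in range(N[l])])
              for l in range(len(N)))
    print(est)   # near the Black-Scholes value ~0.1045, up to bias and noise

In practice the per-level sample sizes are chosen adaptively from estimated variances; the point of the method is that the variance of the corrections $P_l - P_{l-1}$ decays rapidly with the level.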

For coefficients which can be parameterised by a small number of random variables we employ spectral stochastic Galerkin (SG) methods which give rise to a coupled system of deterministic PDEs. Since the standard SG formulation of the model elliptic PDE requires expensive matrix-vector products we reformulate it as a convection-diffusion problem with random convective velocity. We construct and analyse block-diagonal preconditioners for the nonsymmetric Galerkin matrix for use with Krylov subspace methods such as GMRES.

Thu, 18 Oct 2012

14:00 - 15:00
Gibson Grd floor SR

FEM/BEM coupling for wave propagation

Dr Lehel Banjai
(Heriot-Watt University)
Abstract

We will discuss the numerical simulation of acoustic wave propagation with localized inhomogeneities. To do this we will apply a standard finite element method (FEM) in space and explicit time-stepping in time on a finite spatial domain containing the inhomogeneities. The equations in the exterior computational domain will be dealt with using a time-domain boundary integral formulation, discretized by the Galerkin boundary element method (BEM) in space and convolution quadrature in time.


We will give the analysis of the proposed method, starting with the proof of a positivity preservation property of convolution quadrature as a consequence of a variant of the Herglotz theorem. Combining this result with standard energy analysis of leap-frog discretization of the interior equations will give us both stability and convergence of the method. Numerical results will also be given.

Thu, 11 Oct 2012

14:00 - 15:00
Gibson Grd floor SR

Automated parallel adjoints for model differentiation, optimisation and stability analysis

Dr Patrick Farrell
(Imperial College London)
Abstract

The derivatives of PDE models are key ingredients in many important algorithms of computational science. They find applications in diverse areas such as sensitivity analysis, PDE-constrained optimisation, continuation and bifurcation analysis, error estimation, and generalised stability theory.

These derivatives, computed using the so-called tangent linear and adjoint models, have made an enormous impact in certain scientific fields (such as aeronautics, meteorology, and oceanography). However, their use in other areas has been hampered by the great practical difficulty of the derivation and implementation of tangent linear and adjoint models. In his recent book, Naumann (2011) describes the problem of the robust automated derivation of parallel tangent linear and adjoint models as "one of the great open problems in the field of high-performance scientific computing".

In this talk, we present an elegant solution to this problem for the common case where the original discrete forward model may be written in variational form, and discuss some of its applications.
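
To make the adjoint terminology concrete (generic background with invented data, not the talk's automated framework): for a steady linear model $Au = f$ and a scalar output $J = g^T u$, a single adjoint solve gives the gradient of $J$ with respect to the entire right-hand side:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # forward model matrix
    f = rng.standard_normal(n)                          # parameters / forcing
    g = rng.standard_normal(n)                          # output functional

    u = np.linalg.solve(A, f)                           # forward solve
    lam = np.linalg.solve(A.T, g)                       # adjoint solve: dJ/df = lam

    # check one gradient component against a finite difference
    J = lambda rhs: g @ np.linalg.solve(A, rhs)
    e = np.zeros(n); e[3] = 1.0; eps = 1e-6
    print(lam[3], (J(f + eps * e) - J(f)) / eps)        # agree to ~1e-6

The point is the cost: the full gradient requires one extra solve with $A^T$, independent of the number of parameters; automating the derivation of such adjoints for general discretised PDE models is the problem the talk addresses.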

Thu, 04 Oct 2012

14:00 - 15:00
Gibson Grd floor SR

The Science of Ice Sheets: the Mathematical Modeling and Computational Simulation of Ice Flows

Professor Max Gunzburger
(Florida State University)
Abstract

The melting of ice in Greenland and Antarctica would, of course, be by far the major contributor to any possible sea-level rise. Thus, to make science-based predictions about sea-level rise, it is crucial that the ice sheets covering those land masses be accurately mathematically modeled and computationally simulated. In fact, the 2007 IPCC report on the state of the climate did not include predictions about sea-level rise because it was concluded there that the science of ice sheets was not developed to a sufficient degree; as a result, predictions could not be rationally and confidently made. In recent years, there has been much activity in trying to improve the state-of-the-art of ice sheet modeling and simulation. In this lecture, we review a hierarchy of mathematical models for the flow of ice, pointing out the relative merits and demerits of each, showing how they are coupled to other climate system components (ocean and atmosphere), and discussing where further modeling work is needed. We then discuss algorithmic approaches for the approximate solution of ice sheet flow models and present and compare results obtained from simulations using the different mathematical models.

Thu, 14 Jun 2012

14:00 - 15:00
Gibson Grd floor SR

Piecewise constant control approximation to multi-dimensional HJB equations

Dr Christoph Reisinger
(University of Oxford)
Abstract

While a general framework of approximating the solution to Hamilton-Jacobi-Bellman (HJB) equations by difference methods is well established, and efficient numerical algorithms are available for one-dimensional problems, much less is known in the multi-dimensional case. One difficulty is the monotone approximation of cross-derivatives, which guarantees convergence to the viscosity solution. We propose a scheme combining piecewise freezing of the policies in time with a suitable spatial discretisation to establish convergence for a wide class of equations, and give numerical illustrations for a diffusion equation with uncertain parameters. These equations arise, for instance, in the valuation of financial derivatives under model uncertainty.

This is joint work with Peter Forsyth.

Thu, 07 Jun 2012

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

From Numerical Rocks to Spatial Data Assimilation

Dr Chris Farmer
(University of Oxford)
Abstract

Uncertainty quantification can begin by specifying the initial state of a system as a probability measure. Part of the state (the 'parameters') might not evolve, and might not be directly observable. Many inverse problems are generalisations of uncertainty quantification such that one modifies the probability measure to be consistent with measurements, a forward model and the initial measure. The inverse problem, interpreted as computing the posterior probability measure of the states, including the parameters and the variables, from a sequence of noise-corrupted observations, is reviewed in the talk. Bayesian statistics provides a natural framework for a solution but leads to very challenging computational problems, particularly when the dimension of the state space is very large, as when it arises from the discretisation of a partial differential equation.


In this talk we show how the Bayesian framework leads to a new algorithm - the 'Variational Smoothing Filter' - that unifies the leading techniques in use today. In particular the framework provides an interpretation and generalisation of Tikhonov regularisation, a method of forecast verification and a way of quantifying and managing uncertainty. To deal with the problem that a good initial prior may not be Gaussian, as with a general prior intended to describe, for example, a geological structure, a Gaussian mixture prior is used. This has many desirable properties, including ease of sampling to make 'numerical rocks' or 'numerical weather' for visualisation purposes and statistical summaries, and in principle can approximate any probability density. Robustness is sought by combining a variational update with this full mixture representation of the conditional posterior density.

Thu, 31 May 2012

14:00 - 15:00
Gibson Grd floor SR

High order adaptive finite element approximations for cardiac electrophysiology

Dr David Kay
(University of Oxford)
Abstract

This talk will present a computationally efficient method of simulating cardiac electrical propagation using an adaptive high-order finite element method. The refinement strategy automatically concentrates computational effort where it is most needed in space on each time-step. We drive the adaptivity using a residual-based error indicator, and demonstrate using norms of the error that the indicator allows the error to be controlled successfully. Our results, using two-dimensional domains of varying complexity, demonstrate that significant improvements in efficiency are possible over the state-of-the-art, indicating that these methods should be investigated for implementation in whole-heart scale software.

Thu, 24 May 2012

14:00 - 15:00
Gibson Grd floor SR

A linear eigenvalue algorithm for nonlinear eigenvalue problems

Dr Elias Jarlebring
(KTH Stockholm)
Abstract

The Arnoldi method for standard eigenvalue problems possesses several attractive properties making it robust, reliable and efficient for many problems. We will present here a new algorithm, equivalent to the Arnoldi method but designed for nonlinear eigenvalue problems (NEPs), corresponding to the problem associated with a matrix depending on a parameter in a nonlinear but analytic way. As a first result we show that the eigenvalues of the nonlinear problem are the reciprocal eigenvalues of an infinite-dimensional linear operator. We consider the Arnoldi method for this operator and show that with a particular choice of starting function and a particular choice of scalar product, the structure of the operator can be exploited in a very effective way. The structure of the operator is such that when the Arnoldi method is started with a constant function, the iterates will be polynomials. For a large class of NEPs, we show that we can carry out the infinite dimensional Arnoldi algorithm for the operator in arithmetic based on standard linear algebra operations on vectors and matrices of finite size. This is achieved by representing the polynomials by vector coefficients. The resulting algorithm is by construction such that it is completely equivalent to the standard Arnoldi method and also inherits many of its attractive properties, which are illustrated with examples.

Thu, 17 May 2012

14:00 - 15:00
Gibson Grd floor SR

Towards time-stepping-free solution of large initial value problems by block Krylov projections

Dr Mike Botchev
(University of Twente)
Abstract

Exponential time integrators are a powerful tool for the numerical solution of time dependent problems. The actions of the matrix functions on vectors, necessary for exponential integrators, can be efficiently computed by different elegant numerical techniques, such as Krylov subspaces. Unfortunately, in some situations the additional work required by exponential integrators per time step is not paid off because the time step can not be increased too much due to the accuracy restrictions. To get around this problem, we propose the so-called time-stepping-free approach. This approach works for linear ordinary differential equation (ODE) systems where the time dependent part forms a small-dimensional subspace. In this case the time dependence can be projected out by block Krylov methods onto the small, projected ODE system. Thus, there is just one block Krylov subspace involved and there are no time steps. We refer to this method as EBK, the exponential block Krylov method. The accuracy of EBK is determined by the Krylov subspace error and the solution accuracy in the projected ODE system. EBK works well for linear systems; its extension to nonlinear problems is an open problem, and we discuss possible ways for such an extension.
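
For background, a compact sketch of the basic Krylov building block mentioned above, the evaluation of $\exp(tA)v$ from an Arnoldi factorization (illustrative, with an invented test matrix; not the EBK code itself):

    import numpy as np
    from scipy.linalg import expm

    def arnoldi_expm(A, v, t, m):
        # Krylov approximation of exp(tA) v: build the Arnoldi relation
        # A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T and use
        #   exp(tA) v  ~=  ||v|| * V_m exp(t H_m) e_1
        n = len(v)
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        beta = np.linalg.norm(v)
        V[:, 0] = v / beta
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):                # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
        return beta * V[:, :m] @ expm(t * H[:m, :m])[:, 0]

    rng = np.random.default_rng(0)
    n, t = 100, 0.1
    d = -np.arange(1.0, n + 1.0)                  # stiff diagonal test matrix
    A = np.diag(d)
    v = rng.standard_normal(n)
    for m in (5, 10, 20, 30):
        err = np.linalg.norm(arnoldi_expm(A, v, t, m) - np.exp(t * d) * v)
        print(m, err)                             # rapid decay with m

The reduced matrix $H_m$ is tiny, so its exponential is cheap; the accuracy question is how the error depends on the subspace dimension and the time step, which is exactly the trade-off the talk's time-stepping-free approach sidesteps.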

Thu, 10 May 2012

14:00 - 15:00
Gibson Grd floor SR

Frequency-independent approximation of integral formulations of Helmholtz boundary value problems

Professor Mario Bebendorf
(University of Bonn)
Abstract

We present recent numerical techniques for the treatment of integral formulations of Helmholtz boundary value problems in the case of high frequencies. The combination of $H^2$-matrices with further developments of the adaptive cross approximation makes it possible to solve such problems with logarithmic-linear complexity independent of the frequency. An advantage of this new approach over existing techniques such as fast multipole methods is its stability over the whole range of frequencies, whereas other methods are efficient either for low or high frequencies.

Thu, 03 May 2012

14:00 - 15:00
Gibson Grd floor SR

The orthogonal gradients method: A radial basis functions method for solving partial differential equations on arbitrary surfaces

Dr Cécile Piret
(Université catholique de Louvain.)
Abstract

Although much work has been done on using RBFs for reconstructing arbitrary surfaces, using RBFs to solve PDEs on arbitrary manifolds is only now being considered and is the subject of this talk. We will review current methods and introduce a new technique that is loosely inspired by the Closest Point Method. This new technique, the Orthogonal Gradients Method (OGr), benefits from the advantages of using RBFs: the simplicity and high accuracy, but also the meshfree character, which gives the flexibility to represent the most complex geometries in any dimension.

Thu, 26 Apr 2012

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

qr_mumps: a multithreaded multifrontal QR solver

Dr Alfredo Buttari
(CNRS-IRIT Toulouse)
Abstract

The advent of multicore processors represents a disruptive event in the history of computer science as conventional parallel programming paradigms are proving incapable of fully exploiting their potential for concurrent computations. The need for different or new programming models clearly arises from recent studies which identify fine-granularity and dynamic execution as the keys to achieve high efficiency on multicore systems. This talk shows how these models can be effectively applied to the multifrontal method for the QR factorization of sparse matrices providing a very high efficiency achieved through a fine-grained partitioning of data and a dynamic scheduling of computational tasks relying on a dataflow parallel programming model. Moreover, preliminary results will be discussed showing how the multifrontal QR factorization can be accelerated by using low-rank approximation techniques.

Thu, 19 Apr 2012

14:00 - 15:00
Gibson Grd floor SR

Navier-Stokes-Fokker-Planck systems: analysis and approximation

Professor Endre Süli
(University of Oxford)
Abstract

The talk will survey recent developments concerning the existence and the approximation of global weak solutions to a general class of coupled microscopic-macroscopic bead-spring chain models that arise in the kinetic theory of dilute solutions of polymeric liquids with noninteracting polymer chains. The class of models involves the unsteady incompressible Navier-Stokes equations in a bounded domain for the velocity and the pressure of the fluid, with an elastic extra-stress tensor appearing on the right-hand side of the momentum equation. The extra-stress tensor stems from the random movement of the polymer chains and is defined by the Kramers expression through the associated probability density function that satisfies a Fokker-Planck type parabolic equation. Models of this kind were proposed in the work of Hans Kramers in the early 1940s, and the existence of global weak solutions to the model has been a long-standing question in the mathematical analysis of kinetic models of dilute polymers.


We also discuss computational challenges associated with the numerical approximation of the high-dimensional Fokker-Planck equation featuring in the model.

Thu, 08 Mar 2012

14:00 - 15:00
Gibson Grd floor SR

Solution of ill-posed inverse problems pertaining to signal restoration

Professor Rosie Renaut
(Arizona State University)
Abstract

In this talk I review the use of the spectral decomposition for understanding the solution of ill-posed inverse problems. It is immediate to see that regularization is needed in order to find stable solutions. These solutions, however, do not typically allow reconstruction of signal features such as edges. Generalized regularization assists but is still insufficient and methods of total variation are commonly suggested as an alternative. In the talk I consider application of standard approaches from Tikhonov regularization for finding appropriate regularization parameters in the total variation augmented Lagrangian implementations. Areas for future research will be considered.
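
A minimal numpy sketch of the spectral (SVD) viewpoint described above, on a toy deblurring problem with invented parameters (not the talk's data):

    import numpy as np

    def tikhonov_svd(A, b, lam):
        # Tikhonov solution of min ||Ax - b||^2 + lam^2 ||x||^2 via the SVD;
        # the filter factors s^2/(s^2 + lam^2) damp small-singular-value modes
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return Vt.T @ ((s * (U.T @ b)) / (s**2 + lam**2))

    rng = np.random.default_rng(1)
    n = 64
    t = np.linspace(0.0, 1.0, n)
    A = np.exp(-50.0 * (t[:, None] - t[None, :])**2) / n   # smoothing operator
    x = np.maximum(0.0, 1.0 - 8.0 * np.abs(t - 0.5))       # signal with a kink
    b = A @ x + 1e-4 * rng.standard_normal(n)
    for lam in (1e-10, 1e-4, 1e-2):
        err = np.linalg.norm(tikhonov_svd(A, b, lam) - x)
        print(f"lam = {lam:.0e}   error = {err:.2e}")

Too little regularization amplifies the noise catastrophically and too much oversmooths, and neither setting reproduces the kink sharply: the failure of edge reconstruction that motivates the total variation approaches discussed in the talk.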

Thu, 01 Mar 2012

14:00 - 15:00
Gibson Grd floor SR

Two-Grid hp-Adaptive Discontinuous Galerkin Finite Element Methods for Second-Order Quasilinear Elliptic PDEs

Professor Paul Houston
(University of Nottingham)
Abstract

In this talk we present an overview of some recent developments concerning the a posteriori error analysis and adaptive mesh design of $h$- and $hp$-version discontinuous Galerkin finite element methods for the numerical approximation of second-order quasilinear elliptic boundary value problems. In particular, we consider the derivation of computable bounds on the error measured in terms of an appropriate (mesh-dependent) energy norm in the case when a two-grid approximation is employed. In this setting, the fully nonlinear problem is first computed on a coarse finite element space $V_{H,P}$. The resulting 'coarse' numerical solution is then exploited to provide the necessary data needed to linearise the underlying discretization on the finer space $V_{h,p}$; thereby, only a linear system of equations is solved on the richer space $V_{h,p}$. Here, an adaptive $hp$-refinement algorithm is proposed which automatically selects the local mesh size and local polynomial degrees on both the coarse and fine spaces $V_{H,P}$ and $V_{h,p}$, respectively. Numerical experiments confirming the reliability and efficiency of the proposed mesh refinement algorithm are presented.

Thu, 23 Feb 2012

14:00 - 15:00
Gibson Grd floor SR

High frequency scattering by non-convex polygons

Dr Stephen Langdon
(University of Reading)
Abstract

Standard numerical schemes for acoustic scattering problems suffer from the restriction that the number of degrees of freedom required to achieve a prescribed level of accuracy must grow at least linearly with respect to the frequency. In this talk, we review recent progress on the development and analysis of hybrid numerical-asymptotic boundary integral equation methods for these problems. The key idea of this approach is to form an ansatz for the solution based on knowledge of the high frequency asymptotics, allowing one to achieve any required accuracy via the approximation of only (in many cases provably) non-oscillatory functions. In particular, we discuss very recent work extending these ideas for the first time to non-convex scatterers.

Thu, 09 Feb 2012

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Efficient, communication-minimizing algorithms for the symmetric eigenvalue decomposition and the singular value decomposition

Dr Yuji Nakatsukasa
(University of Manchester)
Abstract

Computing the eigenvalue decomposition of a symmetric matrix and the singular value decomposition of a general matrix are two of the central tasks in numerical linear algebra. There has been much recent work in the development of linear algebra algorithms that minimize communication cost. However, the reduction in communication cost sometimes comes at the expense of significantly more arithmetic and potential instability.


In this talk I will describe algorithms for the two decompositions that have optimal communication cost and arithmetic cost within a small factor of those for the best known algorithms. The key idea is to use the best rational approximation of the sign function, which lets the algorithm converge in just two steps. The algorithms are backward stable and easily parallelizable. Preliminary numerical experiments demonstrate their efficiency.
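
For background (a deliberately crude relative of the talk's idea, not its algorithm): the matrix sign function yields spectral projectors for divide-and-conquer eigensolvers, and the classical Newton iteration computes it; the talk's methods replace this many-step iteration by the best rational approximation of sign, so that two steps suffice. A toy sketch:

    import numpy as np

    def sign_newton(A, iters=30):
        # Newton iteration X <- (X + X^{-1})/2 for the matrix sign function
        X = A.copy()
        for _ in range(iters):
            X = 0.5 * (X + np.linalg.inv(X))
        return X

    rng = np.random.default_rng(0)
    Q = np.linalg.qr(rng.standard_normal((6, 6)))[0]
    A = Q @ np.diag([-3.0, -2.0, -0.5, 0.7, 1.5, 4.0]) @ Q.T  # no eigenvalue at 0
    P = 0.5 * (np.eye(6) + sign_newton(A))   # projector onto positive eigenspace
    print(np.trace(P))    # = 3, the number of positive eigenvalues (up to rounding)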

Thu, 02 Feb 2012

14:00 - 15:00
Gibson Grd floor SR

Optimal Newton-type methods for nonconvex smooth optimization

Dr Coralia Cartis
(University of Edinburgh)
Abstract

We show that the steepest-descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and function evaluations arbitrarily close to the steepest-descent's global worst-case complexity bound. This implies that the latter upper bound is essentially tight for steepest descent and that Newton's method may be as slow as the steepest-descent method in the worst case. Then the cubic regularization of Newton's method (Griewank (1981), Nesterov & Polyak (2006)) is considered and extended to large-scale problems, while preserving the same order of its improved worst-case complexity (by comparison to that of steepest-descent); this improved worst-case bound is also shown to be tight. We further show that the cubic regularization approach is, in fact, optimal from a worst-case complexity point of view amongst a wide class of second-order methods. The worst-case problem-evaluation complexity of constrained optimization will also be discussed. This is joint work with Nick Gould (Rutherford Appleton Laboratory, UK) and Philippe Toint (University of Namur, Belgium).
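
For reference, the cubic model underlying the regularization just described (standard background, not notation taken from the talk): at the iterate $x_k$ one approximately minimizes

$m_k(s) = f(x_k) + g_k^T s + \tfrac{1}{2} s^T B_k s + \tfrac{\sigma_k}{3}\,\|s\|^3,$

where $g_k$ is the gradient, $B_k$ an approximation to the Hessian, and the weight $\sigma_k$ is adapted much like a trust-region radius. The improved worst-case bound referred to above is $O(\epsilon^{-3/2})$ evaluations to drive the gradient norm below $\epsilon$, compared with $O(\epsilon^{-2})$ for steepest descent.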

Thu, 26 Jan 2012

14:00 - 15:00
Gibson Grd floor SR

Interior Point warmstarts and stochastic programming

Dr Andreas Grothey
(University of Edinburgh)
Abstract

We present progress on an Interior Point based multi-step solution approach for stochastic programming problems. Our approach works with a series of scenario trees that can be seen as successively more accurate discretizations of an underlying probability distribution and employs IPM warmstarts to "lift" approximate solutions from one tree to the next larger tree.