Thu, 26 Jan 2012

14:00 - 15:00
Gibson Grd floor SR

Interior Point warmstarts and stochastic programming

Dr Andreas Grothey
(University of Edinburgh)
Abstract

We present progress on an Interior Point based multi-step solution approach for stochastic programming problems. Our approach works with a series of scenario trees that can be seen as successively more accurate discretizations of an underlying probability distribution and employs IPM warmstarts to "lift" approximate solutions from one tree to the next larger tree.
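
A minimal sketch of the loop this describes (the function names and the ipm_solve/lift callables below are hypothetical placeholders, not the authors' code): solve on the coarsest tree, lift the approximate solution onto the next, larger tree, and use it to warmstart the interior point method there.

    # Hypothetical sketch of the multi-step refinement scheme.  `ipm_solve` stands
    # in for an interior point solver accepting a warmstart point, and `lift` for
    # a mapping of an approximate solution onto a finer scenario tree.
    def solve_by_refinement(trees, ipm_solve, lift):
        """trees: scenario trees ordered from coarsest to finest discretization."""
        warmstart = None
        solution = None
        for level, tree in enumerate(trees):
            solution = ipm_solve(tree, start=warmstart)      # primal-dual iterate
            if level + 1 < len(trees):
                # "lift" the approximate solution onto the next, larger tree
                warmstart = lift(solution, tree, trees[level + 1])
        return solution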

Mon, 28 Nov 2011

12:00 - 13:00
L3

Emergent IR CFTs in black hole physics

Joan Simon
(University of Edinburgh)
Abstract

I will discuss the dynamical emergence of IR conformal invariance describing the low energy excitations of near-extremal R-charged global AdS${}_5$ black holes. To keep some non-trivial dynamics in the sector of ${\cal N}=4$ SYM captured by the near horizon limits describing this IR physics, we are led to study large N limits in the UV theory involving near-vanishing-horizon black holes. I will consider both near-BPS and non-BPS regimes, emphasising the differences in the local AdS${}_3$ throats emerging in both cases. I will compare these results with the predictions obtained by Kerr/CFT, obtaining a natural quantisation for the central charge of the near-BPS emergent IR CFT describing the open strings stretched between giant gravitons.

Thu, 13 Oct 2011

14:00 - 15:00
Gibson Grd floor SR

Efficiency of serial and parallel block-coordinate descent methods for minimizing composite functions

Dr Peter Richtárik
(University of Edinburgh)
Abstract

We develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an $\epsilon$-accurate solution with probability at least $1-\rho$ in at most $O[(2n/\epsilon)\log(1/\rho)]$ iterations, where $n$ is the dimension of the problem. For strongly convex functions the method converges linearly. This extends recent results of Nesterov [Efficiency of coordinate descent methods on huge-scale optimization problems, CORE Discussion Paper \#2010/2], which cover the smooth case, to composite minimization, while at the same time improving the complexity by a factor of 4 and removing $\epsilon$ from the logarithm. More importantly, in contrast with the aforementioned work in which the authors achieve the results by applying their method to a regularized version of the objective function with an unknown scaling factor, we show that this is not necessary, thus achieving true iteration complexity bounds. In the smooth case we also allow for arbitrary probability vectors and non-Euclidean norms. Time permitting, we will also mention new iteration complexity results for a parallel version of the method.
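
As an illustration of the setting (a minimal sketch for blocks of size one, uniform sampling and Euclidean norms, applied to an $\ell_1$-regularized least squares problem; it is not the paper's code):

    # Randomized coordinate descent for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # Each step picks one coordinate uniformly at random and applies a prox
    # (soft-thresholding) update scaled by the coordinate-wise Lipschitz constant.
    import numpy as np

    def rcdm_lasso(A, b, lam, iters=100000, seed=0):
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        x = np.zeros(n)
        r = A @ x - b                         # residual, updated incrementally
        L = (A ** 2).sum(axis=0)              # L_i = ||A_i||^2 (assumes no zero columns)
        for _ in range(iters):
            i = rng.integers(n)               # uniform random coordinate
            g = A[:, i] @ r                   # i-th partial derivative of the smooth part
            z = x[i] - g / L[i]
            xi_new = np.sign(z) * max(abs(z) - lam / L[i], 0.0)   # soft threshold
            r += (xi_new - x[i]) * A[:, i]    # keep the residual consistent
            x[i] = xi_new
        return x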

In the second part of the talk we demonstrate numerically that the algorithm is able to solve huge-scale $\ell_1$-regularized support vector machine and least squares problems with a billion variables. Finally, we present preliminary computational results with a GPU-accelerated parallel version of the method, on truss topology design instances, achieving speedups of up to two orders of magnitude when compared to a single-core implementation in C.

References:

P. Richtarik and M. Takac, Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function, 2011; http://arxiv.org/abs/1107.2848

P. Richtarik and M. Takac, Efficient serial and parallel coordinate descent methods for huge-scale truss topology design, 2011; http://www.optimization-online.org/DB_HTML/2011/08/3118.html

P. Richtarik and M. Takac, Efficiency of randomized coordinate descent methods on minimization problems with a composite objective function, Proceedings of SPARS11

Thu, 20 Oct 2005

14:00 - 15:00
Comlab

From sparsity to block-sparsity: direct solution of linear systems of dimension 10^9

Prof Jacek Gondzio
(University of Edinburgh)
Abstract

We discuss a method for solving very large structured symmetric indefinite equation systems arising in optimization with interior point methods.

Many real-life economic models involve system dynamics, spatial distribution or uncertainty and lead to large-scale optimization problems. Such problems usually have a hidden structure: they are constructed by replication of some small generic block. The linear algebra subproblems which arise in optimization algorithms for such problems involve matrices which are not only sparse, but they additionally display a block-structure with many smaller blocks sparsely distributed in the large matrix.
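
As a concrete illustration of how such block structure is exploited (a standard structure-exploiting device, written here as a sketch rather than the exact factorization used in OOPS), the linear systems arising from a problem built by replicating $N$ blocks have a bordered block-diagonal form
\[
\begin{pmatrix}
\Phi_1 &        &        & B_1^T\\
       & \ddots &        & \vdots\\
       &        & \Phi_N & B_N^T\\
B_1    & \cdots & B_N    & \Phi_0
\end{pmatrix}
\begin{pmatrix} x_1\\ \vdots\\ x_N\\ x_0 \end{pmatrix}
=
\begin{pmatrix} r_1\\ \vdots\\ r_N\\ r_0 \end{pmatrix},
\qquad
C \;=\; \Phi_0 - \sum_{i=1}^{N} B_i\,\Phi_i^{-1} B_i^T .
\]
The diagonal blocks $\Phi_i$ can then be factorized independently (and in parallel), the Schur complement $C$ is formed and solved for $x_0$, and each $x_i$ is recovered by back-substitution; nested block structures are handled by applying the same idea recursively.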

We have developed a structure-exploiting parallel interior point solver for such optimization problems. Its design uses object-orientated programming techniques. The program OOPS (Object-Orientated Parallel Solver: http://www.maths.ed.ac.uk/~gondzio/parallel/solver.html) runs and achieves scalability on a number of different computing platforms. We illustrate its performance on a collection of problems with sizes reaching 10^9 variables arising from asset liability management and portfolio optimization.

This is joint work with Andreas Grothey.

Mon, 13 Jun 2011
15:45
Oxford-Man Institute

"The Second Law of Probability: Entropy growth in the central limit process."

Keith Ball
(University of Edinburgh)
Abstract

The talk will explain how a geometric principle gave rise to a new variational description of information-theoretic entropy and how this led to the solution of a problem dating back to the 1950s: whether the central limit theorem is driven by an analogue of the second law of thermodynamics.
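
For context, the result in question can be phrased as follows (a standard formulation of entropy monotonicity along the central limit theorem, stated here for reference rather than quoted from the talk): if $X_1, X_2, \dots$ are i.i.d. random variables with finite variance and $H$ denotes differential entropy, then
\[
H\!\left(\frac{X_1+\cdots+X_{n+1}}{\sqrt{n+1}}\right) \;\ge\; H\!\left(\frac{X_1+\cdots+X_n}{\sqrt{n}}\right), \qquad n \ge 1,
\]
so the normalized sums approach the Gaussian, the maximum-entropy law for a given variance, with entropy increasing at every step, in analogy with the second law.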

Thu, 02 Dec 2010

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

A high performance dual revised simplex solver

Dr Julian Hall
(University of Edinburgh)
Abstract

Implementations of the revised simplex method for solving large scale sparse linear programming (LP) problems are highly efficient for single-core architectures. This talk will discuss the limitations of the underlying techniques in the context of modern multi-core architectures, in particular with respect to memory access. Novel techniques for implementing the dual revised simplex method will be introduced, and their use in developing a dual revised simplex solver for multi-core architectures will be described.

Mon, 17 Nov 2008
15:45
Oxford-Man Institute

The story of three polytopes and what they tell us about information acquisition

Dr. Jared Tanner
(University of Edinburgh)
Abstract

We will examine the typical structure of random polytopes by projecting the three fundamental regular polytopes: the simplex, cross-polytope, and hypercube. Along the way we will explore the implications of their structure for information acquisition and optimization. Examples of these implications include: that an N-vector with k non-zeros can be recovered computationally efficiently from only n random projections with n = 2e k log(N/n), or that for a surprisingly large set of optimization problems the feasible set is actually a point. These implications are driving a new signal processing paradigm, Compressed Sensing, which has already led to substantive improvements in various imaging modalities. This work is joint with David L. Donoho.
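
As a purely numerical illustration of the quoted bound (reading n = 2e k log(N/n) as an implicit equation for n, with a natural logarithm assumed), it can be solved by a simple fixed-point iteration:

    # Illustrative only: treat n = 2e*k*log(N/n) as a fixed-point equation
    # and iterate it to estimate how many random projections suffice.
    import math

    def projections_needed(N, k, iters=100, tol=1e-6):
        n = float(k)                      # any reasonable positive starting guess
        for _ in range(iters):
            n_new = 2 * math.e * k * math.log(N / n)
            if abs(n_new - n) < tol:
                return n_new
            n = n_new
        return n

    # e.g. N = 10**6, k = 100 gives roughly 3.1e3 projections,
    # far fewer than the 10**6 samples of the full vector.
    print(round(projections_needed(10**6, 100)))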

Thu, 20 Nov 2008
12:00
Gibson 1st Floor SR

Elliptic equations in the plane satisfying a Carleson measure condition

David Rule
(University of Edinburgh)
Abstract

We study the Neumann and regularity boundary value problems for a divergence form elliptic equation in the plane. We assume the gradient of the coefficient matrix satisfies a Carleson measure condition and consider data in L^p for a range of exponents p > 1.
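
For reference, one common way to make such a hypothesis precise (a generic formulation, stated as an assumption rather than the talk's exact condition): writing the equation as $\mathrm{div}(A\nabla u)=0$ on the upper half-plane $\mathbb{R}^2_+ = \{(x,t) : x \in \mathbb{R},\, t > 0\}$, one asks that
\[
d\mu(x,t) \;=\; t\,|\nabla A(x,t)|^{2}\,dx\,dt
\]
be a Carleson measure, i.e. that $\sup_{I} \mu\big(I \times (0,|I|)\big)/|I| < \infty$, the supremum taken over all bounded intervals $I \subset \mathbb{R}$.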
