Mon, 16 Nov 2015

15:00 - 16:00
L2

Magnitudes of compact sets in Euclidean spaces: an application of analysis to the theory of enriched categories

Tony Carbery
(University of Edinburgh)
Abstract

Leinster and Willerton have introduced the concept of the magnitude of a metric space, as a special case of that of an enriched category. It is a numerical invariant which is designed to capture the important geometric information about the space, but concrete examples of its values on compact sets in Euclidean space have hitherto been lacking. We discuss progress on some conjectures of Leinster and Willerton.
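
For a finite metric space the magnitude has a simple closed form (this is the standard definition for finite spaces, included here only as background, not a result from the talk): form the similarity matrix $Z_{ij} = e^{-d(x_i,x_j)}$ and, when $Z$ is invertible, the magnitude is the sum of the entries of $Z^{-1}$. A minimal sketch in Python:

    import numpy as np
    from scipy.spatial.distance import cdist

    def magnitude(points, t=1.0):
        """Magnitude of the finite metric space t*X: sum of the entries of
        Z^{-1}, where Z_ij = exp(-t * d(x_i, x_j))."""
        d = cdist(points, points)                     # pairwise Euclidean distances
        Z = np.exp(-t * d)                            # similarity matrix
        w = np.linalg.solve(Z, np.ones(len(points)))  # weight vector: Z w = 1
        return w.sum()                                # magnitude = 1^T Z^{-1} 1

    # Three collinear points: as the scale t grows, the magnitude tends to the
    # number of points, in line with the general theory for finite spaces.
    pts = np.array([[0.0], [1.0], [2.0]])
    for t in (0.1, 1.0, 10.0):
        print(t, magnitude(pts, t))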

Wed, 03 Feb 2016
15:00
L4

Computing with Encrypted Data

Elham Kashefi
(University of Edinburgh)
Abstract

The concept of delegated quantum computing is a quantum extension of the classical task of computing with encrypted data without decrypting it first. Many quantum protocols address this challenge for a futuristic quantum client-server setting, achieving a wide range of security properties. The central obstacle to making these protocols applicable to classical tasks (such as secure multi-party computation or fully homomorphic encryption) is their requirement of a server with a universal quantum computer. By restricting the task to classical computation only, we derive a protocol for unconditionally secure delegation of classical computation to a remote server that has access to basic quantum devices.

Thu, 11 Jun 2015

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

Interior Point Methods for Optimal Power Flow Formulations

Andreas Grothey
(University of Edinburgh)
Abstract

Security Constrained Optimal Power Flow is an increasingly important problem for power systems operation, both in its own right and as a subproblem for more complex problems such as transmission switching or unit commitment.

The structure of the problem resembles stochastic programming problems in that one aims to find a cost-optimal operation schedule that is feasible for all possible equipment outage scenarios (contingencies). Due to the presence of power flow constraints (in their "DC" or "AC" version), the resulting problem is a large-scale linear or nonlinear programming problem.

However, it is known that only a small subset of the contingencies is active at the solution. We show how interior point methods can exploit this structure, both by simplifying the linear algebra operations and by generating the necessary contingencies on the fly, integrating them into the algorithm using IPM warmstarting techniques. The final problem solved by this scheme is significantly smaller than the full contingency-constrained problem, resulting in substantial speed gains.

Numerical and theoretical results of our algorithm will be presented.
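
The contingency generation described above is, at heart, a lazy constraint-generation loop. The sketch below illustrates only that pattern on a synthetic LP (the data, the constraint shapes and the one-at-a-time selection rule are placeholders; the actual method works inside an interior point solver with warmstarting, which this toy loop does not attempt to reproduce):

    import numpy as np
    from scipy.optimize import linprog

    # Synthetic stand-in for a contingency-constrained problem:
    #   minimise c^T x  s.t.  x >= 0  and  a_k^T x >= 1  for every contingency k.
    # The real SCOPF problem has power-flow constraints instead; the point here is
    # only the "generate contingencies on the fly" loop described in the abstract.
    rng = np.random.default_rng(0)
    n, n_cont = 10, 200
    c = 1.0 + rng.random(n)
    A_cont = rng.random((n_cont, n))              # one row per contingency

    active = []                                   # contingencies added so far
    while True:
        if active:
            res = linprog(c, A_ub=-A_cont[active], b_ub=-np.ones(len(active)))
        else:
            res = linprog(c)                      # no contingency constraints yet
        x = res.x
        violation = 1.0 - A_cont @ x              # positive => contingency violated
        if active:
            violation[active] = -np.inf           # already enforced
        worst = int(np.argmax(violation))
        if violation[worst] <= 1e-8:
            break                                 # all contingencies satisfied
        active.append(worst)                      # add the worst one and re-solve

    print(f"{len(active)} of {n_cont} contingencies were needed at the solution")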

Tue, 28 Apr 2015

14:00 - 15:00
L3

Newton-type methods for Support Vector Machines and Signal Reconstruction Problems

Kimon Fountoulakis
(University of Edinburgh)
Abstract
Support vector machines and signal reconstruction problems have initiated a resurgence of optimization methods with inexpensive iterations, namely first-order methods. The efficiency of first-order methods has been shown for several well-conditioned instances. However, their practical convergence might be slow on ill-conditioned/pathological instances.
 
In this talk we will consider Newton-type methods, which aim to exploit the trade-off between inexpensive iterations and robustness. Two methods will be presented: a robust block coordinate descent method and a primal-dual Newton conjugate gradients method. We will discuss theoretical properties of the methods and present numerical experiments on large-scale $\ell_1$-regularized logistic regression and total variation problems.
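
As a rough illustration of the second-order flavour only (a generic Newton-CG run on a smoothed $\ell_1$-regularized logistic regression with synthetic data, not the primal-dual method of the talk), one can pass gradient and Hessian-vector products to a CG-based Newton solver:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_samples, n_features, lam, eps = 500, 200, 0.1, 1e-6
    X = rng.standard_normal((n_samples, n_features))
    y = (rng.random(n_samples) < 0.5).astype(float)

    def sigmoid(t):
        return 1.0 / (1.0 + np.exp(-t))

    # Smooth each |w_i| as sqrt(w_i^2 + eps) so that Newton-CG applies directly.
    def f(w):
        z = X @ w
        logloss = np.mean(np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0) - y * z)
        return logloss + lam * np.sum(np.sqrt(w**2 + eps))

    def grad(w):
        p = sigmoid(X @ w)
        return X.T @ (p - y) / n_samples + lam * w / np.sqrt(w**2 + eps)

    def hessp(w, v):
        p = sigmoid(X @ w)
        d = p * (1 - p)                       # diagonal of the logistic Hessian
        return X.T @ (d * (X @ v)) / n_samples + lam * (eps / (w**2 + eps)**1.5) * v

    res = minimize(f, np.zeros(n_features), jac=grad, hessp=hessp, method="Newton-CG")
    print(res.fun, np.sum(np.abs(res.x) > 1e-3), "coefficients away from zero")
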
Thu, 06 Nov 2014

16:00 - 17:30
L4

Securitization and equilibrium pricing under relative performance concerns

Dr. Gonçalo dos Reis
(University of Edinburgh)
Abstract

We investigate the effects of a finite set of agents interacting socially in an equilibrium pricing mechanism. A derivative written on non-tradable underlyings is introduced to the market and priced in an equilibrium framework by agents who assess risk using convex dynamic risk measures expressed by Backward Stochastic Differential Equations (BSDEs). An agent is not only exposed to financial and non-financial risk factors, but also faces performance concerns with respect to the other agents. The equilibrium analysis leads to systems of fully coupled multi-dimensional quadratic BSDEs.

Within our proposed models we prove the existence and uniqueness of an equilibrium. We show that aggregation of risk measures is possible and that a representative agent exists. We analyze the impact of the problem's parameters on the pricing mechanism, in particular how the agents' concern rates affect prices and risk perception.
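
For background only (this is the generic form of such an equation, not a formula quoted from the talk): an agent's risk assessment of a terminal position $\xi$ is given through a pair of processes $(Y,Z)$ solving a BSDE $Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s$, where $W$ is a Brownian motion and the driver $f$ encodes the risk preferences. The quadratic growth of $f$ in $Z$, together with the coupling of each agent's driver to the other agents' positions through the performance concerns, is what produces the fully coupled multi-dimensional quadratic systems mentioned above.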

Tue, 18 Feb 2014

14:00 - 14:30
L5

Optimal active-set prediction for interior point methods

Yiming Yan
(University of Edinburgh)
Abstract

When applied to an inequality constrained optimization problem, interior point methods generate iterates that belong to the interior of the set determined by the constraints, thus avoiding/ignoring the combinatorial aspect of the solution. This comes at the cost of difficulty in predicting the optimal active constraints that would enable termination.  We propose the use of controlled perturbations to address this challenge. Namely, in the context of linear programming with only nonnegativity constraints as the inequality constraints, we consider perturbing the nonnegativity constraints so as to enlarge the feasible set. Theoretically, we show that if the perturbations are chosen appropriately, the solution of the original problem lies on or close to the central path of the perturbed problem and that a primal-dual path-following algorithm applied to the perturbed problem is able to predict the optimal active-set of the original problem when the duality gap (for the perturbed problem) is not too small. Encouraging preliminary numerical experience is obtained when comparing the perturbed and unperturbed interior point algorithms' active-set predictions for the purpose of cross-over to simplex.
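
Schematically (the precise construction used in the talk may differ in detail): for a standard-form linear program $\min_x c^T x$ subject to $Ax = b$, $x \ge 0$, the controlled perturbation replaces the nonnegativity constraints by $x \ge -\lambda$ for a small vector $\lambda > 0$. This enlarges the feasible set so that, for suitably chosen $\lambda$, the solution of the original problem lies on or near the central path of the perturbed problem, and the original optimal active set can then be read off from an interior point iterate of the perturbed problem well before the duality gap becomes small.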

Thu, 13 Feb 2014

16:30 - 17:30
L1

Running the MMP via homological methods (COW SEMINAR)

Michael Wemyss
(University of Edinburgh)
Abstract

I will explain how, given a crepant morphism with one-dimensional fibres between 3-folds, it is possible to use noncommutative deformations to run the MMP in a satisfyingly algorithmic fashion.  As part of this, a flop is viewed homologically as the solution to a universal property, and so is constructed not by changing GIT, but instead by changing the algebra. Carrying this extra information of the new algebra allows us to continue to flop, and thus continue the MMP, without having to calculate everything from scratch. Proving things in this manner does in fact have other consequences too, and I will explain some of them, both theoretical and computational.

Tue, 28 Jan 2014

14:00 - 14:30
L5

Preconditioning and deflation techniques for interior point methods

Rachael Tappenden
(University of Edinburgh)
Abstract

The accurate and efficient solution of linear systems $Ax=b$ is very important in many engineering and technological applications, and systems of this form also arise as subproblems within other algorithms. In particular, this is true for interior point methods (IPM), where the Newton system must be solved to find the search direction at each iteration. Solving this system is a computational bottleneck of an IPM, and in this talk I will explain how preconditioning and deflation techniques can be used to lessen this computational burden.  This work is joint with Jacek Gondzio.
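
As generic background on where preconditioning enters (not the specific preconditioners or deflation spaces of the talk): for a linear program, one common route is to reduce the Newton system to the normal equations $A \Theta A^T \Delta y = r$, where $\Theta$ is a positive diagonal scaling that becomes severely ill-conditioned as the IPM converges, and to solve this system iteratively with a preconditioned conjugate gradient method. A minimal sketch with a simple Jacobi preconditioner:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, cg

    rng = np.random.default_rng(0)
    m, n = 200, 600
    # Standard-form constraint matrix [B  I]; the slack columns guarantee full row rank.
    A = sp.hstack([sp.random(m, n, density=0.02, random_state=0), sp.eye(m)]).tocsr()
    theta = rng.uniform(1e-6, 1e6, size=n + m)   # IPM scaling X S^{-1}: widely spread
    S = (A @ sp.diags(theta) @ A.T).tocsr()      # normal-equations matrix A Theta A^T
    rhs = rng.standard_normal(m)

    # Jacobi (diagonal) preconditioner -- the talk concerns far more sophisticated
    # preconditioning and deflation; this only shows the plumbing.
    d = S.diagonal()
    precond = LinearOperator((m, m), matvec=lambda x: x / d)

    dy, info = cg(S, rhs, M=precond, maxiter=5000)
    print("CG converged" if info == 0 else f"CG stopped with info={info}")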

Tue, 14 Jan 2014

18:00 - 18:50
L4

Decay for the Maxwell field outside a slowly rotating Kerr black hole

Pieter Blue
(University of Edinburgh)
Abstract

The Maxwell equation is an intermediate linear model for Einstein's equation lying between the scalar wave equation and the linearised Einstein equation. This talk will present the 5 key estimates necessary to prove a uniform bound on an energy and a Morawetz integrated local energy decay estimate for the nonradiating part.

The major obstacles, relative to the scalar wave equation, are: that a scalar equation must be found for at least one of the components, since there is no known decay estimate directly at the tensor level; that the scalar equation has a complex potential; and that there are stationary solutions and, in the nonzero $a$ Kerr case, it is more difficult to project away from these stationary solutions.

If time permits, some discussion of a geometric proof using the hidden symmetries will be given.

This is joint work with L. Andersson and is arXiv:1310.2664.

Thu, 29 May 2014
16:00
L3

Stochastic-Dynamical Methods for Molecular Modelling

Ben Leimkuhler
(University of Edinburgh)
Abstract

Molecular modelling has become a valuable tool and is increasingly part of the standard methodology of chemistry, physics, engineering and biology. The power of molecular modelling lies in its versatility: as potential energy functions improve, a broader range of more complex phenomena become accessible to simulation, but much of the underlying methodology can be re-used. For example, the Verlet method is still the most popular scheme for constant-energy molecular dynamics simulations, despite being one of the first methods proposed for the purpose.

One of the most important challenges in molecular modelling remains the computation of averages with respect to the canonical Gibbs (constant temperature) distribution, for which the Verlet method is not appropriate. Whereas constant energy molecular dynamics prescribes a set of equations (Newton's equations), there are many alternatives for canonical sampling with quite different properties. The challenge is therefore to identify formulations and numerical methods that are robust and maximally efficient in the computational setting.

One of the simplest and most effective methods for sampling is based on Langevin dynamics, which mimics coupling to a heat bath by the incorporation of random forces and an associated dissipative term. Schemes for Langevin dynamics simulation may be developed based on the familiar principle of splitting. I will show that the invariant measure ('long term') approximation may be strongly affected by a simple re-ordering of the terms of the splitting. I will describe a transition in weak numerical order of accuracy that occurs (in one case) in the $t \to \infty$ limit.
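
To make the splitting idea concrete, here is a minimal sketch (with unit mass and a toy double-well potential, neither taken from the talk) of one common ordering of the kick (B), drift (A) and Ornstein-Uhlenbeck (O) pieces; the orderings compared in the talk differ precisely in how such pieces are composed:

    import numpy as np

    def baoab(q, p, grad_U, h, gamma, beta, n_steps, rng):
        """One splitting ('BAOAB' ordering) for Langevin dynamics with unit mass:
           dq = p dt,  dp = -grad U(q) dt - gamma p dt + sqrt(2 gamma / beta) dW.
           Re-ordering the pieces (e.g. ABOBA, OBABO) changes the accuracy of
           long-time averages, which is the effect discussed in the talk."""
        c1 = np.exp(-gamma * h)
        c2 = np.sqrt((1.0 - c1**2) / beta)
        samples = np.empty(n_steps)
        for i in range(n_steps):
            p -= 0.5 * h * grad_U(q)                  # B: half kick
            q += 0.5 * h * p                          # A: half drift
            p = c1 * p + c2 * rng.standard_normal()   # O: exact OU step
            q += 0.5 * h * p                          # A: half drift
            p -= 0.5 * h * grad_U(q)                  # B: half kick
            samples[i] = q
        return samples

    # Example: sample the double-well potential U(q) = (q^2 - 1)^2 at beta = 1.
    rng = np.random.default_rng(0)
    traj = baoab(q=0.0, p=0.0, grad_U=lambda q: 4.0 * q * (q**2 - 1.0),
                 h=0.05, gamma=1.0, beta=1.0, n_steps=100_000, rng=rng)
    print("trajectory average of q^2:", np.mean(traj**2))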

I will also entertain some more radical suggestions for canonical sampling, including stochastic isokinetic methods that enable the use of greatly enlarged timesteps for expensive but slowly-varying force field components.
