Mon, 13 Jun 2011
15:45
Oxford-Man Institute

"The Second Law of Probability: Entropy growth in the central limit process."

Keith Ball
(University of Edinburgh)
Abstract

The talk will explain how a geometric principle gave rise to a new variational description of information-theoretic entropy, and how this led to the solution of a problem dating back to the 1950s: whether the central limit theorem is driven by an analogue of the second law of thermodynamics.
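For reference, the monotonicity statement behind this question, in standard notation (our paraphrase, not the talk's wording): for i.i.d. square-integrable random variables with normalised sums S_n, the entropy of S_n never decreases, while the central limit theorem sends S_n to the Gaussian, the entropy maximiser at fixed variance.

    % Entropy monotonicity along the central limit process (standard statement).
    % S_n = (X_1 + ... + X_n)/\sqrt{n},  Ent(f) = -\int f \log f  (differential entropy).
    \[
      \operatorname{Ent}(S_n) \;\le\; \operatorname{Ent}(S_{n+1}) \quad \text{for all } n,
      \qquad
      S_n \xrightarrow{\;d\;} N(0,\sigma^2),
    \]
    % with the Gaussian maximising entropy among laws of variance \sigma^2.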

Thu, 02 Dec 2010

14:00 - 15:00
Rutherford Appleton Laboratory, nr Didcot

A high performance dual revised simplex solver

Dr Julian Hall
(University of Edinburgh)
Abstract

Implementations of the revised simplex method for solving large scale sparse linear programming (LP) problems are highly efficient for single-core architectures. This talk will discuss the limitations of the underlying techniques in the context of modern multi-core architectures, in particular with respect to memory access. Novel techniques for implementing the dual revised simplex method will be introduced, and their use in developing a dual revised simplex solver for multi-core architectures will be described.
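Techniques from this line of work later underpinned the HiGHS solver, which SciPy exposes; as a minimal illustration (a made-up toy LP, not an example from the talk), the dual simplex variant can be selected explicitly:

    import numpy as np
    from scipy.optimize import linprog

    # Toy LP (illustrative): minimise c^T x subject to A_ub x <= b_ub, x >= 0.
    c = np.array([-1.0, -2.0])
    A_ub = np.array([[1.0, 1.0],
                     [1.0, 3.0]])
    b_ub = np.array([4.0, 6.0])

    # method="highs-ds" selects the HiGHS dual revised simplex solver.
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ds")
    print(res.x, res.fun)   # optimal vertex (3, 1) with objective -5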

Mon, 17 Nov 2008
15:45
Oxford-Man Institute

The story of three polytopes and what they tell us about information acquisition

Dr Jared Tanner
(University of Edinburgh)
Abstract

We will examine the typical structure of random polytopes obtained by projecting the three fundamental regular polytopes: the simplex, cross-polytope, and hypercube. Along the way we will explore the implications of their structure for information acquisition and optimization. Examples of these implications include: that an N-vector with k non-zeros can be recovered, computationally efficiently, from only n random projections with n = 2e k log(N/n); or that for a surprisingly large set of optimization problems the feasible set is actually a point. These implications are driving a new signal processing paradigm, Compressed Sensing, which has already led to substantive improvements in various imaging modalities. This work is joint with David L. Donoho.
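As a hedged sketch of the recovery claim above (illustrative sizes, not tuned to the n = 2e k log(N/n) threshold), the standard computational route is l1 minimisation (basis pursuit), posed here as a linear program:

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    N, k, n = 200, 5, 80    # ambient dimension, non-zeros, random projections

    x_true = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    x_true[support] = rng.standard_normal(k)

    A = rng.standard_normal((n, N)) / np.sqrt(n)   # random projection matrix
    y = A @ x_true                                 # the n measurements

    # Basis pursuit: min ||x||_1 s.t. A x = y, as an LP with x = xp - xm, xp, xm >= 0.
    c = np.ones(2 * N)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    x_hat = res.x[:N] - res.x[N:]
    print("max recovery error:", np.max(np.abs(x_hat - x_true)))   # exact up to solver tolerance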

Thu, 20 Nov 2008
12:00
Gibson 1st Floor SR

Elliptic equations in the plane satisfying a Carleson measure condition

David Rule
(University of Edinburgh)
Abstract

We study the Neumann and regularity boundary value problems for a divergence form elliptic equation in the plane. We assume the gradient of the coefficient matrix satisfies a Carleson measure condition and consider L^p data for a range of exponents p > 1.
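For reference (the talk's precise condition on the coefficients is not reproduced in this abstract), the underlying definition: a measure on the upper half-plane is a Carleson measure when it assigns each box no more mass than a constant multiple of its base length.

    % Carleson measure on the upper half-plane R^2_+ (standard definition).
    \[
      \|\mu\|_{\mathcal{C}}
        \;=\; \sup_{I \subset \mathbb{R}} \frac{\mu\big(I \times (0,|I|)\big)}{|I|}
        \;<\; \infty,
    \]
    % the supremum taken over all intervals I, with I x (0,|I|) the Carleson box over I.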

Mon, 28 Apr 2008
15:45
Oxford-Man Institute

Some results concerning the q-optimal martingale measure

Dr Sotirios Sabanis
(University of Edinburgh)
Abstract

An important and challenging problem in mathematical finance is how to choose a pricing measure in an incomplete market, i.e. how to find a probability measure under which expected payoffs are calculated and fair option prices are derived under some notion of optimality.

The notion of q-optimality is linked to the unique equivalent martingale measure (EMM) with minimal q-th moment (if q > 1) or minimal relative entropy (if q = 1). Hobson's (2004) approach to identifying the q-optimal measure (through a so-called fundamental equation) suggests a relaxation of an essential condition appearing in Delbaen & Schachermayer (1996). This condition states that, for the case q = 2, the Radon-Nikodym process, whose last element is the density of the candidate measure, is a uniformly integrable martingale with respect to any EMM with a bounded second moment. Hobson (2004) claims that it suffices to show that the above holds only with respect to the candidate measure itself, and extrapolates to the case q > 1. Cerny & Kallsen (2008), however, presented a counterexample (for q = 2) which demonstrates that this relaxation does not hold in general.
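In standard notation (a summary of the usual definition, not the speaker's formulation): writing M_e for the set of EMMs, the q-optimal measure minimises the q-th moment of the density for q > 1, with relative entropy as the q = 1 case.

    % q-optimal martingale measure (standard definition).
    \[
      Q^{(q)} \;=\; \operatorname*{arg\,min}_{Q \in \mathcal{M}_e}
        \mathbb{E}_P\!\left[\left(\frac{dQ}{dP}\right)^{\!q}\right] \quad (q > 1),
      \qquad
      Q^{(1)} \;=\; \operatorname*{arg\,min}_{Q \in \mathcal{M}_e}
        \mathbb{E}_P\!\left[\frac{dQ}{dP}\,\log\frac{dQ}{dP}\right],
    \]
    % q = 2 giving the variance-optimal measure and q = 1 the minimal-entropy measure.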

The speaker will present the general form of the q-optimal measure following the approach of Delbaen & Schachermayer (1994) and prove its existence under mild conditions. Moreover, in the light of the counterexample in Cerny & Kallsen (2008) concerning Hobson's (2004) approach, necessary and sufficient conditions will be presented in order to determine when a candidate measure is the q-optimal measure.

Thu, 17 Nov 2005
16:30
DH Common Room

Optimising Routes in Ad-Hoc TDD-CDMA Communication Systems

Steve McLaughlin
(University of Edinburgh)
Abstract

In this talk, a network topology is presented that allows both peer-to-peer and non-local traffic in a cellular-based TDD-CDMA system known as opportunity driven multiple access (ODMA). The key to achieving adequate peer-to-peer performance in such a system is a routing algorithm that minimises interference. This talk will discuss the constraints and limitations on the capacity of such a system under a variety of routing techniques. A congestion-based routing algorithm will be presented that attempts to minimise the overall transmit power of the system while also providing a measure of feasibility. This technique provides the lowest required transmit power in all circumstances, and the highest capacity in nearly all cases studied. All of the routing algorithms considered allocate TDD time slots on a first-come, first-served basis according to a set of pre-defined rules. This fact is exploited to develop a combined routing and resource allocation algorithm for TDD-CDMA relaying. A novel method of time slot allocation according to relaying requirements is then developed.

Two measures for assessing congestion are presented, both based on matrix norms. One is suitable for a current interior-point solution; the other is more elegant, but is not yet amenable to efficient minimisation and hence to practical implementation.
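The abstract does not spell out the two matrix norms; as a rough illustration of norm-based congestion and feasibility in CDMA power control, here is the classical Perron-root criterion (an assumption of this sketch, not necessarily the speaker's measure):

    import numpy as np

    # Classical power-control feasibility (illustrative; made-up gains and targets).
    # G[i, j]: path gain from transmitter j to receiver i; gamma: SINR targets.
    rng = np.random.default_rng(1)
    L = 5
    G = rng.uniform(0.01, 1.0, size=(L, L))
    np.fill_diagonal(G, 1.0)                 # own-link gains dominate (illustrative)
    gamma = np.full(L, 0.2)
    noise = np.full(L, 1e-3)

    # Normalised cross-interference matrix F: the SINR constraints read p >= F p + u.
    F = gamma[:, None] * G / np.diag(G)[:, None]
    np.fill_diagonal(F, 0.0)
    u = gamma * noise / np.diag(G)

    rho = np.max(np.abs(np.linalg.eigvals(F)))   # spectral radius as congestion measure
    if rho < 1:
        p = np.linalg.solve(np.eye(L) - F, u)    # minimal feasible transmit powers
        print("feasible, rho =", rho, "powers =", p)
    else:
        print("infeasible, rho =", rho)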
