Fri, 07 Mar 2014

17:00 - 18:00
L3

Icosahedral clusters: the stem cell of the solid state?

Jean Taylor
(Rutgers University)
Abstract

Recent experimental work has determined the atomic structure of a quasicrystalline Cd-Yb alloy. It highlights the elegant role of polyhedra with icosahedral symmetry. Other work suggests that while chunks of periodic crystals and disordered glass predominate in the solid state, there are many hints of icosahedral clusters. This talk is based on a recent Mathematical Intelligencer article on quasicrystals with Marjorie Senechal.


The seminar will be followed by a drinks reception and forms part of a longer PDE- and CoV-related workshop.


To register for the seminar and drinks reception go to http://doodle.com/acw6bbsp9dt5bcwb

Fri, 07 Mar 2014

10:00 - 11:00
L5

Mathematics and energy policy: markets or central control of power?

John Rhys (The Oxford Institute for Energy Studies)
Abstract

This talk is intended to explain the link between some relatively straightforward mathematical concepts, in terms of linear programming and optimisation over a convex set of feasible solutions, and questions for the organisation of the power sector and hence for energy policy.

Both markets and centralised control systems should in theory optimise the use of the current stock of generation assets and ensure electricity is generated at least cost, by ranking plant in ascending order of short run marginal cost (SRMC), sometimes known as merit order operation. Wholesale markets, in principle at least, replicate exactly what would happen in a perfect but centrally calculated optimal dispatch of plant. This happens because the SRMC of each individual plant is “discovered” through the market and results in a price equal to “system marginal cost” (SMC), which is just high enough to incentivise the most costly plant required to meet the actual load.

More generally, defining the conditions for this to work - “decentralised prices replicate perfect central planning” - is of great interest to economists. Quite apart from any ideological implications, it also helps to define possible sources of market failure. There is an extensive literature on this, but we can explain why it has appeared to work so well, and so obviously, for merit order operation, and then consider whether the conditions underpinning its success will continue to apply in the future.

The big simplifying assumptions, regarded as an adequate approximation to reality, behind most current power markets are the following:

• Each optimisation period can be considered independent of all past and future periods.

• The only relevant costs are well defined short term operating costs, essentially fuel.

• (Fossil) plant is (infinitely) flexible, and costs vary continuously and linearly with output.

• Non-fossil plant has hitherto been intra-marginal, and hence has had little impact.

The merit order is essentially very simple linear programming, with the dual value of the main constraint equating to the “correct” market price. Unfortunately the simplifying assumptions cease to apply as we move towards types of plant (and consumer demand) with much more complex constraints and cost structures. These include major inflexibilities, stochastic elements, storage, and many non-linearities. Possible consequences include:

• Single-period optimisation, as a concept underlying the market or central control, will need to be abandoned; multi-period optimisation will be required.

• Algorithms much more complicated than simple merit order will be needed, embracing non-linearities and complex constraints.

• Mathematically there is no longer a “dual” price, and the conditions for decentralisation are broken. There is no obvious means of calculating what the price “ought” to be, or even knowing that a meaningful price exists.

The remaining questions are clear: the theory suggests that current market structures may be broken, but how do we assess when, and by how much, this matters?
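As a concrete sketch of the merit-order operation described above, the following Python fragment dispatches a fleet of plants in ascending order of SRMC and reads off the system marginal cost as the SRMC of the marginal plant. All plant names, capacities and costs are invented for illustration, not taken from the talk.

```python
# Invented plant data: (name, capacity in GW, short run marginal cost in GBP/MWh).
PLANTS = [("nuclear", 5.0, 8.0), ("coal", 4.0, 30.0),
          ("ccgt", 6.0, 45.0), ("ocgt", 2.0, 90.0)]

def merit_order_dispatch(load):
    """Dispatch plants in ascending order of SRMC until the load is met.

    Returns the schedule and the system marginal cost (SMC): the SRMC of the
    most costly plant actually required, which is the market price in the
    idealised single-period story."""
    schedule, smc, remaining = {}, 0.0, load
    for name, capacity, srmc in sorted(PLANTS, key=lambda p: p[2]):
        output = min(capacity, remaining)
        if output > 0:
            schedule[name] = output
            smc = srmc              # price set by the most costly plant running
            remaining -= output
    return schedule, smc

schedule, price = merit_order_dispatch(12.0)
# With the data above, nuclear and coal run flat out, ccgt is marginal,
# and the price equals ccgt's SRMC of 45 GBP/MWh.
```

This greedy ranking is exactly the solution of the trivial linear programme "minimise total cost subject to meeting load", which is why the dual value of the load constraint coincides with the SMC in the simple setting.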

Thu, 06 Mar 2014

16:00 - 17:00
C6

Basic examples in deformation quantisation

Emanuele Ghedin
Abstract

Following last week's talk on Beilinson-Bernstein localisation theorem, we give basic notions in deformation quantisation explaining how this theorem can be interpreted as a quantised version of the Springer resolution. Having attended last week's talk will be useful but not necessary.

Thu, 06 Mar 2014

16:00 - 17:00
L5

Isogeny classes of abelian varieties and weakly special subvarieties

Martin Orr
(UCL)
Abstract
Let Z be a subvariety of the moduli space of abelian varieties, and suppose that Z contains a dense set of points for which the corresponding abelian varieties are isogenous. A corollary of the Zilber-Pink conjecture predicts that Z is a weakly special subvariety. I shall discuss the proof of this conjecture in the case when Z is a curve and obstacles to its proof for higher dimensions.

For Logic Seminar: Note change of time and place.

Thu, 06 Mar 2014

16:00 - 17:00
L3

The effect of boundary conditions on linear and nonlinear waves

Beatrice Pelloni
(Reading)
Abstract

In this talk, I will discuss the effect of boundary conditions on the solvability of PDEs that have formally an integrable structure, in the sense of possessing a Lax pair. Many of these PDEs arise in wave propagation phenomena, and boundary value problems for these models are very important in applications. I will discuss the extent to which general approaches that are successful for solving the initial value problem extend to the solution of boundary value problems.

I will survey the solution of specific examples of integrable PDE, linear and nonlinear. The linear theory is joint work with David Smith. For the nonlinear case, I will discuss boundary conditions that yield boundary value problems that are fully integrable, in particular recent joint results with Thanasis Fokas and Jonatan Lenells on the solution of boundary value problems for the elliptic sine-Gordon equation.

Thu, 06 Mar 2014

16:00 - 17:30
L1

Algorithmic Trading with Learning

Alvaro Cartea
(UCL)
Abstract

We propose a model where an algorithmic trader takes a view on the distribution of prices at a future date and then decides how to trade in the direction of her predictions using the optimal mix of market and limit orders. As time goes by, the trader learns from changes in prices and updates her predictions to tweak her strategy. Compared to a trader that cannot learn from market dynamics or form a view of the market, the algorithmic trader's profits are higher and more certain. Even though the trader executes a strategy based on a directional view, the sources of profits are both from making the spread as well as capital appreciation of inventories. Higher volatility of prices considerably impairs the trader's ability to learn from price innovations, but this adverse effect can be circumvented by learning from a collection of assets that co-move.
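A hedged caricature of the "learning" ingredient in the abstract (this is our illustration, not the speakers' model): one can picture price increments as draws from N(drift, obs_var) and the trader's Gaussian belief about the drift being updated by standard conjugate normal-normal Bayesian filtering. All numbers below are invented.

```python
def update_belief(prior_mean, prior_var, increment, obs_var):
    """One conjugate (normal-normal) update of the N(prior_mean, prior_var)
    belief about the drift, given an observed price increment."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + increment / obs_var)
    return post_mean, post_var

mean, var = 0.0, 1.0                   # diffuse initial view on the drift
for dx in [0.4, 0.6, 0.5, 0.7, 0.5]:   # invented observed increments
    mean, var = update_belief(mean, var, dx, obs_var=0.25)
# The belief concentrates near the running sample mean; a larger obs_var
# (higher price volatility) slows this concentration, echoing the abstract's
# remark that volatility impairs the trader's ability to learn.
```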

Thu, 06 Mar 2014

14:00 - 15:00
L5

Kullback-Leibler Approximation Of Probability Measures

Professor Andrew Stuart
(University of Warwick)
Abstract

Many problems in the physical sciences require the determination of an unknown function from a finite set of indirect measurements. Examples include oceanography, oil recovery, water resource management and weather forecasting. The Bayesian approach to these problems is natural for many reasons, including the under-determined and ill-posed nature of the inversion, the noise in the data and the uncertainty in the differential equation models used to describe complex multiscale physics. The object of interest in the Bayesian approach is the posterior probability distribution on the unknown field [1].

However, the Bayesian approach presents a computationally formidable task, as it results in the need to probe a probability measure on separable Banach space. Markov chain Monte Carlo (MCMC) methods may be used to achieve this [2], but can be prohibitively expensive. In this talk I will discuss approximation of probability measures by a Gaussian measure, looking for the closest approximation with respect to the Kullback-Leibler divergence. This methodology is widely used in machine learning [3]. In the context of target measures on separable Banach space which themselves have density with respect to a Gaussian, I will show how to make sense of the resulting problem in the calculus of variations [4]. Furthermore I will show how the approximate Gaussians can be used to speed up MCMC sampling of the posterior distribution [5].
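A minimal one-dimensional caricature of the idea, far from the infinite-dimensional setting of the talk: approximate a bimodal target by the Gaussian minimising the Kullback-Leibler divergence KL(q || p), computed here by simple quadrature and a grid search. The target and all parameters are invented for illustration.

```python
import math

def gauss_pdf(x, m, s):
    """Density of N(m, s^2)."""
    return math.exp(-(x - m)**2 / (2*s*s)) / (s * math.sqrt(2*math.pi))

def target_pdf(x):
    """Invented bimodal target: equal mixture of N(-2, 0.5^2) and N(2, 0.5^2)."""
    return 0.5*gauss_pdf(x, -2.0, 0.5) + 0.5*gauss_pdf(x, 2.0, 0.5)

def kl_q_p(m, s, n=2000, lo=-8.0, hi=8.0):
    """KL(q || p) for q = N(m, s^2) against the target, by the trapezoid rule."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i*h
        q = gauss_pdf(x, m, s)
        if q > 1e-300:                   # integrand vanishes where q is ~ 0
            w = 0.5 if i in (0, n) else 1.0
            total += w * q * math.log(q / target_pdf(x))
    return h * total

# Grid search for the best Gaussian.  KL(q||p) is "mode-seeking": the
# minimiser sits on one mixture component rather than spanning both modes.
best = min(((kl_q_p(m/4.0, s/4.0), m/4.0, s/4.0)
            for m in range(-12, 13) for s in range(1, 13)), key=lambda t: t[0])
# best is approximately (log 2, +/-2.0, 0.5)
```

The minimal divergence here is roughly log 2, the price of putting all of q's mass on one of the two equally weighted modes; this mode-seeking behaviour of KL(q || p) is one reason the choice of divergence direction matters in Gaussian approximation.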

[1] A.M. Stuart, "Inverse problems: a Bayesian perspective," Acta Numerica 19 (2010). http://arxiv.org/abs/1302.6989

[2] S.L. Cotter, G.O. Roberts, A.M. Stuart and D. White, "MCMC methods for functions: modifying old algorithms to make them faster," Statistical Science 28 (2013). http://arxiv.org/abs/1202.0709

[3] C.M. Bishop, "Pattern recognition and machine learning," Springer, 2006.

[4] F.J. Pinski, G. Simpson, A.M. Stuart and H. Weber, "Kullback-Leibler approximations for measures on infinite dimensional spaces." http://arxiv.org/abs/1310.7845

[5] F.J. Pinski, G. Simpson, A.M. Stuart and H. Weber, "Algorithms for Kullback-Leibler approximation of probability measures in infinite dimensions." In preparation.

Thu, 06 Mar 2014
11:00
C5

Defining p-henselian valuations

Franziska Yahnke
(Muenster)
Abstract

(Joint work with Jochen Koenigsmann) Admitting a p-henselian valuation is a weaker assumption on a field than admitting a henselian valuation. Unlike henselianity, p-henselianity is an elementary property in the language of rings. We are interested in the question of when a field admits a non-trivial 0-definable p-henselian valuation (in the language of rings); such valuations often give rise to 0-definable henselian valuations. In this talk, we will give a classification of elementary classes of fields in which the canonical p-henselian valuation is uniformly 0-definable. This leads to the new phenomenon of p-adically (pre-)Euclidean fields.

Thu, 06 Mar 2014

10:00 - 11:00
C6

A survey of derivator K-theory

George Raptis
(Osnabrueck and Regensburg)
Abstract

The theory of derivators is an approach to homotopical algebra that focuses on the existence of homotopy Kan extensions. Homotopy theories (e.g. model categories) typically give rise to derivators by considering the homotopy categories of all diagram categories simultaneously. A general problem is to understand how faithfully the derivator actually represents the homotopy theory. In this talk, I will discuss this problem in connection with algebraic K-theory, and give a survey of the results around the problem of recovering the K-theory of a good Waldhausen category from the structure of the associated derivator.

Wed, 05 Mar 2014
16:00
C4

tba

Kohei Kishida
(Computing Laboratory)
Wed, 05 Mar 2014
10:30
N3.12

Modularity and Galois Representations

Benjamin Green
Abstract

The modularity theorem saying that all (semistable) elliptic curves are modular was one of the two crucial parts in the proof of Fermat's last theorem. In this talk I will explain what elliptic curves being 'modular' means and how an alternative definition can be given in terms of Galois representations. I will then state some of the conjectures of the Langlands program which in some sense generalise the modularity theorem.

Tue, 04 Mar 2014

17:00 - 18:00
C5

Maximal subgroups of exceptional groups of Lie type and morphisms of algebraic groups

Dr David Craven
(University of Birmingham)
Abstract

The maximal subgroups of the exceptional groups of Lie type have been studied for many years, and have many applications, for example in permutation group theory and in the generation of finite groups. In this talk I will survey what is currently known about the maximal subgroups of exceptional groups, and our recent work on this topic. We explore the connection with extending morphisms from finite groups to algebraic groups.

Tue, 04 Mar 2014
16:00
L1

“Why there are no 3-headed monsters, resolving some problems with brain tumours, divorce prediction and how to save marriages”

Professor James D Murray
(University of Oxford & Senior Scholar)
Abstract

“Understanding the generation and control of pattern and form is still a challenging and major problem in the biomedical sciences. I shall describe three very different problems. First I shall briefly describe the development and application of the mechanical theory of morphogenesis, the discovery of morphogenetic laws in limb development, and how it was used to move evolution backwards. I shall then describe a surprisingly informative model, now used clinically, for quantifying the growth of brain tumours, enhancing imaging techniques and quantifying individual patient treatment protocols prior to their use. Among other things, it is used to estimate patient life expectancy and explain why some patients live longer than others on the same treatment protocols. Finally I shall describe an example from the social sciences which quantifies marital interaction and is used to predict marital stability and divorce. In a large study of newly married couples it achieved 94% accuracy. I shall show how it has helped design a new scientific marital therapy which is currently used in clinical practice.”

 

Tue, 04 Mar 2014

15:45 - 16:45
L4

Factorization homology is a fully extended TFT

Damien Calaque
(ETH Zurich)
Abstract

We will start with a recollection on factorization algebras and factorization homology. We will then explain what fully extended TFTs are, after Jacob Lurie. And finally we will see how factorization homology can be turned into a fully extended TFT. This is a joint work with my student Claudia Scheimbauer.

Tue, 04 Mar 2014
15:30
Comlab

"Stochastic Petri nets, chemical reaction networks and Feynman diagrams"

John Baez
(University of California)
Abstract

Nature and the world of human technology are full of networks. People like to draw diagrams of networks: flow charts, electrical circuit diagrams, signal flow diagrams, Bayesian networks, Feynman diagrams and the like. Mathematically-minded people know that in principle these diagrams fit into a common framework: category theory. But we are still far from a unified theory of networks.

Tue, 04 Mar 2014

14:00 - 15:00
L4

Lagrangian structures on derived mapping stacks

Damien Calaque
(ETH Zurich)
Abstract

We will explain how the result of Pantev-Toën-Vaquié-Vezzosi, about shifted symplectic structures on mapping stacks, can be extended to relative mapping stacks and Lagrangian structures. We will also provide applications in ordinary symplectic geometry and topological field theories.

Tue, 04 Mar 2014

14:00 - 15:00
L5

Towards realistic performance for iterative methods on shared memory machines

Shengxin (Jude) Zhu
(University of Oxford)
Abstract

This talk introduces a random linear model to investigate the memory bandwidth barrier effect on current shared memory computers. Based on the fact that floating-point operations can be hidden by implicit compiling techniques, the runtime for memory intensive applications can be modelled by memory reference time plus a random term. The random term, due to cache conflicts, data reuse and other environmental factors, is proportional to the memory reference volume. Statistical techniques are used to quantify the random term and the runtime performance parameters. Numerical results based on thousands of representative matrices from various applications are presented, compared, analysed and validated to confirm the proposed model. The model shows that a realistic and fair metric for the performance of iterative methods and other memory intensive applications should take account of memory bandwidth capability and memory efficiency.
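The abstract's model can be caricatured as follows (all constants and data below are invented): runtime is approximately beta times the memory reference volume, plus a noise term proportional to that volume, and least squares recovers the bandwidth term from timing measurements.

```python
import random

random.seed(0)

# Invented caricature of the model: runtime ~ beta * memory volume, plus a
# noise term proportional to the volume (cache conflicts, data reuse and
# other environmental factors).
BETA_TRUE = 2.5e-9                            # seconds per byte (illustrative)
volumes = [v * 1e6 for v in range(1, 101)]    # bytes moved per run
runtimes = [BETA_TRUE * v * (1.0 + 0.05 * random.uniform(-1.0, 1.0))
            for v in volumes]

# Least squares through the origin recovers the bandwidth term:
# beta_hat = sum(v * t) / sum(v * v).
beta_hat = (sum(v * t for v, t in zip(volumes, runtimes))
            / sum(v * v for v in volumes))
effective_bandwidth = 1.0 / beta_hat          # effective bytes per second
```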

Tue, 04 Mar 2014

14:00 - 14:30
L5

Euler-Maclaurin and Newton-Gregory Interpolants

Mohsin Javed
(University of Oxford)
Abstract

The Euler-Maclaurin formula is a quadrature rule based on corrections to the trapezoid rule using odd derivatives at the end-points of the function being integrated. It appears that no one has previously asked which function approximation gives rise to the Euler-Maclaurin quadrature rule: just as Newton-Cotes quadrature can be derived by integrating polynomial approximations of the function, we investigate which function approximation integrates exactly to the corresponding Euler-Maclaurin quadrature. It turns out that the right function approximation is a combination of a trigonometric interpolant and a polynomial.

To make the method more practical, we also look at the closely related Newton-Gregory quadrature, which is very similar to the Euler-Maclaurin formula but uses finite differences instead of derivatives. Following almost the same procedure, we find another mixed, derivative-free function approximation whose exact integration yields the Newton-Gregory quadrature rule.
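A quick numerical check of the first Euler-Maclaurin end-point correction (this sketch is ours, not from the talk): subtracting h^2/12 * (f'(b) - f'(a)) from the composite trapezoid rule removes its leading error term.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5*f(a) + sum(f(a + i*h) for i in range(1, n)) + 0.5*f(b))

def euler_maclaurin(f, df, a, b, n):
    """Trapezoid rule plus the first Euler-Maclaurin end-point correction,
    -h^2/12 * (f'(b) - f'(a)).  Higher-order corrections involve Bernoulli
    numbers B_{2k} and higher odd derivatives at the end-points."""
    h = (b - a) / n
    return trapezoid(f, a, b, n) - h*h/12.0 * (df(b) - df(a))

exact = math.e - 1.0                               # integral of e^x over [0, 1]
t_err = abs(trapezoid(math.exp, 0.0, 1.0, 10) - exact)
em_err = abs(euler_maclaurin(math.exp, math.exp, 0.0, 1.0, 10) - exact)
# The corrected rule is dramatically more accurate: O(h^4) instead of O(h^2).
```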

Mon, 03 Mar 2014

17:00 - 18:00
L6

Elliptic and parabolic systems with general growth

Paolo Marcellini
(University of Florence)
Abstract

Motivated by integrals of the Calculus of Variations considered in Nonlinear Elasticity, we study mathematical models which do not fit in the classical existence and regularity theory for elliptic and parabolic Partial Differential Equations. We consider general nonlinearities with non-standard p,q-growth, both in the elliptic and in the parabolic contexts. In particular, we introduce the notion of "variational solution/parabolic minimizer" for a class of Cauchy-Dirichlet problems related to systems of parabolic equations.