11:00
11:00
Ornstein's $L^1$ non-inequalities and rank-one convexity.
String topology of classifying spaces
Abstract
Chataur and Menichi showed that the homology of the free loop space of the classifying space of a compact Lie group admits a rich algebraic structure: it is part of a homological field theory, and so admits operations parametrised by the homology of mapping class groups. I will present a new construction of this field theory that improves on the original in several ways: it enlarges the family of admissible Lie groups, it extends the field theory to an open-closed one, and, most importantly, it allows for the construction of co-units in the theory. This is joint work with Anssi Lahtinen.
14:15
Generalized quark-antiquark potential of N=4 SYM at weak and strong coupling
Abstract
I will present a two-parameter family of Wilson loop operators in N = 4 supersymmetric Yang-Mills theory which interpolates smoothly between the 1/2 BPS line or circle and a pair of antiparallel lines. These observables capture a natural generalization of the quark-antiquark potential. The loops are calculated to second order in perturbation theory on the gauge theory side and to one-loop order in a semiclassical expansion on the string theory side. The resulting determinants are given in integral form and can be evaluated numerically for general values of the parameters, or analytically in a systematic expansion around the 1/2 BPS configuration. I will comment on the feasibility of deriving all-loop results for these Wilson loops.
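For orientation (standard background, not the speaker's specific construction): the operators in question are of Maldacena-Wilson type, schematically
\[
W=\frac{1}{N}\,\mathrm{Tr}\,\mathcal{P}\exp\oint d\tau\,\big(i\,A_\mu\dot{x}^\mu+|\dot{x}|\,\Theta^I\Phi_I\big),
\]
where $A_\mu$ is the gauge field, $\Phi_I$ are the six scalars and $\Theta^I$ is a unit vector on $S^5$. The two parameters of the family can be thought of as a geometric angle between the two lines (or arcs of the circle) and an internal angle between the scalar couplings on them.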
Excursions in Algebraic Topology
Abstract
Three short talks by the authors of essays on topics related to the C3 Algebraic Topology course: Whitehead's theorem, cohomology of fibre bundles, and division algebras.
OCCAM Group Meeting
Abstract
- Cameron Hall - Dislocations and discrete-to-continuum asymptotics: the summary
- Kostas Zygalakis - Multiscale methods: theory, numerics and applications
- Lian Duan - Barcode Detection and Deconvolution in Well Testing
Spectral decompositions and nonnormality of boundary integral operators in acoustic scattering
Abstract
Nonnormality is a well-studied subject in the context of partial differential operators, yet little is known about it for boundary integral operators. The only well-understood case is the unit ball, where the standard single layer, double layer and conjugate double layer potential operators in acoustic scattering diagonalise in a unitary basis. In this talk we present recent results on the spectral decompositions and nonnormality of boundary integral operators on more general domains. One particular application is the analysis of stability constants for boundary element discretisations. We demonstrate how these are affected by nonnormality and give several numerical examples illustrating these issues on various domains.
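For concreteness (standard notation, not specific to this talk), with the Helmholtz Green's function $G_k(x,y)=e^{ik|x-y|}/(4\pi|x-y|)$ in 3D, the operators on a boundary $\Gamma$ are
\[
(S_k\phi)(x)=\int_\Gamma G_k(x,y)\,\phi(y)\,ds(y),\qquad
(K_k\phi)(x)=\int_\Gamma \frac{\partial G_k(x,y)}{\partial n(y)}\,\phi(y)\,ds(y),\qquad
(K_k'\phi)(x)=\int_\Gamma \frac{\partial G_k(x,y)}{\partial n(x)}\,\phi(y)\,ds(y),
\]
and nonnormality is the failure of an operator to commute with its adjoint, $A^*A\neq AA^*$, which is what rules out a unitary diagonalisation away from special geometries such as the sphere.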
The relativistic heat equation via optimal transportation methods
Abstract
The aim of this talk is to explain how to construct solutions to a relativistic transport equation via a time-discrete scheme based on an optimal transportation problem. First, I will present joint work with J. Bertrand, where we prove the existence of an optimal map for the Monge-Kantorovich problem associated to relativistic cost functions. Then, I will explain joint work with Robert McCann, where we study the limiting process between the discrete and the continuous equation.
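A minimal sketch of the objects involved, assuming Brenier's relativistic cost (the precise normalisations in the talk may differ): the cost is
\[
c(x,y)=1-\sqrt{1-|x-y|^2}\ \ \text{for }|x-y|\le 1,\qquad c(x,y)=+\infty\ \text{otherwise},
\]
the time-discrete scheme is of Jordan-Kinderlehrer-Otto type (each step minimises the optimal transport cost from the current density, suitably rescaled by the time step, plus the Boltzmann entropy $\int\rho\log\rho$), and the expected continuous limit is the relativistic heat equation
\[
\partial_t\rho=\nabla\!\cdot\!\left(\frac{\rho\,\nabla\rho}{\sqrt{\rho^2+|\nabla\rho|^2}}\right).
\]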
A formula for the maximum voltage drop in on-chip power distribution networks.
Abstract
We will consider a simplified model for on-chip power distribution networks of array bonded integrated circuits. In this model the voltage is the solution of a Poisson equation in an infinite planar domain whose boundary is an array of circular or square pads of size $\epsilon$. We deal with the singular limit as $\epsilon\to 0$ and we are interested in deriving an explicit formula for the maximum voltage drop in the domain in terms of a power series in $\epsilon$. A procedure based on the method of matched asymptotic expansions will be presented to compute all the successive terms in the approximation, which can be interpreted as using multipole solutions of equations involving spatial derivatives of $\delta$-functions.
(HoRSE seminar) Real variation of stabilities and equivariant quantum cohomology II
Abstract
I will describe a version of the definition of stability conditions on a triangulated category to which we were led by the study of quantization of symplectic resolutions of singularities over fields of positive characteristic. Partly motivated by ideas of Tom Bridgeland, we conjectured a relation of this structure to equivariant quantum cohomology; this conjecture has been verified in some classes of examples. The talk is based on joint projects with Anno, Mirkovic, Okounkov and others.
(HoRSE seminar) Real variation of stabilities and equivariant quantum cohomology I
Abstract
I will describe a version of the definition of stability conditions on a triangulated category to which we were led by the study of quantization of symplectic resolutions of singularities over fields of positive characteristic. Partly motivated by ideas of Tom Bridgeland, we conjectured a relation of this structure to equivariant quantum cohomology; this conjecture has been verified in some classes of examples. The talk is based on joint projects with Anno, Mirkovic, Okounkov and others.
Orthogonality and stability in large matrix iterative algorithms
Abstract
Many iterative algorithms for large sparse matrix problems are based on orthogonality (or $A$-orthogonality, bi-orthogonality, etc.), but these properties can be lost very rapidly when using vector orthogonalization (subtracting multiples of earlier supposedly orthogonal vectors from the latest vector to produce the next orthogonal vector). Yet these include some of the best algorithms we have for very large sparse problems, such as Conjugate Gradients, Lanczos' method for the eigenproblem, Golub and Kahan bidiagonalization, and MGS-GMRES.
Here we describe an ideal form of orthogonal matrix that arises from any sequence of supposedly orthogonal vectors. We illustrate some of its fascinating properties, including a beautiful measure of orthogonality of the original set of vectors. We will indicate how the ideal orthogonal matrix leads to expressions for new concepts of stability of such iterative algorithms. These are extensions of the concept of backward stability for matrix transformation algorithms that was so effectively developed and applied by J. H. Wilkinson (FRS). The resulting new expressions can be used to understand the subtle and effective performance of some (and hopefully eventually all) of these iterative algorithms.
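As a simple numerical illustration of this loss of orthogonality (a standard experiment, not the construction described in the talk), one can run modified Gram-Schmidt on an ill-conditioned matrix and monitor $\|I-Q^TQ\|_2$, which grows roughly like the unit roundoff times the condition number:

# Illustration only: loss of orthogonality in modified Gram-Schmidt,
# measured by ||I - Q^T Q||_2 (hypothetical experiment).
import numpy as np

def mgs(A):
    """Modified Gram-Schmidt QR factorisation (columns assumed independent)."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R

rng = np.random.default_rng(0)
n = 50
# Build a test matrix with condition number around 1e10.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -10, n)) @ V.T

Q, _ = mgs(A)
loss = np.linalg.norm(np.eye(n) - Q.T @ Q, 2)
print(f"cond(A) ~ {np.linalg.cond(A):.1e},  ||I - Q^T Q||_2 ~ {loss:.1e}")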
Applying loads in bone tissue engineering problems
Abstract
Please note that this is taking place in the afternoon - partly to avoid a clash with the OCCAM group meeting in the morning.
OCCAM Group Meeting
Abstract
- Ian Griffiths - Control and optimization in filtration and tissue engineering
- Vladimir Zubkov - Comparison of the Navier-Stokes and the lubrication models for the tear film dynamics
- Victor Burlakov - Applying the ideas of 1-st order phase transformations to various nano-systems
Some linear algebra problems arising in the analysis of complex networks
The role of carbon in past and future climate
Abstract
There is much current concern over the future evolution of climate under conditions of increased atmospheric carbon. Much of the focus is on a bottom-up approach in which weather/climate models of severe complexity are solved and extrapolated beyond their presently validated parameter ranges. An alternative view takes a top-down approach, in which the past Earth itself is used as a laboratory; in this view, ice-core records show a strong association of carbon with atmospheric temperature throughout the Pleistocene ice ages. This suggests that carbon variations drove the ice ages. In this talk I build the simplest model which can accommodate this observation, and I show that it is reasonably able to explain the observations. The model can then be extrapolated to offer commentary on the cooling of the planet since the Eocene, and the likely evolution of climate under the current industrial production of atmospheric carbon.
Multilevel dual approach for pricing American style derivatives
Abstract
In this article we propose a novel approach to reduce the computational complexity of the dual method for pricing American options. We consider a sequence of martingales that converges to a given target martingale and decompose the original dual representation into a sum of representations that correspond to different levels of approximation to the target martingale. By replacing the true conditional expectations in each representation with their Monte Carlo estimates, we arrive at what one may call a multilevel dual Monte Carlo algorithm. The analysis of this algorithm reveals that the computational complexity of computing the upper bound corresponding to the target martingale can be significantly reduced. In particular, it turns out that using our new approach we may construct a multilevel version of the well-known nested Monte Carlo algorithm of Andersen and Broadie (2004) that is, regarding complexity, virtually equivalent to a non-nested algorithm. The performance of this multilevel algorithm is illustrated by a numerical example. (Joint work with Denis Belomestny.)
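In symbols, a schematic of the decomposition just described (not the authors' precise formulation): writing $Z_t$ for the discounted payoff process, the dual upper bound reads
\[
V_0\le\mathbb{E}\Big[\max_{0\le t\le T}\big(Z_t-M_t\big)\Big]
\]
for any martingale $M$ with $M_0=0$, and for approximating martingales $M^{(0)},\dots,M^{(L)}$ converging to the target martingale one telescopes
\[
\mathbb{E}\Big[\max_t\big(Z_t-M^{(L)}_t\big)\Big]
=\mathbb{E}\Big[\max_t\big(Z_t-M^{(0)}_t\big)\Big]
+\sum_{\ell=1}^{L}\mathbb{E}\Big[\max_t\big(Z_t-M^{(\ell)}_t\big)-\max_t\big(Z_t-M^{(\ell-1)}_t\big)\Big],
\]
estimating each correction term with progressively fewer Monte Carlo samples, in the spirit of multilevel Monte Carlo.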
Arguing about risks: a request for assistance
Abstract
The standard mathematical treatment of risk combines numerical measures of uncertainty (usually probabilistic) and loss (money and other natural estimators of utility). There are significant practical and theoretical problems with this interpretation. A particular concern is that the estimation of quantitative parameters is frequently problematic, particularly when dealing with one-off events such as political, economic or environmental disasters. Practical decision-making under risk, therefore, frequently requires extensions to the standard treatment.
An intuitive approach to reasoning under uncertainty has recently become established in computer science and cognitive science in which general theories (formalised in a non-classical first-order logic) are applied to descriptions of specific situations in order to construct arguments for and/or against claims about possible events. Collections of arguments can be aggregated to characterize the type or degree of risk, using the logical grounds of the arguments to explain, and assess the credibility of, the supporting evidence for competing claims. Discussions about whether a complex piece of equipment or software could fail, the possible consequences of such failure and their mitigation, for example, can be based on the balance and relative credibility of all the arguments. This approach has been shown to offer versatile risk management tools in a number of domains, including clinical medicine and toxicology (e.g. www.infermed.com; www.lhasa.com). Argumentation frameworks are also being used to support open discussion and debates about important issues (e.g. see debate on environmental risks at www.debategraph.org).
Despite the practical success of argument-based methods for risk assessment and other kinds of decision making, they typically ignore measurement of uncertainty even if some quantitative data are available, or combine logical inference with quantitative uncertainty calculations in ad hoc ways. After a brief introduction to the argumentation approach, I will demonstrate medical risk management applications of both kinds and invite suggestions for solutions which are mathematically more satisfactory.
Definitions (Hubbard: http://en.wikipedia.org/wiki/Risk)
Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The "true" outcome/state/result/value is not known.
Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: "There is a 60% chance this market will double in five years".
Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
Measurement of risk: A set of possibilities each with quantified probabilities and quantified losses. Example: "There is a 40% chance the proposed oil well will be dry with a loss of $12 million in exploratory drilling costs".
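A worked illustration using the figures above (expected loss, assuming one is willing to collapse the possibilities into a single number): $\mathbb{E}[\text{loss}]=0.4\times\$12\text{M}=\$4.8\text{M}$; the difficulty raised in this talk is precisely that such point estimates require probabilities that are often hard to justify.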
The conceptual background to the argumentation approach to reasoning under uncertainty is reviewed in the attached paper “Arguing about the Evidence: a logical approach”.
Tsunami asymptotics
Abstract
For most of their propagation, tsunamis are linear dispersive waves whose speed is limited by the depth of the ocean and which can be regarded as diffraction-decorated caustics in spacetime. For constant depth, uniform asymptotics gives a very accurate compact description of the tsunami profile generated by an arbitrary initial disturbance. Variations in depth can focus tsunamis onto cusped caustics, and this 'singularity on a singularity' constitutes an unusual diffraction problem, whose solution indicates that focusing can amplify the tsunami energy by an order of magnitude.
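For reference, the standard linear water-wave dispersion relation behind the depth-limited speed (textbook material, not specific to this talk) is
\[
\omega^2=gk\tanh(kh),\qquad \frac{\omega}{k}\xrightarrow{\;kh\to 0\;}\sqrt{gh},
\]
so in the long-wave limit relevant to tsunamis the propagation speed cannot exceed $\sqrt{gh}$ for ocean depth $h$.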
Sharpening `Manin-Mumford' for certain algebraic groups of dimension 2
Abstract
(Joint work with P. Corvaja and D. Masser.)
The topic of the talk arises from the Manin-Mumford conjecture and its extensions, where we shall focus on the case of (complex connected) commutative algebraic groups $G$ of dimension $2$. The `Manin-Mumford' context in these cases predicts finiteness for the set of torsion points in an algebraic curve inside $G$, unless the curve is of `special' type, i.e. a translate of an algebraic subgroup of $G$.
In the talk we shall consider not merely the set of torsion points, but its topological closure in $G$ (which turns out to be also the maximal compact subgroup). In the case of abelian varieties this closure is the whole space, but this is not so for other $G$; actually, we shall prove that in certain cases (where a natural dimensional condition is fulfilled) the intersection of this larger set with a non-special curve must still be a finite set.
We shall conclude by stating in brief some extensions of this problem to higher dimensions.
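Two standard examples may help fix the dichotomy above (illustrations only, not part of the speaker's results). For an abelian variety $A(\mathbb{C})\cong\mathbb{C}^g/\Lambda$ the torsion subgroup is $\mathbb{Q}\Lambda/\Lambda$, which is dense, so its closure is all of $A(\mathbb{C})$. For $G=\mathbb{G}_m^2$, i.e. $(\mathbb{C}^*)^2$, the torsion points are the pairs of roots of unity, and their closure is the real torus
\[
\{(z,w)\in(\mathbb{C}^*)^2:\ |z|=|w|=1\}\cong S^1\times S^1,
\]
the maximal compact subgroup.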
Uniformizing Bun(G) by the affine Grassmannian
Abstract
I'll present the work of Gaitsgory, arXiv:1108.1741, in which he uses Beilinson-Drinfeld factorization techniques to uniformize the moduli stack of G-bundles on a curve. The main difference with the gauge-theoretic technique is that the affine Grassmannian is far from being contractible, but the fibers of the map to Bun(G) are contractible.
Lectures on: Bifurcation Theory and Applications to Elliptic Boundary-Value Problems
Abstract
• Sufficient conditions for bifurcation from points that are not isolated eigenvalues of the linearisation.
• Odd potential operators.
• Defining min-max critical values using sets of finite genus.
• Formulating some necessary conditions for bifurcation.