OCCAM Group Meeting
Abstract
- Cameron Hall - Dislocations and discrete-to-continuum asymptotics: the summary
- Kostas Zygalakis - Multi scale methods: theory numerics and applications
- Lian Duan - Barcode Detection and Deconvolution in Well Testing
Nonnormality is a well-studied subject in the context of partial differential operators, yet little is known about it for boundary integral operators. The only well-understood case is the unit ball, where the standard single layer, double layer and conjugate double layer potential operators in acoustic scattering diagonalise in a unitary basis. In this talk we present recent results on the analysis of spectral decompositions and nonnormality of boundary integral operators on more general domains. One particular application is the analysis of stability constants for boundary element discretisations. We demonstrate how these are affected by nonnormality and give several numerical examples illustrating these issues on various domains.
The aim of this talk is to explain how to construct solutions to a relativistic transport equation via a time-discrete scheme based on an optimal transportation problem. First, I will present joint work with J. Bertrand, in which we prove the existence of an optimal map for the Monge-Kantorovich problem associated to relativistic cost functions. Then I will explain joint work with Robert McCann, in which we study the limiting process between the discrete and the continuous equation.
We will consider a simplified model for on-chip power distribution networks of array bonded integrated circuits. In this model the voltage is the solution of a Poisson equation in an infinite planar domain whose boundary is an array of circular or square pads of size $\epsilon$. We deal with the singular limit as $\epsilon\to 0$ and we are interested in deriving an explicit formula for the maximum voltage drop in the domain in terms of a power series in $\epsilon$. A procedure based on the method of matched asymptotic expansions will be presented to compute all the successive terms in the approximation, which can be interpreted as using multipole solutions of equations involving spatial derivatives of $\delta$-functions.
I will describe a version of the definition of stability conditions on a triangulated category to which we were led by the study of quantization of symplectic resolutions of singularities over fields of positive characteristic. Partly motivated by ideas of Tom Bridgeland, we conjectured a relation of this structure to equivariant quantum cohomology; this conjecture has been verified in some classes of examples. The talk is based on joint projects with Anno, Mirkovic, Okounkov and others.
Many iterative algorithms for large sparse matrix problems are based on orthogonality (or $A$-orthogonality, bi-orthogonality, etc.), but these properties can be lost very rapidly using vector orthogonalization (subtracting multiples of earlier supposedly orthogonal vectors from the latest vector to produce the next orthogonal vector). Yet many of these algorithms are some of the best we have for very large sparse problems, such as Conjugate Gradients, Lanczos' method for the eigenproblem, Golub and Kahan bidiagonalization, and MGS-GMRES.
\\
\\
Here we describe an ideal form of orthogonal matrix that arises from any sequence of supposedly orthogonal vectors. We illustrate some of its fascinating properties, including a beautiful measure of orthogonality of the original set of vectors. We will indicate how the ideal orthogonal matrix leads to expressions for new concepts of stability of such iterative algorithms. These are expansions of the concept of backward stability for matrix transformation algorithms that was so effectively developed and applied by J. H. Wilkinson (FRS). The resulting new expressions can be used to understand the subtle and effective performance of some (and hopefully eventually all) of these iterative algorithms.
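As a small illustration of how quickly orthogonality can be lost in finite precision, here is a Python sketch of my own (not material from the talk), using the common measure $\|I - Q^TQ\|$ rather than the ideal-orthogonal-matrix measure discussed in the abstract:

# Illustrative sketch (not from the talk): loss of orthogonality in classical
# versus modified Gram-Schmidt, measured by ||I - Q^T Q||.
import numpy as np

def classical_gram_schmidt(A):
    m, n = A.shape
    Q = np.zeros((m, n))
    for j in range(n):
        v = A[:, j].copy()
        # subtract all projections computed from the ORIGINAL column
        for i in range(j):
            v -= (Q[:, i] @ A[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def modified_gram_schmidt(A):
    m, n = A.shape
    Q = A.astype(float).copy()
    for j in range(n):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        # immediately remove the new direction from the remaining columns
        for k in range(j + 1, n):
            Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
    return Q

rng = np.random.default_rng(0)
# a deliberately ill-conditioned test matrix (hypothetical example)
U, _ = np.linalg.qr(rng.standard_normal((60, 20)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
A = U @ np.diag(np.logspace(0, -12, 20)) @ V.T

for name, gs in [("classical", classical_gram_schmidt), ("modified", modified_gram_schmidt)]:
    Q = gs(A)
    print(name, np.linalg.norm(np.eye(20) - Q.T @ Q))

On such an ill-conditioned test matrix, classical Gram-Schmidt typically loses orthogonality far more severely than modified Gram-Schmidt, which is one reason the behaviour of algorithms built on these supposedly orthogonal vectors is so subtle.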
Please note that this is taking place in the afternoon - partly to avoid a clash with the OCCAM group meeting in the morning.
There is much current concern over the future evolution of climate under conditions of increased atmospheric carbon. Much of the focus is on a bottom-up approach in which weather/climate models of severe complexity are solved and extrapolated beyond their presently validated parameter ranges. An alternative view takes a top-down approach, in which the past Earth itself is used as a laboratory; in this view, ice-core records show a strong association of carbon with atmospheric temperature throughout the Pleistocene ice ages. This suggests that carbon variations drove the ice ages. In this talk I build the simplest model which can accommodate this observation, and I show that it is reasonably able to explain the observations. The model can then be extrapolated to offer commentary on the cooling of the planet since the Eocene, and the likely evolution of climate under the current industrial production of atmospheric carbon.
In this article we propose a novel approach to reduce the computational complexity of the dual method for pricing American options. We consider a sequence of martingales that converges to a given target martingale and decompose the original dual representation into a sum of representations that correspond to different levels of approximation to the target martingale. By then replacing the true conditional expectations in each representation with their Monte Carlo estimates, we arrive at what one may call a multilevel dual Monte Carlo algorithm. The analysis of this algorithm reveals that the computational complexity of obtaining the upper bound corresponding to the target martingale can be significantly reduced. In particular, it turns out that using our new approach we may construct a multilevel version of the well-known nested Monte Carlo algorithm of Andersen and Broadie (2004) that is, in terms of complexity, virtually equivalent to a non-nested algorithm. The performance of this multilevel algorithm is illustrated by a numerical example. (Joint work with Denis Belomestny.)
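To fix ideas, here is a schematic Python sketch of the generic multilevel Monte Carlo telescoping construction on a toy path functional; it is my own illustration of the multilevel idea and not the dual representation or the Andersen-Broadie algorithm of the talk:

# Schematic multilevel Monte Carlo sketch (toy example, not the dual
# American-option algorithm of the talk): estimate E[Y_L] through the
# telescoping sum E[Y_0] + sum_l E[Y_l - Y_{l-1}], with level-dependent
# sample sizes N_l.
import numpy as np

rng = np.random.default_rng(1)

def coupled_pair(level, size):
    """Return (Y_fine, Y_coarse) computed from the SAME Brownian increments.

    Y_level is a hypothetical level-`level` approximation: the discrete
    running average of a Brownian path on 2**level steps, floored at zero.
    Finer levels are more accurate but more expensive.
    """
    n_fine = 2 ** level
    dW = rng.standard_normal((size, n_fine)) / np.sqrt(n_fine)
    W_fine = np.cumsum(dW, axis=1)
    Y_fine = np.maximum(W_fine.mean(axis=1), 0.0)
    if level == 0:
        return Y_fine, None
    # coarse path: sum consecutive pairs of fine increments
    dW_coarse = dW.reshape(size, n_fine // 2, 2).sum(axis=2)
    W_coarse = np.cumsum(dW_coarse, axis=1)
    Y_coarse = np.maximum(W_coarse.mean(axis=1), 0.0)
    return Y_fine, Y_coarse

L = 5
N = [40000 // (2 ** l) for l in range(L + 1)]   # fewer samples on finer levels
estimate = 0.0
for l in range(L + 1):
    Y_f, Y_c = coupled_pair(l, N[l])
    estimate += Y_f.mean() if l == 0 else (Y_f - Y_c).mean()
print("multilevel estimate:", estimate)

The point of the construction is that coarse levels use many cheap samples, while the fine-level corrections have small variance thanks to the coupling and therefore need only a few.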
The standard mathematical treatment of risk combines numerical measures of uncertainty (usually probabilistic) and loss (money and other natural estimators of utility). There are significant practical and theoretical problems with this interpretation. A particular concern is that the estimation of quantitative parameters is frequently problematic, particularly when dealing with one-off events such as political, economic or environmental disasters. Practical decision-making under risk, therefore, frequently requires extensions to the standard treatment.
An intuitive approach to reasoning under uncertainty has recently become established in computer science and cognitive science in which general theories (formalised in a non-classical first-order logic) are applied to descriptions of specific situations in order to construct arguments for and/or against claims about possible events. Collections of arguments can be aggregated to characterize the type or degree of risk, using the logical grounds of the arguments to explain, and assess the credibility of, the supporting evidence for competing claims. Discussions about whether a complex piece of equipment or software could fail, the possible consequences of such failure and their mitigation, for example, can be based on the balance and relative credibility of all the arguments. This approach has been shown to offer versatile risk management tools in a number of domains, including clinical medicine and toxicology (e.g. www.infermed.com; www.lhasa.com). Argumentation frameworks are also being used to support open discussion and debates about important issues (e.g. see debate on environmental risks at www.debategraph.org).
Despite the practical success of argument-based methods for risk assessment and other kinds of decision making, they typically ignore measurement of uncertainty even if some quantitative data are available, or combine logical inference with quantitative uncertainty calculations in ad hoc ways. After a brief introduction to the argumentation approach I will demonstrate medical risk management applications of both kinds and invite suggestions for solutions which are mathematically more satisfactory.
Definitions (Hubbard: http://en.wikipedia.org/wiki/Risk)
Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The "true" outcome/state/result/value is not known.
Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: "There is a 60% chance this market will double in five years".
Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
Measurement of risk: A set of possibilities each with quantified probabilities and quantified losses. Example: "There is a 40% chance the proposed oil well will be dry with a loss of $12 million in exploratory drilling costs".
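Combining the two quantities in the last example in the usual way gives a single expected-loss figure (a worked illustration added here, not part of Hubbard's definitions):
\[
\text{expected loss} = 0.4 \times \$12\ \text{million} = \$4.8\ \text{million},
\]
although, as noted above, such point estimates are exactly what is problematic for one-off events.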
The conceptual background to the argumentation approach to reasoning under uncertainty is reviewed in the attached paper “Arguing about the Evidence: a logical approach”.
Tsunami asymptotics: For most of their propagation, tsunamis are linear dispersive waves whose speed is limited by the depth of the ocean and which can be regarded as diffraction-decorated caustics in spacetime. For constant depth, uniform asymptotics gives a very accurate compact description of the tsunami profile generated by an arbitrary initial disturbance. Variations in depth can focus tsunamis onto cusped caustics, and this 'singularity on a singularity' constitutes an unusual diffraction problem, whose solution indicates that focusing can amplify the tsunami energy by an order of magnitude.
(Joint work with P. Corvaja and D.
Masser.)
The topic of the talk arises from the
Manin-Mumford conjecture and its extensions, where we shall
focus on the case of (complex connected) commutative
algebraic groups $G$ of dimension $2$. The `Manin-Mumford'
context in these cases predicts finiteness for the set of
torsion points in an algebraic curve inside $G$, unless the
curve is of `special' type, i.e. a translate of an algebraic
subgroup of $G$.
In the talk we shall consider not merely the set of torsion
points, but its topological closure in $G$ (which turns out
to be also the maximal compact subgroup). In the case of
abelian varieties this closure is the whole space, but this is
not so for other $G$; actually, we shall prove that in certain
cases (where a natural dimensional condition is fulfilled) the
intersection of this larger set with a non-special curve
must still be a finite set.
We shall conclude by stating in brief some extensions of
this problem to higher dimensions.
I'll present the work of Gaitsgory arXiv:1108.1741. In it he uses Beilinson-Drinfeld factorization techniques in order to uniformize the moduli stack of G-bundles on a curve. The main difference with the gauge theoretic technique is that the affine Grassmannian is far from being contractible, but the fibers of the map to Bun(G) are contractible.
• Sufficient conditions for bifurcation from points that are not isolated eigenvalues of the linearisation.
• Odd potential operators.
• Defining min-max critical values using sets of finite genus.
• Formulating some necessary conditions for bifurcation.
The fundamental task in climate variability research is to eke
out structure from climate signals. Ideally we want a causal
connection between a physical process and the structure of the
signal. Sometimes we have to settle for a correlation between
these. The challenge is that the data is often poorly
constrained and/or sparse. Even though many data gathering
campaigns are taking place or are being planned, the very high
dimensional state space of the system makes the prospects of
climate variability analysis from data alone impractical.
Progress in the analysis is possible by the use of models and
data. Data assimilation is one such strategy. In this talk we
will describe the methodology, illustrate some of its
challenges, and highlight some of the ways our group has
proposed to improve the methodology.
I will talk about $W^{2,1}$ regularity for strictly convex Aleksandrov solutions to the Monge-Amp\`ere equation
\[
\det D^2 u =f
\]
where $f$ satisfies $\log f\in L^{\infty}$. Under these assumptions, Caffarelli proved in the 1990s that $u \in C^{1,\alpha}$ and that $u\in W^{2,p}$ if $|f-1|\leq \varepsilon(p)$. His results, however, left open the question of Sobolev regularity of $u$ in the general case in which $f$ is merely bounded away from $0$ and infinity. In joint work with Alessio Figalli we show that in fact $|D^2u| \log^k |D^2 u| \in L^1$ for every positive $k$.
\\
If time permits, I will also discuss some questions related to the $W^{2,1}$ stability of solutions of the Monge-Amp\`ere equation and of optimal transport maps, and some applications of the regularity to the study of the semi-geostrophic system, a simple model of large scale atmosphere/ocean flows (joint works with Luigi Ambrosio, Maria Colombo and Alessio Figalli).
After recalling some definitions and facts about spectra from the previous two "respectra" talks, I will explain what Thom spectra are, and give many examples. The cohomology theories associated to various Thom spectra include complex cobordism, stable homotopy groups, and ordinary mod-2 homology, among others.
I will then talk about Thom's theorem: the ring of homotopy groups of a Thom spectrum is isomorphic to the corresponding cobordism ring. This allows one to use homotopy-theoretic methods (calculating the homotopy groups of a spectrum) to answer a geometric question (determining cobordism groups of manifolds with some specified structure). If time permits, I'll also describe the structure of some cobordism rings obtained in this way.
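In the standard notation (added here for orientation; the abstract states it in words), Thom's theorem says that for a structure group $G$
\[
\pi_n(MG) \cong \Omega_n^{G},
\]
where $MG$ is the Thom spectrum attached to $G$ and $\Omega_n^{G}$ is the group of cobordism classes of closed $n$-manifolds with $G$-structure; taking $G = O$ gives unoriented cobordism and $G = U$ gives complex cobordism.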
Building on the previous talk, we continue the exploration of techniques required to understand Wise's results. We present an overview of classical small cancellation theory running in parallel with the newer one for cubical complexes.
I will give an overview of some of the most interesting algebraic-lattice theoretical results on bilattices. I will focus in particular on the product construction that is used to represent a subclass of bilattices, the so-called 'interlaced bilattices', mentioning some alternative strategies to prove such a result. If time allows, I will discuss other algebras of logic related to bilattices (e.g., Nelson lattices) and their product representation.
Turbidity currents are fast-moving streams of sediment in the ocean
which have the power to erode the sea floor and damage man-made
infrastructure anchored to the bed. They can travel for hundreds of
kilometres from the continental shelf to the deep ocean, but they are
unpredictable and can occur randomly without much warning, making them
hard to observe and measure. Our main aim is to determine the distance
downstream at which the current will become extinct. We consider the
fluid model of Parker et al. [1986] and derive a simple shallow-water
description of the current which we examine numerically and analytically
to identify supercritical and subcritical flow regimes. We then focus on
the solution of the complete model and provide a new description of the
turbulent kinetic energy. This extension of the model involves switching
from a turbulent to laminar flow regime and provides an improved
description of the extinction process.
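For orientation, the supercritical/subcritical distinction mentioned above is conventionally expressed through a densimetric Froude number (a standard criterion recalled here, not a formula taken from the abstract):
\[
Fr = \frac{U}{\sqrt{g' h}}, \qquad g' = g\,\frac{\rho_c - \rho_a}{\rho_a},
\]
where $U$ and $h$ are the velocity and thickness of the current, $\rho_c$ and $\rho_a$ the current and ambient densities; the flow is supercritical when $Fr>1$ and subcritical when $Fr<1$.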
In recent decades, quantum field theory (QFT) has become the framework for
several basic and outstandingly successful physical theories. Indeed, it has
become the lingua franca of entire branches of physics and even mathematics.
The universal scope of QFT opens fascinating opportunities for philosophy.
Accordingly, although the philosophy of physics has been dominated by the
analysis of quantum mechanics, relativity and thermo-statistical physics,
several philosophers have recently undertaken conceptual analyses of QFT.
One common feature of these analyses is the emphasis on rigorous approaches,
such as algebraic and constructive QFT; as against the more heuristic and
physical formulations of QFT in terms of functional (also known as path)
integrals.
However, I will follow the example of some recent mathematicians such as
Atiyah, Connes and Kontsevich, who have adopted a remarkable pragmatism and
opportunism with regard to heuristic QFT, not corseted by rigor (as Connes
remarks). I will conceptually discuss the advances that have marked
heuristic QFT, by analysing some of the key ideas that accompanied its
development. I will also discuss the interactions between these concepts in
the various relevant fields, such as particle physics, statistical
mechanics, gravity and geometry.
I will describe a multiscale asymptotic framework for the analysis of the macroscopic behaviour of periodic
two-material composites with high contrast in a finite-strain setting. I will start by introducing the nonlinear
description of a composite consisting of a stiff material matrix and soft, periodically distributed inclusions. I shall then focus
on the loading regimes when the applied load is small or of order one in terms of the period of the composite structure.
I will show that this corresponds to the situation when the displacements on the stiff component are situated in the vicinity
of a rigid-body motion. This allows one to replace, in the homogenisation limit, the nonlinear material law of the stiff component
by its linearised version. As a main result, I derive (rigorously in the spirit of $\Gamma$-convergence) a limit functional
that allows one to establish a precise two-scale expansion for minimising sequences. This is joint work with M. Cherdantsev and
S. Neukamm.
I will discuss new rigidity and rationality phenomena
(related to the phenomenon of Arnold tongues) in the theory of
nonabelian group actions on the circle. I will introduce tools that
can translate questions about the existence of actions with prescribed
dynamics, into finite combinatorial questions that can be answered
effectively. There are connections with the theory of Diophantine
approximation, and with the bounded cohomology of free groups. A
special case of this theory gives a very short new proof of Naimi’s
theorem (i.e. the conjecture of Jankins-Neumann) which was the last
step in the classification of taut foliations of Seifert fibered
spaces. This is joint work with Alden Walker.
The
notion of quantization originates from information theory, where it refers to the
approximation of a continuous signal on a discrete set. Our research on
quantization is mainly motivated by applications in quadrature problems. In
that context, one aims at finding for a given probability measure $\mu$ on a
metric space a discrete approximation that is supported on a finite number of
points, say $N$, and is close to $\mu$ in a Wasserstein metric.
In general it is a hard problem to find close to optimal quantizations, if
$N$ is large and/or $\mu$ is given implicitly, e.g. being the marginal
distribution of a stochastic differential equation. In this talk we analyse the
efficiency of empirical measures in the constructive quantization problem. That
means the random approximating measure is the uniform distribution on $N$
independent $\mu$-distributed elements.
We show that this approach is order optimal in many cases. Further, we
give fine asymptotic estimates for the quantization error that involve moments
of the density of the absolutely continuous part of $\mu$, so-called high-resolution formulas. The talk ends with an outlook on possible applications and
open problems.
The
talk is based on joint work with Michael Scheutzow (TU Berlin) and Reik
Schottstedt (U Marburg).
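As a toy numerical illustration of the empirical-measure quantizer (my own sketch, assuming for concreteness that $\mu$ is the standard Gaussian on the real line; this is not code from the talk), one can draw $N$ independent samples and estimate the Wasserstein-1 distance between $\mu$ and the empirical measure, which in one dimension equals the $L^1$ distance between the distribution functions:

# Toy sketch (assumed setting: mu = standard normal on R; not from the talk):
# quantize mu by the empirical measure of N i.i.d. samples and estimate the
# Wasserstein-1 error, using W_1(mu, mu_N) = integral |F(x) - F_N(x)| dx in 1d.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def empirical_w1_error(N, grid):
    samples = np.sort(rng.standard_normal(N))
    F_N = np.searchsorted(samples, grid, side="right") / N   # empirical CDF
    F = norm.cdf(grid)                                        # true CDF
    return float(np.sum(np.abs(F - F_N)) * (grid[1] - grid[0]))

grid = np.linspace(-8.0, 8.0, 20001)
for N in (10, 100, 1000, 10000):
    print(N, empirical_w1_error(N, grid))

Printing the error for increasing $N$ gives a crude empirical view of the convergence rate in this simple one-dimensional setting.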
I'll recall the quasi-Hamiltonian approach to moduli spaces of flat connections on Riemann surfaces, as a nice finite dimensional algebraic version of operations with loop groups such as fusion. Recently, whilst extending this approach to meromorphic connections, a new operation arose, which we will call "fission". As will be explained, this operation enables the construction of many new algebraic symplectic manifolds, going beyond those we were trying to construct.
We present some recent results on the metastability of continuous time Markov chains on finite sets using potential theory. This approach is applied to the case of supercritical zero range processes.
I will discuss the dynamical emergence of IR conformal invariance describing the low energy excitations of near-extremal R-charged global AdS${}_5$ black holes. To keep some non-trivial dynamics in the sector of ${\cal N}=4$ SYM captured by the near-horizon limits describing this IR physics, we are led to study large N limits in the UV theory involving near-vanishing-horizon black holes. I will consider both near-BPS and non-BPS regimes, emphasising the differences in the local AdS${}_3$ throats emerging in the two cases. I will compare these results with the predictions obtained by Kerr/CFT, obtaining a natural quantisation for the central charge of the near-BPS emergent IR CFT describing the open strings stretched between giant gravitons.
In this work, we consider the hedging error due to discrete trading in models with jumps. We propose a framework that enables us to
(asymptotically) optimize the discretization times. More precisely, a strategy is said to be optimal if for a given cost function, no strategy has
(asymptotically) a lower mean square error for a smaller cost. We focus on strategies based on hitting times and give explicit expressions for
the optimal strategies. This is joint work with Peter Tankov.
PLEASE NOTE THAT THIS SEMINAR HAS BEEN CANCELLED DUE TO ILLNESS.
The initial stage of the flow with a free surface generated by a vertical wall moving away from a liquid of finite depth in a gravitational field is studied. The liquid is inviscid and incompressible, and its flow is irrotational. Initially the liquid is at rest. The wall then starts to move away from the liquid with a constant acceleration.
It is shown that, if the acceleration of the plate is small, then the
liquid free surface separates from the wall only along an
exponentially small interval. The interval on the wall, along which
the free surface instantly separates for moderate acceleration of the
wall, is determined by using the condition that the displacements of
liquid particles are finite. During the initial stage the original
problem of hydrodynamics is reduced to a mixed boundary-value problem
with respect to the velocity field, with the position of the separation point not known in advance. The solution of this
problem is derived in terms of complete elliptic integrals. The
initial shape of the separated free surface is calculated and compared
with that predicted by the small-time solution of the dam break
problem. It is shown that the free surface at the separation point is
orthogonal to the moving plate.
Initial acceleration of a dam, which is suddenly released, is calculated.
• Bifurcation from isolated eigenvalues of finite multiplicity of the linearisation.
• Pseudo-inverses and parametrices for paths of Fredholm operators of index zero.
• Detecting a change of orientation along such a path.
• Lyapunov-Schmidt reduction.
I will discuss some of the new concepts and objects of two-dimensional number theory: how the same object can be studied via low-dimensional noncommutative theories or higher-dimensional commutative ones; what higher Haar measure and harmonic analysis are, and how they can be used in the representation theory of non-locally compact groups such as loop groups and Kac-Moody groups; how classical notions split into two different notions on surfaces, using adelic structures as an example; and what the analogue of the double quotient of adeles is on surfaces, and how one could approach automorphic functions in geometric dimension two.
The liquid crystal (LC) flow model is a coupling between the orientation (director field) of LC molecules and a flow field. It is perhaps one of the simplest models of a complex fluid and is very similar to an Allen-Cahn phase field model for multiphase flows if the orientation variable is replaced by a phase function. There are a few large or small parameters involved in the model (e.g. the small penalty parameter enforcing the unit length of LC molecules, or the small phase-change parameter, and possibly a large Reynolds number of the flow field). We propose a C^0 finite element formulation in space and a modified midpoint scheme in time which accurately preserves the inherent energy law of the model. We use C^0 elements because they are simpler than existing C^1 element and mixed element methods. We emphasise the energy law preservation because, from the PDE analysis point of view, the energy law is essential for correctly capturing the evolution of singularities in the orientation of the LC molecules. In addition, we will show numerical examples in which the energy-law-preserving scheme performs better for some choices of parameters. We apply the same idea to a Cahn-Hilliard phase field model in which the biharmonic operator is decomposed into two Laplacian operators, but we find that under our scheme a non-physical oscillation occurs near the interface. We explain the reason from the viewpoint of differential-algebraic equations and then remove the non-physical oscillation by performing a single step of a modified backward Euler scheme at the initial time. A number of numerical examples demonstrate the good performance of the method. At the end of the talk we will show how to apply the method to compute a superconductivity model, especially in the regime of Hc2 or beyond. The talk is based on joint papers with Chun Liu, Qi Wang, Xingbin Pan and Roland Glowinski, among others.
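As a minimal illustration of what an "energy law preserving" time discretisation means in the simplest possible setting, here is a discrete-gradient scheme for a scalar Allen-Cahn type ODE (my own sketch of the general flavour, under that assumption; it is not the C^0 finite element scheme of the talk):

# Minimal sketch (an assumption about the flavour of energy-law-preserving
# time stepping; NOT the C^0 finite element scheme of the talk): a
# discrete-gradient time discretisation of the scalar gradient flow
#   u' = -W'(u),  W(u) = (u**2 - 1)**2 / 4,
# chosen so that the discrete energy identity
#   W(u_{n+1}) - W(u_n) = -dt * ((u_{n+1} - u_n)/dt)**2
# holds exactly at every step.
import numpy as np
from scipy.optimize import brentq

def W(u):
    return 0.25 * (u ** 2 - 1.0) ** 2

def dW(u_new, u_old):
    """Discrete variational derivative (W(u_new)-W(u_old))/(u_new-u_old)."""
    if abs(u_new - u_old) < 1e-12:
        return u_old ** 3 - u_old          # fall back to W'(u_old)
    return (W(u_new) - W(u_old)) / (u_new - u_old)

def step(u_old, dt):
    # solve (u_new - u_old)/dt = -dW(u_new, u_old) for u_new
    g = lambda u_new: (u_new - u_old) / dt + dW(u_new, u_old)
    return brentq(g, u_old - 10.0, u_old + 10.0)

u, dt = 0.3, 0.1
for n in range(50):
    u_new = step(u, dt)
    # check the exact discrete energy law at this step
    lhs = W(u_new) - W(u)
    rhs = -dt * ((u_new - u) / dt) ** 2
    assert abs(lhs - rhs) < 1e-8
    u = u_new
print("final state:", u)

The nonlinear term is evaluated through the discrete variational derivative $(W(u_{n+1})-W(u_n))/(u_{n+1}-u_n)$, which is what makes the discrete energy identity exact rather than merely approximate.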
We analyse the effect of a natural change to the time variable on the convergence of the Crank-Nicolson scheme when applied to the solution of the heat equation with Dirac delta function initial conditions. In the original variables, the scheme is known to diverge as the time step is reduced with the ratio (lambda) of the time step to the space step held constant; the value of lambda controls how fast the divergence occurs. After introducing the square root of time as the new time variable, we prove that the numerical scheme for the transformed PDE always converges and that lambda controls the order of convergence, quadratic convergence being achieved for lambda below a critical value. Numerical results indicate that the time change, used with an appropriate value of lambda, also results in quadratic convergence for the calculation of the gamma of a European call option without the need for Rannacher start-up steps. Finally, some results and analysis are presented for the effect of the time change on the calculation of the option value and Greeks for the American put calculated by the penalty method with Crank-Nicolson time-stepping.
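A small numerical sketch of the setting (my own illustration with an assumed spatial truncation and parameter values; it is not the analysis of the talk): Crank-Nicolson for the heat equation with a discrete Dirac delta initial condition, run once with uniform steps in $t$ and once with steps uniform in $\sqrt{t}$ (one natural reading of the time change described above), with the ratio of time step to space step fixed in each case.

# Illustrative sketch (assumed domain truncation and parameters; not the
# analysis of the talk): Crank-Nicolson for u_t = u_xx with a discrete Dirac
# delta at x = 0, once with uniform steps in t and once with steps uniform
# in sqrt(t); the ratio lambda of time step to space step is fixed.
import numpy as np

def crank_nicolson(time_grid, x):
    dx = x[1] - x[0]
    n = len(x)
    # second-difference operator (values beyond the ends treated as zero)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx ** 2
    u = np.zeros(n)
    u[np.argmin(np.abs(x))] = 1.0 / dx          # discrete delta function
    I = np.eye(n)
    for t0, t1 in zip(time_grid[:-1], time_grid[1:]):
        dt = t1 - t0
        u = np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ u)
    return u

T, lam = 0.25, 0.25                 # final time and lambda = time step / space step
x = np.linspace(-4.0, 4.0, 401)     # truncated domain; boundary effect is tiny here
dx = x[1] - x[0]
exact = np.exp(-x ** 2 / (4 * T)) / np.sqrt(4 * np.pi * T)

m_t = round(T / (lam * dx))                       # uniform steps in t
u_t = crank_nicolson(np.linspace(0.0, T, m_t + 1), x)

m_s = round(np.sqrt(T) / (lam * dx))              # uniform steps in sqrt(t)
u_s = crank_nicolson(np.linspace(0.0, np.sqrt(T), m_s + 1) ** 2, x)

print("max error, uniform in t      :", np.max(np.abs(u_t - exact)))
print("max error, uniform in sqrt(t):", np.max(np.abs(u_s - exact)))

Comparing the two maximum errors against the exact heat kernel gives a quick feel for the effect of the time change described above.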