Excursions in Algebraic Topology
Abstract
Three short talks by the authors of essays on topics related to C3 Algebraic Topology: Whitehead's theorem, Cohomology of fibre bundles, Division algebras.
Nonnormality is a well-studied subject in the context of partial differential operators, yet little is known for boundary integral operators. The only well-studied case is the unit ball, where the standard single layer, double layer and conjugate double layer potential operators in acoustic scattering diagonalise in a unitary basis. In this talk we present recent results on the analysis of spectral decompositions and nonnormality of boundary integral operators on more general domains. One particular application is the analysis of stability constants for boundary element discretisations. We demonstrate how these are affected by nonnormality and give several numerical examples illustrating these issues on various domains.
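A minimal numerical sketch of how nonnormality can be quantified (my illustration in NumPy, on a generic matrix rather than the boundary integral operators of the talk):

import numpy as np

# Illustrative only: quantify the nonnormality of a matrix A,
# a stand-in for a discretised boundary integral operator.
rng = np.random.default_rng(0)
n = 50
A = np.triu(rng.standard_normal((n, n)))  # deliberately nonnormal

# Commutator test: A is normal iff A A^H - A^H A = 0.
print("commutator norm:", np.linalg.norm(A @ A.conj().T - A.conj().T @ A))

# Henrici's departure from normality: ||A||_F^2 - sum_i |lambda_i|^2 >= 0,
# with equality exactly for normal matrices.
eigs, V = np.linalg.eig(A)
print("departure^2:", np.linalg.norm(A, "fro")**2 - np.sum(np.abs(eigs)**2))

# Eigenvector conditioning: kappa(V) = 1 for normal matrices; large values
# mean eigenvalues alone can be misleading for stability constants.
print("kappa(V):", np.linalg.cond(V))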
The aim of this talk is to explain how to construct solutions to a relativistic transport equation via a time-discrete scheme based on an optimal transportation problem. First, I will present a joint work with J. Bertrand, where we prove the existence of an optimal map for the Monge-Kantorovich problem associated to relativistic cost functions. Then, I will explain a joint work with Robert McCann, where we study the limiting process between the discrete and the continuous equation.
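A concrete example to fix ideas (the classical relativistic cost; the talk may treat a more general family):
\[
c(x,y) \;=\; 1 - \sqrt{1 - |x-y|^2} \quad \text{if } |x-y|\le 1, \qquad c(x,y) = +\infty \ \text{otherwise},
\]
so that mass cannot be transported farther than the light cone in one time step.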
We will consider a simplified model for on-chip power distribution networks of array bonded integrated circuits. In this model the voltage is the solution of a Poisson equation in an infinite planar domain whose boundary is an array of circular or square pads of size $\epsilon$. We deal with the singular limit as $\epsilon\to 0$ and we are interested in deriving an explicit formula for the maximum voltage drop in the domain in terms of a power series in $\epsilon$. A procedure based on the method of matched asymptotic expansions will be presented to compute all the successive terms in the approximation, which can be interpreted as using multipole solutions of equations involving spatial derivatives of $\delta$-functions.
I will describe a version of the definition of stability conditions on a triangulated category to which we were led by the study of quantization of symplectic resolutions of singularities over fields of positive characteristic. Partly motivated by ideas of Tom Bridgeland, we conjectured a relation of this structure to equivariant quantum cohomology; this conjecture has been verified in some classes of examples. The talk is based on joint projects with Anno, Mirkovic, Okounkov and others.
Many iterative algorithms for large sparse matrix problems are based on orthogonality (or $A$-orthogonality, bi-orthogonality, etc.), but these properties can be lost very rapidly using vector orthogonalization (subtracting multiples of earlier supposedly orthogonal vectors from the latest vector to produce the next orthogonal vector). Yet many of these algorithms are some of the best we have for very large sparse problems, such as Conjugate Gradients, Lanczos' method for the eigenproblem, Golub and Kahan bidiagonalization, and MGS-GMRES.
Here we describe an ideal form of orthogonal matrix that arises from any sequence of supposedly orthogonal vectors. We illustrate some of its fascinating properties, including a beautiful measure of orthogonality of the original set of vectors. We will indicate how the ideal orthogonal matrix leads to expressions for new concepts of stability of such iterative algorithms. These expand the concept of backward stability for matrix transformation algorithms that was so effectively developed and applied by J. H. Wilkinson (FRS). The resulting new expressions can be used to understand the subtle and effective performance of some (and hopefully eventually all) of these iterative algorithms.
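A minimal sketch of the loss of orthogonality mentioned above (my illustration, using classical and modified Gram-Schmidt on an ill-conditioned matrix; not the ideal orthogonal matrix construction of the talk):

import numpy as np

def cgs(X):
    # Classical Gram-Schmidt: orthogonalise the columns of X.
    Q = np.zeros_like(X, dtype=float)
    for j in range(X.shape[1]):
        v = X[:, j] - Q[:, :j] @ (Q[:, :j].T @ X[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def mgs(X):
    # Modified Gram-Schmidt: subtract one projection at a time.
    Q = X.astype(float).copy()
    for j in range(Q.shape[1]):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        for k in range(j + 1, Q.shape[1]):
            Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
    return Q

# A Hilbert matrix is ill-conditioned, which makes the loss visible.
n = 8
X = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
for name, Q in [("CGS", cgs(X)), ("MGS", mgs(X))]:
    print(name, "loss of orthogonality ||I - Q^T Q|| =",
          np.linalg.norm(np.eye(n) - Q.T @ Q))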
Please note that this is taking place in the afternoon - partly to avoid a clash with the OCCAM group meeting in the morning.
There is much current concern over the future evolution of climate under conditions of increased atmospheric carbon. Much of the focus is on a bottom-up approach in which weather/climate models of severe complexity are solved and extrapolated beyond their presently validated parameter ranges. An alternative view takes a top-down approach, in which the past Earth itself is used as a laboratory; in this view, ice-core records show a strong association of carbon with atmospheric temperature throughout the Pleistocene ice ages. This suggests that carbon variations drove the ice ages. In this talk I build the simplest model which can accommodate this observation, and I show that it is reasonably able to explain the observations. The model can then be extrapolated to offer commentary on the cooling of the planet since the Eocene, and the likely evolution of climate under the current industrial production of atmospheric carbon.
In this article we propose a novel approach to reducing the computational complexity of the dual method for pricing American options. We consider a sequence of martingales that converges to a given target martingale and decompose the original dual representation into a sum of representations corresponding to different levels of approximation to the target martingale. By replacing the true conditional expectations in each representation with their Monte Carlo estimates, we arrive at what one may call a multilevel dual Monte Carlo algorithm. The analysis of this algorithm reveals that the computational complexity of obtaining the corresponding target upper bound, due to the target martingale, can be significantly reduced. In particular, it turns out that using our new approach we may construct a multilevel version of the well-known nested Monte Carlo algorithm of Andersen and Broadie (2004) that is, in terms of complexity, virtually equivalent to a non-nested algorithm. The performance of this multilevel algorithm is illustrated by a numerical example. (Joint work with Denis Belomestny.)
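In standard notation for the dual approach (notation mine, following Rogers and Haugh-Kogan; $Z_t$ the discounted payoff and $M$ a martingale with $M_0 = 0$), the price $Y_0$ satisfies
\[
Y_0 \;\le\; \mathbb{E}\Big[\max_{0\le t\le T}\big(Z_t - M_t\big)\Big],
\]
with equality for an optimal martingale. Given approximations $M^{(0)},\dots,M^{(L)}$ of the target martingale, the decomposition described above can be read as the telescoping sum
\[
\mathbb{E}\Big[\max_t\big(Z_t - M^{(L)}_t\big)\Big]
= \mathbb{E}\Big[\max_t\big(Z_t - M^{(0)}_t\big)\Big]
+ \sum_{\ell=1}^{L} \mathbb{E}\Big[\max_t\big(Z_t - M^{(\ell)}_t\big) - \max_t\big(Z_t - M^{(\ell-1)}_t\big)\Big],
\]
whose correction terms can be estimated with progressively fewer samples, in the usual multilevel Monte Carlo fashion.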
The standard mathematical treatment of risk combines numerical measures of uncertainty (usually probabilistic) and loss (money and other natural estimators of utility). There are significant practical and theoretical problems with this interpretation. A particular concern is that the estimation of quantitative parameters is frequently problematic, particularly when dealing with one-off events such as political, economic or environmental disasters. Practical decision-making under risk, therefore, frequently requires extensions to the standard treatment.
An intuitive approach to reasoning under uncertainty has recently become established in computer science and cognitive science in which general theories (formalised in a non-classical first-order logic) are applied to descriptions of specific situations in order to construct arguments for and/or against claims about possible events. Collections of arguments can be aggregated to characterize the type or degree of risk, using the logical grounds of the arguments to explain, and assess the credibility of, the supporting evidence for competing claims. Discussions about whether a complex piece of equipment or software could fail, the possible consequences of such failure and their mitigation, for example, can be based on the balance and relative credibility of all the arguments. This approach has been shown to offer versatile risk management tools in a number of domains, including clinical medicine and toxicology (e.g. www.infermed.com; www.lhasa.com). Argumentation frameworks are also being used to support open discussion and debates about important issues (e.g. see debate on environmental risks at www.debategraph.org).
Despite the practical success of argument-based methods for risk assessment and other kinds of decision making, they typically either ignore measurement of uncertainty even when some quantitative data are available, or combine logical inference with quantitative uncertainty calculations in ad hoc ways. After a brief introduction to the argumentation approach I will demonstrate medical risk management applications of both kinds and invite suggestions for solutions which are mathematically more satisfactory.
Definitions (Hubbard: http://en.wikipedia.org/wiki/Risk)
Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The "true" outcome/state/result/value is not known.
Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: "There is a 60% chance this market will double in five years."
Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
Measurement of risk: A set of possibilities each with quantified probabilities and quantified losses. Example: "There is a 40% chance the proposed oil well will be dry with a loss of $12 million in exploratory drilling costs".
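On these definitions, a point estimate of expected loss follows by elementary arithmetic; for the drilling example above,
\[
\mathbb{E}[\text{loss}] \;=\; 0.4 \times \$12\text{M} \;=\; \$4.8\text{M},
\]
which is exactly the kind of single-number summary that becomes problematic when, as noted above, the probabilities themselves cannot be reliably estimated.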
The conceptual background to the argumentation approach to reasoning under uncertainty is reviewed in the attached paper “Arguing about the Evidence: a logical approach”.
Tsunami asymptotics: For most of their propagation, tsunamis are linear dispersive waves whose speed is limited by the depth of the ocean and which can be regarded as diffraction-decorated caustics in spacetime. For constant depth, uniform asymptotics gives a very accurate compact description of the tsunami profile generated by an arbitrary initial disturbance. Variations in depth can focus tsunamis onto cusped caustics, and this 'singularity on a singularity' constitutes an unusual diffraction problem, whose solution indicates that focusing can amplify the tsunami energy by an order of magnitude.
(Joint work with P. Corvaja and D. Masser.) The topic of the talk arises from the Manin-Mumford conjecture and its extensions, where we shall focus on the case of (complex connected) commutative algebraic groups $G$ of dimension $2$. The `Manin-Mumford' context in these cases predicts finiteness for the set of torsion points in an algebraic curve inside $G$, unless the curve is of `special' type, i.e. a translate of an algebraic subgroup of $G$.

In the talk we shall consider not merely the set of torsion points, but its topological closure in $G$ (which turns out to be also the maximal compact subgroup). In the case of abelian varieties this closure is the whole space, but this is not so for other $G$; actually, we shall prove that in certain cases (where a natural dimensional condition is fulfilled) the intersection of this larger set with a non-special curve must still be a finite set.

We shall conclude by stating in brief some extensions of this problem to higher dimensions.
I'll present the work of Gaitsgory arXiv:1108.1741, in which he uses Beilinson-Drinfeld factorization techniques to uniformize the moduli stack of G-bundles on a curve. The main difference from the gauge-theoretic technique is that the affine Grassmannian is far from being contractible, but the fibers of the map to Bun(G) are contractible.
• Sufficient conditions for bifurcation from points that are not isolated eigenvalues of the linearisation.
• Odd potential operators.
• Defining min-max critical values using sets of finite genus.
• Formulating some necessary conditions for bifurcation.
The fundamental task in climate variability research is to eke out structure from climate signals. Ideally we want a causal connection between a physical process and the structure of the signal; sometimes we have to settle for a correlation between these. The challenge is that the data are often poorly constrained and/or sparse. Even though many data-gathering campaigns are taking place or are being planned, the very high dimensional state space of the system makes the prospect of climate variability analysis from data alone impractical. Progress in the analysis is possible by the combined use of models and data. Data assimilation is one such strategy. In this talk we will describe the methodology, illustrate some of its challenges, and highlight some of the ways our group has proposed to improve it.
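A minimal sketch of a data assimilation analysis step, assuming a linear-Gaussian setting and a Kalman-type update (my illustration, not necessarily the group's proposed methodology):

import numpy as np

def kalman_analysis(x_b, B, y, H, R):
    # One analysis step: blend the model forecast x_b (covariance B)
    # with observations y (operator H, error covariance R).
    S = H @ B @ H.T + R                   # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_a = x_b + K @ (y - H @ x_b)         # analysis (posterior) mean
    P_a = (np.eye(len(x_b)) - K @ H) @ B  # analysis covariance
    return x_a, P_a

# Toy example: a 2-state system in which only the first state is observed.
x_b = np.array([1.0, 0.5])
B = np.array([[0.5, 0.1], [0.1, 0.5]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
y = np.array([1.4])
x_a, P_a = kalman_analysis(x_b, B, y, H, R)
print("analysis state:", x_a)
print("analysis covariance:", P_a)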
I will talk about $W^{2,1}$ regularity for strictly convex Aleksandrov solutions to the Monge Amp\`ere equation
\[
\det D^2 u =f
\]
where $f$ satisfies $\log f\in L^{\infty}$. Under these assumptions, Caffarelli proved in the 1990s that $u \in C^{1,\alpha}$ and that $u\in W^{2,p}$ if $|f-1|\leq \varepsilon(p)$. His results, however, left open the question of Sobolev regularity of $u$ in the general case in which $f$ is just bounded away from $0$ and infinity. In a joint work with Alessio Figalli we finally show that in fact $|D^2u| \log^k |D^2 u| \in L^1$ for every positive $k$.
If time permits, I will also discuss some questions related to the $W^{2,1}$ stability of solutions of the Monge-Amp\`ere equation and optimal transport maps, and some applications of the regularity theory to the study of the semi-geostrophic system, a simple model of large-scale atmosphere/ocean flows (joint works with Luigi Ambrosio, Maria Colombo and Alessio Figalli).
After recalling some definitions and facts about spectra from the previous two "respectra" talks, I will explain what Thom spectra are and give many examples. The cohomology theories associated to various Thom spectra include complex cobordism, stable homotopy groups, and ordinary mod-2 homology, among others.
I will then talk about Thom's theorem: the ring of homotopy groups of a Thom spectrum is isomorphic to the corresponding cobordism ring. This allows one to use homotopy-theoretic methods (calculating the homotopy groups of a spectrum) to answer a geometric question (determining cobordism groups of manifolds with some specified structure). If time permits, I'll also describe the structure of some cobordism rings obtained in this way.
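In the standard notation (my addition), Thom's theorem reads
\[
\pi_n(MG) \;\cong\; \Omega_n^{G},
\]
identifying the $n$-th homotopy group of the Thom spectrum $MG$ with the group of cobordism classes of closed $n$-manifolds with $G$-structure; for instance $\pi_*(MO)$ is the unoriented cobordism ring and $\pi_*(MU)$ the complex cobordism ring.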