17:00
'Amalgamated products of free groups: from algorithms to linguistics.'
Abstract
In my talk I shall give a short survey of some algorithmic properties of amalgamated products of finite rank
free groups. In particular, I am going to concentrate on the Membership Problem for these groups. Apart from being algorithmically interesting, amalgams of free groups admit many interpretations. I shall show how to
characterize this construction from the point of view of geometry and linguistics.
Unlinking and unknottedness of monotone Lagrangian submanifolds
Abstract
I will explain some recent joint work with Georgios Dimitroglou Rizell in which we use moduli spaces of holomorphic discs with boundary on a monotone Lagrangian torus in ${\mathbb C}^n$ to prove that all such tori are smoothly isotopic when $n$ is odd and at least 5.
12:00
From the holomorphic Wilson Loop, to dlog forms for Amplitudes and their integration
16:30
Systemic Risk
Abstract
The quantification and management of risk in financial markets
is at the center of modern financial mathematics. But until recently, risk
assessment models did not consider the effects of inter-connectedness of
financial agents and the way risk diversification impacts the stability of
markets. I will give an introduction to these problems and discuss the
implications of some mathematical models for dealing with them.
Exact Lagrangian immersions in Euclidean space
Abstract
Exact Lagrangian immersions are governed by an h-principle, whilst exact Lagrangian
embeddings are well-known to be constrained by strong rigidity theorems coming from
holomorphic curve theory. We consider exact Lagrangian immersions in Euclidean space with a
prescribed number of double points, and find that the borderline between flexibility and
rigidity is more delicate than had been imagined. The main result obtains constraints on such
immersions with exactly one double point which go beyond the usual setting of Morse or Floer
theory. This is joint work with Tobias Ekholm, and in part with Ekholm, Eliashberg and Murphy.
Uniqueness of Signature
Abstract
We relate the expected signature to the Fourier transform of n-point functions, first studied by O. Schramm, and subsequently
by J. Cardy and Simmon, D. Belyaev and J. Viklund. We also prove that the signatures determine the paths in the complement of a chordal SLE null set. Finally, we will discuss an idea on how to extend the uniqueness-of-signatures result by Hambly and Lyons (2006) to paths of finite $p$-variation for $1 < p < 2$.
Interactions of the Fluid and Solid Phases in Complex Media: Coupling Reactive Flows, Transport and Mechanics
Abstract
Modelling reactive flows, diffusion, transport and mechanical interactions in media consisting of multiple phases, e.g. of a fluid and a solid phase in a porous medium, gives rise to many open problems in multi-scale analysis and simulation. In this lecture, the following processes are studied:
diffusion, transport, and reaction of substances in the fluid and the solid phase,
mechanical interactions of the fluid and solid phase,
change of the mechanical properties of the solid phase by chemical reactions,
volume changes (“growth”) of the solid phase.
These processes occur for instance in soil and in porous materials, but also in biological membranes, tissues and in bones. The model equations consist of systems of nonlinear partial differential equations, with initial-boundary conditions and transmission conditions on fixed or free boundaries, mainly in complex domains. The coupling of processes on different scales is posing challenges to the mathematical analysis as well as to computing. In order to reduce the complexity, effective macroscopic equations have to be derived, including the relevant information from the micro scale.
In the case of processes in tissues, a homogenization limit leads to an effective mechanical system containing a pressure gradient that satisfies a generalized, time-dependent Darcy law (a Biot law); the chemical substances satisfy diffusion-transport-reaction equations and influence the mechanical parameters.
The interaction of the fluid and the material transported in a vessel with its flexible wall, which incorporates material and changes its structure and mechanical behavior, is a process important e.g. in the vascular system (plaque formation) or in porous media.
The lecture is based on recent results obtained in cooperation with A. Mikelic, M. Neuss-Radu, F. Weller and Y. Yang.
14:15
Particle methods with applications in finance
Abstract
The aim of this lecture is to give a general introduction to the theory of interacting particle methods and an overview of its applications to numerical finance. We survey the main techniques and results on interacting particle systems and explain how they can be applied to deal with a variety of financial numerical problems such as: pricing complex path-dependent European options, computing sensitivities, American option pricing, or solving numerically partially observed control problems.
14:00
Nonlinear evolution systems and Green's function
Abstract
In this talk, we will introduce how to apply the Green's function method to obtain pointwise estimates for solutions of the Cauchy problem for nonlinear evolution equations with dissipative structure. First, we introduce the pointwise estimates of the time-asymptotic shape of the solutions of the isentropic Navier-Stokes equations and exhibit the generalized Huygens principle. Then, for other nonlinear dissipative evolution equations, we will introduce some recent results and give brief explanations. Our approach is based on a detailed analysis of the Green's function of the linearized system together with micro-local analysis, such as frequency decomposition.
Can We Recover?
Abstract
The Ross Recovery Theorem gives sufficient conditions under which the
market’s beliefs
can be recovered from risk-neutral probabilities. Ross' approach places
mild restrictions on the form of the preferences of
the representative investor. We present an alternative approach which
has no restrictions beyond preferring more to less.
Instead, we restrict the form and risk-neutral dynamics of John Long's
numeraire portfolio. We also replace Ross’ finite state Markov chain
with a diffusion with bounded state space. Finally, we present some
preliminary results for diffusions on unbounded state space.
In particular, our version of Ross recovery allows market beliefs to be
recovered from risk neutral probabilities in the classical Cox
Ingersoll Ross model for the short interest rate.
Hyperconifold Singularities and Transitions
Abstract
Robust Hedging, price intervals and optimal transport
Abstract
The original transport problem is to optimally move a pile of soil to an excavation.
Mathematically, given two measures of equal mass, we look for an optimal bijection that takes
one measure to the other one and also minimizes a given cost functional. Kantorovich relaxed
this problem by considering a measure whose marginals agree with given two measures instead of
a bijection. This generalization linearizes the problem; hence it allows for an easy existence
result and enables one to identify its convex dual.
In robust hedging problems, we are also given two measures. Namely, the initial and the final
distributions of a stock process. We then construct an optimal connection. In general, however,
the cost functional depends on the whole path of this connection and not simply on the final value.
Hence, one needs to consider processes instead of simply maps. The probability distribution
of this process has prescribed marginals at final and initial times. Thus, it is in direct analogy
with the Kantorovich measure. But financial considerations restrict the process to be a martingale.
Interestingly, the dual also has a financial interpretation as a robust hedging (super-replication)
problem.
In this talk, we prove an analogue of Kantorovich duality: the minimal super-replication cost in
the robust setting is given as the supremum of the expectations of the contingent claim over all
martingale measures with a given marginal at the maturity.
This is joint work with Yan Dolinsky of Hebrew University.
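The original Monge formulation from the first paragraph can be sketched in miniature (a toy example, not from the talk; the points and the cost function are invented for illustration): with finitely many unit masses, the optimal bijection can be found by brute force over permutations.

```python
import itertools

def monge(xs, ys, cost):
    """Brute-force the discrete Monge problem: find the bijection
    (permutation) sending each source point xs[i] to a target point
    ys[perm[i]] that minimises the total transport cost."""
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(ys))):
        c = sum(cost(xs[i], ys[j]) for i, j in enumerate(perm))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# Two discrete measures of equal mass: unit masses at these locations.
xs = [0.0, 1.0, 2.0]
ys = [2.5, 0.2, 1.1]
perm, c = monge(xs, ys, cost=lambda x, y: (x - y) ** 2)
print(perm, c)  # the monotone (sorted) matching, as expected for squared cost
```

For quadratic cost the optimiser is the order-preserving matching, a well-known feature of one-dimensional transport; the Kantorovich relaxation would instead search over all couplings with these marginals.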
14:00
Analysis of travel patterns from departure and arrival times
Abstract
Please note the change of venue!
Suppose there is a system where certain objects move through a network. The objects are detected only when they pass through a sparse set of points in the network. For example, the objects could be vehicles moving along a road network, observed by radar or another sensor as they pass through (or originate or terminate at) certain key points in the network, but not observed continuously and tracked as they travel from one point to another. Alternatively, they could be data packets in a computer network. The detections only record the time at which an object passes by, and contain no information about identity that would trivially allow the movement of an individual object from one point to another to be deduced. The aim is to determine the statistics of the movement of the objects through the network; that is, if an object passes through point A at a certain time, to determine the probability density that the same object will pass through a point B at a certain later time.
The system might perhaps be represented by a graph, with a node at each point where detections are made. The detections at each node can be represented by a signal as a function of time, where the signal is a superposition of delta functions (one per detection). The statistics of the movement of objects between nodes must be deduced from the correlations between the signals at each node. The problem is complicated by the possibility that a given object might move between two nodes along several alternative routes (perhaps via other nodes or perhaps not), or might travel along the same route but with several alternative speeds.
What prior knowledge about the network, or constraints on the signals, are needed to make this problem solvable? Is it necessary to know the connections between the nodes or the pdfs for the transition time between nodes a priori, or can these be deduced? What conditions are needed on the information content of the signals? (That is, if detections are very sparse on the time scale for passage through the network, then the transition probabilities can be built up by considering each cascade of detections independently, while if detections are dense then it will presumably be necessary to assume that objects do not move through the network independently, but instead tend to form convoys that are apparent as a pattern of detections that persists for some distance on average.) What limits are there on the noise in the signal or the amount of unwanted signal, i.e. false detections, objects which randomly fail to be detected at a particular node, or objects which are detected at one node but which do not pass through any other nodes? Is any special action needed to enforce causality, i.e. positive time delays for transitions between nodes?
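A minimal synthetic sketch of the correlation idea (all parameters invented): if objects pass node A at random times and reach node B after a random delay, histogramming the pairwise B-minus-A time differences recovers the typical transition time as a peak above the background of unrelated pairs.

```python
import random

random.seed(0)

# Synthetic data: each object passes A at time t and B at t + delay.
a_times = [random.uniform(0, 1000) for _ in range(300)]
delays  = [random.gauss(5.0, 0.5) for _ in a_times]   # typical transit ~5
b_times = [t + d for t, d in zip(a_times, delays)]

# Cross-correlate the two detection trains: histogram all pairwise
# differences b - a falling in a plausible window of transit times.
bins = [0] * 20                      # window [0, 10), bin width 0.5
for a in a_times:
    for b in b_times:
        d = b - a
        if 0 <= d < 10:
            bins[int(d / 0.5)] += 1

peak_bin = max(range(len(bins)), key=bins.__getitem__)
print(0.5 * peak_bin)                # left edge of the modal delay bin, near 5
```

The unrelated pairs contribute a roughly flat background; the matched pairs concentrate near the true transit time, which is why the modal bin identifies it even without object identities.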
Modular curves, Deligne-Lusztig curves and Serre weights
Abstract
One of the most subtle aspects of the correspondence between automorphic and Galois representations is the weight part of Serre's conjectures, namely describing the weights of modular forms corresponding to mod p congruence classes of Galois representations. We propose a direct geometric approach via studying the mod p cohomology groups of certain integral models of modular or Shimura curves, involving Deligne-Lusztig curves with the action of GL(2) over finite fields. This is joint work with James Newton.
A mathematical approach to the mathematical modelling of Lithium-ion batteries
Abstract
In this talk we will discuss the mathematical modelling of the performance of Lithium-ion batteries. A mathematical model based on a macro-homogeneous approach developed by John Newman will be presented. The existence and uniqueness of solutions of the corresponding problem will also be discussed.
Scalable Data Analytics
Abstract
Very-large-scale data analytics is an alleged golden goose for efforts in parallel and distributed computing, and yet contemporary statistics remains somewhat of a dark art for the uninitiated. In this presentation, we are going to take a mathematical and algorithmic look beyond the veil of Big Data by studying the structure of the algorithms and data, and by analyzing the fit to existing and proposed computer systems and programming models. Towards highly scalable kernels, we will also discuss some of the promises and challenges of approximation algorithms using randomization, sampling, and decoupled processing, touching on some contemporary topics in parallel numerics.
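As a small illustration of the sampling idea mentioned above (dataset and numbers invented): an aggregate over a large array can be estimated from a uniform random sample, trading a little accuracy for a large reduction in work.

```python
import random

random.seed(42)

# A "large" dataset and the aggregate we want: its mean.
data = [(i * 7919) % 1000 for i in range(1_000_000)]
exact = sum(data) / len(data)          # full scan: 10^6 elements

# Uniform sample of 1% of the rows: ~100x less work than the full scan.
sample = random.sample(data, k=10_000)
estimate = sum(sample) / len(sample)

print(exact, estimate)                 # the estimate is within a few units
```

The standard error of the sample mean shrinks like $1/\sqrt{k}$ in the sample size $k$, which is what makes sampling-based kernels attractive at scale.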
12:00
From nonlinear to linearized elasticity via $\Gamma$-convergence: the case of multi-well energies satisfying weak coercivity conditions
Abstract
16:00
Separation properties and restrictions on the cardinality of topological spaces
11:30
Boy's surface
Abstract
Following the recent paper of Ogasa, we attempt to construct Boy's surface using only paper and tape. If this is successful we hope to address such questions as:
Is that really Boy's surface?
Why should we care?
Do we have any more biscuits?
Equivariant classes, COHA, and quantum dilogarithm identities for Dynkin quivers II
Abstract
Consider non-negative integers assigned to the vertices of an oriented graph. To this combinatorial data we associate a so-called quiver representation. We will study the geometry and the algebra of this representation when the underlying unoriented graph is of Dynkin type ADE.
A remarkable object we will consider is Kazarian's equivariant cohomology spectral sequence. The edge homomorphism of this spectral sequence defines the so-called quiver polynomials. These polynomials are generalizations of remarkable polynomials in algebraic combinatorics (Giambelli-Thom-Porteous, Schur, Schubert, their double, universal, and quantum versions). Quiver polynomials measure degeneracy loci of maps among vector bundles over a common base space. We will present interpolation, residue, and (conjectured) positivity properties of these polynomials.
The quiver polynomials are also encoded in the Cohomological Hall Algebra (COHA) associated with the oriented graph. This is a non-commutative algebra defined by Kontsevich and Soibelman in relation with Donaldson-Thomas invariants. The above mentioned spectral sequence has a structure identity expressing the fact that the sequence converges to explicit groups. We will show the role of this structure identity in understanding the structure of the COHA. The obtained identities are equivalent to Reineke's quantum dilogarithm identities associated to ADE quivers and certain stability conditions.
Inside the 4G Spectrum Auction
Abstract
The recently completed auction for 4G mobile spectrum was the most important combinatorial auction ever held in the UK. In general, combinatorial auctions allow bidders to place individual bids on packages of items, instead of separate bids on individual items, and this feature has theoretical advantages for bidders and sellers alike. The accompanying challenges of implementation have been the subject of intense work over the last few years, with the result that the advantages of combinatorial auctions can now be realised in practice on a large scale. Nowhere has this work been more prominent than in auctions for radio spectrum. The UK's 4G auction is the most recent of these, and the publication by Ofcom (the UK's telecommunications regulator) of the auction's full bidding activity creates a valuable case study of combinatorial auctions in action.
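A hypothetical miniature of the winner-determination problem behind such auctions (all bids and items invented): given package bids, the seller picks a set of non-overlapping bids maximising revenue, which a brute-force search can solve at toy scale.

```python
from itertools import combinations

# Toy package bids: (set of items, price). The package bid on {A, B}
# exceeds the sum of the separate bids on A and B.
bids = [
    (frozenset({"A"}), 10),
    (frozenset({"B"}), 12),
    (frozenset({"A", "B"}), 25),
    (frozenset({"C"}), 7),
]

best_revenue, best_set = 0, []
for r in range(1, len(bids) + 1):
    for combo in combinations(bids, r):
        items = [i for pkg, _ in combo for i in pkg]
        if len(items) == len(set(items)):          # packages must be disjoint
            revenue = sum(p for _, p in combo)
            if revenue > best_revenue:
                best_revenue, best_set = revenue, list(combo)

print(best_revenue)   # 32: the {A, B} package bid plus the bid on C
```

At real auction scale winner determination is NP-hard, which is one reason the practical implementation work described above was needed.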
14:15
Equivariant classes, COHA, and quantum dilogarithm identities for Dynkin quivers I
Abstract
Consider non-negative integers assigned to the vertices of an oriented graph. To this combinatorial data we associate a so-called quiver representation. We will study the geometry and the algebra of this representation when the underlying unoriented graph is of Dynkin type ADE.
A remarkable object we will consider is Kazarian's equivariant cohomology spectral sequence. The edge homomorphism of this spectral sequence defines the so-called quiver polynomials. These polynomials are generalizations of remarkable polynomials in algebraic combinatorics (Giambelli-Thom-Porteous, Schur, Schubert, their double, universal, and quantum versions). Quiver polynomials measure degeneracy loci of maps among vector bundles over a common base space. We will present interpolation, residue, and (conjectured) positivity properties of these polynomials.
The quiver polynomials are also encoded in the Cohomological Hall Algebra (COHA) associated with the oriented graph. This is a non-commutative algebra defined by Kontsevich and Soibelman in relation with Donaldson-Thomas invariants. The above mentioned spectral sequence has a structure identity expressing the fact that the sequence converges to explicit groups. We will show the role of this structure identity in understanding the structure of the COHA. The obtained identities are equivalent to Reineke's quantum dilogarithm identities associated to ADE quivers and certain stability conditions.
The search for Intrinsic Decoherence
Abstract
Conventional decoherence (usually called 'Environmental
Decoherence') is supposed to be a result of correlations
established between some quantum system and the environment.
'Intrinsic decoherence' is hypothesized as being an essential
feature of Nature - its existence would entail a breakdown of
quantum mechanics. A specific mechanism of some interest is
'gravitational decoherence', whereby gravity causes intrinsic
decoherence.
I will begin by discussing what is now known about the mechanisms of
environmental decoherence, noting in particular that they can and do
involve decoherence without dissipation (i.e., pure phase decoherence).
I will then briefly review the fundamental conflict between Quantum
Mechanics and General Relativity, and several arguments that suggest
how this might be resolved by the existence of some sort of 'gravitational
decoherence'. I then outline a theory of gravitational decoherence
(the 'GR-Psi' theory) which attempts to give a quantitative discussion of
gravitational decoherence, and which makes predictions for
experiments.
The weak field regime of this theory (relevant to experimental
predictions) is discussed in detail, along with a more speculative
discussion of the strong field regime.
Time-invariant surfaces in evolution equations
Abstract
A time-invariant level surface is a (codimension one)
spatial surface on which, for every fixed time, the solution of an
evolution equation equals a constant (depending on the time). A
relevant and motivating case is that of the heat equation. The
occurrence of one or more time-invariant surfaces forces the solution
to have a certain degree of symmetry. In my talk, I shall present a
set of results on this theme and sketch the main ideas involved, that
intertwine a wide variety of old and new analytical and geometrical
techniques.
Metric Geometry of Mapping Class and Relatively Hyperbolic Groups
Abstract
We prove that quasi-trees of spaces satisfying the axiomatisation given by Bestvina, Bromberg and Fujiwara are quasi-isometric to tree-graded spaces in the sense of Dru\c{t}u and Sapir. We then present a technique for obtaining `good' embeddings of such spaces into $\ell^p$ spaces, and show how results of Bestvina-Bromberg-Fujiwara and Mackay-Sisto allow us to better understand the metric geometry of such groups.
Generalized equations of stability
Abstract
In many models of Applied Probability, the distributional limits of recursively defined quantities satisfy distributional identities that are reminiscent of equations of stability. Therefore, there is an interest in generalized concepts of equations of stability.
One extension of this concept is that of random variables ``stable by random weighted mean'' (this notion is due to Liu).
A random variable $X$ taking values in $\mathbb{R}^d$ is called ``stable by random weighted mean'' if it satisfies a recursive distributional equation of the following type:
\begin{equation} \tag{1} \label{eq:1}
X ~\stackrel{\mathcal{D}}{=}~ C + \sum_{j \geq 1} T_j X_j.
\end{equation}
Here, ``$\stackrel{\mathcal{D}}{=}$'' denotes equality of the corresponding distributions, $(C,T_1,T_2,\ldots)$ is a given sequence of real-valued random variables,
and $X_1, X_2, \ldots$ denotes a sequence of i.i.d.\;copies of the random variable $X$ that are independent of $(C,T_1,T_2,\ldots)$.
The distributions $P$ on $\mathbb{R}^d$ such that \eqref{eq:1} holds when $X$ has distribution $P$ are called fixed points of the smoothing transform
(associated with $(C,T_1,T_2,\ldots)$).
A particularly prominent instance of \eqref{eq:1} is the {\texttt Quicksort} equation, where $T_1 = 1-T_2 = U \sim \mathrm{Unif}(0,1)$, $T_j = 0$ for all $j \geq 3$ and $C = g(U)$ for some function $g$.
In this talk, I start with the {\texttt Quicksort} algorithm to motivate the study of \eqref{eq:1}.
Then, I consider the problem of characterizing the set of all solutions to \eqref{eq:1}
in a very general context.
Special emphasis is put on \emph{endogenous} solutions to \eqref{eq:1} since they play an important role in the given setting.
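The Quicksort instance of \eqref{eq:1} can be simulated in a few lines (a sketch; the toll function $g$ below is the standard one from the Quicksort analysis, assumed here): iterating the map $X \mapsto g(U) + U X + (1-U) X'$ on an empirical sample drives its law towards the fixed point of the smoothing transform, whose mean is zero.

```python
import math
import random

random.seed(1)

def g(u):
    # Standard toll term C = g(U) for the Quicksort equation; E[g(U)] = 0,
    # consistent with the centred Quicksort limit law.
    return 2 * u * math.log(u) + 2 * (1 - u) * math.log(1 - u) + 1

# Iterate X <- g(U) + U*X + (1-U)*X' on an empirical sample.
n = 20_000
xs = [0.0] * n
for _ in range(30):
    ys = random.sample(xs, n)          # shuffled copy, roughly independent
    new = []
    for x, y in zip(xs, ys):
        u = min(max(random.random(), 1e-12), 1 - 1e-12)  # avoid log(0)
        new.append(g(u) + u * x + (1 - u) * y)
    xs = new

mean = sum(xs) / n
print(mean)                            # close to 0, the mean of the limit law
```

This population-dynamics style iteration is a common way to approximate solutions of recursive distributional equations numerically; it does not, of course, distinguish endogenous from non-endogenous solutions.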
Ito's formula via rough paths.
Abstract
Non-geometric rough paths arise
when one encounters stochastic integrals for which the classical
integration by parts formula does not hold. We will introduce two notions of
non-geometric rough paths - one old (branched rough paths) and one new (quasi-geometric
rough paths). The former (due to Gubinelli) assumes one knows nothing
about products of integrals, instead those products must be postulated as new
components of the rough path. The latter assumes one knows a bit about
products, namely that they satisfy a natural generalisation of the
"Ito" integration by parts formula. We will show why they are both
reasonable frameworks for a large class of integrals. Moreover, we will show
that Ito's formula can be derived in either framework and that this derivation
is completely algebraic. Finally, we will show that both types of non-geometric
rough path can be re-written as geometric rough paths living above an extended
version of the original path. This means that every non-geometric rough
differential equation can be re-written as a geometric rough differential
equation, hence generalising the Ito-Stratonovich correction formula.
09:20
Deformation Week - Day 4
Abstract
A workshop on different aspects of deformation theory in various fields
The exponentially convergent trapezoid rule
Abstract
It is well known that the trapezoid rule converges geometrically when applied to analytic functions on periodic intervals or the real line. The mathematics and history of this phenomenon are reviewed and it is shown that far from being a curiosity, it is linked with powerful algorithms all across scientific computing, including double exponential and Gauss quadrature, computation of inverse Laplace transforms, special functions, computational complex analysis, the computation of functions of matrices and operators, rational approximation, and the solution of partial differential equations.
This talk represents joint work with Andre Weideman of the University of Stellenbosch.
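The geometric convergence described above is easy to observe numerically (a minimal sketch; the test function is chosen for illustration, with reference value $2\pi I_0(1)$ computed from the Bessel series):

```python
import math

def trapezoid_periodic(f, n):
    """Trapezoid rule for a 2*pi-periodic integrand: on a periodic
    interval it reduces to equal weights at n equispaced nodes."""
    h = 2 * math.pi / n
    return h * sum(f(j * h) for j in range(n))

# f is analytic and 2*pi-periodic, so the error decays geometrically in n.
f = lambda t: math.exp(math.cos(t))
# Reference value: 2*pi*I_0(1), from the series I_0(1) = sum (1/4)^k / (k!)^2.
i0 = sum(0.25 ** k / math.factorial(k) ** 2 for k in range(20))
exact = 2 * math.pi * i0

for n in (4, 8, 16):
    print(n, abs(trapezoid_periodic(f, n) - exact))
# The error drops by several orders of magnitude each time n doubles,
# reaching machine precision already around n = 16.
```

The same rapid convergence underlies the applications listed in the abstract, e.g. contour-integral evaluation of inverse Laplace transforms and functions of matrices.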
10:00
Deformation Week - Day 3
Abstract
A workshop on different aspects of deformation theory in various fields
10:00
Deformation Week - Day 2
Abstract
A workshop on different aspects of deformation theory in various fields
11:00
Deformation Week - Day 1
Abstract
A workshop on different aspects of deformation theory in various fields
OCCAM Group Meeting
Abstract
- Jen Pestana - Fast multipole method preconditioners for discretizations of elliptic PDEs
- Derek Moulton - A tangled tale: hunt for the contactless trefoil
- Thomas Lessines - Morphoelastic rods - growing rings, bilayers and bundles: foldable tents, shooting plants, slap bracelets & fibre reinforced tubes
Spiral phyllotaxis, pushed pattern fronts and optimal packing
Abstract
To follow
Exact solutions to the total generalised variation minimisation problem
Abstract
********** PLEASE NOTE THE SPECIAL TIME **********
Total generalised variation (TGV) was introduced by Bredies et al. as a high quality regulariser for variational problems arising in mathematical image processing like denoising and deblurring. The main advantage over the classical total variation regularisation is the elimination of the undesirable staircasing effect. In this talk we will give a short introduction to TGV and provide some properties of the exact solutions to the L^{2}-TGV model in the one dimensional case.
12:00
Random FBSDEs: Burgers SPDEs, Rational Expectations / Consol Rate Models, Control for Large Investors, and Stochastic Viscosity Solutions.
Abstract
Burgers' equation is a quasilinear partial differential equation (PDE), proposed in the 1930s to model the evolution of turbulent fluid motion, which can be linearized to the heat equation via the celebrated Cole-Hopf transformation. In the first part of the talk, we study in detail general versions of the stochastic Burgers equation with random coefficients, in both the forward and the backward sense. Concerning the former, the Cole-Hopf transformation still applies and we reduce a forward stochastic Burgers equation to a forward stochastic heat equation that can be treated in a "pathwise" manner. In the case of deterministic coefficients, we obtain a probabilistic representation of the Cole-Hopf transformation by associating the backward Burgers equation with a system of forward-backward stochastic differential equations (FBSDEs). Returning to random coefficients, we exploit this representation in order to establish a stochastic version of the Cole-Hopf transformation. This generalized transformation allows us to find solutions to a backward stochastic Burgers equation through a backward stochastic heat equation, subject to additional constraints that reflect the presence of randomness in the coefficients. In both settings, forward and backward, stochastic Feynman-Kac formulae are also derived for the solutions of the respective stochastic Burgers equations. Finally, an application that illustrates the obtained results is presented for a pricing/hedging problem arising in mathematical finance.
In the second part of the talk, we study a class of stochastic saddlepoint systems, represented by fully coupled FBSDEs with infinite horizon, that gives rise to a continuous time rational expectations / consol rate model with random coefficients. Under standard Lipschitz and monotonicity conditions, and by means of the contraction mapping principle, we establish existence, uniqueness and dependence on a parameter of adapted solutions. Making further the connection with quasilinear backward stochastic PDEs (BSPDEs), we are led to the notion of stochastic viscosity solutions. A stochastic maximum principle for the optimal control problem of a large investor is also provided as an application to this framework.
This is joint work with N. Frangos, X.-I. Kartala and A. N. Yannacopoulos.
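For reference, the deterministic Cole-Hopf transformation mentioned above linearizes Burgers' equation with viscosity $\nu$ to the heat equation:

```latex
u_t + u\,u_x = \nu\,u_{xx}, \qquad
u \;=\; -2\nu\,\frac{\varphi_x}{\varphi}
\quad\Longrightarrow\quad
\varphi_t = \nu\,\varphi_{xx}.
```

The stochastic versions discussed in the talk generalize exactly this substitution to equations with random coefficients.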
Pathwise approximation of SDE solutions using coupling
Abstract
The standard Taylor series approach to the higher-order approximation of vector SDEs requires simulation of iterated stochastic integrals, which is difficult. The talk will describe an approach using methods from optimal transport theory which avoid this difficulty in the case of non-degenerate diffusions, for which one can attain arbitrarily high order pathwise approximation in the Vaserstein 2-metric, using easily generated random variables.
Dislocations
Abstract
Please note the unusual day of the week for this workshop (a Monday) and also the unusual location.
16:00
A stochastic control approach to robust duality in finance
Abstract
A celebrated financial application of convex duality theory gives an explicit relation between the following two quantities:
(i) The optimal terminal wealth $X^*(T) := X^{\varphi^*}(T)$ of the classical problem to maximise the expected $U$-utility of the terminal wealth $X^{\varphi}(T)$ generated by admissible portfolios $\varphi(t)$, $0 \leq t \leq T$, in a market with the risky asset price process modelled as a semimartingale;
(ii) The optimal scenario $dQ^*/dP$ of the dual problem to minimise the expected $V$-value of $dQ/dP$ over a family of equivalent local martingale measures $Q$. Here $V$ is the convex dual function of the concave function $U$.
In this talk we consider markets modeled by Itô-Lévy processes, and we present
in a first part a new proof of the above result in this setting, based on the maximum
principle in stochastic control theory. An advantage with our approach is that it also
gives an explicit relation between the optimal portfolio φ* and the optimal scenario
Q*, in terms of backward stochastic differential equations. In a second part we present
robust (model uncertainty) versions of the optimization problems in (i) and (ii), and
we prove a relation between them. We illustrate the results with explicit examples.
The presentation is based on recent joint work with Bernt Øksendal, University of
Oslo, Norway.