Freeness of critical cohomological Hall algebras, Kac polynomials and character varieties II
Abstract
I will discuss some very well-studied cohomology groups that turn out to be captured by the machinery of critical CoHAs, for example the compactly supported cohomology of singular quiver varieties and untwisted character varieties. I will explain the usefulness of this extra CoHA structure on these groups, starting with a new proof of the Kac conjecture, and discuss a conjectural form for the CoHA associated to untwisted character varieties that provides a new way to think about the conjectures of Hausel and Rodriguez-Villegas. Finally I will discuss an approach to purity for the compactly supported cohomology of quiver varieties and a related approach to a conjecture of Schiffmann and Vasserot, analogous to Kirwan surjectivity for the stack of commuting matrices.
15:30
"Bayesian networks, information and entropy"
Abstract
Nature and the world of human technology are full of
networks. People like to draw diagrams of networks: flow charts,
electrical circuit diagrams, signal flow diagrams, Bayesian networks,
Feynman diagrams and the like. Mathematically-minded people know that
in principle these diagrams fit into a common framework: category
theory. But we are still far from a unified theory of networks.
Freeness of critical cohomological Hall algebras, Kac polynomials and character varieties I
Abstract
The cohomological Hall algebra of vanishing cycles associated to a quiver with potential is a categorification of the refined DT invariants associated to the same data, and also a very powerful tool for calculating them and proving positivity and integrality conjectures. This becomes especially true if the quiver with potential is "self dual" in a sense to be defined in the talk. After defining and giving a general introduction to the relevant background, I will discuss the main theorem regarding such CoHAs: they are free supercommutative.
Particle Methods for Inference in Non-linear Non-Gaussian State-Space Models
Abstract
State-space models are a very popular class of time series models which have found thousands of applications in engineering, robotics, tracking, vision, econometrics etc. Except for linear and Gaussian models where the Kalman filter can be used, inference in non-linear non-Gaussian models is analytically intractable. Particle methods are a class of flexible and easily parallelizable simulation-based algorithms which provide consistent approximations to these inference problems. The aim of this talk is to introduce particle methods and to present the most recent developments in this area.
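The workhorse algorithm behind particle methods is the bootstrap particle filter: propagate particles through the state dynamics, weight them by the observation likelihood, then resample. The sketch below runs it on a standard nonlinear, non-Gaussian benchmark model; the model, noise levels and particle count are all illustrative choices, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 2000                     # time steps, number of particles

def f(x, t):
    # Transition of a classic nonlinear benchmark state-space model.
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t)

# Simulate ground truth states and observations y_t = x_t^2/20 + noise.
x_true = np.zeros(T)
y = np.zeros(T)
x = 0.0
for t in range(T):
    x = f(x, t) + rng.normal(0, np.sqrt(10))
    x_true[t] = x
    y[t] = x**2 / 20 + rng.normal(0, 1)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = rng.normal(0, 2, N)
est = np.zeros(T)
for t in range(T):
    particles = f(particles, t) + rng.normal(0, np.sqrt(10), N)
    logw = -0.5 * (y[t] - particles**2 / 20) ** 2   # Gaussian obs, unit variance
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.sum(w * particles)                  # filtering mean estimate
    particles = rng.choice(particles, N, p=w)       # multinomial resampling
```

The weighted particle cloud at each step is a consistent (as N grows) approximation of the filtering distribution, which is exactly what is analytically intractable here because the observation is a nonlinear function of the state.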
Understanding the Dynamics of Embryonic Stem Cell Differentiation: A Combined Experimental and Modeling Approach
Abstract
Pluripotency is a key feature of embryonic stem cells (ESCs), and is defined as the ability to give rise to all cell lineages in the adult body. Currently, there is a good understanding of the signals required to maintain ESCs in the pluripotent state and the transcription factors that comprise their gene regulatory network. However, little is known about how ESCs exit the pluripotent state and begin the process of differentiation. We aim to understand the molecular events associated with this process via an experiment-model cycle.
12:00
Intrinsic and extrinsic regulation of epithelial organ growth
Abstract
The revolution in molecular biology within the last few decades has led to the identification of multiple, diverse inputs into the mechanisms governing the measurement and regulation of organ size. In general, organ size is controlled by both intrinsic, genetic mechanisms as well as extrinsic, physiological factors. Examples of the former include the spatiotemporal regulation of organ size by morphogen gradients, and instances of the latter include the regulation of organ size by endocrine hormones, oxygen availability and nutritional status. However, integrated model platforms, either of in vitro experimental systems amenable to high-resolution imaging or in silico computational models that incorporate both extrinsic and intrinsic mechanisms are lacking. Here, I will discuss collaborative efforts to bridge the gap between traditional assays employed in developmental biology and computational models through quantitative approaches. In particular, we have developed quantitative image analysis techniques for confocal microscopy data to inform computational models – a critical task in efforts to better understand conserved mechanisms of crosstalk between growth regulatory pathways. Currently, these quantitative approaches are being applied to develop integrated models of epithelial growth in the embryonic Drosophila epidermis and the adolescent wing imaginal disc, due to the wealth of previous genetic knowledge for the system. An integrated model of intrinsic and extrinsic growth control is expected to inspire new approaches in tissue engineering and regenerative medicine.
12:00
11:00
Turbulent transport at rough surfaces with geophysical applications
Point defects in liquid crystals.
Abstract
We study liquid crystal point defects in 2D domains. We employ Landau-de
Gennes theory and provide a simplified description of global minimizers
of the Landau-de Gennes energy under homeotropic boundary conditions. We
also provide explicit solutions describing defects of various strengths
under Lyuksutov's constraint.
Modeling flocks and prices: jumping particles with an attractive interaction (Joint work with Miklos Racz and Balint Toth)
15:30
G-equivariant open-closed TCFTs
Abstract
Open 2d TCFTs correspond to cyclic A-infinity algebras, and Costello showed
that any open theory has a universal extension to an open-closed theory in
which the closed state space (the value of the functor on a circle) is the
Hochschild homology of the open algebra. We will give a G-equivariant
generalization of this theorem, meaning that the surfaces are now equipped
with principal G-bundles. Equivariant Hochschild homology and a new ribbon
graph decomposition of the moduli space of surfaces with G-bundles are the
principal ingredients. This is joint work with Ramses Fernandez-Valencia.
14:15
Finite-state approximation of polynomial preserving processes
Abstract
Polynomial preserving processes are defined as time-homogeneous Markov jump-diffusions whose generator leaves the space of polynomials of any fixed degree invariant. The moments of their transition distributions are polynomials in the initial state. The coefficients defining this relationship are given as solutions of a system of nested linear ordinary differential equations. Polynomial processes include affine processes, whose transition functions admit an exponential-affine characteristic function. These processes are attractive for financial modeling because of their tractability and robustness. In this work we study approximations of polynomial preserving processes with finite-state Markov processes via a moment-matching methodology. This approximation aims to exploit the defining property of polynomial preserving processes in order to reduce the complexity of the implementation of such models. More precisely, we study sufficient conditions for the existence of finite-state Markov processes that match the moments of a given polynomial preserving process. We first construct discrete time finite-state Markov processes that match moments of arbitrary order. This discrete time construction relies on the existence of long-run moments for the polynomial process and cubature methods over these moments. In the second part we give a characterization theorem for the existence of a continuous time finite-state Markov process that matches the moments of a given polynomial preserving process. This theorem illustrates the complexity of the problem in continuous time by combining algebraic and geometric considerations. We show the impossibility of constructing in general such a process for polynomial preserving diffusions, for high order moments and for sufficiently many points in the state space. We provide however a positive result by showing that the construction is possible when one considers finite-state Markov chains on lifted versions of the state space.
This is joint work with Damir Filipovic and Martin Larsson.
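The defining property — the generator maps polynomials of degree at most N to polynomials of degree at most N — makes moments computable by a single matrix exponential, since the moment vector solves a linear ODE system. A minimal sketch for an Ornstein-Uhlenbeck process (a simple polynomial preserving, indeed affine, diffusion; all parameter values below are illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Ornstein-Uhlenbeck process dX = kappa*(theta - X) dt + sigma dW.
kappa, theta, sigma = 1.5, 0.4, 0.3
N = 4                                  # track moments up to degree N

# Generator matrix G on the monomial basis (1, x, ..., x^N):
#   L x^n = kappa*theta*n x^{n-1} - kappa*n x^n + (sigma^2/2) n(n-1) x^{n-2},
# so degree never increases: the polynomial preserving property.
G = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    G[n, n] = -kappa * n
    if n >= 1:
        G[n, n - 1] = kappa * theta * n
    if n >= 2:
        G[n, n - 2] = 0.5 * sigma**2 * n * (n - 1)

# Moment vector m(t) = (E[X_t^0], ..., E[X_t^N]) solves m' = G m,
# so m(t) = expm(t*G) m(0), a polynomial in the initial state x0.
x0, t = 1.0, 2.0
m0 = x0 ** np.arange(N + 1)
moments = expm(t * G) @ m0
```

For the OU process the first moment can be checked against the closed form E[X_t] = theta + (x0 - theta) e^{-kappa t}; the same matrix-exponential recipe is what a moment-matching discretisation would target.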
Hexagon functions and six-particle amplitudes in N=4 super Yang-Mills
Abstract
Icosahedral clusters: the stem cell of the solid state?
Abstract
Recent experimental work has determined the atomic structure of a quasicrystalline Cd-Yb alloy. It highlights the elegant role of polyhedra with icosahedral symmetry. Other work suggests that while chunks of periodic crystals and disordered glass predominate in the solid state, there are many hints of icosahedral clusters. This talk is based on a recent Mathematical Intelligencer article on quasicrystals with Marjorie Senechal.
The seminar will be followed by a drinks reception and forms part of a longer PDE and CoV related Workshop.
To register for the seminar and drinks reception go to http://doodle.com/acw6bbsp9dt5bcwb
SPECIAL EVENT: Climate Symposium (Oxford Climate Research Network)
Mathematics and energy policy. Markets or central control power
Abstract
This talk is intended to explain the link between some relatively straightforward mathematical concepts, in terms of linear programming and optimisation over a convex set of feasible solutions, and questions for the organisation of the power sector and hence for energy policy.
Both markets and centralised control systems should in theory optimise the use of the current stock of generation assets and ensure electricity is generated at least cost, by ranking plant in ascending order of short run marginal cost (SRMC), sometimes known as merit order operation. Wholesale markets, in principle at least, replicate exactly what would happen in a perfect but centrally calculated optimal dispatch of plant. This happens because the SRMC of each individual plant is “discovered” through the market and results in a price equal to “system marginal cost” (SMC), which is just high enough to incentivise the most costly plant required to meet the actual load.
More generally, defining the conditions for this to work - “decentralised prices replicate perfect central planning” - is of great interest to economists. Quite apart from any ideological implications, it also helps to define possible sources of market failure. There is an extensive literature on this, but we can explain why it has appeared to work so well, and so obviously, for merit order operation, and then consider whether the conditions underpinning its success will continue to apply in the future.
The big simplifying assumptions, regarded as an adequate approximation to reality, behind most current power markets are the following:
• Each optimisation period can be considered independent of all past and future periods.
• The only relevant costs are well defined short term operating costs, essentially fuel.
• (Fossil) plant is (infinitely) flexible, and costs vary continuously and linearly with output.
• Non-fossil plant has hitherto been intra-marginal, and hence has little impact.
The merit order is essentially very simple linear programming, with the dual value of the main constraint equating to the “correct” market price. Unfortunately the simplifying assumptions cease to apply as we move towards types of plant (and consumer demand) with much more complex constraints and cost structures. These include major inflexibilities, stochastic elements, and storage, and many non-linearities. Possible consequences include:
• Single period optimisation, as a concept underlying the market or central control, will need to be abandoned. Multi period optimisation will be required.
• Algorithms much more complicated than simple merit order will be needed, embracing non-linearities and complex constraints.
• Mathematically there is no longer a “dual” price, and the conditions for decentralisation are broken. There is no obvious means of calculating what the price “ought” to be, or even knowing that a meaningful price exists.
The remaining questions are clear. The theory suggests that current market structures may be broken, but how do we assess or show when and how much this might matter?
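The single-period merit-order dispatch described above can be sketched as a small linear program, in which the dual value of the demand constraint recovers the system marginal cost. All plant data below are hypothetical, purely to illustrate the mechanism:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical plants: short-run marginal costs (£/MWh) and capacities (MW).
srmc = np.array([10.0, 25.0, 40.0, 70.0])      # e.g. nuclear, coal, CCGT, OCGT
cap = np.array([1000.0, 800.0, 600.0, 400.0])
demand = 2000.0                                 # load in this period (MW)

res = linprog(
    c=srmc,                                     # minimise total short-run cost
    A_eq=[np.ones(4)], b_eq=[demand],           # generation must meet demand
    bounds=list(zip(np.zeros(4), cap)),         # capacity limits per plant
    method="highs",
)

dispatch = res.x                                # cheapest plant runs first
# Dual of the demand constraint: its magnitude is the system marginal cost,
# set here by the marginal (third) plant at 40 £/MWh.
smc = abs(res.eqlin.marginals[0])
```

The LP solution reproduces merit-order operation exactly, and the shadow price of the demand constraint is the "discovered" market price; it is precisely this dual that ceases to be well defined once the inflexibilities and non-linearities listed above enter the problem.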
Basic examples in deformation quantisation
Abstract
Following last week's talk on Beilinson-Bernstein localisation theorem, we give basic notions in deformation quantisation explaining how this theorem can be interpreted as a quantised version of the Springer resolution. Having attended last week's talk will be useful but not necessary.
Isogeny classes of abelian varieties and weakly special subvarieties
Abstract
For Logic Seminar: Note change of time and place.
The effect of boundary conditions on linear and nonlinear waves
Abstract
In this talk, I will discuss the effect of boundary conditions on the solvability of PDEs that have formally an integrable structure, in the
sense of possessing a Lax pair. Many of these PDEs arise in wave propagation phenomena, and boundary value problems for these models are very important in applications. I will discuss the extent to which general approaches that are successful for solving the initial value problem extend to the solution of boundary value problems.
I will survey the solution of specific examples of integrable PDE, linear and nonlinear. The linear theory is joint work with David Smith. For the nonlinear case, I will discuss boundary conditions that yield boundary value problems that are fully integrable, in particular recent joint results with Thanasis Fokas and Jonatan Lenells on the solution of boundary value problems for the elliptic sine-Gordon equation.
Algorithmic Trading with Learning
Abstract
We propose a model where an algorithmic trader takes a view on the distribution of prices at a future date and then decides how to trade in the direction of her predictions using the optimal mix of market and limit orders. As time goes by, the trader learns from changes in prices and updates her predictions to tweak her strategy. Compared to a trader that cannot learn from market dynamics or form a view of the market, the algorithmic trader's profits are higher and more certain. Even though the trader executes a strategy based on a directional view, the sources of profits are both from making the spread as well as capital appreciation of inventories. Higher volatility of prices considerably impairs the trader's ability to learn from price innovations, but this adverse effect can be circumvented by learning from a collection of assets that co-move.
Kullback-Leibler Approximation Of Probability Measures
Abstract
Many problems in the physical sciences
require the determination of an unknown
function from a finite set of indirect measurements.
Examples include oceanography, oil recovery,
water resource management and weather forecasting.
The Bayesian approach to these problems
is natural for many reasons, including the
under-determined and ill-posed nature of the inversion,
the noise in the data and the uncertainty in
the differential equation models used to describe
complex multiscale physics. The object of interest
in the Bayesian approach is the posterior
probability distribution on the unknown field [1].
\\
\\
However the Bayesian approach presents a
computationally formidable task as it
results in the need to probe a probability
measure on separable Banach space. Markov
chain Monte Carlo (MCMC) methods may be
used to achieve this [2], but can be
prohibitively expensive. In this talk I
will discuss approximation of probability measures
by a Gaussian measure, looking for the closest
approximation with respect to the Kullback-Leibler
divergence. This methodology is widely
used in machine-learning [3]. In the context of
target measures on separable Banach space
which themselves have density with respect to
a Gaussian, I will show how to make sense of the
resulting problem in the calculus of variations [4].
Furthermore I will show how the approximate
Gaussians can be used to speed-up MCMC
sampling of the posterior distribution [5].
\\
\\
[1] A.M. Stuart. "Inverse problems: a Bayesian
perspective." Acta Numerica 19(2010) and
http://arxiv.org/abs/1302.6989
\\
[2] S.L.Cotter, G.O.Roberts, A.M. Stuart and D. White,
"MCMC methods for functions: modifying old algorithms
to make them faster". Statistical Science 28(2013).
http://arxiv.org/abs/1202.0709
\\
[3] C.M. Bishop, "Pattern recognition and machine learning".
Springer, 2006.
\\
[4] F.J. Pinski, G. Simpson, A.M. Stuart and H. Weber, "Kullback-Leibler
Approximations for measures on infinite dimensional spaces."
http://arxiv.org/abs/1310.7845
\\
[5] F.J. Pinski, G. Simpson, A.M. Stuart and H. Weber, "Algorithms
for Kullback-Leibler approximation of probability measures in
infinite dimensions." In preparation.
11:00
'Defining p-henselian valuations'
Abstract
(Joint work with Jochen Koenigsmann) Admitting a p-henselian
valuation is a weaker assumption on a field than admitting a henselian
valuation. Unlike henselianity, p-henselianity is an elementary property
in the language of rings. We are interested in the question of when a field
admits a non-trivial 0-definable p-henselian valuation (in the language
of rings). Such valuations often give rise to 0-definable henselian
valuations. In this talk, we will give a classification of elementary
classes of fields in which the canonical p-henselian valuation is
uniformly 0-definable. This leads to the new phenomenon of p-adically
(pre-)Euclidean fields.
A survey of derivator K-theory
Abstract
The theory of derivators is an approach to homotopical algebra
that focuses on the existence of homotopy Kan extensions. Homotopy
theories (e.g. model categories) typically give rise to derivators by
considering the homotopy categories of all diagram categories
simultaneously. A general problem is to understand how faithfully the
derivator actually represents the homotopy theory. In this talk, I will
discuss this problem in connection with algebraic K-theory, and give a
survey of the results around the problem of recovering the K-theory of a
good Waldhausen category from the structure of the associated derivator.
10:30
Modularity and Galois Representations
Abstract
The modularity theorem, which says that all (semistable) elliptic curves are modular, was one of the two crucial parts in the proof of Fermat's last theorem. In this talk I will explain what elliptic curves being 'modular' means and how an alternative definition can be given in terms of Galois representations. I will then state some of the conjectures of the Langlands program which in some sense generalise the modularity theorem.
Maximal subgroups of exceptional groups of Lie type and morphisms of algebraic groups
Abstract
The maximal subgroups of the exceptional groups of Lie type
have been studied for many years, and have many applications, for
example in permutation group theory and in generation of finite
groups. In this talk I will survey what is currently known about the
maximal subgroups of exceptional groups, and our recent work on this
topic. We explore the connection with extending morphisms from finite
groups to algebraic groups.
16:00
“Why there are no 3-headed monsters, resolving some problems with brain tumours, divorce prediction and how to save marriages”
Abstract
“Understanding the generation and control of pattern and form is still a challenging and major problem in the biomedical sciences. I shall describe three very different problems. First I shall briefly describe the development and application of the mechanical theory of morphogenesis and the discovery of morphogenetic laws in limb development, and how it was used to move evolution backwards. I shall then describe a surprisingly informative model, now used clinically, for quantifying the growth of brain tumours, enhancing imaging techniques and quantifying individual patient treatment protocols prior to their use. Among other things, it is used to estimate patient life expectancy and explain why some patients live longer than others with the same treatment protocols. Finally I shall describe an example from the social sciences which quantifies marital interaction and is used to predict marital stability and divorce. In a large study of newly married couples it achieved 94% accuracy. I shall show how it has helped design a new scientific marital therapy which is currently used in clinical practice.”
Factorization homology is a fully extended TFT
Abstract
We will start with a recollection on factorization algebras and factorization homology. We will then explain what fully extended TFTs are, after Jacob Lurie. And finally we will see how factorization homology can be turned into a fully extended TFT. This is a joint work with my student Claudia Scheimbauer.
15:30
"Stochastic Petri nets, chemical reaction networks and Feynman diagrams"
Abstract
Nature and the world of human technology are full of
networks. People like to draw diagrams of networks: flow charts,
electrical circuit diagrams, signal flow diagrams, Bayesian networks,
Feynman diagrams and the like. Mathematically-minded people know that
in principle these diagrams fit into a common framework: category
theory. But we are still far from a unified theory of networks.
14:15
Lagrangian structures on derived mapping stacks
Abstract
We will explain how the result of Pantev-Toën-Vaquié-Vezzosi, about shifted symplectic structures on mapping stacks, can be extended to relative mapping stacks and Lagrangian structures. We will also provide applications in ordinary symplectic geometry and topological field theories.
Towards realistic performance for iterative methods on shared memory machines
Abstract
This talk introduces a random linear model to investigate the memory bandwidth barrier effect on current shared memory computers. Based on the fact that floating-point operations can be hidden by implicit compiling techniques, the runtime for memory intensive applications can be modelled by memory reference time plus a random term. The random term, due to cache conflicts, data reuse and other environmental factors, is proportional to memory reference volume. Statistical techniques are used to quantify the random term and the runtime performance parameters. Numerical results based on thousands of representative matrices from various applications are presented, compared, analysed and validated to confirm the proposed model. The model shows that a realistic and fair metric for performance of iterative methods and other memory intensive applications should consider the memory bandwidth capability and memory efficiency.
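The proposed model has the shape runtime ≈ V/B + δ·V, where V is the memory reference volume, B the effective bandwidth, and δ a random per-byte term. On synthetic data this can be fitted by ordinary least squares; the bandwidth and noise figures below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed model: runtime = V/B + delta*V, with delta a random term
# (cache conflicts, data reuse, environment) proportional to volume V.
B = 20e9                                    # 20 GB/s effective bandwidth (assumed)
V = rng.uniform(1e8, 1e10, 200)             # memory reference volumes (bytes)
runtime = V / B + V * rng.uniform(0, 2e-11, 200)   # synthetic observed runtimes

# Least-squares estimate of the mean per-byte cost, 1/B + E[delta]:
slope = np.linalg.lstsq(V[:, None], runtime, rcond=None)[0][0]
bandwidth_est = 1.0 / slope                 # effective bandwidth seen by the app
```

The gap between the nominal bandwidth B and the estimated effective bandwidth is exactly the random-term penalty the talk proposes to quantify.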
Euler-Maclaurin and Newton-Gregory Interpolants
Abstract
The Euler-Maclaurin formula is a quadrature rule based on corrections to the trapezoid rule using odd derivatives at the end-points of the function being integrated. It appears that no one has ever thought about a related function approximation that will give us the Euler-Maclaurin quadrature rule, i.e., just like we can derive Newton-Cotes quadrature by integrating polynomial approximations of the function, we investigate what function approximation will integrate exactly to give the corresponding Euler-Maclaurin quadrature. It turns out that the right function approximation is a combination of a trigonometric interpolant and a polynomial.
To make the method more practical, we also look at the closely related Newton-Gregory quadrature, which is very similar to the Euler-Maclaurin formula but instead of derivatives, uses finite differences. Following almost the same procedure, we find another mixed function approximation, derivative free, whose exact integration yields the Newton-Gregory quadrature rule.
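The quadrature side of the story is easy to demonstrate: adding the first two Euler-Maclaurin end-point corrections to the trapezoid rule boosts its accuracy from O(h^2) to O(h^6). A minimal sketch, on the test integral ∫₀¹ eˣ dx = e − 1 (my own choice of example):

```python
import numpy as np

def euler_maclaurin(f, df, d3f, a, b, n):
    """Trapezoid rule plus the first two Euler-Maclaurin end-point corrections."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    trap = h * (np.sum(f(x)) - 0.5 * (f(a) + f(b)))
    # Bernoulli-number corrections (B_2 = 1/6, B_4 = -1/30):
    #   - h^2/12 (f'(b) - f'(a)) + h^4/720 (f'''(b) - f'''(a))
    corr = -h**2 / 12 * (df(b) - df(a)) + h**4 / 720 * (d3f(b) - d3f(a))
    return trap + corr

# f = f' = f''' = exp, so the derivative arguments are all np.exp here.
approx = euler_maclaurin(np.exp, np.exp, np.exp, 0.0, 1.0, 16)
exact = np.e - 1.0
```

With only 16 subintervals the corrected rule is accurate to roughly machine-level precision on this smooth integrand, whereas the bare trapezoid rule is off in the fourth decimal; the Newton-Gregory variant replaces the derivatives in `corr` with finite differences of the sampled values.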
Elliptic and parabolic systems with general growth
Abstract
Motivated by integrals of the Calculus of Variations considered in
Nonlinear Elasticity, we study mathematical models which do not fit in
the classical existence and regularity theory for elliptic and
parabolic Partial Differential Equations. We consider general
nonlinearities with non-standard p,q-growth, both in the elliptic and
in the parabolic contexts. In particular, we introduce the notion of
"variational solution/parabolic minimizer" for a class of
Cauchy-Dirichlet problems related to systems of parabolic equations.
The elliptic curve discrete logarithm problem
Abstract
The elliptic curve discrete logarithm problem (ECDLP) is commonly believed to be much harder than its finite field counterpart, resulting in smaller cryptography key sizes. In this talk, we review recent results suggesting that ECDLP is not as hard as previously expected in the case of composite fields.
We first recall how Semaev's summation polynomials can be used to build index calculus algorithms for elliptic curves over composite fields. These ideas due to Pierrick Gaudry and Claus Diem reduce ECDLP over composite fields to the resolution of polynomial systems of equations over the base field.
We then argue that the particular structure of these systems makes them much easier to solve than generic systems of equations. In fact, the systems involved here can be seen as natural extensions of the well-known HFE systems, and many theoretical arguments and experimental results from HFE literature can be generalized to these systems as well.
Finally, we consider the application of this heuristic analysis to a particular ECDLP index calculus algorithm due to Claus Diem. As a main consequence, we provide evidence that ECDLP can be solved in heuristic subexponential time over composite fields. We conclude the talk with concrete complexity estimates for binary curves and perspectives for future work.
The talk is based on joint work with Jean-Charles Faugère, Timothy Hodges, Yung-Ju Huang, Ludovic Perret, Jean-Jacques Quisquater, Guénaël Renault, Jacob Schlatter, Naoyuki Shinohara and Tsuyoshi Takagi.
Cobordism categories, bivariant A-theory and the A-theory characteristic
Abstract
The A-theory characteristic of a fibration is a
map to Waldhausen's algebraic K-theory of spaces which
can be regarded as a parametrized Euler characteristic of
the fibers. Regarding the classifying space of the cobordism
category as a moduli space of smooth manifolds, stable under
extensions by cobordisms, it is natural to ask whether the
A-theory characteristic can be extended to the cobordism
category. A candidate for such an extension was proposed by Bökstedt
and Madsen who defined an infinite loop map from the d-dimensional
cobordism category to the algebraic K-theory of BO(d). I will
discuss the connections between this map, the A-theory
characteristic and the smooth Riemann-Roch theorem of Dwyer,
Weiss and Williams.
14:15
The geometry of constant mean curvature disks embedded in R^3.
Abstract
In this talk I will discuss results on the geometry of constant mean curvature (H \neq 0) disks embedded in R^3. Among other
things I will prove radius and curvature estimates for such disks. It then follows from the radius estimate that the only complete, simply connected surface embedded in R^3 with nonzero constant mean curvature is the round sphere. This is joint work with Bill Meeks.
14:00
Generalised metrisable spaces and the normal Moore space conjecture
Abstract
We will introduce a few classes of generalised metrisable
properties; that is, properties that hold of all metrisable spaces, that
can be used to generalise results, and that are in some sense 'close' to
metrisability. In particular, we will discuss Moore spaces and the
independence of the normal Moore space conjecture: is every normal
Moore space metrisable?
On black hole thermodynamics from super Yang-Mills
Abstract
Regularity and singularity of area-minimizing currents
Abstract
Plateau's problem, named after the Belgian physicist J. Plateau, is a classic in the calculus of variations and concerns minimizing the area among all surfaces spanning a given contour. Although Plateau's original concern was $2$-dimensional surfaces in $3$-dimensional space, generations of mathematicians have considered the problem in its generality. A successful existence theory, that of integral currents, was developed by De Giorgi in the case of hypersurfaces in the fifties and by Federer and Fleming in the general case in the sixties. When dealing with hypersurfaces, the minimizers found in this way are rather regular: the corresponding regularity theory was the achievement of several mathematicians in the sixties, seventies and eighties (De Giorgi, Fleming, Almgren, Simons, Bombieri, Giusti and Simon, among others).
In codimension higher than one, a phenomenon which is absent for hypersurfaces, namely that of branching, causes very serious problems: a famous theorem of Wirtinger and Federer shows that any holomorphic subvariety in $\mathbb C^n$ is indeed an area-minimizing current. A celebrated monograph of Almgren solved the issue at the beginning of the eighties, proving that the singular set of a general area-minimizing (integral) current has (real) codimension at least 2. However, his original (typewritten) manuscript was more than 1700 pages long. In a recent series of works with Emanuele Spadaro we have given a substantially shorter and simpler version of Almgren's theory, building upon large portions of his program but also bringing some new ideas from partial differential equations, metric analysis and metric geometry. In this talk I will try to give a feeling for the difficulties in the proof and how they can be overcome.
CALF: A period map for global derived stacks
Abstract
In the sixties Griffiths constructed a holomorphic map, known as the local period map, which relates the classification of smooth projective varieties to the associated Hodge structures. Fiorenza and Manetti have recently described it in terms of Schlessinger's deformation functors and, together with Martinengo, have started to look at it in the context of Derived Deformation Theory. In this talk we propose a rigorous way to lift such an extended version of Griffiths period map to a morphism of derived deformation functors and use this to construct a period morphism for global derived stacks.