The kinetics of ice formation
Abstract
In {\it Was sind und was sollen die Zahlen?} (1888), Dedekind proves the Recursion Theorem (Theorem 126), and applies it to establish the categoricity of his axioms for arithmetic (Theorem 132). It is essential to these results that mathematical induction is formulated using second-order quantification, and if the second-order quantifier ranges over all subsets of the first-order domain (full second-order quantification), the categoricity result shows that, to within isomorphism, only one structure satisfies these axioms. However, the proof of categoricity is correct for a wide class of non-full Henkin models of second-order quantification. In light of this fact, can the proof of second-order categoricity be taken to establish that the second-order axioms of arithmetic characterize a unique structure?
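For concreteness, the induction axiom at issue is the second-order statement

\[
\forall X\,\bigl(X(0)\,\wedge\,\forall n\,(X(n)\rightarrow X(S(n)))\;\rightarrow\;\forall n\,X(n)\bigr),
\]

and categoricity (Theorem 132) says that any two full models $(N,0,S)$ and $(N',0',S')$ of the axioms are linked by a unique isomorphism $f$ with $f(0)=0'$ and $f(S(n))=S'(f(n))$, which is precisely the map the Recursion Theorem (Theorem 126) provides.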
In this talk, I will describe how the eigenvalues of the Atkin operator on overconvergent modular forms might be related to the classical study of the Laplacian on certain manifolds. The goal is to phrase everything geometrically, so as to maximally engage the audience in discussion on possible approaches to study the spectral flow of this operator.
The ocean is populated by an intense geostrophic eddy field with a dominant energy-containing scale on the order of 100 km at midlatitudes. Ocean climate models are unlikely to routinely resolve geostrophic eddies for the foreseeable future, and thus the development and validation of improved parameterisations is a vital task. Moreover, developing and validating improved eddy parameterisations is an excellent strategy for testing and advancing our understanding of how geostrophic ocean eddies impact the large-scale circulation.
A new mathematical framework for parameterising ocean eddy fluxes is developed that is consistent with conservation of energy and momentum while retaining the symmetries of the original eddy fluxes. The framework involves rewriting the residual-mean eddy force, or equivalently the eddy potential vorticity flux, as the divergence of an eddy stress tensor. A norm of this tensor is bounded by the eddy energy, allowing the components of the stress tensor to be rewritten in terms of the eddy energy and non-dimensional parameters describing the mean "shape" of the eddies. If a prognostic equation is solved for the eddy energy, the remaining unknowns are non-dimensional and bounded in magnitude by unity. Moreover, these non-dimensional geometric parameters have strong connections with classical stability theory. For example, it is shown that the new framework preserves the functional form of the Eady growth rate for linear instability, as well as an analogue of Arnold's first stability theorem. Future work to develop a full parameterisation of ocean eddies will be discussed.
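As one concrete instance of the energy bound (written here, as a simplified illustration, for the horizontal Reynolds stresses only, not the full residual-mean stress tensor of the framework): defining

\[
M = \tfrac{1}{2}\bigl(\overline{v'v'} - \overline{u'u'}\bigr), \qquad
N = \overline{u'v'}, \qquad
K = \tfrac{1}{2}\bigl(\overline{u'u'} + \overline{v'v'}\bigr),
\]

the Cauchy--Schwarz inequality $\overline{u'v'}^{\,2} \le \overline{u'u'}\;\overline{v'v'}$ gives $M^2 + N^2 \le K^2$, so $M/K$ and $N/K$ are dimensionless parameters bounded in magnitude by unity, of the kind described above.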
Sparse matrix factorization involves a mix of regular and irregular computation, which is a particular challenge when trying to obtain high performance on the highly parallel general-purpose computing cores available on graphics processing units (GPUs). We present a sparse multifrontal QR factorization method that meets this challenge, and is up to ten times faster than a highly optimized method on a multicore CPU. Our method is unique compared with prior methods, since it factorizes many frontal matrices in parallel, and keeps all the data transmitted between frontal matrices on the GPU. A novel bucket scheduler algorithm extends the communication-avoiding QR factorization for dense matrices, by exploiting more parallelism and by exploiting the staircase form present in the frontal matrices of a sparse multifrontal method.
This is joint work with Nuri Yeralan and Sanjay Ranka.
Dislocations are line defects in crystals, and were first posited as the carriers of plastic flow in crystals in the 1934 papers of Orowan, Polanyi and Taylor. Their hypothesis has since been experimentally verified, but many details of their behaviour remain unknown. In this talk, I present joint work with Christoph Ortner on an infinite lattice model in which screw dislocations are free to be created and annihilated. We show that configurations containing single geometrically necessary dislocations exist as global minimisers of a variational problem, and hence are globally stable equilibria amongst all finite energy perturbations.
A bit more than ten years ago, Peter Ozsváth and Zoltán Szabó defined Heegaard-Floer homology, a gauge theory inspired invariant of three-manifolds that is designed to be more computable than its cousins, the Donaldson and Seiberg-Witten invariants for four-manifolds. This invariant is defined in terms of a Heegaard splitting of the three-manifold. In this talk I will show how Heegaard-Floer homology is defined (modulo the analysis that goes into it) and explain some of the directions in which people have taken this theory, such as knot theory and fitting Heegaard-Floer homology into the scheme of topological field theories.
A large class of links in $S^3$ has the property that the complement admits a complete hyperbolic metric of finite volume. But is this volume understandable from the link itself, or perhaps from some nice diagram of it? Marc Lackenby in the early 2000s gave a positive answer for a class of diagrams, the alternating ones. The proof of this result involves an analysis of the JSJ decomposition of the link complement: in particular, of how it appears on the link diagram. I will outline this proof, setting aside its most technical aspects and explaining the underlying ideas in an accessible way.
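For context (stated from memory, so the constants should be checked against Lackenby's paper): for a prime alternating diagram $D$ with twist number $t(D)$, the volume bounds take the form

\[
\tfrac{v_3}{2}\,\bigl(t(D)-2\bigr) \;\le\; \mathrm{vol}\bigl(S^3\setminus L\bigr) \;<\; 10\,v_3\,\bigl(t(D)-1\bigr),
\]

where $v_3$ is the volume of the regular ideal tetrahedron; the upper bound is due to Agol and D. Thurston in the appendix to that paper.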
I will review several known problems on the automorphism group of finite $p$-groups and present a sketch of the proof of the following result obtained jointly with Jon Gonz\'alez-S\'anchez:
For each prime $p$ we construct a family $\{G_i\}$ of finite $p$-groups such that $|Aut(G_i)|/|G_i|$ tends to $0$ as $i$ tends to infinity. This disproves a well-known conjecture that $|G|$ divides $|Aut(G)|$ for every non-abelian finite $p$-group $G$.
The Contou-Carrère symbol was introduced in the 1990s in the study of local analogues of autoduality of Jacobians of smooth projective curves. It is closely related to the tame symbol, the residue pairing, and the canonical central extension of loop groups. In this talk we will discuss a K-theoretic interpretation of the Contou-Carrère symbol, which allows us to generalize this one-dimensional picture to higher dimensions. This will be achieved by studying the K-theory of Tate objects, giving rise to natural central extensions of higher loop groups by spectra. Using the K-theoretic viewpoint, we then go on to prove a reciprocity law for higher-dimensional Contou-Carrère symbols. This is joint work with O. Braunling and J. Wolfson.
Crouzeix's conjecture is an exasperating problem of linear algebra that has been open since 2004: the norm of p(A) is bounded by twice the maximum value of p on the field of values of A, where A is a square matrix and p is a polynomial (or, more generally, an analytic function). I'll say a few words about the conjecture and show Pearcy's beautiful 1966 proof of a special case, based on a vector-valued barycentric interpolation formula.
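To make the statement concrete, here is a minimal numerical sketch (my own illustration, not part of the talk) that samples the boundary of the field of values and compares norms; the 2x2 Jordan block with p(z) = z is the classical example where the ratio attains the conjectured constant 2.

```python
import numpy as np

def crouzeix_ratio(A, p, n_theta=720):
    """Compare ||p(A)||_2 with the max of |p| on the field of values W(A).

    Boundary points of W(A): for each angle theta, the top eigenvector v
    of the Hermitian part of e^{i theta} A yields the point v* A v.
    """
    pts = []
    for theta in np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False):
        B = np.exp(1j * theta) * A
        H = (B + B.conj().T) / 2          # Hermitian part
        _, V = np.linalg.eigh(H)          # eigenvalues in ascending order
        v = V[:, -1]                      # eigenvector of the largest one
        pts.append(v.conj() @ A @ v)      # boundary point of W(A)
    return np.linalg.norm(p(A), 2) / max(abs(p(z)) for z in pts)

# 2x2 Jordan block with p(z) = z: W(A) is the disk of radius 1/2 and
# ||A||_2 = 1, so the ratio equals the conjectured constant 2.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
ratio = crouzeix_ratio(A, lambda z: z)
```

The same function can probe random matrices and polynomials; no example violating the conjectured bound of 2 is known.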
The Tutte polynomial of a graph $G$ is a two-variable polynomial $T(G;x,y)$, which encodes much information about~$G$. The number of spanning trees in~$G$, the number of acyclic orientations of~$G$, and the partition function of the $q$-state Potts model are all specialisations of the Tutte polynomial. Jackson and Sokal have studied the sign of the Tutte polynomial, and identified regions in the $(x,y)$-plane where it is ``essentially determined'', in the sense that the sign is a function of very simple characteristics of $G$, e.g., the number of vertices and connected components of~$G$. It is natural to ask whether the sign of the Tutte polynomial is hard to compute outside of the regions where it is essentially determined. We show that the answer to this question is often an emphatic ``yes'': specifically, that determining the sign is \#P-hard. In such cases, approximating the Tutte polynomial with small relative error is also \#P-hard, since in particular the sign must be determined. In the other direction, we can ask whether the Tutte polynomial is easy to approximate in regions where the sign is essentially determined. The answer is not straightforward, but there is evidence that it is often ``no''. This is joint work with Leslie Ann Goldberg (Oxford).
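As a hedged illustration (my own sketch, not code from the talk), the defining deletion-contraction recursion can be evaluated directly for small graphs:

```python
def tutte(edges, x, y):
    """Evaluate the Tutte polynomial T(G; x, y) by deletion-contraction.

    edges: list of (u, v) pairs; loops and parallel edges are allowed,
    since contraction creates both.
    """
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                               # loop: factor y
        return y * tutte(rest, x, y)

    def reachable(a, b, es):                 # is b reachable from a in es?
        seen, stack = {a}, [a]
        while stack:
            w = stack.pop()
            for p, q in es:
                for nb in ((q,) if p == w else (p,) if q == w else ()):
                    if nb not in seen:
                        seen.add(nb)
                        stack.append(nb)
        return b in seen

    contracted = [(u if p == v else p, u if q == v else q) for p, q in rest]
    if not reachable(u, v, rest):            # bridge: factor x
        return x * tutte(contracted, x, y)
    return tutte(rest, x, y) + tutte(contracted, x, y)
```

For the triangle, T(K3; x, y) = x^2 + x + y, so the specialisation T(1, 1) = 3 counts its spanning trees.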
I will explain some ongoing work on understanding algebraic D-modules via their reduction to positive characteristic. I will define the p-cycle of an algebraic D-module, explain the general results of Bitoun and Van den Bergh, and then discuss a new construction of a class of algebraic D-modules with prescribed p-cycle.
Data assimilation is a particular form of state estimation. That's partly the "what". We'll also look at the how's, the why's, some who's and some where's.
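As a minimal sketch of the "what" (my own illustration in the simplest scalar Kalman form, not necessarily the methods of the talk): an analysis step blends a model forecast with an observation, weighted by their error variances.

```python
def kalman_update(x_f, P_f, y, R):
    """One scalar Kalman analysis step: blend forecast with observation.

    x_f, P_f: forecast mean and error variance; y, R: observation and its
    error variance. The gain K weights the two sources by uncertainty.
    """
    K = P_f / (P_f + R)
    return x_f + K * (y - x_f), (1 - K) * P_f

# Uncertain forecast (variance 4) pulled strongly toward a sharper
# observation (variance 1): gain K = 0.8.
x_a, P_a = kalman_update(x_f=1.0, P_f=4.0, y=2.0, R=1.0)
```

Note the analysis variance (0.8) is smaller than both input variances: combining independent information always sharpens the estimate.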
The talk will recall the results of three preprints, the first two authored by my former student Mickael Launay, and the third coauthored by Mickael and myself. All three works are available on arXiv. At this point it is not clear that they will ever be published (or submitted for review), but hopefully this does not make their contents less interesting. This class of interacting urn processes was introduced in Launay's thesis, in an attempt to model more realistically the memory sharing that occurs in food trail pheromone marking or in similar collective learning phenomena. An interesting critical behavior occurs already in the case of exponential reinforcement. No prior knowledge of strong urns will be assumed, and I will try to explain the reason behind the phase transition.
The coalescing Brownian flow on $\mathbb{R}$ is a process which was introduced by Arratia (1979) and Tóth and Werner (1997), and which formally corresponds to starting coalescing Brownian motions from every space-time point. We provide a new state space and topology for this process and obtain an invariance principle for coalescing random walks. The invariance principle holds under a finite variance assumption and is thus optimal. In a series of previous works, this question was studied under a different topology, and a moment of order $3-\epsilon$ was necessary for the convergence to hold. Our proof relies crucially on recent work of Schramm and Smirnov on scaling limits of critical percolation in the plane. Our approach is sufficiently simple that we can handle substantially more complicated coalescing flows with little extra work -- in particular, similar results are obtained in the case of coalescing Brownian motions on the Sierpinski gasket. This is the first such result where the limiting paths do not enjoy the non-crossing property.
Joint work with Christophe Garban (Lyon) and Arnab Sen (Minnesota).
This is the first of a series of talks based on Gary
Gruenhage's 'A survey of D-spaces' [1]. A space is D if for every
neighbourhood assignment we can choose a closed discrete set of points
whose assigned neighbourhoods cover the space. The mention of
neighbourhood assignments and a topological notion of smallness (that
is, of being closed and discrete) is peculiar among covering properties.
Although D-spaces were introduced in the 1970s, we still do not know whether
a Lindelöf or a paracompact space must be D. In this talk, we will examine
some elementary properties of this class via extent and Lindelöf numbers.
I will explain a new formulation of Einstein’s equations in 4-dimensions using the language of gauge theory. This was also discovered independently, and with advances, by Kirill Krasnov. I will discuss the advantages and disadvantages of this new point of view over the traditional "Einstein-Hilbert" description of Einstein manifolds. In particular, it leads to natural "sphere conjectures" and also suggests ways to find new Einstein 4-manifolds. I will describe some first steps in these directions. Time permitting, I will explain how this set-up can also be seen via 6-dimensional symplectic topology and the additional benefits that brings.
I will discuss two topics. Firstly, coupling of the circadian clock and cell cycle in mammalian cells. Together with the labs of Franck Delaunay (Nice) and Bert van der Horst (Rotterdam) we have developed a pipeline involving experimental and mathematical tools that enables us to track through time the phase of the circadian clock and cell cycle in the same single cell and to extend this to whole lineages. We show that for mouse fibroblast cell cultures under natural conditions, the clock and cell cycle phase-lock in a 1:1 fashion. We show that certain perturbations knock this coupled system onto another periodic state, phase-locked but with a different winding number. We use this understanding to explain previous results. Thus our study unravels novel phase dynamics of two key mammalian biological oscillators. Secondly, I will present a radical revision of the Nrf2 signalling system. Stress responsive signalling coordinated by Nrf2 provides an adaptive response for protection against toxic insults, oxidative stress and metabolic dysfunction. We discover that the system is an autonomous oscillator that regulates its target genes in a novel way.
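A toy caricature of 1:1 phase locking (my own sketch; the models in the talk are of course far richer): two sinusoidally coupled phase oscillators lock whenever the frequency detuning is below twice the coupling strength.

```python
import math

def phase_lock(omega1, omega2, k, dt=0.01, n_steps=20000):
    """Euler-integrate two sinusoidally coupled phase oscillators and
    return the final phase difference theta1 - theta2.

    The difference phi obeys d(phi)/dt = (omega1 - omega2) - 2 k sin(phi),
    so the pair locks 1:1 whenever |omega1 - omega2| < 2 k.
    """
    th1, th2 = 0.0, 1.0
    for _ in range(n_steps):
        d1 = omega1 + k * math.sin(th2 - th1)
        d2 = omega2 + k * math.sin(th1 - th2)
        th1 += dt * d1
        th2 += dt * d2
    return th1 - th2

# Detuning 0.1 < 2k = 1.0: the pair locks, with sin(phi) = -0.1 at the
# stable fixed point of the phase-difference equation.
phi = phase_lock(1.0, 1.1, 0.5)
```

Increasing the detuning past 2k destroys the locked state, and the phase difference winds instead of settling, a crude analogue of the change of winding number mentioned above.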
Ax's theorem on the dimension of the intersection of an algebraic subvariety and a formal subgroup (Theorem 1F in "Some topics in differential algebraic geometry I...") implies Schanuel type transcendence results for a vast class of formal maps (including exp on a semi-abelian variety). Ax stated and proved this theorem in the characteristic 0 case, but the statement is meaningful for arbitrary characteristic and still implies positive characteristic transcendence results. I will discuss my work on a positive-characteristic version of Ax's theorem.
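The prototypical instance is Ax's power-series analogue of Schanuel's conjecture: if $x_1,\dots,x_n \in t\,\mathbb{C}[[t]]$ are linearly independent over $\mathbb{Q}$, then

\[
\mathrm{trdeg}_{\mathbb{C}}\,\mathbb{C}\bigl(x_1,\dots,x_n,\, e^{x_1},\dots,e^{x_n}\bigr) \;\ge\; n+1.
\]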
Motivated by the study of PDEs, we introduce the notion of a D-module on a variety X and give the basics of three perspectives on the theory: modules over the sheaf of differential operators on X; quasi-coherent modules with flat connection; and crystals on X. This talk will assume basic knowledge of algebraic geometry (such as rudimentary sheaf theory).
We discuss a simple variational principle for coherent material vortices
in two-dimensional turbulence. Vortex boundaries are sought as closed
stationary curves of the averaged Lagrangian strain. We find that
solutions to this problem are mathematically equivalent to photon spheres
around black holes in cosmology. The fluidic photon spheres satisfy
explicit differential equations whose outermost limit cycles are optimal
Lagrangian vortex boundaries. As an application, we uncover super-coherent
material eddies in the South Atlantic, which yield specific Lagrangian
transport estimates for Agulhas rings. We also briefly describe the extension
of coherent Lagrangian vortex detection to three-dimensional flows.
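In the variational formulation in question (reproduced here from memory, so details should be checked against the original papers), vortex boundaries are closed curves $\gamma: s \mapsto r(s)$ at which the averaged strain functional

\[
Q(\gamma) \;=\; \frac{\oint_\gamma \sqrt{\,r'(s)^{\top}\, C(r(s))\, r'(s)\,}\; ds}{\oint_\gamma \sqrt{\,r'(s)^{\top} r'(s)\,}\; ds}
\]

is stationary, where $C$ is the Cauchy--Green strain tensor of the flow map over the chosen time interval; the stationary curves turn out to be closed orbits of explicit direction fields built from the eigenvectors of $C$, on each of which $Q$ is a constant $\lambda$.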
In this talk I will give a definition of cluster algebra and state some main results.
Moreover, I will explain how the combinatorics of certain cluster algebras can be modeled in geometric terms.
The accurate and stable numerical calculation of higher-order
derivatives of holomorphic functions (as required, e.g., in random matrix
theory to extract probabilities from a generating function) turns out to
be a surprisingly rich topic: there are connections to asymptotic analysis,
the theory of entire functions, and to algorithmic graph theory.
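The basic scheme, in a minimal sketch of my own (not the refinements of the talk), is Cauchy's integral formula discretised by the trapezoidal rule on a circle:

```python
import cmath
from math import factorial, pi

def derivative_at_zero(f, n, r=1.0, m=64):
    """n-th derivative of a holomorphic f at 0 via Cauchy's integral
    formula, discretised by the trapezoidal rule on the circle |z| = r.

    For analytic integrands the rule converges geometrically; the choice
    of the radius r is what governs numerical stability for higher
    orders, which is the subtle point alluded to above.
    """
    total = 0.0 + 0.0j
    for k in range(m):
        z = r * cmath.exp(2j * pi * k / m)
        total += f(z) * cmath.exp(-2j * pi * k * n / m)
    return factorial(n) / (m * r ** n) * total

d5 = derivative_at_zero(cmath.exp, 5)   # every derivative of exp at 0 is 1
```

A poorly chosen radius makes the summands vary over many orders of magnitude and destroys the accuracy in floating point, even though the quadrature error is tiny.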
\textbf{James Newbury} \newline
Title: Heavy traffic diffusion approximation of the limit order book in a one-sided reduced-form model. \newline
Abstract: Motivated by a zero-intelligence approach, we try to capture the
dynamics of the best bid (or best ask) queue in a heavy traffic setting,
i.e.\ when orders and cancellations are submitted at very high frequency.
We first prove the weak convergence of the discrete-space best bid/ask
queue to a jump-diffusion process. We then identify the limiting process
as a regenerative elastic Brownian motion with drift and random jumps to
the origin.
\newline
\textbf{Zhaoxu Hou} \newline
Title: Robust Framework In Finance: Martingale Optimal Transport and
Robust Hedging For Multiple Marginals In Continuous Time
\newline
Abstract: Dolinsky and Soner proved that there is no duality gap
between the robust hedging of path-dependent European options and a
martingale optimal transport problem in the one-marginal case. Motivated by their
work and Mykland's idea of adding a prediction set of paths (i.e.
super-replication of a contingent claim only required for paths falling
in the prediction set), we try to achieve the same type of duality
result in the setting of multiple marginals and a path constraint.
Working together with the Blue Brain Project at the EPFL, I'm trying to develop new topological methods for neural modelling. As a mathematician, however, I'm really motivated by how these questions in neuroscience can inspire new mathematics. I will introduce new work that I am doing, together with Kathryn Hess and Ran Levi, on brain plasticity and learning processes, and discuss some of the topological and geometric features that are appearing in our investigations.
I will look at the classical constructions that can be made using a straightedge and compass, and then at the limits of these constructions. I will then show how much further we can get with origami, explaining how it is possible to trisect an angle or double a cube. Compasses not supplied.
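The algebra behind these limits: straightedge-and-compass points lie in towers of quadratic extensions of $\mathbb{Q}$, so every constructible number has degree a power of $2$ over $\mathbb{Q}$. Doubling the cube asks for a root of

\[
x^3 = 2,
\]

and trisecting an angle $\theta$ asks for $x = \cos(\theta/3)$ satisfying

\[
4x^3 - 3x = \cos\theta,
\]

both cubic and hence, in general, out of reach. The single-fold origami axioms, by contrast, can solve arbitrary cubics, which is exactly why both constructions become possible.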
Thoughts on the Burnside problem
Quasimaps provide compactifications, depending on a stability parameter epsilon, for moduli spaces of maps from nonsingular algebraic curves to a large class of GIT quotients. These compactifications enjoy good properties and in particular they carry virtual fundamental classes. As the parameter epsilon varies, the resulting invariants are related by wall-crossing formulas. I will present some of these formulas in genus zero, and will explain why they can be viewed as generalizations (in several directions) of Givental's toric mirror theorems. I will also describe extensions of wall-crossing to higher genus, and (time permitting) to orbifold GIT targets as well.
The talk is based on joint works with Bumsig Kim, and partly also with Daewoong Cheong and with Davesh Maulik.
Hessians of functionals of PDE solutions have important applications in PDE-constrained optimisation (Newton methods) and uncertainty quantification (for accelerating high-dimensional Bayesian inference). With current techniques, a typical cost for one Hessian-vector product is 4-11 times the cost of the forward PDE solve: such high costs generally make their use in large-scale computations infeasible, as a Hessian solve or eigendecomposition would have costs of hundreds of PDE solves.
In this talk, we demonstrate that it is possible to exploit the common structure of the adjoint, tangent linear and second-order adjoint equations to greatly accelerate the computation of Hessian-vector products, by trading a large amount of computation for a large amount of storage. In some cases of practical interest, the cost of a Hessian-vector product is reduced to a small fraction of the forward solve, making it feasible to employ sophisticated algorithms which depend on them.
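For orientation, here is the naive finite-difference baseline (a sketch under my own assumptions, not the adjoint-based approach of the talk), which costs two extra gradient evaluations per product:

```python
import numpy as np

def hvp_fd(grad, x, v, eps=1e-5):
    """Hessian-vector product via central differences of the gradient.

    Each product costs two gradient (i.e. adjoint) evaluations, which is
    the kind of overhead that structure-exploiting methods aim to avoid.
    """
    return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

# Sanity check on a quadratic f(x) = 0.5 x^T Q x, whose Hessian is Q, so
# the finite-difference product matches Q @ v up to roundoff (the
# gradient is linear, so the central difference is otherwise exact).
rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5))
Q = Q + Q.T                    # symmetrise: Hessians are symmetric
grad = lambda x: Q @ x
x = rng.standard_normal(5)
v = rng.standard_normal(5)
Hv = hvp_fd(grad, x, v)
```

For non-quadratic functionals, finite differences also introduce truncation error, another reason exact second-order adjoint formulations are preferable.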
Perfect matchings are fundamental objects of study in graph theory. There is a substantial classical theory, which cannot be directly generalised to hypergraphs unless P=NP, as it is NP-complete to determine whether a hypergraph has a perfect matching. On the other hand, the generalisation to hypergraphs is well-motivated, as many important problems can be recast in this framework, such as Ryser's conjecture on transversals in latin squares and the Erdős-Hanani conjecture on the existence of designs. We will discuss a characterisation of the perfect matching problem for uniform hypergraphs that satisfy certain density conditions (joint work with Richard Mycroft), and a polynomial time algorithm for determining whether such hypergraphs have a perfect matching (joint work with Fiachra Knox and Richard Mycroft).
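To fix the definitions (my own illustration, not part of the talk): a perfect matching is a set of pairwise-disjoint edges covering every vertex, and absent structural assumptions one can do little better than brute force, consistent with the NP-completeness mentioned above.

```python
from itertools import combinations

def has_perfect_matching(vertices, edges):
    """Decide whether a k-uniform hypergraph has a perfect matching.

    Brute force over all candidate sets of |V|/k edges -- exponential in
    general, consistent with the NP-completeness of the problem.
    """
    V = frozenset(vertices)
    if not edges:
        return not V
    k = len(next(iter(edges)))
    if len(V) % k:
        return False
    need = len(V) // k
    for combo in combinations(edges, need):
        # disjoint and covering <=> sizes sum to |V| and union equals V
        if sum(len(e) for e in combo) == len(V) and set().union(*combo) == V:
            return True
    return False

# {0,1,2} and {3,4,5} form a perfect matching of the first hypergraph;
# in the second, every pair of edges overlaps.
pm_yes = has_perfect_matching(range(6), [frozenset({0, 1, 2}),
                                         frozenset({3, 4, 5}),
                                         frozenset({2, 3, 4})])
pm_no = has_perfect_matching(range(6), [frozenset({0, 1, 2}),
                                        frozenset({2, 3, 4}),
                                        frozenset({0, 4, 5})])
```

The density conditions of the talk are precisely what allow this exponential search to be replaced by a polynomial-time algorithm.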
Finding a sparse signal solution of an underdetermined linear system of measurements is commonly solved in compressed sensing by convexly relaxing the sparsity requirement with the help of the l1 norm. Here, we tackle instead the original nonsmooth nonconvex l0-problem formulation using projected gradient methods. Our interest is motivated by a recent surprising numerical find that despite the perceived global optimization challenge of the l0-formulation, these simple local methods when applied to it can be as effective as first-order methods for the convex l1-problem in terms of the degree of sparsity they can recover from similar levels of undersampled measurements. We attempt here to give an analytical justification in the language of asymptotic phase transitions for this observed behaviour when Gaussian measurement matrices are employed. Our approach involves novel convergence techniques that analyse the fixed points of the algorithm and an asymptotic probabilistic analysis of the convergence conditions that derives asymptotic bounds on the extreme singular values of combinatorially many submatrices of the Gaussian measurement matrix under matrix-signal independence assumptions.
This work is joint with Andrew Thompson (Duke University, USA).
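A minimal sketch of the projected-gradient approach in question (iterative hard thresholding in its standard form, on toy data of my own; the step size and problem sizes are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def iht(A, b, s, n_iter=200):
    """Iterative hard thresholding: projected gradient for the l0 problem.

    Gradient step on 0.5 * ||Ax - b||^2, then projection onto the
    (nonconvex) set of s-sparse vectors by keeping the s
    largest-magnitude entries.
    """
    mu = 0.9 / np.linalg.norm(A, 2) ** 2    # step ensuring monotone descent
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + mu * (A.T @ (b - A @ x))    # gradient step
        keep = np.argsort(np.abs(g))[-s:]   # indices of s largest entries
        x = np.zeros_like(g)
        x[keep] = g[keep]                   # hard-thresholding projection
    return x

# Toy instance: 3-sparse signal, 50 Gaussian measurements of length 100.
rng = np.random.default_rng(1)
m, n, s = 50, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = iht(A, b, s)
```

Repeating such experiments over a grid of undersampling and sparsity ratios is how the empirical phase transitions referred to above are mapped out.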