16:00
Immersed surfaces in cubed three manifolds: a prescient vision.
Abstract
When Gromov defined non-positively curved cube complexes no one knew what they would be useful for.
Decades later they played a key role in the resolution of the Virtual Haken conjecture.
In one of the early forays into experimenting with cube complexes, Aitchison, Matsumoto, and Rubinstein produced some nice results about certain "cubed" manifolds, that in retrospect look very prescient.
I will define non-positively curved cube complexes, what it means for a 3-manifold to be cubed, and discuss what all this Haken business is about.
Re-Engineering History: A Playful Demonstration
Abstract
This session will discuss how Douglas Hartree and Arthur Porter used Meccano — a child’s toy and an engineer’s tool — to build an analogue computer, the Hartree Differential Analyser, in 1934. It will explore the wider historical and social context in which this model computer was rooted, before providing an opportunity to engage with the experiential aspects of the 'Kent Machine,' a historically reproduced version of Hartree and Porter's original model, which is also made from Meccano.
The 'Kent Machine' sits at a unique intersection of historical research and educational engagement, providing an alternative way of teaching STEM subjects, via a historic hands-on method. The session builds on the work and ideas expressed in Otto Sibum's reconstruction of James Joule's 'Paddle Wheel' apparatus, inviting attendees to physically re-enact the mathematical processes of mechanical integration to see how this type of analogue computer functioned in reality. The session will provide an alternative context of the history of computing by exploring the tacit knowledge that is required to reproduce and demonstrate the machine, and how it sits at the intersection between amateur and professional science.
A motivic DT/PT correspondence via Quot schemes
Abstract
Donaldson-Thomas invariants of a Calabi-Yau 3-fold Y are related to Pandharipande-Thomas invariants via a wall-crossing formula known as the DT/PT correspondence, proved by Bridgeland and Toda. The same relation holds for the “local invariants”, those encoding the contribution of a fixed smooth curve in Y. We show how to lift the local DT/PT correspondence to the motivic level and provide an explicit formula for the local motivic invariants, exploiting the critical structure on certain Quot schemes acting as our local models. Our strategy is parallel to the one used by Behrend, Bryan and Szendroi in their definition and computation of degree zero motivic DT invariants. If time permits, we discuss a further (conjectural) cohomological upgrade of the local DT/PT correspondence.
Joint work with Ben Davison.
14:30
Overview of a quotient geometry with simple geodesics for the manifold of fixed-rank positive-semidefinite matrices
Abstract
We describe the main geometric tools required to work on the manifold of fixed-rank symmetric positive-semidefinite matrices: we present expressions for the Riemannian logarithm and the injectivity radius, to complement the already known Riemannian exponential. This manifold is particularly relevant when dealing with low-rank approximations of large positive-(semi)definite matrices. The manifold is represented as a quotient of the set of full-rank rectangular matrices (endowed with the Euclidean metric) by the orthogonal group. Our results explain the failure of some curve-fitting algorithms when the rank of the data is overestimated. We illustrate these observations on a dataset made of covariance matrices characterizing a wind field.
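A standard computation in such quotient geometries (cf. Massart and Absil) is the closed-form distance between two rank-p factors, obtained by aligning them with an orthogonal Procrustes rotation. The sketch below is illustrative, under the quotient representation described in the abstract, and is not necessarily the speakers' exact formulas:

```python
import numpy as np

def quotient_dist(Y1, Y2):
    """Distance between the PSD matrices Y1 @ Y1.T and Y2 @ Y2.T in the
    quotient of full-rank n x p factors by the orthogonal group O(p):
    min over orthogonal Q of ||Y1 - Y2 Q||_F, solved in closed form by
    the orthogonal Procrustes problem (SVD of Y2.T @ Y1)."""
    U, _, Vt = np.linalg.svd(Y2.T @ Y1)
    return np.linalg.norm(Y1 - Y2 @ (U @ Vt))

rng = np.random.default_rng(0)
Y = rng.normal(size=(6, 2))                   # rank-2 factor of a 6x6 PSD matrix
R, _ = np.linalg.qr(rng.normal(size=(2, 2)))  # random orthogonal matrix
# Y and Y @ R represent the same PSD matrix, so their distance is zero
```

Because the Euclidean metric on the factors is invariant under O(p), straight lines between optimally aligned factors project to geodesics in the quotient, which is what makes the geodesics of this geometry simple.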
Partition universality of G(n,p) for degenerate graphs
The r-colour size-Ramsey number of a graph G is the minimum number of edges of a graph H such that any r-colouring of the edges of H has a monochromatic G-copy. Random graphs play an important role in the study of size-Ramsey numbers. Using random graphs, we establish a new bound on the size-Ramsey number of D-degenerate graphs with bounded maximum degree.
In the talk I will summarise what is known about size-Ramsey numbers, explain the connection to random graphs and their so-called partition universality, and outline which methods we use in our proof.
Based on joint work with Peter Allen.
14:00
Computing multiple local minima of topology optimisation problems
Abstract
Topology optimisation finds the optimal material distribution of a fluid or solid in a domain, subject to PDE and volume constraints. There are many formulations and we opt for the density approach which results in a PDE, volume and inequality constrained, non-convex, infinite-dimensional optimisation problem without a priori knowledge of a good initial guess. Such problems can exhibit many local minima or even no minima. In practice, heuristics are used to obtain the global minimum, but these can fail even in the simplest of cases. In this talk, we will present an algorithm that solves such problems and systematically discovers as many of these local minima as possible along the way.
Contagion maps for spreading dynamics and manifold learning
Abstract
Spreading processes on geometric networks are often influenced by a network’s underlying spatial structure, and it is insightful to study the extent to which a spreading process follows that structure. In particular, considering a threshold contagion on a network whose nodes are embedded in a manifold and which has both 'geometric edges' that respect the geometry of the underlying manifold, as well as 'non-geometric edges' that are not constrained by the geometry of the underlying manifold, one can ask whether the contagion propagates as a wave front along the underlying geometry, or jumps via long non-geometric edges to remote areas of the network.
Taylor et al. developed a methodology aimed at determining the spreading behaviour of threshold contagion models on such 'noisy geometric networks' [1]. This methodology is inspired by nonlinear dimensionality reduction and is centred around a so-called 'contagion map' from the network’s nodes to a point cloud in high dimensional space. The structure of this point cloud reflects the spreading behaviour of the contagion. We apply this methodology to a family of noisy-geometric networks that can be construed as being embedded in a torus, and are able to identify a region in the parameter space where the contagion propagates predominantly via wave front propagation. This consolidates the contagion map both as a tool for investigating spreading behaviour on spatial networks and as a manifold learning technique.
[1] D. Taylor, F. Klimm, H. A. Harrington, M. Kramar, K. Mischaikow, M. A. Porter, and P. J. Mucha. Topological data analysis of contagion maps for examining spreading processes on networks. Nature Communications, 6(7723) (2015)
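As a rough illustration of the construction in [1], the following sketch (toy parameters; a ring lattice with only geometric edges) seeds a threshold contagion at each node in turn and stacks the activation-time vectors into the point cloud of the contagion map:

```python
import numpy as np

def contagion_map(adj, threshold):
    """Seed a deterministic threshold contagion at every node in turn and
    record all activation times; row i of the result is the point in R^n
    obtained from seeding at node i (the contagion map of Taylor et al.)."""
    n = adj.shape[0]
    degree = np.maximum(adj.sum(axis=1), 1)
    cloud = np.zeros((n, n))
    for seed in range(n):
        active = np.zeros(n, dtype=bool)
        active[seed] = True
        active[adj[seed] > 0] = True      # cluster seeding: node + neighbours
        times = np.where(active, 0.0, np.inf)
        for t in range(1, n):
            frac = adj @ active.astype(float) / degree
            newly = (~active) & (frac >= threshold)
            if not newly.any():
                break
            active |= newly
            times[newly] = t
        cloud[seed] = times
    return cloud

# ring lattice with 20 nodes, each joined to its 2 nearest neighbours per side
n = 20
adj = np.zeros((n, n))
for i in range(n):
    for d in (1, 2):
        adj[i, (i + d) % n] = adj[(i + d) % n, i] = 1.0
cloud = contagion_map(adj, threshold=0.3)
# with only geometric edges the contagion spreads as a wave front, so
# activation times grow linearly with ring distance from the seed
```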
Dark Matter, Modified Gravity - Or What?
Abstract
In this talk I will explain (a) what observations speak for the hypothesis of dark matter, (b) what observations speak for the hypothesis of modified gravity, and (c) why it is a mistake to insist that either hypothesis on its own must explain all the available data. The right explanation, I will argue, is instead a suitable combination of dark matter and modified gravity, which can be realized by the idea that dark matter has a superfluid phase.
On Serre's Uniformity Conjecture
Abstract
Given a prime p and an elliptic curve E (say over Q), one can associate a "mod p Galois representation" of the absolute Galois group of Q by considering the natural action on p-torsion points of E.
In 1972, Serre showed that if the endomorphism ring of E is "minimal", then there exists a prime P(E) such that for all p>P(E), the mod p Galois representation is surjective. This raised an immediate question (now known as Serre's uniformity conjecture) on whether P(E) can be bounded as E ranges over elliptic curves over Q with minimal endomorphism rings.
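In standard notation (assumed here, not taken verbatim from the talk), the action on the p-torsion subgroup E[p], which is isomorphic to (Z/pZ)^2, gives the representation, and Serre's theorem reads:

```latex
\[
  \rho_{E,p}\colon \operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})
  \longrightarrow \operatorname{Aut}(E[p]) \cong \operatorname{GL}_2(\mathbb{F}_p),
\]
\[
  \operatorname{End}(E) = \mathbb{Z}
  \;\Longrightarrow\;
  \exists\, P(E) \text{ such that } \rho_{E,p} \text{ is surjective for all } p > P(E).
\]
```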
I'll sketch a proof of this result, survey the current status of the conjecture, and (time permitting) discuss some extensions of this result (e.g. to abelian varieties with appropriately analogous endomorphism rings).
On some computable quasiconvex multiwell functions
Abstract
The translation method for constructing quasiconvex lower bounds of a given function in the calculus of variations, and the notion of compensated convex transforms for tightly approximating functions in Euclidean spaces, will be briefly reviewed. By applying the upper compensated convex transform to the finite maximum function, we will construct computable quasiconvex functions with finitely many point wells contained in a subspace without rank-one matrices. The complexity of evaluating the constructed quasiconvex functions is O(k log k), with k the number of wells involved. If time allows, some new applications of compensated convexity will be briefly discussed.
15:45
The Witt vectors with coefficients
Abstract
We will introduce the Witt vectors of a ring with coefficients in a bimodule and use them to calculate the components of the Hill-Hopkins-Ravenel norm for cyclic p-groups. This algebraic construction generalizes Hesselholt's Witt vectors for non-commutative rings and Kaledin's polynomial Witt vectors over perfect fields. We will discuss applications to the characteristic polynomial over non-commutative rings and to the Dieudonné determinant. This is all joint work with Krause, Nikolaus and Patchkoria.
On a probabilistic interpretation of the parabolic-parabolic Keller Segel equations
Abstract
The Keller Segel model for chemotaxis is a two-dimensional system of parabolic or elliptic PDEs.
Motivated by the study of the fully parabolic model using probabilistic methods, we introduce a nonlinear SDE of McKean-Vlasov type with a highly non-standard and singular interaction. Indeed, the drift of the equation involves all the past of the one-dimensional time marginal distributions of the process in a singular way. In terms of approximation by particle systems, an interesting and, to the best of our knowledge, new and challenging difficulty arises: at each time, each particle interacts with all the past of the other ones by means of a highly singular space-time kernel.
In this talk, we will analyse the above probabilistic interpretation in $d=1$ and $d=2$.
A decomposition of the Brownian excursion
Abstract
We discuss a realizationwise correspondence between a Brownian excursion (conditioned to reach height one) and a triple consisting of
(1) the local time profile of the excursion,
(2) an array of independent time-homogeneous Poisson processes on the real line, and
(3) a fair coin tossing sequence, where (2) and (3) encode the ordering by height and the left-right ordering of the subexcursions, respectively.
The three components turn out to be independent, with (1) giving a time change that is responsible for the time-homogeneity of the Poisson processes.
By the Ray-Knight theorem, (1) is the excursion of a Feller branching diffusion; thus the metric structure associated with (2), which generates the so-called lookdown space, can be seen as representing the genealogy underlying the Feller branching diffusion.
Because of the independence of the three components, up to a time change the distribution of this genealogy does not change under a conditioning on the local time profile. This also gives natural access to genealogies of continuum populations under competition, whose population size is modeled e.g. by the Feller branching diffusion with a logistic drift.
The lecture is based on joint work with Stephan Gufler and Goetz Kersting.
Green's function estimates and the Poisson equation
Abstract
The Green's function of the Laplace operator has been widely studied in geometric analysis. Manifolds admitting a positive Green's function are called nonparabolic. By Li and Yau, sharp pointwise decay estimates are known for the Green's function on nonparabolic manifolds that have nonnegative Ricci curvature. The situation is more delicate when the curvature is not nonnegative everywhere. While pointwise decay estimates are generally not possible in this case, we have obtained sharp integral decay estimates for the Green's function on manifolds admitting a Poincare inequality and an appropriate (negative) lower bound on Ricci curvature. This has applications to solving the Poisson equation, and to the study of the structure at infinity of such manifolds.
12:45
The Holographic Dual of Strongly γ-deformed N=4 SYM Theory
Abstract
We present a first-principles derivation of a weak-strong duality between the four-dimensional fishnet theory in the planar limit and a discretized string-like model living in AdS5. At strong coupling, the dual description becomes classical and we demonstrate explicitly the classical integrability of the model. We test our results by reproducing the strong coupling limit of the 4-point correlator computed before non-perturbatively from the conformal partial wave expansion. Next, by applying the canonical quantization procedure with constraints, we show that the model describes a quantum integrable chain of particles propagating in AdS5. Finally, we reveal a discrete reparametrization symmetry of the model and reproduce the spectrum in the cases where it is known analytically. Due to the simplicity of our model, it could provide an ideal playground for holography. Furthermore, since the fishnet model and N=4 SYM theory are continuously linked, our consideration could shed light on the derivation of AdS/CFT for the latter. This talk is based on recent work with Amit Sever.
North Meets South
Abstract
Speaker: Joseph Keir (North)
Title: Dispersion (or not) in nonlinear wave equations
Abstract: Wave equations are ubiquitous in physics, playing central roles in fields as diverse as fluid dynamics, electromagnetism and general relativity. Many of these wave equations are nonlinear, and consequently can exhibit dramatically different behaviour when their solutions become large. Interestingly, they can also exhibit differences when given arbitrarily small initial data: in some cases, the nonlinearities drive solutions to grow larger and even to blow up in a finite time, while in other cases solutions disperse just like the linear case. The precise conditions on the nonlinearity which discriminate between these two cases are unknown, but in this talk I will present a conjecture regarding where this border lies, along with some conditions which are sufficient to guarantee dispersion.
Speaker: Priya Subramanian (South)
Title: What happens when an applied mathematician uses algebraic geometry?
Abstract: A regular situation that an applied mathematician faces is to obtain the equilibria of a set of differential equations that govern a system of interest. A number of techniques can help at this point to simplify the equations, which reduce the problem to that of finding equilibria of coupled polynomial equations. I want to talk about how homotopy methods developed in computational algebraic geometry can solve for all solutions of coupled polynomial equations non-iteratively using an example pattern forming system. Finally, I will end with some thoughts on what other 'nails' we might use this new shiny hammer on.
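The core idea of homotopy continuation can be sketched in a few lines: deform an easy start system g into the target system f and track each root numerically. The example below is a hypothetical one-variable illustration, not the speaker's pattern-forming system:

```python
import numpy as np

def track_root(f, df, g, dg, x0, steps=200):
    """Track one root of the homotopy H(x, t) = (1 - t) g(x) + t f(x)
    from a known root x0 of the start system g (at t = 0) to a root of
    the target system f (at t = 1), correcting with Newton at each step."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(20):  # Newton correction at fixed t
            H = (1 - t) * g(x) + t * f(x)
            dH = (1 - t) * dg(x) + t * df(x)
            x -= H / dH
    return x

# target f with roots {0, 2} and easy start system g with known roots {-1, 1}
f, df = lambda x: x**2 - 2 * x, lambda x: 2 * x - 2
g, dg = lambda x: x**2 - 1, lambda x: 2 * x

roots = sorted(track_root(f, df, g, dg, x0) for x0 in (-1.0, 1.0))
# each start root flows to a distinct target root, finding all solutions
```

Tracking every root of the start system finds all solutions of the target system at once, which is the non-iterative "all solutions" property mentioned above.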
Simplicial Mixture Models - Fitting topology to data
Abstract
Lines and planes can be fitted to data by minimising the sum of squared distances from the data to the geometric object. But what about fitting objects from topology such as simplicial complexes? I will present a method of fitting topological objects to data using a maximum likelihood approach, generalising the sum of squared distances. A simplicial mixture model (SMM) is specified by a set of vertex positions and a weighted set of simplices between them. The fitting process uses the expectation-maximisation (EM) algorithm to iteratively improve the parameters.
Remarkably, if we allow degenerate simplices then any distribution in Euclidean space can be approximated arbitrarily closely using a SMM with only a small number of vertices. This theorem is proved using a form of kernel density estimation on the n-simplex.
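The squared-distance objective that the mixture model generalises can be made concrete for the simplest simplex, a segment (1-simplex); the data and vertex positions below are made up for illustration:

```python
import numpy as np

def sq_dist_to_segment(p, a, b):
    """Squared distance from point p to the 1-simplex (segment) [a, b]:
    project onto the line through a and b, then clamp to the simplex."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.sum((a + t * ab - p) ** 2))

# made-up data near the segment from (0, 0) to (3, 0)
data = np.array([[0.0, 0.1], [1.0, -0.1], [2.0, 0.05], [3.0, 0.0]])
a, b = np.array([0.0, 0.0]), np.array([3.0, 0.0])
loss = sum(sq_dist_to_segment(p, a, b) for p in data)  # objective to minimise
```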
The role of ice shelves for marine ice sheet stability
The West Antarctic Ice Sheet is a marine ice sheet that rests on a bed below sea level. The stability of a marine ice sheet and its contribution to future sea level rise are controlled by the dynamics of the grounding line, where the grounded ice sheet transitions into a floating ice shelf. Recent observations suggest that Antarctic ice shelves experience widespread thinning due to contact with warming ocean waters, but quantifying the effect of these changes on marine ice sheet stability and extent remains a major challenge for both observational and modelling studies. In this talk, I show that grounding line stability of laterally confined marine ice sheets and outlet glaciers is governed by ice shelf dynamics, in particular calving front and melting conditions. I will discuss the implications of this dependence for projections of the future evolution of the West Antarctic Ice Sheet.
Banish imposter feelings (and trust you belong!)
Abstract
How can it be that so many clever, competent and capable people can feel that they are just one step away from being exposed as a complete fraud? Despite evidence that they are performing well they can still have that lurking fear that at any moment someone is going to tap them on the shoulder and say "We need to have a chat". If you've ever felt like this, or you feel like this right now, then this Friday@2 session might be of interest to you. We'll explore what "Imposter Feelings" are, why we get them and steps you can start to take to help yourself and others. This event is likely to be of interest to undergraduates and MSc students at all stages.
An efficient approach to inverse sensitivity problems
Algebra, Geometry and Topology of ERK Enzyme Kinetics
Abstract
In this talk I will analyse ERK time course data by developing mathematical models of enzyme kinetics. I will present how we can use differential algebra and geometry for model identifiability, and topological data analysis to study the dynamics of ERK. This work is joint with Lewis Marsh, Emilie Dufresne, Helen Byrne and Stanislav Shvartsman.
Financial modelling and utilisation of a diverse range of data sets in oil markets
Abstract
We will present three problems that we are interested in:
Forecast of volatility, both at the instrument and portfolio level, by combining a model-based approach with data-driven research
We will deal with additional complications that arise in the case of instruments that are highly correlated and/or have low volumes and open interest.
Test if volatility forecast improves metrics or can be used to derive alpha in our trading book.
Price prediction using physical oil grades data
Hypothesis:
Physical markets are most reflective of true fundamentals. Derivative markets can deviate from fundamentals (and hence physical markets) over short term time horizons but eventually converge back. These dislocations would represent potential trading opportunities.
The problem:
Can we use the rich data from the physical market prices to predict price changes in the derivative markets?
A solution would explore lead/lag relationships amongst a dataset of highly correlated features, as well as feature interdependencies and non-linearities.
The prediction could be in the form of a price target for the derivative (‘fair value’), a simple direction without magnitude, or a probabilistic range of outcomes.
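A minimal version of such a lead/lag analysis could look like the following sketch, run on synthetic series standing in for physical and derivative prices (the 3-step lead is built in by construction; no real market data is used):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical synthetic series: the derivative price repeats the physical
# price with a 3-step delay plus noise (illustrative, not real market data)
n, true_lag = 500, 3
base = np.cumsum(rng.normal(size=n + true_lag))
physical = base[true_lag:]                                 # leading series
derivative = base[:-true_lag] + 0.1 * rng.normal(size=n)   # lagging series

def lead_lag(x, y, max_lag=10):
    """Lag k maximising the correlation between increments of x and
    increments of y shifted k steps later."""
    dx, dy = np.diff(x), np.diff(y)
    def corr(k):
        return np.corrcoef(dx[:len(dx) - k], dy[k:])[0, 1]
    return max(range(max_lag + 1), key=corr)

lag = lead_lag(physical, derivative)  # recovers the built-in 3-step lead
```

Working with increments rather than levels avoids the spurious correlations that trending price series produce.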
Modelling oil balances by satellite data
The flow of oil around the world, as it is extracted, refined, transported and consumed, forms a very large dynamic network. At both regular and irregular intervals, we can make noisy measurements of the amount of oil at certain points in the network.
In addition, we have general macro-economic information about the supply and demand of oil in certain regions.
Combining that information with general knowledge of the connections between nodes in the network, i.e. the typical rates of transfer, one can build a general model for how oil flows through the network.
We would like to build a probabilistic model on the network, representing our belief about the amount of oil stored at each of our nodes, which we refer to as balances.
We want to focus on particular parts of the network where our beliefs can be augmented by satellite data, which can be done by focusing on a sub network containing nodes that satellite measurements can be applied to.
16:00
Number fields with prescribed norms
Abstract
Let G be a finite abelian group, let k be a number field, and let x be an element of k. We count Galois extensions K/k with Galois group G such that x is a norm from K/k. In particular, we show that such extensions always exist. This is joint work with Christopher Frei and Daniel Loughran.
Liquid droplets on a surface
Abstract
The talk will begin with an introduction to the science of what determines the behaviour of a liquid on a surface, giving an overview of some of the different theories that can be used to describe the shape and structure of the liquid in the drop. These include microscopic density functional theory (DFT), which describes the liquid structure on the scale of the individual liquid molecules, and mesoscopic thin film equation (PDE) and kinetic Monte-Carlo models. A DFT based method for calculating the binding potential g(h) for a film of liquid on a solid surface, where h is the thickness of the liquid film, will be presented. The form of g(h) determines whether or not the liquid wets the surface. Calculating drop profiles using both DFT and also from inputting g(h) into the mesoscopic theory and comparing quantities such as the contact angle and the shape of the drops, we find good agreement between the two methods, validating the coarse-graining. The talk will conclude with a discussion of some recent work on modelling evaporating drops with applications to inkjet printing.
Sensitivity Analysis of the Utility Maximization Problem with Respect to Model Perturbations
Abstract
First, we will give a brief overview of the asymptotic analysis results in the context of optimal investment. Then, we will focus on the sensitivity of the expected utility maximization problem in a continuous semimartingale market with respect to small changes in the market price of risk. Assuming that the preferences of a rational economic agent are modeled by a general utility function, we obtain a second-order expansion of the value function, a first-order approximation of the terminal wealth, and construct trading strategies that match the indirect utility function up to the second order. If a risk-tolerance wealth process exists, using it as numeraire and under an appropriate change of measure, we reduce the approximation problem to a Kunita–Watanabe decomposition. Then we discuss possible extensions and special situations, in particular, the power utility case and models that admit closed-form solutions. The central part of this talk is based on the joint work with Mihai Sirbu.
A posteriori error analysis for domain decomposition
Abstract
Domain decomposition methods are widely employed for the numerical solution of partial differential equations on parallel computers. We develop an adjoint-based a posteriori error analysis for overlapping multiplicative Schwarz domain decomposition and for overlapping additive Schwarz. In both cases the numerical error in a user-specified functional of the solution (quantity of interest) is decomposed into a component that arises due to the spatial discretization and a component that results from the finite iteration between the subdomains. The spatial discretization error can be further decomposed into the errors arising on each subdomain. This decomposition of the total error can then be used as part of a two-stage approach to construct a solution strategy that efficiently reduces the error in the quantity of interest.
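For concreteness, a minimal overlapping multiplicative Schwarz iteration for a 1D Poisson problem (a toy sketch of the underlying solver, not the adjoint-based error machinery of the talk) alternates exact solves on two overlapping subdomains, using the current values from the other subdomain as Dirichlet data:

```python
import numpy as np

# 1D Poisson problem -u'' = 1 on (0, 1), u(0) = u(1) = 0, central differences
n = 101
h = 1.0 / (n - 1)
u = np.zeros(n)
f = np.ones(n)

def schwarz_solve(u, lo, hi):
    """Exact solve on interior nodes lo..hi, taking the current values at
    nodes lo-1 and hi+1 as Dirichlet data (one multiplicative Schwarz step)."""
    m = hi - lo + 1
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    b = f[lo:hi + 1].copy()
    b[0] += u[lo - 1] / h**2
    b[-1] += u[hi + 1] / h**2
    u[lo:hi + 1] = np.linalg.solve(A, b)

# alternate between two overlapping subdomains (nodes 1..60 and 40..99)
for _ in range(50):
    schwarz_solve(u, 1, 60)
    schwarz_solve(u, 40, n - 2)

x = np.linspace(0.0, 1.0, n)
err = np.max(np.abs(u - 0.5 * x * (1 - x)))  # exact solution is x(1-x)/2
```

Here the remaining error after a finite number of sweeps is exactly the "iteration error" component that the a posteriori analysis above separates from the discretization error.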
Chern-Simons theory and TQFTs: Part II
A new Federer-type characterization of sets of finite perimeter
Abstract
Federer’s characterization, which is a central result in the theory of functions of bounded variation, states that a set is of finite perimeter if and only if the (n−1)-dimensional Hausdorff measure of the set's measure-theoretic boundary is finite. The measure-theoretic boundary consists of those points where both the set and its complement have positive upper density. I show that the characterization remains true if the measure-theoretic boundary is replaced by a smaller boundary consisting of those points where the lower densities of both the set and its complement are at least a given positive constant.
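In symbols (standard BV notation, assumed here), Federer's characterization and the refinement described above read:

```latex
\[
  P(E) < \infty
  \;\Longleftrightarrow\;
  \mathcal{H}^{n-1}(\partial^{*}E) < \infty,
  \qquad
  \partial^{*}E = \bigl\{ x : \Theta^{*}(E,x) > 0 \ \text{and}\ \Theta^{*}(E^{c},x) > 0 \bigr\},
\]
where $\Theta^{*}$ denotes upper volume density; the talk replaces $\partial^{*}E$
by the smaller set of points where the \emph{lower} densities $\Theta_{*}$ of both
$E$ and $E^{c}$ are at least a fixed constant $\gamma > 0$:
\[
  \Sigma_{\gamma}E = \bigl\{ x : \Theta_{*}(E,x) \ge \gamma \ \text{and}\ \Theta_{*}(E^{c},x) \ge \gamma \bigr\}.
\]
```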
11:30
Functional Modular Zilber-Pink with Derivatives
Abstract
I will present Pila's Modular Zilber-Pink with Derivatives (MZPD) conjecture, which is a Zilber-Pink type statement for the j-function and its derivatives, and discuss some weak and functional/differential analogues. In particular, I will define special varieties in each setting and explain the relationship between them. I will then show how one can prove the aforementioned weak/functional/differential MZPD statements using the Ax-Schanuel theorem for the j-function and its derivatives and some basic complex analytic geometry. Note that I gave a similar talk in Oxford last year (where I discussed a differential MZPD conjecture and proved it assuming an Existential Closedness conjecture for j), but this talk is going to be significantly different from that one (the approach presented in this talk will be mostly complex analytic rather than differential algebraic, and the results will be unconditional).
16:00
JSJ Decompositions of Groups
Abstract
A graph of groups decomposition is a way of splitting a group into smaller and hopefully simpler groups. A natural thing to try and do is to keep splitting until you can't split anymore, and then argue that this decomposition is unique. This is the idea behind JSJ decompositions, although, as we shall see, the strength of the uniqueness statement for such a decomposition varies depending on the class of groups that we restrict our edge groups to.
Hilbert schemes of points of ADE surface singularities
Abstract
I will discuss some recent results around Hilbert schemes of points on singular surfaces, obtained in joint work with Craw, Gammelgaard and Gyenge, and their connection to combinatorics (of coloured partitions) and representation theory (of affine Lie algebras and related algebras such as their W-algebra).
Some new perspectives on moments of random matrices
Abstract
The study of 'moments' of random matrices (expectations of traces of powers of the matrix) is a rich and interesting subject, with fascinating connections to enumerative geometry, as discovered by Harer and Zagier in the 1980s. I will give some background on this and then describe some recent work which offers some new perspectives (and new results). This talk is based on joint work with Fabio Deelan Cunden, Francesco Mezzadri and Nick Simm.
14:30
Parameter Optimization in a Global Ocean Biogeochemical Model
Abstract
Ocean biogeochemical models used in climate change predictions are very computationally expensive and heavily parameterised. With derivatives too costly to compute, we optimise the parameters within one such model using derivative-free algorithms with the aim of finding a good optimum in the fewest possible function evaluations. We compare the performance of the evolutionary algorithm CMA-ES which is a stochastic global optimization method requiring more function evaluations, to the Py-BOBYQA and DFO-LS algorithms which are local derivative-free solvers requiring fewer evaluations. We also use initial Latin Hypercube sampling to then provide DFO-LS with a good starting point, in an attempt to find the global optimum with a local solver. This is joint work with Coralia Cartis and Samar Khatiwala.
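The two-stage strategy, Latin hypercube sampling followed by a local derivative-free solve from the best sampled point, can be sketched as follows. A simple compass search stands in for Py-BOBYQA / DFO-LS, and the objective is a cheap toy function rather than the biogeochemical model:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # cheap multimodal stand-in for the expensive biogeochemical model
    return float(np.sum(x**2) + 2.0 * np.sum(np.sin(5.0 * x)))

def latin_hypercube(n_pts, dim, lo, hi):
    """One stratified sample per bin in each coordinate, randomly paired."""
    cells = np.stack([rng.permutation(n_pts) for _ in range(dim)], axis=1)
    u = (cells + rng.uniform(size=(n_pts, dim))) / n_pts
    return lo + u * (hi - lo)

def compass_search(f, x0, step=0.5, tol=1e-8, max_sweeps=10000):
    """Minimal derivative-free local solver (compass search), a toy
    stand-in for Py-BOBYQA / DFO-LS."""
    x, fx = x0.copy(), f(x0)
    for _ in range(max_sweeps):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x, fx

# stage 1: Latin hypercube sample; stage 2: local solve from the best point
starts = latin_hypercube(10, 2, -3.0, 3.0)
x0 = min(starts, key=objective)
x_opt, f_opt = compass_search(objective, x0)
```

Every function evaluation here is cheap; in the application each evaluation is a full model run, which is what makes minimising the evaluation count the central concern.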
Axiomatizability and profinite groups
Abstract
A mathematical structure is `axiomatizable' if it is completely determined by some family of sentences in a suitable first-order language. This idea has been explored for various kinds of structure, but I will concentrate on groups. There are some general results (not many) about which groups are or are not axiomatizable; recently there has been some interest in the sharper concept of 'finitely axiomatizable' or FA - that is, when only a finite set of sentences (equivalently, a single sentence) is allowed.
While an infinite group cannot be FA, every finite group is so, obviously. A profinite group is kind of in between: it is infinite (indeed, uncountable), but compact as a topological group; and these groups share many properties of finite groups, though sometimes for rather subtle reasons. I will discuss some recent work with Andre Nies and Katrin Tent where we prove that certain kinds of profinite group are FA among profinite groups. The methods involve a little model theory, and quite a lot of group theory.
Combinatorial discrepancy and a problem of J.E. Littlewood
Given a collection of subsets of a set X, the basic problem in combinatorial discrepancy theory is to find an assignment of +1, -1 to the elements of X so that the sum over each of the given sets is as small as possible. I will discuss how the sort of combinatorial reasoning used to think about problems in combinatorial discrepancy can be used to solve an old conjecture of J.E. Littlewood on the existence of "flat Littlewood polynomials".
This talk is based on joint work with Paul Balister, Bela Bollobas, Rob Morris and Marius Tiba.
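The basic objects can be made concrete in a few lines: for a random set system, compare the trivial all-ones assignment with the best of a few random plus-minus-one assignments (toy sizes, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_elems, n_sets = 64, 64
# random set system: each set contains each ground element with prob. 1/2
sets = (rng.random((n_sets, n_elems)) < 0.5).astype(int)

def discrepancy(signs):
    """max over the sets of |sum of the +/-1 values assigned to its elements|"""
    return int(np.abs(sets @ signs).max())

# the all-ones assignment is as bad as possible: its discrepancy equals the
# largest set size; a few random +/-1 assignments already do much better,
# achieving discrepancy of order sqrt(n log n)
worst = discrepancy(np.ones(n_elems, dtype=int))
best = min(discrepancy(rng.choice([-1, 1], size=n_elems)) for _ in range(20))
```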
14:00
Globally convergent least-squares optimisation methods for variational data assimilation
Abstract
The variational data assimilation (VarDA) problem is usually solved using a method equivalent to Gauss-Newton (GN) to obtain the initial conditions for a numerical weather forecast. However, GN is not globally convergent and, if poorly initialised, may diverge, for example when a long time window is used in VarDA (a desirable feature that allows the use of more satellite data). To overcome this, we apply two globally convergent GN variants (line search and regularisation) to the long-window VarDA problem and show when they locate a more accurate solution than GN within the time and cost available.
Joint work with Coralia Cartis, Amos S. Lawless, Nancy K. Nichols.
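A line-search globalisation of GN can be sketched as follows, on a toy one-parameter nonlinear least-squares problem (an illustration of the general idea, not the VarDA implementation):

```python
import numpy as np

def gauss_newton_ls(r, J, x0, iters=50):
    """Gauss-Newton with Armijo backtracking line search: the full GN step
    is shortened until it gives sufficient decrease of 0.5 * ||r||^2."""
    x = x0.copy()
    for _ in range(iters):
        res, Jx = r(x), J(x)
        step = np.linalg.lstsq(Jx, -res, rcond=None)[0]
        f0 = 0.5 * res @ res
        grad_dot_step = (Jx.T @ res) @ step  # negative for a descent step
        t = 1.0
        while 0.5 * np.sum(r(x + t * step)**2) > f0 + 1e-4 * t * grad_dot_step:
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * step
    return x

# toy nonlinear least squares: recover a in y = exp(a * t) from exact data
ts = np.linspace(0.0, 1.0, 20)
ys = np.exp(0.7 * ts)
r = lambda a: np.exp(a[0] * ts) - ys
J = lambda a: (ts * np.exp(a[0] * ts)).reshape(-1, 1)
a = gauss_newton_ls(r, J, np.array([0.0]))
```

Plain GN always takes t = 1; the backtracking loop is the globalisation that protects against divergence from a poor initial guess.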
Dimensionality reduction techniques for global optimization
Abstract
We consider the problem of global minimization with bound constraints. The problem is known to be intractable for large dimensions due to the exponential increase in the computational time for a linear increase in the dimension (also known as the “curse of dimensionality”). In this talk, we demonstrate that such challenges can be overcome for functions with low effective dimensionality — functions which are constant along certain linear subspaces. Such functions can often be found in applications, for example, in hyper-parameter optimization for neural networks, heuristic algorithms for combinatorial optimization problems and complex engineering simulations.
Extending the idea of random subspace embeddings in Wang et al. (2013), we introduce a new framework (called REGO) compatible with any global minimization algorithm. Within REGO, a new low-dimensional problem is formulated with bound constraints in the reduced space. We provide probabilistic bounds for the success of REGO; these results indicate that the success is dependent upon the dimension of the embedded subspace and the intrinsic dimension of the function, but independent of the ambient dimension. Numerical results show that high success rates can be achieved with only one embedding and that rates are independent of the ambient dimension of the problem.
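The random-embedding idea can be sketched on a toy problem: a function of 50 variables with effective dimension 2 is reduced to a 3-dimensional problem through a random Gaussian matrix, and even a crude local search on the reduced problem reaches the global minimum. The construction of the objective and the search loop are illustrative, not the REGO algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(3)
D, d = 50, 3  # ambient and reduced dimensions

# toy objective with effective dimension 2: it depends on x only through P @ x
P = rng.normal(size=(2, D)) / np.sqrt(D)
f = lambda x: float(np.sum((P @ x - 1.0) ** 2))

# random subspace embedding: minimise the reduced objective g(y) = f(A @ y)
A = rng.normal(size=(D, d))
g = lambda y: f(A @ y)

# crude random local search on the 3-dimensional reduced problem
y, val = np.zeros(d), g(np.zeros(d))
for _ in range(5000):
    cand = y + 0.1 * rng.normal(size=d)
    v = g(cand)
    if v < val:
        y, val = cand, v
# val approaches the global minimum 0 although the ambient space has 50 dims
```

The success probability depends on the reduced dimension d exceeding the effective dimension, matching the bounds described above.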
Quantum Chaos in Perspective
Abstract
I will review some of the major research themes in Quantum Chaos over the past 50 years, and some of the questions currently attracting attention in the mathematics and physics literatures.
Population distribution as pattern formation on landscapes
Abstract
Cities and their inter-connected transport networks form part of the fundamental infrastructure developed by human societies. Their organisation reflects a complex interplay between many natural and social factors, including inter alia natural resources, landscape, and climate on the one hand, combined with business, commerce, politics, diplomacy and culture on the other. Nevertheless, despite this complexity, there has been some success in capturing key aspects of city growth and network formation in relatively simple models that include non-linear positive feedback loops. However, these models are typically embedded in an idealised, homogeneous space, leading to regularly-spaced, lattice-like distributions arising from Turing-type pattern formation. Here we argue that the geographical landscape plays a much more dominant, but neglected role in pattern formation. To examine this hypothesis, we evaluate the weighted distance between locations based on a least cost path across the natural terrain, determined from high-resolution digital topographic databases for Italy. These weights are included in a co-evolving, dynamical model of both population aggregation in cities, and movement via an evolving transport network. We compare the results from the stationary state of the system with current population distributions from census data, and show a reasonable fit, both qualitatively and quantitatively, compared with models in homogeneous space. Thus we infer that the addition of weighted topography from the natural landscape to these models is both necessary and almost sufficient to reproduce the majority of the real-world spatial pattern of city sizes and locations in this example.
What is Arakelov Geometry?
Abstract
Arakelov geometry studies schemes X over ℤ, together with the Hermitian complex geometry of X(ℂ).
Most notably, it has been used by Paul Vojta to give a proof of Mordell's conjecture (Faltings's theorem): curves of genus greater than 1 have at most finitely many rational points.
In this talk, we'll introduce some of the ideas behind Arakelov theory, and show how many of its results are analogues, with additional structure, of classical results such as intersection theory and the Riemann-Roch theorem.
An optimal transport formulation of the Einstein equations of general relativity
Abstract
In the seminar I will present recent joint work with S. Suhr (Bochum) giving an optimal transport formulation of the full Einstein equations of general relativity, linking the (Ricci) curvature of a space-time with the cosmological constant and the energy-momentum tensor. Such an optimal transport formulation is in terms of convexity/concavity properties of the Shannon-Boltzmann entropy along curves of probability measures extremizing suitable optimal transport costs. The result gives a new connection between general relativity and optimal transport; moreover it gives a mathematical reinforcement of the strong link between general relativity and thermodynamics/information theory that has emerged in the physics literature in recent years.
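For context, the Riemannian precursor of such entropy characterisations of curvature, from the work of von Renesse-Sturm and the Lott-Sturm-Villani theory, can be stated as:

```latex
% Lower Ricci bounds via displacement convexity of the entropy
% along Wasserstein geodesics of probability measures:
\mathrm{Ric} \ge K
\quad\Longleftrightarrow\quad
t \mapsto \mathrm{Ent}(\mu_t \mid \mathrm{vol}) \text{ is } K\text{-convex along every }
W_2\text{-geodesic } (\mu_t)_{t\in[0,1]},
\qquad
\mathrm{Ent}(\rho\,\mathrm{vol} \mid \mathrm{vol}) = \int \rho \log\rho \, d\mathrm{vol}.
```

The formulation described in the abstract plays an analogous game for Lorentzian space-times, with convexity/concavity of the entropy along extremizers of suitable transport costs encoding the full Einstein equations rather than a single curvature bound.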
15:45
The Euler characteristic of Out(F_n) and renormalized topological field theory
Abstract
I will report on recent joint work with Karen Vogtmann on the Euler characteristic of $Out(F_n)$ and the moduli space of graphs. A similar study was carried out in the seminal 1986 work of Harer and Zagier on the Euler characteristic of the mapping class group and the moduli space of curves. I will review a topological field theory proof, due to Kontsevich, of Harer and Zagier's result and illustrate how an analogous 'renormalized' topological field theory argument can be applied to $Out(F_n)$.
Scaling limits for planar aggregation with subcritical fluctuations
Abstract
Planar random growth processes occur widely in the physical world. Examples include diffusion-limited aggregation (DLA) for mineral deposition and the Eden model for biological cell growth. One approach to mathematically modelling such processes is to represent the randomly growing clusters as compositions of conformal mappings. In 1998, Hastings and Levitov proposed one such family of models, which includes versions of the physical processes described above. An intriguing property of their model is a conjectured phase transition between models that converge to growing disks, and 'turbulent' non-disk-like models. In this talk I will describe a natural generalisation of the Hastings-Levitov family in which the location of each successive particle is distributed according to the density of harmonic measure on the cluster boundary, raised to some power. In recent joint work with Norris and Silvestri, we show that when this power lies within a particular range, the macroscopic shape of the cluster converges to a disk, but that as the power approaches the edge of this range the fluctuations approach a critical point, which is a limit of stability. This phase transition in fluctuations can be interpreted as the beginnings of a macroscopic phase transition from disks to non-disks analogous to that present in the Hastings-Levitov family.
Real-time optimization under forward rank-dependent performance criteria: time-consistent investment under probability distortion.
Abstract
I will introduce the concept of forward rank-dependent performance processes, extending the original notion to forward criteria that incorporate probability distortions and, at the same time, accommodate “real-time” incoming market information. A fundamental challenge is how to reconcile the time-consistent nature of forward performance criteria with the time-inconsistency stemming from probability distortions. For this, I will first propose two distinct definitions, one based on the preservation of performance value and the other on the time-consistency of policies and, in turn, establish their equivalence. I will then fully characterize the viable class of probability distortion processes, providing a bifurcation-type result. This will also characterize the candidate optimal wealth process, whose structure motivates the introduction of a new, distorted measure and a related dynamic market. I will, then, build a striking correspondence between the forward rank-dependent criteria in the original market and forward criteria without probability distortions in the auxiliary market. This connection provides a direct construction method for forward rank-dependent criteria with dynamic incoming information. Furthermore, direct by-products of our work are new results on the so-called dynamic utilities and time-inconsistent problems in the classical (backward) setting. Indeed, it turns out that open questions in the latter setting can be directly addressed by framing the classical problem as a forward one under suitable information rescaling.
Infinite geodesics on convex surfaces
Abstract
In the talk I will discuss the following result and related analytic and geometric questions: on the boundary of any convex body in Euclidean space there exists at least one infinite geodesic.
12:45
Supersymmetric phases of N = 4 SYM at large N
Abstract
We show the existence of an infinite family of complex saddle-points at large N, for the matrix model of the superconformal index of SU(N) N = 4 super Yang-Mills theory on S3 × S1 with one chemical potential τ. The saddle-point configurations are labelled by points (m, n) on the lattice Λτ = ℤτ + ℤ with gcd(m, n) = 1. The eigenvalues at a given saddle are uniformly distributed along a string winding (m, n) times along the (A, B) cycles of the torus ℂ/Λτ. The action of the matrix model extended to the torus is closely related to the Bloch-Wigner elliptic dilogarithm, and its values at (m, n) saddles are determined by Fourier averages of the latter along directions of the torus. The actions of (0,1) and (1,0) agree with that of pure AdS5 and the Gutowski-Reall AdS5 black hole, respectively. The actions of the other saddles take a surprisingly simple form. Generically, they carry non-vanishing entropy. The Gutowski-Reall black hole saddle dominates the canonical ensemble when τ is close to the origin, and other saddles dominate when τ approaches rational points.
The Persistence Mayer-Vietoris spectral sequence
Abstract
In this talk, linear algebra for persistence modules will be introduced, together with a generalization of persistent homology. This theory permits us to handle the Mayer-Vietoris spectral sequence for persistence modules, and solve any extension problems that might arise. The result of this approach is a distributive algorithm for computing persistent homology. That is, one can break down the underlying data into different covering subsets, compute the persistent homology for each cover, and join everything together. This approach has the added advantage that one can recover extra geometrical information related to the barcodes. This addresses the common complaint that persistent homology barcodes are 'too blind' to the geometry of the data.
Where does collaborating end and plagiarising begin?
Abstract
Despite the stereotype of the lone genius working by themselves, most professional mathematicians collaborate with others. But when you're learning maths as a student, is it OK to work with other people, or is that cheating? And if you're not used to collaborating with others, then you might feel shy about discussing your ideas when you're not confident about them. In this session, we'll explore ways in which you can get the most out of collaborations with your fellow students, whilst avoiding inadvertently passing off other people's work as your own. This session will be suitable for undergraduate and MSc students at any stage of their degree who would like to increase their confidence in collaboration. Please bring a pen or pencil!