Motivated by a problem in quasiconformal mapping, we introduce a new type of problem in complex analysis, with its roots in the mathematical physics of Bose-Einstein condensates in superconductivity. The problem will be referred to as \emph{geometric zero packing}, and is somewhat analogous to studying Fekete point configurations. The associated quantity is a density, denoted $\rho_\C$ in the planar case and $\rho_{\mathbb{H}}$ in the case of the hyperbolic plane. We refer to these densities as \emph{discrepancy densities for planar and hyperbolic zero packing}, respectively, as they measure the impossibility of atomizing the uniform planar and hyperbolic area measures. The universal asymptotic variance $\Sigma^2$ associated with the boundary behavior of conformal mappings with quasiconformal extensions of small dilatation is related to one of these discrepancy densities: $\Sigma^2 = 1-\rho_{\mathbb{H}}$. We obtain the estimates $2.3\times 10^{-8}<\rho_{\mathbb{H}}\le0.12087$, where the upper estimate is derived from the estimate from below on $\Sigma^2$ obtained by Astala, Ivrii, Per\"al\"a, and Prause, and the estimate from below is much more delicate. In particular, it follows that $\Sigma^2<1$, which in combination with the work of Ivrii shows that the maximal fractal dimension of quasicircles conjectured by Astala cannot be reached. Moreover, along the way, since the universal quasiconformal integral means spectrum has the asymptotics $\mathrm{B}(k,t)\sim\frac14\Sigma^2 k^2|t|^2$ for small $t$ and $k$, the conjectured formula $\mathrm{B}(k,t)=\frac14 k^2|t|^2$ cannot hold. As for the actual numerical values of the discrepancy density $\rho_\C$, we obtain the estimate from above $\rho_\C\le0.061203\ldots$ by using the equilateral triangular planar zero packing, where the assertion that equality should hold can be attributed to Abrikosov. The value of $\rho_{\mathbb{H}}$ is expected to be somewhat close to the value of $\rho_\C$.

# Past Stochastic Analysis Seminar

One of the challenges of 21st-century science is to model the evolution of complex systems. One example of practical importance is urban structure, for which the dynamics may be described by a system of non-linear first-order ordinary differential equations. Whilst this approach provides a reasonable model of urban retail structure, it is somewhat restrictive owing to uncertainties arising in the modelling process.
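The talk does not spell out the equations; a common choice in the urban-retail literature is Harris-Wilson-type dynamics, in which floor space $W_j$ grows or shrinks according to the demand it attracts. The sketch below (all names and parameter values are illustrative assumptions, not the speakers' model) integrates such a system with explicit Euler steps:

```python
import math

def harris_wilson_step(W, O, cost, alpha=1.0, beta=0.5, eps=0.01, kappa=1.0):
    """One explicit Euler step of Harris-Wilson-type retail dynamics:
    dW_j/dt = eps * (D_j - kappa * W_j), where the demand D_j comes from
    a singly-constrained spatial interaction model."""
    n = len(W)
    D = [0.0] * n
    for i, O_i in enumerate(O):
        # flow from residential zone i to retail zone j, weighted by
        # attractiveness W_j^alpha and discounted by travel cost c_ij
        w = [W[j] ** alpha * math.exp(-beta * cost[i][j]) for j in range(n)]
        Z = sum(w)
        for j in range(n):
            D[j] += O_i * w[j] / Z
    return [max(Wj + eps * (Dj - kappa * Wj), 1e-9) for Wj, Dj in zip(W, D)]

# Toy configuration: two residential zones, three retail zones
O = [10.0, 5.0]                               # spending power at each origin
cost = [[1.0, 2.0, 3.0], [3.0, 1.0, 2.0]]     # travel costs c_ij
W = [5.0, 5.0, 5.0]                           # initial floor-space sizes
for _ in range(2000):
    W = harris_wilson_step(W, O, cost)
# At equilibrium kappa * sum(W) = sum(O): total floor space matches total demand
```

The equilibrium identity in the final comment follows by summing the dynamics over $j$, since the demands $D_j$ always total the available spending power.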

We address these shortcomings by developing a statistical model of urban retail structure, based on a system of stochastic differential equations. Our model is ergodic and the invariant distribution encodes our prior knowledge of spatio-temporal interactions. We proceed by performing inference and prediction in a Bayesian setting, and explore the resulting probability distributions with a position-specific Metropolis-adjusted Langevin algorithm.
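For readers unfamiliar with the sampler, a minimal one-dimensional sketch of plain MALA (without the position-specific preconditioning the talk describes) on a Gaussian target might look as follows; the function names and step size are illustrative assumptions:

```python
import math, random

def mala(log_pi, grad_log_pi, x0, eps=0.5, n=10000, seed=0):
    """Metropolis-adjusted Langevin algorithm in 1-D: a Langevin drift
    proposal corrected by a Metropolis-Hastings accept/reject step."""
    rng = random.Random(seed)
    x, samples, accepted = x0, [], 0
    for _ in range(n):
        mean_fwd = x + 0.5 * eps ** 2 * grad_log_pi(x)
        y = rng.gauss(mean_fwd, eps)
        mean_bwd = y + 0.5 * eps ** 2 * grad_log_pi(y)
        # log Gaussian proposal densities; normalising constants cancel
        log_q_fwd = -((y - mean_fwd) ** 2) / (2 * eps ** 2)
        log_q_bwd = -((x - mean_bwd) ** 2) / (2 * eps ** 2)
        log_alpha = log_pi(y) - log_pi(x) + log_q_bwd - log_q_fwd
        if math.log(rng.random()) < log_alpha:
            x, accepted = y, accepted + 1
        samples.append(x)
    return samples, accepted / n

# Toy target: standard normal, log pi(x) = -x^2/2 up to a constant
samples, rate = mala(lambda x: -x * x / 2, lambda x: -x, x0=3.0)
mean_est = sum(samples[1000:]) / len(samples[1000:])
```

The gradient term steers proposals towards high-density regions, which is what makes Langevin schemes attractive for the high-dimensional posteriors arising in spatial models.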

Ambitious mathematical models of highly complex natural phenomena are challenging to analyse, and increasingly computationally expensive to evaluate. This is a particularly acute problem for many tasks of interest: numerical methods will tend to be slow, due to the complexity of the models, and may lead to sub-optimal solutions with high levels of uncertainty, which need to be accounted for and subsequently propagated in the statistical reasoning process. This talk will introduce our contributions to an emerging area of research at the nexus of applied mathematics, statistical science and computer science, called "probabilistic numerics". The aim is to consider numerical problems from a statistical viewpoint, and as such provide numerical methods for which numerical error can be quantified and controlled in a probabilistic manner. This philosophy will be illustrated on problems ranging from predictive policing via crime modelling to computer vision, where probabilistic numerical methods provide a rich and essential quantification of the uncertainty associated with such models and their computation.
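As a toy instance of this viewpoint (not one of the talk's methods), randomizing a quadrature rule turns its output into a random variable, and the spread over repeated runs becomes a computable, probabilistic indicator of the numerical error:

```python
import math, random

def jittered_riemann(f, n, rng):
    """Riemann-type rule with one uniformly jittered node per subinterval:
    an unbiased randomized quadrature rule for integrals over [0, 1]."""
    return sum(f((i + rng.random()) / n) for i in range(n)) / n

rng = random.Random(5)
f = math.exp                  # test integrand; true integral over [0,1] is e - 1
truth = math.e - 1
# Re-running the randomized rule yields a distribution over outputs; its
# standard deviation quantifies the numerical error probabilistically
estimates = [jittered_riemann(f, 50, rng) for _ in range(200)]
m = sum(estimates) / len(estimates)
spread = (sum((e - m) ** 2 for e in estimates) / len(estimates)) ** 0.5
```

Here the actual error of the averaged estimate is comfortably bounded by the empirical spread, which is exactly the kind of statement a deterministic rule cannot make about itself.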

Identifying correlations within multiple streams of high-volume time series is a general but challenging problem. A simple exact solution has cost that is linear in the dimensionality of the data, and quadratic in the number of streams. In this work, we use dimensionality reduction techniques (sketches), along with ideas derived from coding theory and fast matrix multiplication to allow fast (subquadratic) recovery of those pairs that display high correlation.
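The coding-theory and fast matrix multiplication machinery is beyond a short sketch, but the core dimensionality-reduction step can be illustrated with Gaussian random projections (a Johnson-Lindenstrauss-style sketch); everything below is a hypothetical toy, not the speakers' construction:

```python
import math, random

def normalise(stream):
    """Mean-centre and scale to unit norm, so inner products are correlations."""
    m = sum(stream) / len(stream)
    c = [x - m for x in stream]
    s = math.sqrt(sum(x * x for x in c)) or 1.0
    return [x / s for x in c]

def sketch(stream, proj):
    """Compress a length-d stream to k numbers by random projection;
    inner products are approximately preserved (Johnson-Lindenstrauss)."""
    return [sum(p * x for p, x in zip(row, stream)) for row in proj]

rng = random.Random(1)
d, k = 2000, 256
proj = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)] for _ in range(k)]

base = [rng.gauss(0, 1) for _ in range(d)]
streams = [
    normalise(base),
    normalise([x + 0.1 * rng.gauss(0, 1) for x in base]),   # near-copy of stream 0
    normalise([rng.gauss(0, 1) for _ in range(d)]),         # independent noise
]
sketches = [sketch(s, proj) for s in streams]
# Candidate correlated pairs can now be screened in k dimensions instead of d
est_corr_01 = sum(a * b for a, b in zip(sketches[0], sketches[1]))
est_corr_02 = sum(a * b for a, b in zip(sketches[0], sketches[2]))
```

After sketching, each pairwise comparison costs $O(k)$ rather than $O(d)$; the subquadratic recovery of high-correlation pairs in the talk then avoids even enumerating all pairs.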

Joint work with Jacques Dark

I will give a light introduction to the theory of regularity structures and then discuss recent developments with regards to renormalization within the theory - in particular I will describe joint work with Martin Hairer where multiscale techniques from constructive field theory are adapted to provide a systematic method of obtaining needed stochastic estimates for the theory.

Abstract: Equations with small scales abound in physics and applied science. When the coefficients vary on microscopic scales, the local fluctuations average out under certain assumptions and we have the so-called homogenization phenomenon. In this talk, I will try to explain some probabilistic approaches we use to obtain the first order random fluctuations in stochastic homogenization. If homogenization is to be viewed as a law of large numbers type result, here we are looking for a central limit theorem. The tools we use include the Kipnis-Varadhan method, a quantitative martingale central limit theorem, and Stein's method. Based on joint work with Jean-Christophe Mourrat.
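A one-dimensional toy illustrates the two regimes (this is a simple illustration chosen here, not the setting of the talk): for a chain of iid random conductances in series, the effective conductance is the harmonic mean, it converges to a deterministic homogenized value (law of large numbers), and its fluctuations around that value are of central-limit order $n^{-1/2}$:

```python
import math, random

def effective_conductance(n, rng):
    """Effective conductance of n iid Uniform(1,2) conductances in series:
    resistances add, so A_n = n / sum(1/a_i), the harmonic mean."""
    resistance = sum(1.0 / rng.uniform(1.0, 2.0) for _ in range(n))
    return n / resistance

rng = random.Random(4)
n, trials = 400, 2000
a_hom = 1.0 / math.log(2.0)   # harmonic-mean limit: E[1/a] = log 2 for Uniform(1,2)
samples = [effective_conductance(n, rng) for _ in range(trials)]
mean = sum(samples) / trials
# Homogenization: mean -> a_hom; the variance, of order 1/n, is the
# first-order random fluctuation that a central limit theorem describes
var = sum((s - mean) ** 2 for s in samples) / trials
```

In higher dimensions no such closed formula exists, which is why the martingale and Stein-method machinery of the talk is needed.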

Wave propagation in random media can be studied by multi-scale and stochastic analysis. We first consider the direct problem and show that, in a physically relevant regime of separation of scales, wave propagation is governed by a Schrödinger-type equation driven by a Brownian field. We study the associated moment equations and clarify the propagation of coherent and incoherent waves. Second, using these new results we design original methods for sensor array imaging when the medium is randomly scattering and apply them to seismic imaging and ultrasonic testing of concrete.

A randomly trapped random walk on a graph is a simple continuous time random walk in which the holding time at a given vertex is an independent sample from a probability measure determined by the trapping landscape, a collection of probability measures indexed by the vertices.

This is a time change of the simple random walk. For the constant speed continuous time random walk, the landscape has an exponential distribution with rate 1 at each vertex. For the Bouchaud trap model it has an exponential holding time at each vertex, but with the rate of the exponential chosen from a heavy-tailed distribution. In one dimension the possible scaling limits are time changes of Brownian motion and include the fractional kinetics process and the Fontes-Isopi-Newman (FIN) singular diffusion. We extend this analysis to put these models in the setting of resistance forms, a framework that includes finitely ramified fractals. In particular we will construct a FIN diffusion as the limit of the Bouchaud trap model and the random conductance model on fractal graphs. We will establish heat kernel estimates for the FIN diffusion, extending what is known even in the one-dimensional case.
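The one-dimensional Bouchaud trap model is easy to simulate directly, which makes the definitions above concrete; the sketch below (parameter choices are illustrative) draws a heavy-tailed mean holding time at each newly visited vertex and time-changes a simple random walk accordingly:

```python
import random

def bouchaud_trap_walk(alpha=0.5, n_steps=1000, seed=2):
    """1-D Bouchaud trap model: a simple random walk whose holding time at x
    is exponential with mean tau_x, the (random) depth of the trap at x."""
    rng = random.Random(seed)
    depth = {}  # the trapping landscape: tau_x for each visited vertex

    def tau(x):
        if x not in depth:
            # Pareto(alpha) trap depth: P(tau > t) = t^(-alpha);
            # for alpha < 1 the mean holding time is infinite
            depth[x] = rng.random() ** (-1.0 / alpha)
        return depth[x]

    x, clock, path = 0, 0.0, [0]
    for _ in range(n_steps):
        clock += rng.expovariate(1.0 / tau(x))  # exponential hold, mean tau(x)
        x += rng.choice((-1, 1))                # simple random walk step
        path.append(x)
    return path, clock

path, elapsed = bouchaud_trap_walk()
```

Because the deepest trap visited dominates the clock, the elapsed time is governed by extreme values of the landscape; this is the mechanism behind the FIN singular diffusion appearing in the scaling limit.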

Gaussian fields are prevalent throughout mathematics and the sciences, for instance in physics (wave-functions of high energy electrons), astronomy (cosmic microwave background radiation) and probability theory (connections to SLE, random tilings etc). Despite this, the geometry of such fields, for instance the connectivity properties of level sets, is poorly understood. In this talk I will discuss methods of extracting geometric information about level sets of a planar Gaussian field through discrete observations of the field. In particular, I will present recent work that studies three such discretisation schemes, each tailored to extract geometric information about the level sets to a different level of precision, along with some applications.
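A one-dimensional analogue (a simplification chosen here, not one of the talk's schemes) shows how discrete observations recover level-set geometry: for a stationary Gaussian sequence with lag-one correlation $\rho$, the fraction of zero-level crossings between consecutive observations is exactly $\arccos(\rho)/\pi$, by the Gaussian orthant formula:

```python
import math, random

def sign_change_fraction(rho=0.6, n=200000, seed=3):
    """Fraction of sign changes (zero-level crossings) between consecutive
    observations of a stationary Gaussian AR(1) sequence with correlation rho."""
    rng = random.Random(seed)
    x = rng.gauss(0, 1)
    changes = 0
    for _ in range(n):
        y = rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        if (x > 0) != (y > 0):
            changes += 1
        x = y
    return changes / n

rho = 0.6
observed = sign_change_fraction(rho)
predicted = math.acos(rho) / math.pi   # Gaussian orthant formula
```

In the planar setting the analogous questions, such as the connectivity of excursion sets between grid points, are much subtler, which is what motivates the graduated discretisation schemes of the talk.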

Monte Carlo methods are one of the main tools of modern statistics and applied mathematics. They are commonly used to approximate integrals, which allows statisticians to solve many tasks of interest such as making predictions or inferring parameter values of a given model. However, the recent surge in data available to scientists has led to an increase in the complexity of mathematical models, rendering them much more computationally expensive to evaluate. This has a particular bearing on Monte Carlo methods, which will tend to be much slower due to the high computational costs.

This talk will introduce a Monte Carlo integration scheme which makes use of properties of the integrand (e.g. smoothness or periodicity) in order to obtain fast convergence rates in the number of integrand evaluations. This will allow users to obtain much more precise estimates of integrals for a given number of model evaluations. Both theoretical properties of the methodology, including convergence rates, and practical issues, such as the tuning of parameters, will be discussed. Finally, the proposed algorithm will be illustrated on a Bayesian inverse problem for a PDE model of subsurface flow.
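As a simple illustration of exploiting integrand structure (a generic control-variate sketch, not the speaker's scheme), subtracting a smooth surrogate with a known integral leaves only a small residual for Monte Carlo to estimate, sharply reducing the variance at the same number of evaluations:

```python
import math, random

def plain_mc(f, n, rng):
    """Standard Monte Carlo on [0,1]: error O(n^{-1/2}), constant set by Var(f)."""
    return sum(f(rng.random()) for _ in range(n)) / n

def cv_mc(f, g, g_int, n, rng):
    """Control-variate estimator: Monte Carlo on the small residual f - g,
    plus the exactly known integral g_int of the surrogate g."""
    total = 0.0
    for _ in range(n):
        u = rng.random()
        total += f(u) - g(u)
    return g_int + total / n

rng = random.Random(0)
f = lambda x: math.exp(x)                       # integrand; true integral e - 1
g = lambda x: 1 + x + x * x / 2 + x ** 3 / 6    # Taylor surrogate of exp
g_int = 1 + 1 / 2 + 1 / 6 + 1 / 24              # integral of g, known exactly

def spread(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

reps, n = 100, 200
plain = [plain_mc(f, n, rng) for _ in range(reps)]
cv = [cv_mc(f, g, g_int, n, rng) for _ in range(reps)]
# spread(cv) is far smaller than spread(plain) for the same budget n
```

Both estimators are unbiased; the gain comes entirely from the smoothness of the integrand, mirroring the talk's theme that structural properties buy faster effective convergence per model evaluation.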