Friday, 31 July 2020

Martin Bridson and Endre Süli elected to Academia Europaea

Oxford Mathematicians Martin Bridson and Endre Süli have been elected to Academia Europaea. The Academy seeks the advancement and propagation of excellence in scholarship across the humanities, law, the economic, social and political sciences, mathematics, medicine, and all branches of the natural and technological sciences, anywhere in the world, for the public benefit and for the advancement of public education of all ages in these subjects in Europe.

Martin is Whitehead Professor of Pure Mathematics in Oxford. His research interests lie in geometric group theory, low-dimensional topology, and spaces of non-positive curvature. He is also President of the Clay Mathematics Institute, a Fellow of Magdalen College and a former Head of the Mathematical Institute in Oxford.

Endre is Professor of Numerical Analysis and a Fellow of Worcester College. His research interests include the mathematical and numerical analysis of nonlinear partial differential equations, and finite element methods.


Wednesday, 29 July 2020

Mathematical modelling of COVID-19 exit strategies

Mathematical models have been used throughout the COVID-19 pandemic to help plan public health measures. Attention is now turning to how interventions can be removed while continuing to restrict transmission. Predicting the effects of different possible COVID-19 exit strategies is an important current challenge requiring mathematical modelling, but many uncertainties remain.

In May 2020, Oxford Mathematician Robin Thompson met with other mathematical modellers and scientists online at the 'Models for an Exit Strategy' workshop, hosted by the Isaac Newton Institute in Cambridge. Two of the other researchers are also based in Oxford (Prof. Christl Donnelly and Prof. Deirdre Hollingsworth). Many of the participants are providing evidence to governments worldwide during the pandemic. The workshop therefore gave an opportunity to summarise and discuss current open questions that, if answered, will allow the effects of different exit strategies to be predicted more accurately using mathematical models.

Three main research areas were outlined as requiring attention:

First, parameters governing virus transmission must be estimated more precisely. For example, statistical methods for estimating the time-dependent reproduction number ($R_t$) must be extended to include additional features. The value of $R_t$ represents the expected number of secondary cases generated by someone infected at time $t$, and it changes continually during any epidemic.
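One widely used approach to estimating $R_t$ (not detailed in the article) is the renewal-equation method in the spirit of Cori et al., which divides today's incidence by a serial-interval-weighted sum of past incidence. The sketch below uses made-up incidence numbers and an illustrative serial-interval distribution; it is a minimal demonstration, not the authors' method.

```python
# Sketch of a renewal-equation estimate of R_t (Cori et al.-style);
# the incidence series and serial-interval weights are illustrative
# assumptions, not data from the paper.

def estimate_Rt(incidence, w):
    """R_t = I_t / sum_s w_s * I_{t-s}, where w is a discretised
    serial-interval distribution (w[0] is the weight at lag 1)."""
    Rt = []
    for t in range(len(w), len(incidence)):
        force = sum(w[s] * incidence[t - 1 - s] for s in range(len(w)))
        Rt.append(incidence[t] / force)
    return Rt

# With flat incidence and weights summing to 1, R_t = 1 at every step.
flat = [100] * 10
w = [0.25, 0.5, 0.25]
print(estimate_Rt(flat, w))
```

A constant epidemic curve gives $R_t = 1$ exactly, which is a useful correctness check on any implementation.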

Second, heterogeneities in transmission must be understood more clearly. Models can be constructed that include different types of heterogeneity, including spatial heterogeneity (which can be represented in network or household models) and age-dependent transmission.

Third, there must be a concerted effort to identify data requirements for resolving current knowledge gaps, particularly (but not exclusively) in low-to-middle-income countries. Models can be used not only to make predictions using limited available data, but also to reveal which data must be collected in order for more accurate predictions to be made.

These key challenges for improving predictions of the effects of different COVID-19 exit strategies are outlined in this paper, which will be published in the journal Proceedings of the Royal Society B in August 2020. Addressing these challenges will require mathematicians to work with a wide range of other scientists and policy-makers as part of a global collaborative effort. This collaboration is of critical importance for shaping public health policy to counter this pandemic and those in the future.


Fig 1 (above): the transmission risk depends on the frequency of contacts between individuals and the transmission probability per infected-susceptible contact. This graph shows the average number of daily contacts between an individual in the age group on the x-axis and a contact in the age group on the y-axis, in the UK under normal circumstances (data from Prem et al. PLoS Comp Biol 13: e1005697, 2017). Figure generated by Francesca Lovell-Read (DPhil student in Oxford Mathematics' Wolfson Centre for Mathematical Biology).

Fig 2 (above): the main goal of any COVID-19 exit strategy is to relax public health measures without risking a surge in cases (like the one shown here).


Friday, 17 July 2020

Gui-Qiang G Chen elected Fellow of the European Academy of Sciences

Oxford Mathematician Gui-Qiang G Chen has been elected Fellow of the European Academy of Sciences. 

Gui-Qiang's main research areas lie in nonlinear partial differential equations (PDEs), nonlinear analysis, and their applications to mechanics, geometry, other areas of mathematics and the other sciences.

He is Statutory Professor in the Analysis of Partial Differential Equations, Professorial Fellow of Keble College, Director of the Oxford Centre for Nonlinear Partial Differential Equations (OxPDE), and Director of the EPSRC Centre for Doctoral Training in Partial Differential Equations.

Friday, 17 July 2020

Cristiana De Filippis awarded Gioacchino Iapichino prize by the Italian National Academy

Oxford Mathematician Cristiana De Filippis has been awarded this year’s Gioacchino Iapichino prize in Mathematical Analysis by the Italian National Academy, the Accademia Nazionale dei Lincei. The prize recognises outstanding contributions to the field by early-career mathematicians.

Cristiana has been a postgraduate student in the Oxford Centre for Nonlinear PDEs for the past 4 years and successfully defended her DPhil thesis in June 2020. Her research interests include the Calculus of Variations and Regularity Theory.


Friday, 3 July 2020

The Erdős primitive set conjecture

A set of integers greater than 1 is primitive if no number in the set divides another. Erdős proved in 1935 that the sum of $1/(n \log n)$ over $n$ in a primitive set $A$ is bounded by a universal constant, independent of the choice of $A$. In 1988 he conjectured that this universal bound is attained for the set of prime numbers. In this research case study, Oxford's Jared Duker Lichtman describes recent progress towards this problem:

"On a basic level, number theory is the study of whole numbers, i.e., the integers $\mathbb{Z}$. Maturing over the years, the field has expanded beyond individual numbers to study sets of integers, viewed as unified objects with special properties.

A set of integers $A\subset \mathbb{Z}_{>1}$ is primitive if no number in $A$ divides another. For example, the integers in a dyadic interval $(x,2x]$ form a primitive set. Similarly the set of primes is primitive, along with the set $\mathbb{N}_k$ of numbers with exactly $k$ prime factors (counted with multiplicity), for each $k\ge1$. Another example is the set of perfect numbers $\{6,28,496,\dots\}$ (i.e. those equal to the sum of their proper divisors), which has fascinated mathematicians since antiquity.
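The definition is easy to check by brute force for small sets; a minimal sketch (function name my own). Note that a dyadic interval $(x,2x]$ is primitive because any proper multiple of an element $d > x$ is at least $2d > 2x$:

```python
from itertools import combinations

def is_primitive(A):
    """True if no element of A divides another (elements assumed > 1)."""
    return not any(a != b and b % a == 0
                   for a, b in combinations(sorted(A), 2))

assert is_primitive(range(11, 21))      # the dyadic interval (10, 20]
assert is_primitive([2, 3, 5, 7, 11])   # primes
assert not is_primitive([2, 6, 15])     # 2 divides 6
```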

After Euler's famous proof of the infinitude of primes, we know $\sum_p 1/p$ diverges, albeit "just barely" with \begin{align*} \sum_{p\le x}\frac{1}{p} \sim \log\log x. \end{align*} On the other hand, we know $\sum_p 1/p\log p$ converges (again "just barely"). Using the notation \begin{align*} f(A) := \sum_{n\in A}\frac{1}{n\log n}, \end{align*} we have $f(\mathbb{N}_1)<\infty$. In 1935 Erdős generalized this result considerably, proving $f(A) <\infty$ uniformly for all primitive sets $A$. In 1988 he further conjectured the maximum is attained by the primes $\mathbb N_1$:

Conjecture 1. $f(A) \leq f (\mathbb{N}_1)$ for any primitive $A$.

Note we may compute $f(\mathbb N_1) = \sum_p 1/p\log p \approx 1.6366$.
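The constant $f(\mathbb{N}_1)$ can be approximated by truncating the sum over primes, though convergence is slow: the tail beyond $x$ is roughly $1/\log x$, so even summing to $10^6$ leaves an error of several percent. A brute-force sketch (sieve and names my own):

```python
from math import log

def primes_up_to(N):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (N + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(N**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p, is_p in enumerate(sieve) if is_p]

# Truncated sum of 1/(p log p); increases (slowly) towards
# f(N_1) = 1.6366... as the cutoff grows.
partial = sum(1 / (p * log(p)) for p in primes_up_to(10**6))
print(partial)  # strictly below the limit 1.6366...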

Since 1993 the best bound has been $f(A) < 1.84$, due to Erdős and Zhang. Recently, Carl Pomerance and I improved the bound to the following:

Theorem 1. $f (A) < e^\gamma \approx 1.78$ for any primitive A, where $\gamma$ is the Euler-Mascheroni constant.

Further $f(A) < f(\mathbb{N}_1)+0.000003$ if $2\in A$.

One fruitful approach towards the Erdős conjecture is to split up $A$ according to the smallest prime factor, i.e., for each prime $q$ we define $$A_q = \{ n \in A : n \text{ has smallest prime factor } q\}.$$

We say $q$ is Erdős strong if $f(A_q)\le f(q)$ for all primitive $A$. Conjecture 1 would follow if every prime is Erdős strong, since then $f(A) = \sum_q f(A_q) \le \sum_q f(q) = f(\mathbb{N}_1)$.
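The splitting by smallest prime factor is easy to carry out explicitly; a small illustration (function names my own). Because the classes $A_q$ partition $A$, the identity $f(A) = \sum_q f(A_q)$ is immediate:

```python
def smallest_prime_factor(n):
    """Smallest prime factor of n > 1, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime

def partition_by_spf(A):
    """Split A into the classes A_q of elements with smallest prime factor q."""
    parts = {}
    for n in A:
        parts.setdefault(smallest_prime_factor(n), []).append(n)
    return parts

print(partition_by_spf([6, 35, 143, 9, 25]))
```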

Unfortunately, we don't know whether $q=2$ is Erdős strong, but we showed that the first hundred million odd primes are all Erdős strong. And remarkably, assuming the Riemann hypothesis, over $99.999973\%$ of primes are Erdős strong.

Primitive from perfection

In modern notation, a number $n$ is perfect if $\sigma(n)=2n$ where $\sigma(n) = \sum_{d\mid n}d$ is the full sum-of-divisors function. Similarly $n$ is called deficient if $\sigma(n)/n<2$ (abundant if $>2$).

Since $\sigma(n)/n$ is multiplicative and $>1$, we see that perfect numbers form a primitive set, along with the subset of non-deficient numbers $n$ whose divisors $d\mid n$ are all deficient.
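The classification by $\sigma(n)/n$ is straightforward to verify for small numbers; a minimal sketch using a brute-force divisor sum (fine for illustration, though far too slow for serious computation):

```python
def sigma(n):
    """Full sum-of-divisors function sigma(n)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def classify(n):
    """Perfect if sigma(n)/n = 2, deficient if < 2, abundant if > 2."""
    r = sigma(n) / n
    return "perfect" if r == 2 else ("deficient" if r < 2 else "abundant")

assert [classify(n) for n in (6, 28, 496)] == ["perfect"] * 3
assert classify(10) == "deficient" and classify(12) == "abundant"
```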

It is a classical theorem that non-deficient numbers have a well-defined, positive asymptotic density. This was originally proven with heavy analytic machinery, but Erdős found an elementary proof by using primitive non-deficient numbers (this density is now known to be $\approx 24.76\%$). His proof led him to introduce the notion of primitive sets and study them for their own sake.

This typified Erdős's penchant for proving major theorems by elementary methods.

A related conjecture of Banks & Martin

Recall $\mathbb{N}_k$ denotes the set of numbers with $k$ prime factors. In 1993, Zhang proved $f (\mathbb{N}_k) < f (\mathbb{N}_{1})$ for each $k>1$, which inspired Banks and Martin to predict the following:

Conjecture 2. $f (\mathbb{N}_k) < f (\mathbb{N}_{k-1})$ for each $k > 1$.

They further predicted that, for a set of primes $\mathcal Q$, \begin{align*} f\big(\mathbb{N}_k(\mathcal Q)\big) < f\big(\mathbb{N}_{k-1}(\mathcal Q)\big)\qquad\textrm{for each } k>1 \end{align*} where $A(\mathcal Q)$ denotes the numbers in $A$ composed of primes in $\mathcal Q$. Banks and Martin managed to prove this conjecture in the special case of sufficiently "sparse" subsets $\mathcal Q$ of primes.

This result, along with Conjectures 1 & 2, illustrates the general view that $f(A)$ reflects the prime factorizations of $n\in A$ in a quite rigid way.

Beautiful though this vision of $f$ may be, it appears reality is more complicated. Recently I precisely computed the sums $f(\mathbb{N}_k)$ (see Figure 1 below) and obtained a surprising disproof of Conjecture 2!

Theorem 2. $ f( \mathbb{N}_k) > f(\mathbb{N}_6) $ for each $k\neq 6$.

Figure 1. Plot of $f(\mathbb{N}_k)$ for $k=1,2,..,10$.

I also proved $\lim_{k\to\infty} f(\mathbb{N}_k) = 1$, confirming a trend observed in the data. However, much about this data remains conjectural. For instance, the sequence $\{f(\mathbb{N}_k)\}_{k\ge6}$ appears to increase monotonically (to 1), and the rate of convergence appears to be exponential $O(2^{-k})$, while only $O(k^{\varepsilon-1/2})$ is known. Similar phenomena seem to occur when experimenting with subsets $A\subset \mathbb{N}_k$ of e.g. even, odd, and squarefree numbers.

I hope this note illustrates how Erdős' conjecture has spawned new lines of inquiry. For example, researchers are now studying variants of the problem in function fields $\mathbb{F}_q[x]$. Also, in forthcoming work with Chan and Pomerance, we manage to prove Conjecture 1 for 2-primitive sets $A$, i.e., sets in which no number divides the product of 2 others.

The full Erdős conjecture has remained elusive, but working towards it has led to interesting developments. In the words of Piet Hein:

Problems worthy of attack prove their worth by fighting back.

Friday, 3 July 2020

Oxford Mathematics Online Exhibition 2020

Alongside the mathematics, the Andrew Wiles Building, home to Oxford Mathematics, has always been a venue for art, whether on canvas, in sculpture or photography, or even embedded in the maths itself.

However, lockdown has proved especially challenging for the creative arts with venues shut. Many have turned to online exhibitions and we felt that not only should we do the same but by so doing we could stress the connection between art and science and how both are descriptions of our world.

So we invited our locked down mathematicians to explore their mathematical creativity in a variety of media. A panel reviewed all the submissions, taking into account both the creative aspects and the mathematical component, alongside the description communicating the link.

So here is the first Oxford Mathematics Online Exhibition.


Wednesday, 1 July 2020

Andrea Mondino awarded a Whitehead Prize by the London Mathematical Society

Oxford Mathematician Andrea Mondino has been awarded a Whitehead Prize by the London Mathematical Society (LMS) in recognition of his contributions to geometric analysis in differential and metric settings and in particular for his central part in the development of the theory of metric measure spaces with Ricci curvature lower bounds.

Andrea works at the interface between Analysis and Geometry. More precisely he studies problems arising from (differential and metric) geometry by using analytic techniques such as optimal transport, functional analysis, partial differential equations, calculus of variations, gradient flows, nonlinear analysis and geometric measure theory. Although the emphasis of his work is primarily theoretical, the topics and the techniques have profound links with applications to natural sciences (mainly physics and biology) and economics.

Friday, 26 June 2020

Ulrike Tillmann announced as President Designate of the London Mathematical Society (LMS)

Oxford Mathematician Ulrike Tillmann has been announced as President Designate of the London Mathematical Society (LMS). 

Ulrike's research interests include Riemann surfaces and the homology of their moduli spaces. Her work on the moduli spaces of Riemann surfaces and manifolds of higher dimensions has been inspired by problems in quantum physics and string theory. More recently her work has broadened into areas of data science.

Ulrike is also well-known for her many contributions to the broader mathematical community, serving on a range of scientific boards including membership of the Council of the Royal Society. She will take over from the current LMS President (and Oxford Mathematician) Jon Keating in November 2021.

Saturday, 20 June 2020

Hawking Points in the Cosmic Microwave Background - a challenge to the concept of Inflation

For thirty years Oxford Mathematician Roger Penrose has challenged one of the key planks of Cosmology: the concept of Inflation, now over 40 years old, according to which our universe expanded at an enormous rate immediately after the Big Bang. Instead, fifteen years ago, Penrose proposed the counter-concept of Conformal Cyclic Cosmology, by which Inflation is moved to before the Big Bang and which introduces the idea of preceding aeons. The concept has been disputed by most physicists, but Roger and colleagues believe that new evidence has come to light which requires closer inspection and argument. The research is published today in the Monthly Notices of the Royal Astronomical Society (MNRAS).

Recent analysis of the Cosmic Microwave Background (CMB) by Roger, Daniel An, Krzysztof Meissner and Pawel Nurowski has revealed, both in the Planck and WMAP satellite data (at 99.98% confidence), a powerful signal that had never been noticed previously, namely numerous circular spots $\sim 8$ times the diameter of the full moon. The brightest six (Figure 1) are $\sim 30$ times the average CMB temperature variations and appear at precisely the same locations in the Planck and WMAP data. These spots were overlooked previously owing to a belief that the very early exponentially expanding inflationary phase of standard cosmology should have obliterated any such features.

(Figure 1: CMB sky, marking 6 most prominent raised-temperature circular spots, found both in Planck and WMAP data; argued to be results of Hawking radiation from supermassive black holes in a previous aeon)

There are alternative universe models without inflation, but most encounter fundamental difficulties in not accounting for CMB features normally explained by inflation. However, Conformal Cyclic Cosmology (CCC) does so, by displacing 'inflation' to before the Big Bang - as the exponentially expanding remote future of an earlier cosmic aeon. This 'aeon' is a universe epoch, resembling what we currently perceive to be the entire history (without inflation) of our Universe. In CCC, there is an infinite succession of such aeons, each having a big-bang origin which is the conformal continuation of the exponentially expanding remote future of the preceding aeon (Figure 2). Conformal geometry allows for stretching or squashing of the metric structure, and is the geometry respected by a physics without mass (such as Maxwell's electromagnetism). This applies both to the remote future and big bang of each aeon, so the matching of aeon to aeon makes geometrical sense - and also physical sense, because the conformal squashing of the cold low-density remote future matches the conformal stretching of the hot dense big bang of the subsequent aeon.


(Figure 2: Cartoon of conformal cyclic cosmology: each aeon's big bang arises from the conformally compressed remote future of its preceding aeon)

The exceptions to this smooth conformal matching are the supermassive black holes in an aeon's remote future, each of which would have almost completely swallowed its surrounding galactic cluster before eventually evaporating away entirely into Hawking radiation (after perhaps $10^{106}$ years). However, by conformal squashing, all this radiated energy comes through into the succeeding aeon at a single 'Hawking point.' The emerging photons then scatter within an expanding region, but do not become free until $\sim 380000$ years later, when they finally appear in the CMB of that subsequent aeon. This spread-out region would look to us like a disc $\sim 4°$ across, i.e. $\sim 8$ times the diameter of our full moon, an effect that we appear to be actually seeing in our own CMB sky.

Roger talks about his work in this November 2018 Oxford Mathematics Public Lecture.

Wednesday, 17 June 2020

Strange exponents in the "birthday paradox" for divisors

Ben Green and collaborators discover that the well-known "birthday paradox" has its equivalent in the divisors of a typical integer.

"The well-known "birthday paradox" states that if you have 23 or more people in a room - something difficult to achieve nowadays without a very large room - then the chances are better than 50:50 that some pair of them will share a birthday. If we could have a party of 70 or more people, the chance of this happening rises to 99.9 percent.
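The quoted probabilities follow from the standard product formula: with uniform birthdays, the chance that $k$ people are all distinct is $\prod_{i<k}(1 - i/365)$. A quick check (a sketch; names my own):

```python
def birthday_collision_probability(k, days=365):
    """P(some pair among k people shares a birthday), uniform birthdays."""
    p_all_distinct = 1.0
    for i in range(k):
        p_all_distinct *= (days - i) / days
    return 1 - p_all_distinct

print(birthday_collision_probability(23))  # just over 0.5
print(birthday_collision_probability(70))  # about 0.999
```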

It turns out that there is a similar phenomenon for the divisors of a "typical" integer. Let $X$ be large, select an integer $n$ at random from the numbers $\{1,\dots, X\}$, and write down its divisors. These are distinct numbers, so no two of them will be the same, but it turns out that with high probability some pair of them will be close together. (The same caveat is necessary in the birthday problem if you look at the precise time the people in the room were born, rather than just the day.)

What do we mean by "close together"? One interpretation is that there are two divisors $d$ and $d'$ of $n$ lying within a factor of two of one another, say $d < d' < 2d$.
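This property is easy to test by brute force for small $n$; a sketch (names my own). For instance $12$ has the pair $3 < 4 < 6$, while a prime $p$ has only the divisors $1$ and $p$, which are never within a factor of two of each other for $p > 2$:

```python
def divisors(n):
    """All positive divisors of n, by trial division."""
    return [d for d in range(1, n + 1) if n % d == 0]

def has_close_divisor_pair(n):
    """True if n has divisors d < d' < 2d."""
    ds = divisors(n)
    return any(d < e < 2 * d for d in ds for e in ds)

assert has_close_divisor_pair(12)       # 3 < 4 < 6
assert not has_close_divisor_pair(5)    # divisors 1 and 5 only
```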

Whilst the analysis of the birthday paradox is quite elementary, this turns out to be a very difficult result to prove. In fact, the statement that a random integer almost surely has two distinct divisors within a factor of two of one another is a celebrated result of Maier and Tenenbaum, published in 1985. It had been an open question of Erdős for over thirty years when they solved it.

It turns out that the divisors of a random integer are much more bunched together than the birthdays of random people in a room. Recently, in joint work with Kevin Ford and Dimitris Koukoulopoulos, I investigated just how many near coincidences there must be. Given a number $n$, the Hooley $\Delta$-function $\Delta(n)$ is defined to be the maximum number of divisors of $n$, all within a factor of two of one another. The result of Maier and Tenenbaum is that $\Delta(n) \geq 2$ with high probability.
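Reading "within a factor of two" inclusively, $\Delta(n)$ can be computed by brute force for small $n$; a sketch (my own naive implementation, not the authors' code - note that Hooley's original function uses intervals of other fixed ratios, so this follows the factor-of-two description in the text):

```python
def hooley_delta(n):
    """Max number of divisors of n lying within a factor of two of one
    another (taking the factor-of-two condition inclusively)."""
    ds = sorted(d for d in range(1, n + 1) if n % d == 0)
    best = 0
    for i, d in enumerate(ds):
        # count divisors e with d <= e <= 2d
        count = sum(1 for e in ds[i:] if e <= 2 * d)
        best = max(best, count)
    return best

assert hooley_delta(12) == 3   # e.g. {2, 3, 4} or {3, 4, 6}
assert hooley_delta(2) == 2    # {1, 2}
assert hooley_delta(5) == 1    # no two divisors of an odd prime are close
```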

We obtained a new lower bound for $\Delta(n)$, valid for almost every integer $n$, and assembled a good deal of evidence (but so far no proof) that it is also the correct upper bound. This bound has one of the most complicated exponents I have ever seen in a number theory problem: $\Delta(n) \geq (\log \log n)^{\eta}$, where $\eta \approx 0.35332277270132346711$ is defined to be $\frac{\log 2}{\log(2/\rho)}$, where $\rho$ satisfies the equation \[ \frac{1}{1 - \rho/2} = \log 2 + \sum_{j = 1}^{\infty} \frac{1}{2^j} \log \Big(\frac{a_{j+1} + a_j^{\rho}}{a_{j+1} - a_j^{\rho}} \Big),\] where the sequence $a_j$ is defined by $a_1 = 2$, $a_2 = 2 + 2^{\rho}$ and $a_j = a_{j-1}^2 + a_{j-1}^{\rho} - a_{j-2}^{2\rho}$ for $j \geq 3$.
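As a numerical sanity check (my own, not from the paper), one can substitute the claimed value of $\eta$ back into the defining equation for $\rho$ and confirm the residual is negligible. The sequence $a_j$ grows doubly exponentially, so only a handful of terms of the series are needed:

```python
from math import exp, log

# Sanity check: plug the claimed eta into the defining equation for rho
# and inspect the residual. ETA is the value quoted in the text.
ETA = 0.35332277270132346711
rho = 2 * exp(-log(2) / ETA)   # inverting eta = log 2 / log(2 / rho)

# a_1 = 2, a_2 = 2 + 2^rho, a_j = a_{j-1}^2 + a_{j-1}^rho - a_{j-2}^(2 rho);
# doubly exponential growth means a few terms suffice.
a = [2.0, 2 + 2**rho]
while a[-1] < 1e100:
    a.append(a[-1]**2 + a[-1]**rho - a[-2]**(2 * rho))

# Right-hand side: log 2 + sum over j >= 1 of 2^(-j) log(...), with
# a[j] in the list playing the role of a_{j+1} in the formula.
rhs = log(2) + sum(
    2.0 ** -(j + 1) * log((a[j + 1] + a[j]**rho) / (a[j + 1] - a[j]**rho))
    for j in range(len(a) - 1)
)
residual = rhs - 1 / (1 - rho / 2)
print(abs(residual))  # negligible if eta is correct
```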

In fact, the definition of $\rho$ is so complicated that it's a nontrivial analysis exercise to confirm that a number satisfying the equations here even exists.

Our paper, which is 88 pages long, takes us to some surprising areas of maths - we begin by removing most of the number theory from the problem, turning it into a question about Poisson random variables. Then we convert that into a curious optimisation problem involving measures on the discrete cube in $\mathbb{R}^n$ and their distribution on linear subspaces. To solve this, we use a lot of properties of entropy."