The Anile-ECMI Prize is given to a young researcher for an excellent PhD thesis in industrial mathematics successfully submitted at a European university. It was established in honour of Professor Angelo Marcello Anile (1948-2007) of Catania, Italy, and consists of a monetary prize of 2500 Euros and an invitation to give a talk at the ECMI 2021 conference on Wednesday 14 April.

In her DPhil (PhD), Bernadette investigated the use of topological data analysis for biological data. She developed methods to quantify the unique features of tumour blood vessel networks. Using persistent homology on experimental data from different imaging modalities, she validated known treatment effects and showed how radiation treatment alters the vascular structure of these networks.

In her thesis, Bernadette further applied persistent homology to functional networks from neuroscience experiments. To overcome computational challenges that are a major limitation in applications of persistent homology to real-world data, she researched the use of local computations of persistent homology and her results indicate that these can be used for outlier-robust subsampling from large and noisy data sets. In addition, she demonstrated that such computations can detect points located near geometric anomalies in data sampled from intersecting manifolds. This work has recently been published in Proceedings of the National Academy of Sciences of the United States of America (PNAS).
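Persistent homology is easiest to see in its simplest, 0-dimensional incarnation: tracking the connected components of a point cloud as a scale parameter grows. The sketch below is a minimal pure-Python illustration of that special case only, not the methods used in the thesis; `h0_barcode` is a hypothetical helper name.

```python
import math
from itertools import combinations

def h0_barcode(points):
    """Finite bars of 0-dimensional persistent homology of a point cloud
    under the Vietoris-Rips filtration: every point is born at scale 0,
    and a component dies when it merges into another.  The finite death
    times are exactly the edge lengths of a minimum spanning tree."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    deaths = []
    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    for d, i, j in edges:  # Kruskal's algorithm on the distance graph
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # one component dies at scale d
    return deaths  # one component survives forever (the infinite bar)
```

For four collinear points forming two tight pairs, `h0_barcode([(0, 0), (1, 0), (5, 0), (6, 0)])` returns `[1.0, 1.0, 4.0]`: two short-lived bars and one long bar, the long bar signalling two well-separated clusters.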

Bernadette developed her research in close collaboration with Oxford Mathematicians Heather Harrington, Jared Tanner, Vidit Nanda, and Helen Byrne as well as Mason Porter (UCLA), biological collaborators from Oxford Radiation Oncology and industrial researchers from Roche.

In her current postdoc at Oxford Mathematics' Centre for Topological Data Analysis, Bernadette is looking at applying persistent homology to quantify the output from mathematical models of angiogenesis.

Oxford Mathematician Ben Green on a tale of conjectures, mistaken assumptions and eventual solutions: a tale of mathematics.

"The famous discrete mathematician Ron Graham sadly passed away last year. I did not know him well, but I had the pleasure of meeting him a few times. On the first such occasion, in Vancouver in 2004, he mentioned one of his favourite open questions over lunch. This concerns the size of certain "van der Waerden numbers", a kind of arithmetic variant of graph Ramsey numbers.

Fix a positive integer $k \geq 3$, and let $N(k)$ be the smallest value of $N$ such that the following is true: however you colour the integers $\{1,\dots,N\}$ red and blue, there is always either (i) a blue arithmetic progression of length 3 or (ii) a red arithmetic progression of length $k$ (or both). That there exists such a value of $N(k)$ is not a trivial fact, but it is a consequence of a celebrated theorem of van der Waerden from the 1920s. In fact, it is now known that $N(k)$ grows at most exponentially as a function of $k$.

What about the true value? There is cast-iron numerical data up to about $k = 20$, and conjectured values up to about $k = 40$, obtained with computer searches. The numerics very strongly suggest that $N(k)$ is roughly $k^2$. For instance, it is believed that $N(20) = 389$ and $N(30) = 903$. The question Ron Graham asked me in 2004 was this: is it true that $N(k) < Ck^2$ for some absolute constant $C$, and for all $k$? When Ron asked me this question I immediately told him that I thought the answer was no, and I thought I would send him a proof within a few days. My reasoning was as follows. There are well-known examples of (for instance) subsets of $\{1,\dots, k^{10}\}$ of size $k^{9.9}$ with no three-term arithmetic progressions, provided $k$ is sufficiently large. Take a set like this, and colour it blue. Colour the remaining points red. There are quite a lot of red points but, unless something unexpected happens, one would still not anticipate red arithmetic progressions of length much longer than about $k^{0.1}$, and certainly nowhere near as long as $k$. (One can actually make this rigorous: if the blue points were a random subset of size $k^{9.9}$, then almost surely (as $k \rightarrow \infty$) what I just wrote is true.)

Unfortunately, something unexpected does happen. For all the well-known examples of large sets free of three-term progressions, their complements contain extremely long arithmetic progressions. In particular, none of these examples provides a disproof of Ron Graham's conjecture. I apologised to Ron for not believing his conjecture, and to make amends I repeated it myself in print.

However, in recent work I have shown that the conjecture is, after all, false. Not only is $N(k)$ not bounded by a quadratic, but in fact it is not bounded by any polynomial. It grows at least as quickly as roughly $e^{(\log k)^{4/3}}$.

The red-blue colouring which shows this is rather elaborate. Very roughly, one sets up a discrete-time dynamical system on a high-dimensional torus given by an irrational rotation. On the torus one takes a large, randomly-chosen, collection of very thin ellipsoidal annuli, all with the same eccentricity, but with this eccentricity also chosen randomly. Then, we colour $n$ blue if $n$ is a return time of our dynamical system to this set of annuli. All other $n$ are coloured red.

By far the hardest part of the proof is to show that (with high probability) this colouring does not contain long red progressions. This occupies about 50 pages and uses tools from harmonic analysis, the geometry of numbers, combinatorics and random matrix theory.

Despite this new result, the gap between the known upper and lower bounds for $N(k)$ remains close to exponential and seems likely to remain so for the foreseeable future."
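As an editorial aside to the definition above (not part of Ben Green's account): $N(k)$ can be computed by brute force for tiny $k$, since one can simply check every red/blue colouring of $\{1,\dots,N\}$. The sketch below does exactly that, with hypothetical helper names `has_ap` and `N_of_k`, and recovers the classical van der Waerden value $N(3) = 9$; the search is exponential in $N$, so it is hopeless beyond very small $k$.

```python
def has_ap(points, length):
    """Does `points` contain an arithmetic progression of `length` terms?"""
    pts = set(points)
    if not pts:
        return False
    for a in pts:
        for d in range(1, max(pts)):
            if all(a + i * d in pts for i in range(length)):
                return True
    return False

def N_of_k(k, limit=12):
    """Smallest N such that EVERY red/blue colouring of {1,...,N} contains
    a blue 3-term AP or a red k-term AP.  Brute force: tiny k only."""
    for N in range(1, limit + 1):
        # Each bitmask encodes a colouring: bit i set means i+1 is blue.
        if all(
            has_ap([i + 1 for i in range(N) if mask >> i & 1], 3)         # blue
            or has_ap([i + 1 for i in range(N) if not mask >> i & 1], k)  # red
            for mask in range(1 << N)
        ):
            return N
    return None
```

For $k = 3$ the blue and red conditions coincide, so `N_of_k(3)` is the symmetric van der Waerden number $w(3,3) = 9$; a witness that $N(3) > 8$ is the colouring RBBRRBBR of $\{1,\dots,8\}$.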

It is a cliché that crises create opportunities. But they certainly demand innovation (and a lot of hard work). In Oxford Mathematics, in line with many other departments and universities, we have had to switch from in-person to online teaching in most cases. This has been true of all undergraduate lectures, which normally take place in large lecture theatres where a whiteboard, a marker pen (or two or three) and a mathematician take centre stage.

However, the online world is a much more varied place. Lectures tend to be shorter (though the courses are the same length), some lecturers write as they go using tablets while some use pre-prepared slides. Some are in shot, some are not. However, as you can see from the image above and the lecture below, some lecturers, in this case André Henriques (and also Artur Ekert in other lectures), are trying different approaches, taking advantage of the latest technologies. The lightboard is not new, but this might be its teaching moment.

Then again, the important thing is that the teaching is up to scratch. You can judge for yourself via the lecture below. You can also watch a range of student lectures on our YouTube channel as we show what we do and the increasing variety of ways we do it.

The Oxford University Society for Industrial and Applied Mathematics Student Chapter 3 Minute Thesis Competition saw 10 of our postgraduates present their latest research to a panel of judges. Topics included Langlands' Grand Unified Theory; quantum irreversibility; and using magnets and maths to deliver stem cell therapy.

You can watch the competition via the video below.

To gain an insight into mathematical student life under lockdown, we asked Oxford Mathematics and St Peter's College 2nd Year Undergraduate Matt Antrobus to provide us with one-minute updates over the course of last term.

So he did in a very personable and honest way, describing the maths he is doing, how he is doing it and how much work is involved. Matt also reflects on the stark fact that over half his time in Oxford has been under the cloud of Covid.

During the early growth of the brain, an extraordinary process takes place where axons, neurons, and nerves extend, grow, and connect to form an intricate network that will be used for all brain activities and cognitive processes. A fundamental scientific question is to understand the laws that these growing cells follow to find their correct target.

A well-known observation is that multiple axons bundle and migrate together towards other neurons to make connections, sometimes very far from the cells where they originate. During this trip, the tip of each axon can only rely upon its near environment to find its path. For example, chemical guidance (chemotaxis) is a major modality of axon navigation: when a "smell" is sensed, the axon tip pulls itself on the substrate to move and elongate the trailing axon towards or away from the signal (1).

Recently, it has been demonstrated that the mechanical environment (stiff or soft) is also critical. In particular, the ability of an axon to progress depends on the stiffness of the substrate (2), potentially allowing for durotaxis, i.e., migration along stiffness gradients (3). In a recent paper published in Phys. Rev. Lett., Oxford Mathematicians Hadrien Oliveri and Alain Goriely, in joint work with neurophysicist Kristian Franze, found a surprising connection between classical ray optics and axonal migration.

They started with a simple question: if each axon feels the stiffness of the substrate, what will be the overall group behaviour of the bundle? What path will a bundle follow if each axon produces a different force, depending on medium stiffness? Using the theory of growing filaments (4), they modelled the path of a tip-growing bundle subject to differential traction forces. They then considered an idealised system in which a bundle moves from a soft to a hard medium. In each separate domain the bundle follows a straight trajectory. However, when the axons approach the interface (say with incident angle $\theta_1$), part of the bundle grows on the hard domain while some axons are still on the soft domain. This results in a torque that forces the bundle to turn until all axons have passed the interface, at which point the bundle stops turning and follows a new straight trajectory ($\theta_2$, Fig 1).

Figure 1: The Snell law of axon durotaxis.

The theory leads to a surprising relationship: $$ n_1 \sin\theta_1 \simeq n_2 \sin \theta_2 $$ where $n_i$ are related to the stiffness of each medium. The same mathematical relationship appears in a completely different setting. In optics, if $n_1$ and $n_2$ are the refractive indices and $\theta_1,\theta_2$ the angles that a light ray makes with the normal to the interface, this law is known as Snell's law (or Descartes's law in French). It governs the deflection of light rays at the interface between two refractive media (for example air and water) and is a consequence of Maxwell's equations for electromagnetic radiation. This law explains, for instance, why a wooden stick appears broken when partially submerged in water.
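The relationship above is easy to evaluate numerically. In the sketch below, `refracted_angle` is a hypothetical helper (not from the paper) that solves Snell's law for $\theta_2$, with angles measured from the normal, and reports when no real solution exists - the analogue of total internal reflection.

```python
import math

def refracted_angle(n1, n2, theta1):
    """Solve n1*sin(theta1) = n2*sin(theta2) for theta2 (in radians,
    measured from the normal to the interface).  Returns None when
    there is no real solution: the total-internal-reflection regime."""
    s = n1 * math.sin(theta1) / n2
    if abs(s) > 1.0:
        return None
    return math.asin(s)

# Entering a "denser" medium (n2 > n1) bends the path towards the normal:
theta2 = refracted_angle(1.0, 1.5, math.radians(30))  # about 19.5 degrees
```

Going the other way (from dense to rare) at a steep enough incidence, `refracted_angle(1.5, 1.0, math.radians(60))` returns `None`: the ray, or in the analogy the axon bundle, cannot cross the interface.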

Using this theory, the authors showed how durotaxis can explain the guidance of Xenopus retinal ganglion cell axons, which originate in the retina (3, Fig 2).

This analogy between the path of a light ray and the path of axons in the developing brain is potentially powerful. Indeed, we know from the theory of optics that ingenious devices such as lenses, mirrors, optical guides, collimators, binoculars, periscopes, telescopes and microscopes can be built to control the path of light rays and collect information. Similarly, the authors show how one could design substrates with different stiffnesses to induce lensing effects, or create the equivalent of an optical fibre with a soft corridor to guide axons during development or regeneration. Their new work provides a foundation for a general theory of axon guidance and control.

Figure 2: the role of tissue stiffness in Xenopus lævis visual system development. Left: cartoon showing the path of the axons from the retina, through the optic nerve, to the optic tectum. Right: numerical simulation of retinal ganglion cell axons undergoing a sharp caudal turn on their arrival in the mid-diencephalon ($N=5$ representative trajectories). The authors show how durotaxis has the potential to contribute to this turn.

[3] D. E. Koser, A. J. Thompson, S. K. Foster, A. Dwivedy, E. K. Pillai, G. K. Sheridan, H. Svoboda, M. Viana, L. da Fontoura Costa, J. Guck, C. E. Holt, and K. Franze, "Mechanosensing is critical for axon growth in the developing brain," Nature Neuroscience, vol. 19, no. 12, p. 1592, 2016.

Round up, the Oxford Mathematics Annual Newsletter, is a calculated attempt to describe our lives, mathematical and non-mathematical, over the past 12 months. From a summary of some of our research into the Coronavirus to a moving tribute to Peter Neumann by Martin Bridson, via articles on diversity, fantasy football and of course our Nobel Prize winner (pictured), it throws a little light, we hope, on what we did during the year that was 2020.

The Newsletter goes out to over 12,000 Oxford Mathematics alumni around the world and to anyone else who may be interested of course. Arguably the most expressive part of the Newsletter is a wordless photo montage. Why not have a look?

In the 1680s there was a coffee house in London by the name of "Lloyd's". This place, catering to early maritime insurers, lent its name to the nascent English insurance market place, the now famous Lloyd's of London.

Less than a decade later, the English insurance market was thrown into crisis. A fleet of French privateers attacked an Anglo-Dutch merchant fleet at the Battle of Lagos in 1693, causing estimated losses of around 1 million British pounds, more than one percent of English GDP at the time. 33 insurers went bankrupt, a significant part of the industry.

The nascent sector had failed to diversify its risks sufficiently. The size of losses from risk events like this one is hard to predict; it follows a heavy-tailed distribution. This is still the case today, and we still have unexpected risk events - take the Covid-19 pandemic. Of course, the industry is more mature today and the sector more diverse. But there are still bottlenecks, where everyone may bet on the same card and where the chance to diversify is limited. There are, for instance, only three important providers of professional risk models - RMS, EQECAT, and AIR - and in many cases only one risk model is used. Risk models are important for catastrophe insurance, i.e. insurance against catastrophic events - hurricanes, flooding, earthquakes, etc. - where the distribution and expectation of damages are dominated by large but rare events (heavy-tailed damage distributions). Every risk model is inaccurate, so everyone is wrong occasionally - but it is really bad if everyone is wrong at the same time, giving rise to a kind of systemic risk unique to insurance.

In our paper, published in the Journal of Economic Interaction and Coordination, we, Oxford Mathematicians Torsten Heinrich and Doyne Farmer and Oxford Researcher and former Oxford Mathematics Postdoc Juan Sabuco, assess this type of systemic risk from model homogeneity. We propose an agent-based model for this purpose. Agent-based models (ABM) represent heterogeneous agents and their interactions directly and can be simulated to study the behavior of such systems while retaining much of the original systems' complexity. This makes them well-suited to study catastrophe insurance with its heavy-tailed damage distributions. The figure shows the structure of the ABM, its agents, and illustrates the type of catastrophic risks insured by the sector the model represents.

We use our ABM to run 400 simulations each in four different scenarios: firms using one risk model (the lowest level of risk-model diversity), then two, three, and four risk models respectively. True to real life, our four risk models are designed to be imperfect - they inevitably fail to accurately predict catastrophes - but each is imperfect in a different way: they underestimate and overestimate different types of perils and thus mispredict the risks faced by different geographical locations.
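The core mechanism - correlated failures when every firm prices with the same imperfect model - can be illustrated with a toy simulation. The sketch below is a deliberately crude stand-in for the paper's ABM, with invented parameters (premium rate, a Pareto damage distribution, four peril regions, the `simulate` helper itself); it is not the published model.

```python
import random

def simulate(n_models, n_firms=20, n_steps=200, seed=0):
    """Toy illustration of systemic risk from risk-model homogeneity.
    Firm i prices with model i % n_models; each model badly underestimates
    one of four peril regions, so firms sharing a model are over-exposed
    to the same catastrophes.  Returns the number of surviving firms."""
    rng = random.Random(seed)
    capital = [10.0] * n_firms
    for _ in range(n_steps):
        region = rng.randrange(4)         # which peril strikes this step
        loss = rng.paretovariate(1.5)     # heavy-tailed damage size
        for i in range(n_firms):
            if capital[i] <= 0:           # already bankrupt
                continue
            # a firm whose model misprices this region carries extra exposure
            exposure = 2.0 if (i % n_models) % 4 == region else 0.5
            capital[i] += 0.3                     # premium income
            capital[i] -= 0.1 * loss * exposure   # share of the damage
    return sum(c > 0 for c in capital)
```

With `n_models=1` every firm has identical exposure at every step, so the market survives or collapses as one block - the all-or-nothing signature of systemic risk - whereas with several models a catastrophe in one region hits only the firms whose model misprices it.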

Our results confirmed worries that the industry currently uses dangerously few risk models. Moreover, we were able to quantify the impact: compared to risk-model homogeneity (one risk model), settings with four risk models, for instance, allow around 20% more insurance firms to survive in the industry on average, halve the number of non-insured risks, and increase available capital by 50%. Our ABM can also be used to investigate other phenomena in catastrophe insurance.

Recent events in connection with the Covid-19 pandemic highlight why systemic risk in catastrophe insurance is an important issue: the pandemic's impacts on catastrophe insurance are varied, ranging from business interruption and delays in logistics (shipping etc.) to claims resulting from Covid-19 deaths or from hospitals' professional liability, to bankruptcies in the tourism and hospitality sector. However, we have not experienced pandemics of this scale in modern times. Not only could the resulting risks for catastrophe insurance not have been foreseen; there were also no previous data with which these risks could have been estimated. Inaccurate risk predictions are unavoidable, and losses are unavoidable, but diversity in modeling may prevent a bankruptcy cascade and the collapse of the insurance system in such cases.

Oxford Mathematician Clemens Koppensteiner talks about his work on the geometry and topology of compactifications.

"It is not much of an exaggeration to say that in geometry all of the most beautiful and powerful statements we make are about compact spaces, i.e., spaces where all "points at infinity" are included. For example, one of Euclid's postulates (dealing with non-compact space) states that any two non-identical lines meet in exactly one point - except when they are parallel. On the other hand, in projective geometry (dealing with compact space) one has the much more satisfactory statement that any two non-identical lines meet in exactly one point - no exceptions needed.
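The "no exceptions" statement has a pleasantly concrete computational form: in homogeneous coordinates $(x : y : z)$, the line through two points and the meeting point of two lines are both given by a cross product, and two parallel affine lines meet at a point at infinity (last coordinate zero). A minimal sketch, added here as an editorial illustration:

```python
def cross(u, v):
    """Cross product of 3-vectors.  In homogeneous coordinates this computes
    both the line through two points and the meeting point of two lines."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# The parallel affine lines x - y = 0 and x - y + 1 = 0, written as
# coefficient triples (a, b, c) of ax + by + cz = 0:
p = cross((1, -1, 0), (1, -1, 1))
# p = (-1, -1, 0): last coordinate zero, i.e. a point at infinity in the
# common direction (1, 1) of the two lines.
```

For two non-parallel lines, say the axes $x = 0$ and $y = 0$, the same formula returns the ordinary intersection point $(0 : 0 : 1)$, the origin.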

Image above: the complex plane can be compactified by wrapping it around a sphere and adding a point at infinity at the north pole. The unit circle gets identified with the equator, while straight lines through the origin become meridians going through the new point $\infty$. The result is called the Riemann sphere.

For this reason, when making geometric arguments we often have to replace a non-compact space (say, the Euclidean plane) by a compact space (say, the projective plane) in a process called compactification. We then apply our powerful theorems to the compact space, and in the end restrict back to the original non-compact space.

When we are moreover interested in objects "living" on our spaces (e.g., functions, vector bundles or sheaves), we need to understand how these objects interact with the chosen compactification. In particular, we need a way to extend these objects to the new "points at infinity" in a controlled way.

In my research, I am particularly interested in objects with a notion of differentiation. Basic examples would be sets of functions, but usually I am working with vector bundles with a connection and their generalizations to D-modules. However, even for functions there are many "nonstandard" ways to differentiate them. For example, if $f(z)$ is a function on the complement of the origin and $\lambda$ is any fixed number, we could set
\[
\frac{\mathrm{d}}{\mathrm{d}z} f(z) := f'(z) + \frac{\lambda}{z}f(z).
\]
One quickly checks that this definition still satisfies a version of the product rule and hence may really be called a type of "differentiation". If we now want to extend this differentiation rule to the origin, we run into the problem that $\frac{1}{z}$ does not make sense for $z=0$. One thus has to extend such a differentiation rule to so-called logarithmic connections, that is functions (or more generally vector bundles) with an action of the logarithmic differential operator $z\frac{\mathrm{d}}{\mathrm{d}z}$:
\[
z\frac{\mathrm{d}}{\mathrm{d}z} f(z) := zf'(z) + \lambda f(z).
\]
The term "logarithmic" comes from the fact that $z\frac{\mathrm{d}}{\mathrm{d}z}$ is dual to the logarithmic differential $\mathrm{d}\log z = \frac{\mathrm{d}z}{z}$.
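To spell out the "product rule" check mentioned above (an editorial aside): writing $\nabla f := f' + \frac{\lambda}{z} f$, for any holomorphic function $g$ one computes
\[
\nabla(g f) = (g f)' + \frac{\lambda}{z} g f = g' f + g f' + \frac{\lambda}{z} g f = g' f + g \, \nabla f,
\]
which is precisely the Leibniz rule for a connection: the twist by $\frac{\lambda}{z}$ is applied only once, not to each factor.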

Objects with a rule of differentiation arise in many contexts, so as mathematicians we want to classify them. Restricting to complex manifolds (or complex algebraic varieties), this is done by the famous Riemann-Hilbert correspondence: integrable connections (resp. regular holonomic D-modules) are the same as local systems (resp. perverse sheaves) on the manifold. Here one might think of "integrable connections" as collections of twisted higher-dimensional functions with differentiation and "regular holonomic D-modules" as everything that can be obtained from integrable connections under certain natural operations. Local systems and perverse sheaves are objects describing the topology of the manifold. The Riemann-Hilbert correspondence is quite miraculous: it builds a bridge between the (very rigid) geometry of complex manifolds and the (very flexible) underlying topology of the manifold.

Image above: The Kato-Nakayama space of the plane replaces the origin by a circle. Each point of the added circle corresponds to a direction starting at the origin. One calls $\mathrm{KN}(\mathbb{C})$ the oriented real blow-up of the plane at the origin. In higher dimensions the topology becomes more complicated.

The corresponding theorem for logarithmic connections is more intricate: the geometry of the compactification turns out to be related to the topology of an auxiliary space, called the Kato-Nakayama space after its inventors. A full classification for logarithmic connections was achieved by Ogus. However, the classification of "regular holonomic logarithmic D-modules" is still open - in fact until recently it was not even known what such a thing should really be (despite logarithmic D-modules already being used in the 80s). A solution to this classification problem will also finally shed light on the old question of how to correctly understand the topology of compactifications.

In a series of papers we are building a theory of logarithmic D-modules ready for applications, working towards the following logarithmic Riemann-Hilbert conjecture.

Conjecture: The logarithmic de Rham functor $\widetilde{DR}_X$ gives an equivalence between the derived category of regular holonomic D-modules on a smooth log variety $X$ and a derived category of "constructible sheaves" on the Kato-Nakayama space of $X$, such that the following properties hold:

- The equivalence extends Ogus's Riemann-Hilbert correspondence for integrable log connections and reduces to the classical Riemann-Hilbert correspondence when the log structure is trivial.

- $\widetilde{DR}_X$ sends the standard t-structure to a perverse t-structure.

- There is a theory of singular support for constructible sheaves on the Kato-Nakayama space and it matches the theory of characteristic varieties for coherent log D-modules via the de Rham functor.

- Holonomic log D-modules have a natural filtration, which agrees with the Kashiwara-Malgrange V-filtration whenever the latter is defined. The de Rham functor matches this filtration with the intrinsic grading of constructible sheaves on the Kato-Nakayama space."

Our 'Fantastic Voyage' through Oxford Mathematics Student Lectures brings us to four 3rd Year lectures by Dominic Joyce on Topological Surfaces. These lectures are shown pretty much as they are seen by the students (they use a different platform with a few more features but the lectures are the same) as we all get to grips with the online world. Lectures on Linear Algebra, Integral transforms, Networks, Set Theory, Maths History and much more will be shown over the next few weeks.

Below is the fourth lecture of the course, but you can watch all four of Dominic's lectures via the Playlist as well as over 30 other student lectures on the YouTube Channel.

Incidentally, 'Fantastic Voyage' is a classic bit of 60s sci-fi about a submarine crew who are shrunk to microscopic size and venture into the body of an injured scientist to repair damage to his brain. It's a tenuous link but we like it so, if you are interested, check it out.