News

Thursday, 18 March 2021

Round up: the Oxford Mathematics Annual Newsletter

Round up, the Oxford Mathematics Annual Newsletter, is a calculated attempt to describe our lives, mathematical and non-mathematical, over the past 12 months. From a summary of some of our research into the Coronavirus to a moving tribute to Peter Neumann by Martin Bridson, via articles on diversity, fantasy football and of course our Nobel Prize winner (pictured), it throws a little light, we hope, on what we did during the year that was 2020.

The Newsletter goes out to over 12,000 Oxford Mathematics alumni around the world and to anyone else who may be interested, of course. Arguably the most expressive part of the Newsletter is a wordless photo montage. Why not have a look?

Wednesday, 17 March 2021

An agent-based simulation of the insurance industry: the problem of risk model homogeneity

Figure illustrating the structure of the model and the risk event scenario

In the 1680s there was a coffee house in London by the name of "Lloyd's". This place, catering to early maritime insurers, lent its name to the nascent English insurance market place, the now famous Lloyd's of London.

Less than a decade later, the English insurance market was thrown into crisis. A fleet of French privateers attacked an Anglo-Dutch merchant fleet in the Battle of Lagos in 1693, causing estimated losses of around 1 million British pounds, more than one percent of English GDP at the time. Thirty-three insurers went bankrupt, a significant part of the industry.

The young sector had failed to diversify its risks sufficiently. The probability of risk events like this one is hard to predict; it follows a heavy-tailed distribution. This is still the case today, and we still face unexpected risk events - take the Covid-19 pandemic. Of course, the industry is more mature today, the sector more diverse. But there are still bottlenecks, where everyone may bet on the same card, and where the chance to diversify is limited. There are, for instance, only three important providers of professional risk models - RMS, EQECAT, and AIR - and in many cases only one risk model is used. Risk models are important for catastrophe insurance, i.e. insurance against catastrophic events such as hurricanes, flooding and earthquakes, where the distribution and expectation of damages are dominated by large but rare events (heavy-tailed damage distributions). Every risk model is inaccurate, so everyone is wrong occasionally - but it is really bad if everyone is wrong at the same time, giving rise to a kind of systemic risk unique to insurance.
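The dominance of rare events in a heavy-tailed world is easy to see numerically. Here is a toy illustration (ours, not from the paper), sampling Pareto-distributed losses by inverse-transform sampling:

```python
import random

# Toy illustration (not from the paper): catastrophe damages are often
# modelled with heavy-tailed laws such as the Pareto distribution, whose
# total is dominated by a handful of rare, huge events.

def pareto_sample(alpha, xm, n, rng):
    """Draw n Pareto(alpha, xm) losses by inverse-transform sampling."""
    return [xm * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

rng = random.Random(42)
# alpha close to 1 means a very heavy tail (infinite variance)
losses = pareto_sample(alpha=1.1, xm=1.0, n=10_000, rng=rng)
share_of_largest = max(losses) / sum(losses)
print(f"largest single loss: {share_of_largest:.1%} of all damage across 10,000 events")
```

With a light-tailed distribution the largest of 10,000 losses would be a negligible sliver of the total; here a single event can account for a substantial share of all damage, which is exactly the regime catastrophe insurers live in.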

In our paper, published in the Journal of Economic Interaction and Coordination, we, Oxford Mathematicians Torsten Heinrich and Doyne Farmer and Oxford Researcher and former Oxford Mathematics Postdoc Juan Sabuco, assess this type of systemic risk from model homogeneity. We propose an agent-based model for this purpose. Agent-based models (ABMs) represent heterogeneous agents and their interactions directly and can be simulated to study the behavior of such systems while retaining much of the original systems' complexity. This makes them well-suited to study catastrophe insurance with its heavy-tailed damage distributions. The figure shows the structure of the ABM, its agents, and illustrates the type of catastrophic risks insured by the sector the model represents.

We use our ABM to run 400 simulations in each of four scenarios: firms using one risk model (the lowest level of risk model diversity), then two, three, and four risk models respectively. True to real life, our four risk models are intentionally imperfect, but imperfect in different ways: each underestimates or overestimates different types of peril, and thus fails to accurately predict the risks faced by different geographical locations.
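To give a flavour of the mechanism, here is a drastically simplified sketch (a toy of ours, NOT the authors' ABM): suppose every insurer prices with one of k imperfect risk models, and each model is blind to one peril, so a catastrophe in that peril bankrupts every firm using the model. Each individual firm's failure probability is the same regardless of k, but with a single shared model all failures are perfectly correlated:

```python
import random

# A drastically simplified sketch of systemic risk from model homogeneity
# (a toy of ours, NOT the authors' ABM): each risk model is blind to one
# peril; a catastrophe in that peril wipes out every firm using the model.

def market_collapses(n_models, n_perils=4, years=50, p_cat=0.02, rng=None):
    """Return True if every model group is wiped out within `years`."""
    blind_peril = [m % n_perils for m in range(n_models)]  # the peril model m misses
    group_alive = [True] * n_models
    for _ in range(years):
        for peril in range(n_perils):
            if rng.random() < p_cat:  # a catastrophe strikes this peril
                for m in range(n_models):
                    if blind_peril[m] == peril:
                        group_alive[m] = False
    return not any(group_alive)

def collapse_rate(n_models, trials=2000, seed=1):
    rng = random.Random(seed)
    return sum(market_collapses(n_models, rng=rng) for _ in range(trials)) / trials

for k in (1, 2, 4):
    print(f"{k} risk model(s): total market collapse in {collapse_rate(k):.0%} of runs")
```

With one model the market collapses whenever its single blind spot is hit; with four, all four blind spots must be hit, so complete collapse becomes far rarer even though the expected number of individual failures is unchanged. The authors' ABM captures much richer dynamics (premiums, capital, reinsurance), but this is the correlation at the heart of the systemic risk.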

Our results confirmed worries that the industry currently uses dangerously few risk models. Moreover, we were able to quantify the impact of this: compared to risk model homogeneity (one risk model), settings with four risk models, for instance, allow around 20% more insurance firms to survive in the industry (on average), the number of non-insured risks to be halved, and available capital to be increased by 50%. Our ABM can also be used to investigate other phenomena in catastrophe insurance.

Recent events in connection with the Covid-19 pandemic highlight why systemic risk in catastrophe insurance is an important issue: its impacts on catastrophe insurance are varied, ranging from business interruption and delays in logistics (shipping etc.) to claims resulting from Covid-19 deaths or from hospitals' professional liability to bankruptcies in the tourism and hospitality sector. However, we have not experienced pandemics of this scale in modern times. Not only could the resulting risks for catastrophe insurance not have been foreseen, there were no previous data with which these risks could have been estimated. Inaccurate risk predictions are unavoidable, losses are unavoidable, but diversity in modeling may prevent a bankruptcy cascade and the collapse of the insurance system in such cases.

Monday, 15 March 2021

Logarithmic Riemann-Hilbert Correspondences

Oxford Mathematician Clemens Koppensteiner talks about his work on the geometry and topology of compactifications.

"It is not much of an exaggeration to say that in geometry all of the most beautiful and powerful statements we make are about compact spaces, i.e., spaces where all "points at infinity" are included. For example, one of Euclid's postulates (dealing with non-compact space) states that any two non-identical lines meet in exactly one point - except when they are parallel. On the other hand, in projective geometry (dealing with compact space) one has the much more satisfactory statement that any two non-identical lines meet in exactly one point - no exceptions needed.

 

Image above: the complex plane can be compactified by wrapping it around a sphere and adding a point at infinity at the north pole. The unit circle gets identified with the equator, while straight lines through the origin become meridians going through the new point $\infty$. The result is called the Riemann sphere.

For this reason, when making geometric arguments we often have to replace a non-compact space (say, the Euclidean plane) by a compact space (say, the projective plane) in a process called compactification. We then apply our powerful theorems to the compact space, and in the end restrict back to the original non-compact space.

When we are moreover interested in objects "living" on our spaces (e.g., functions, vector bundles or sheaves), we need to understand how these objects interact with the chosen compactification. In particular, we need a way to extend these objects to the new "points at infinity" in a controlled way.

In my research, I am particularly interested in objects with a notion of differentiation. Basic examples would be sets of functions, but usually I am working with vector bundles with a connection and their generalizations to D-modules. However, even for functions there are many "nonstandard" ways to differentiate them. For example, if $f(z)$ is a function on the complement of the origin and $\lambda$ is any fixed number, we could set
\[
\frac{\mathrm{d}}{\mathrm{d}z} f(z) := f'(z) + \frac{\lambda}{z}f(z).
\]
One quickly checks that this definition still satisfies a version of the product rule and hence may really be called a type of "differentiation". If we now want to extend this differentiation rule to the origin, we run into the problem that $\frac{1}{z}$ does not make sense for $z=0$. One thus has to extend such a differentiation rule to so-called logarithmic connections, that is, functions (or more generally vector bundles) with an action of the logarithmic differential operator $z\frac{\mathrm{d}}{\mathrm{d}z}$:
\[
z\frac{\mathrm{d}}{\mathrm{d}z} f(z) := zf'(z) + \lambda f(z).
\]
The term "logarithmic" comes from the fact that $z\frac{\mathrm{d}}{\mathrm{d}z}$ is dual to the logarithmic differential $\mathrm{d}\log z = \frac{\mathrm{d}z}{z}$.
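Spelling out the product-rule check: for a holomorphic function $g$,
\[
\frac{\mathrm{d}}{\mathrm{d}z}\big(g(z)f(z)\big) = (gf)'(z) + \frac{\lambda}{z}g(z)f(z) = g'(z)f(z) + g(z)\Big(f'(z) + \frac{\lambda}{z}f(z)\Big),
\]
which is exactly the Leibniz rule of a connection. Multiplying through by $z$,
\[
z\frac{\mathrm{d}}{\mathrm{d}z}\big(g(z)f(z)\big) = zg'(z)\,f(z) + g(z)\,z\frac{\mathrm{d}}{\mathrm{d}z}f(z),
\]
and since $zg'(z)$ is holomorphic at $z=0$, the logarithmic operator extends to the origin even though the twisted derivative itself does not.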

Objects with a rule of differentiation arise in many contexts, so as mathematicians we want to classify them. Restricting to complex manifolds (or complex algebraic varieties), this is done by the famous Riemann-Hilbert correspondence: integrable connections (resp. regular holonomic D-modules) are the same as local systems (resp. perverse sheaves) on the manifold. Here one might think of "integrable connections" as collections of twisted higher-dimensional functions with differentiation and "regular holonomic D-modules" as everything that can be obtained from integrable connections under certain natural operations. Local systems and perverse sheaves are objects describing the topology of the manifold. The Riemann-Hilbert correspondence is quite miraculous: it builds a bridge between the (very rigid) geometry of complex manifolds and the (very flexible) underlying topology of the manifold.


Image above: The Kato-Nakayama space of the plane replaces the origin by a circle. Each point of the added circle corresponds to a direction starting at the origin. One calls $\mathrm{KN}(\mathbb{C})$ the oriented real blow-up of the plane at the origin. In higher dimensions the topology becomes more complicated.

The corresponding theorem for logarithmic connections is more intricate: the geometry of the compactification turns out to be related to the topology of an auxiliary space, called the Kato-Nakayama space after its inventors. A full classification for logarithmic connections was achieved by Ogus. However, the classification of "regular holonomic logarithmic D-modules" is still open - in fact until recently it was not even known what such a thing should really be (despite logarithmic D-modules already being used in the 80s). A solution to this classification problem will also finally shed light on the old question of how to correctly understand the topology of compactifications.

In a series of papers we are building a theory of logarithmic D-modules ready for applications and working towards the following logarithmic Riemann-Hilbert conjecture.

Conjecture. The logarithmic de Rham functor $\widetilde{DR}_X$ gives an equivalence between the derived category of regular holonomic D-modules on a smooth log variety $X$ and a derived category of "constructible sheaves" on the Kato-Nakayama space of $X$, such that the following properties hold:

- The equivalence extends Ogus's Riemann-Hilbert correspondence for integrable log connections and reduces to the classical Riemann-Hilbert correspondence when the log structure is trivial.

- $\widetilde{DR}_X$ sends the standard t-structure to a perverse t-structure.

- There is a theory of singular support for constructible sheaves on the Kato-Nakayama space and it matches the theory of characteristic varieties for coherent log D-modules via the de Rham functor.

- Holonomic log D-modules have a natural filtration, which agrees with the Kashiwara-Malgrange V-filtration whenever the latter is defined. The de Rham functor matches this filtration with the intrinsic grading of constructible sheaves on the Kato-Nakayama space."

References:
Kato, K., Nakayama, C., Log Betti cohomology, log étale cohomology, and log de Rham cohomology of log schemes over $\mathbb{C}$. Kodai Math. J. 22 (1999), no. 2, 161-186.
Koppensteiner, C., Talpo, M., Holonomic and perverse logarithmic D-modules. Adv. Math. 346 (2019), 510-545. 
Koppensteiner, C., The de Rham functor for logarithmic D-modules. Selecta Math. (N.S.) 26 (2020), no. 3, Paper No. 49. 
Ogus, A., On the logarithmic Riemann-Hilbert correspondence. Doc. Math. 2003, Extra Vol. Kazuya Kato's fiftieth birthday, 655-724.

Sunday, 14 March 2021

Geometry of Surfaces - 4 more Oxford Mathematics Student Lectures

Our 'Fantastic Voyage' through Oxford Mathematics Student Lectures brings us to four 3rd Year lectures by Dominic Joyce on Topological Surfaces. These lectures are shown pretty much as they are seen by the students (they use a different platform with a few more features but the lectures are the same) as we all get to grips with the online world. Lectures on Linear Algebra, Integral Transforms, Networks, Set Theory, Maths History and much more will be shown over the next few weeks.

Below is the fourth lecture of the course, but you can watch all four of Dominic's lectures via the Playlist as well as over 30 other student lectures on the YouTube Channel.

Incidentally 'Fantastic Voyage' is a classic bit of 60s sci-fi about a submarine crew who are shrunk to microscopic size and venture into the body of an injured scientist to repair damage to his brain. It's a tenuous link but we like it, so if you are interested, check it out.

 

Friday, 12 March 2021

Oxford Mathematics Public Lecture. From one extreme to another: the statistics of extreme events - Jon Keating

Oxford Mathematics Public Lecture
Tuesday 16 March 2021
5.00-6.00pm

Jon Keating will discuss the statistics of rare, extreme events in various contexts, including: evaluating performance at the Olympics; explaining how glasses freeze; illustrating why computers are more effective than expected at learning; and understanding the Riemann zeta-function, the mathematical object that encodes the mysterious distribution of the prime numbers. 

Jon Keating is Sedleian Professor of Natural Philosophy in the University of Oxford and a Fellow of The Queen's College.

Watch live (no need to register and it will stay up afterwards):
Oxford Mathematics Twitter
Oxford Mathematics Facebook
Oxford Mathematics Livestream
Oxford Mathematics YouTube

The Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Friday, 12 March 2021

Bacterial quorum sensing in fluid flow

By pooling resources between cells, colonies of bacteria can exhibit behaviours far beyond the capabilities of an individual bacterium. For example, bacterial populations can encase themselves in a self-generated polymer matrix that shelters cells in the core of the population from the external environment. Such communities are termed “bacterial biofilms”, and show increased tolerance to antimicrobial treatments such as antibiotics. In order to develop medical therapies that circumvent this communal bacterial resistance, it is vital to understand how bacterial cells communicate with one another and work together to display such collective behaviours.

Bacteria use intercellular signalling, or quorum sensing (QS), to share information and respond collectively to aspects of their surroundings. Each bacterium produces QS signalling molecules (autoinducers) that diffuse between cells, providing information about the cellular density of the bacterial colony. However, since autoinducers are exposed to the external environment, they are susceptible to removal by external fluid flow, which is a ubiquitous feature of bacterial habitats ranging from the lungs, gut, and nasal passage to rivers, lakes, and oceans. A key open question is: how are bacteria able to communicate effectively in dynamic flow environments?

Answering this question has been the focus of a collaboration between Oxford Mathematician Mohit Dalwadi and Philip Pearce (Harvard Medical School). In a paper recently published in the Proceedings of the National Academy of Sciences, they developed a mathematical model to understand how bacteria use features of their genetic network to maximise the information gained from QS signalling in the presence of fluid flow (Figure 1).

The mathematical model that the duo developed allowed them to explore and quantify how emergent colony behaviours depend on fluid flow. They found that an external flow can suppress QS completely, but that positive genetic feedback between the local autoinducer concentration and its production by each bacterium causes the entire bacterial population to become flooded with signalling molecules if the cell density is above a critical value (Figure 2). By identifying the presence of a bifurcation in the underlying mathematical system, they were able to quantify the critical density in terms of the physical and genetic system parameters, such as the speed of the external flow, the height of the cell layer, and the strength of genetic feedback. Thus, their mathematical analysis yielded a simple and transparent mathematical relationship that linked these various parameters at the critical cell density.
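The flavour of such a bifurcation can be seen in a toy model (our illustration, NOT the model from the paper): autoinducer concentration a is produced by cells at a rate proportional to density, amplified by Hill-type positive feedback, and removed by flow.

```python
# A toy sketch of QS activation as a bifurcation (our illustration, NOT the
# model from the paper): autoinducer concentration a is produced by cells at
# a rate proportional to density rho, amplified by Hill-type positive
# feedback, and removed by flow at rate lam.

def steady_autoinducer(rho, lam=1.0, base=0.1, K=1.0, n=4, dt=0.01, steps=20_000):
    """Euler-integrate da/dt = rho*(base + a^n/(K^n + a^n)) - lam*a from a=0."""
    a = 0.0
    for _ in range(steps):
        production = rho * (base + a**n / (K**n + a**n))
        a += dt * (production - lam * a)
    return a

low = steady_autoinducer(rho=0.5)   # below the critical density: QS stays off
high = steady_autoinducer(rho=5.0)  # above it: feedback floods the colony
print(f"steady autoinducer level: {low:.3f} (low density) vs {high:.2f} (high density)")
```

Sweeping the density shows a sudden jump at a critical value; raising the removal rate (stronger flow) pushes that critical density up, which is the flow-suppression effect described above.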

By applying the mathematical relationship that they derived, the duo were able to investigate several different features of bacterial signalling. For example, they explained recent experimental results that showed how QS is promoted in crevices or pores, where bacteria are sheltered from external fluid flow (Figure 3). Furthermore, they showed how a bacterial population can distinguish between changes in cell density and flow rate, by combining different autoinducer signals (Figure 4).

In environments with an oscillatory or noisy flow, such as the lungs or the gut, QS has the potential downside to the bacteria of triggering a premature commitment to multicellular behaviours with a high energy cost (for example, generating extracellular matrix). However, by undertaking a dynamic analysis of the mathematical system, Dalwadi and Pearce found that the properties of the bifurcation at the critical cell density imply that QS signalling carries an inherent robustness to noise (Figure 5) – the stronger the positive genetic feedback in the system, the longer it takes for bacteria to transition to their “activated” state, in which they exhibit multicellular behaviours. This means that, through the genetic properties of their QS signalling system, a bacterial population can sense the average strength of a dynamically oscillating or noisy fluid flow. Overall, by increasing our fundamental understanding of bacterial signalling, this study will help scientists to develop new treatments for bacterial infections that interfere with the formation of bacterial biofilms.

Figure 1: The LuxIR positive feedback system.

 

Figure 2: At a critical density, a small change in bacterial density can cause the concentration of autoinducers to suddenly increase. This is the onset of QS activation.

 

Figure 3: In complex geometries, positive feedback causes robust QS activation which is initiated in deeper crevices and downstream.

 

Figure 4: Bacteria can distinguish between changes in density and flow by measuring the activation of just two different autoinducers, splitting parameter space into four regions A, B, C, and D.

 

Figure 5: There is an inherent delay associated with dynamically crossing the bifurcation that marks the onset of QS activation. This filters out incursions into the activated region that are shorter than the inherent delay time.

 

Thursday, 11 March 2021

What's been going on at the Oxford Online Maths Club?

In the bleak, school-less midwinter, James Munro and his student crew have been keeping the maths going for high school students who want to step aside from the curriculum for an hour or so and peek round the corner at University Maths. Cue novels (yes, there is literature as well), dragons and your favourite graph.

The Oxford Online Maths Club is live and free for everyone, wherever you are, every Thursday at 16:30 UK time. There are maths problems, puzzles, mini-lectures, and Q&A via the chat. It’s interactive, casual, and relaxed, with an emphasis on solving problems, building fluency and enjoying mathematics.

Join the club

Monday, 8 March 2021

Georgia Brennan wins Silver Medal at STEM for Britain 2021

Oxford Mathematician Georgia Brennan has won a silver medal in the Mathematical Sciences category at STEM for Britain 2021 for her poster (extract in the image) on 'Mathematically Modelling Clearance in Alzheimer’s Disease: A Mathematical Drug Trial for the UK’s Protein Pandemic'.

STEM for Britain 2021 is a major scientific poster competition and exhibition which has been held in Parliament since 1997 (online this year), and is organised by the Parliamentary & Scientific Committee. Its aim is to give members of both Houses of Parliament an insight into the outstanding research work being undertaken in UK universities by early-career researchers.

Wednesday, 3 March 2021

UNIQ 2021 - A Digital Summer School for Maths

Since 2010 UNIQ has been providing in-person Summer Schools, and since 2018 digital ones, for State School students in the UK. As a free access programme we prioritise students with good grades from backgrounds that are under-represented at Oxford and other highly selective universities.

231 UNIQ 2020 students have now received offers from the University of Oxford and we look forward to welcoming them here as Oxford undergraduates in September 2021. Each year 1 in 3 UNIQ students who apply to Oxford get offered a place, as compared to 1 in 5 state school students.

This year we are merging UNIQ Digital with the online summer school to offer one UNIQ programme to 2,500 students. UNIQ 2021 takes into account the disrupted learning students have suffered over the past year: the programme starts in April and offers sustained support for students over several months. 

Oxford Mathematics together with Oxford Statistics will once again be a big part of UNIQ this year. Our main lectures are on Matrices & Markov Chains. So why not Enter the Matrix? (And if you don't know how to enter then you haven't been born, quite literally if you are in Year 12...).

Find out lots more and how to apply.

Monday, 1 March 2021

Machine learning with neural controlled differential equations

Oxford Mathematician Patrick Kidger writes about combining the mathematics of differential equations with the machine learning of neural networks to produce cutting-edge models for time series.

What is a neural differential equation?

Differential equations and neural networks are two dominant modelling paradigms, ubiquitous throughout science and technology respectively. Differential equations have been used for centuries to model widespread phenomena from the motion of a pendulum to the spread of a disease through a population. Meanwhile, over the past decade, neural networks have swept the globe as a means of tackling diverse tasks such as image recognition and natural language processing.

Interest has recently focused on combining these into a hybrid approach, dubbed neural differential equations. These embed a neural network as the vector field (the right hand side) of a differential equation - and then possibly embed that differential equation inside a larger neural network! For example, we may consider the initial value problem

$z(0) = z_0, \qquad \frac{\mathrm{d}z}{\mathrm{d}t}(t) = f_\theta(t, z(t))$

where $z_0$ is some input or observation, and $f_\theta$ is a neural network, and the output of the model may for example be taken to be $z(T)$ for some $T > 0$.

This is called a neural ordinary differential equation. At first glance one may be forgiven for believing this is an awkward hybridisation: a chimera of two very different approaches. It is not so!

Fitting parameterised differential equations to data has long been a cornerstone of mathematical modelling. The only difference now is that the parameterisation of the right hand side (the $f_\theta$) is a neural network learnt from data, rather than a theoretical one, itself derived from data (via the human designing it).
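A minimal numerical sketch of the initial value problem above, with a tiny fixed-random-weight network standing in for a learnt $f_\theta$, and simple Euler stepping standing in for the adaptive ODE solvers used in practice:

```python
import math, random

# A minimal sketch of a neural ODE: f_theta is a one-hidden-layer tanh
# network with *fixed random* weights (in practice the weights are learnt),
# and z(T) is computed by Euler stepping (in practice one uses adaptive
# ODE solvers).

random.seed(0)
DIM, HIDDEN = 2, 8
W1 = [[random.gauss(0.0, 0.5) for _ in range(DIM + 1)] for _ in range(HIDDEN)]  # input is (t, z)
W2 = [[random.gauss(0.0, 0.5) for _ in range(HIDDEN)] for _ in range(DIM)]

def f_theta(t, z):
    """The vector field: a neural network evaluated at (t, z(t))."""
    x = [t] + list(z)
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return [sum(w * hi for w, hi in zip(row, h)) for row in W2]

def neural_ode(z0, T=1.0, steps=100):
    """Euler integration of dz/dt = f_theta(t, z) with z(0) = z0."""
    z, dt = list(z0), T / steps
    for k in range(steps):
        dz = f_theta(k * dt, z)
        z = [zi + dt * dzi for zi, dzi in zip(z, dz)]
    return z

z_T = neural_ode([1.0, -1.0])
print("z(T) =", z_T)
```

Training would adjust the weights so that the map $z_0 \mapsto z(T)$ fits observed data; the Euler loop itself is exactly the "residual network" structure familiar from deep learning.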

Meanwhile, it turns out that many standard neural networks may actually be interpreted as approximations to neural differential equations: in fact it seems to be because of this that many neural networks work as well as they do. (Those doing traditional differential equation modelling are unsurprised. They've been using differential equations all this time precisely because they're such good models.)

Neural differential equations have applications to both deep learning and traditional mathematical modelling. They offer memory efficiency, the ability to handle irregular data, strong priors on model space, high capacity function approximation, and draw on a deep well of theory on both sides.

Neural controlled differential equations

Against this backdrop, we consider the specific problem of modelling functions of time series. For example, we might observe a sequence of observations representing the vital signs of a patient in a hospital (heart rate, laboratory measurements, and so on). Is the patient healthy or not? We would like to build a model that determines this automatically. (Perhaps to automatically and rapidly alert a doctor if something seems amiss.)

Of course, there are a few ways of accomplishing this. In line with the theme of this article, the one we're going to introduce is a model of the following form:

$\mathrm{d}z(t) = f_\theta(t, z(t)) \,\mathrm{d}X(t)$

This is a neural controlled differential equation. If we had a "$\mathrm{d}t$" on the right hand side, instead of the "$\mathrm{d}X(t)$", then this would just be the neural ordinary differential equation we saw above. Having a "$\mathrm{d}X(t)$" instead means that the differential equation can change in response to the input $X$, which is a continuous-time path representing how the observations (of heart rate etc.) change over time.

If you're not familiar with this notation, then just pretend you can "divide by $\mathrm{d}t$" (or $\mathrm{d}X(t)$) and you get the equation we had earlier. Check out the paper [1] for a less hand-wavy explanation of what's really going on here.

Changes in $X$ will create changes in $z$. By training this model (picking a good $f_\theta$), we can arrange it so that if something happens in $X$ - for example a patient's health starts deteriorating - then we can produce a desired change in $z$ - which can be used to call a doctor.

Neural controlled differential equations are actually the continuous-time limit of recurrent neural networks. (Which would often be the typical way to approach this problem.) By pushing to the continuous-time limit we can get improved memory efficiency, can more easily handle irregular data... and also produce something theoretically beautiful! In keeping with what we argued earlier, it seems that recurrent neural networks often work because they look like neural controlled differential equations. Indeed the two most popular types of recurrent neural networks - GRUs and LSTMs - are explicitly designed to have features that make them look like differential equations. Not a coincidence! Understanding these relations will help us build better and better models as we go into the future.
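A minimal sketch of that discretised update, $z_{n+1} = z_n + f_\theta(z_n)\,\Delta X_n$ (an RNN-style recurrence), with a fixed random matrix-valued map standing in for the learnt $f_\theta$:

```python
import math, random

# A sketch of the discretised neural CDE update z_{n+1} = z_n + f_theta(z_n) dX_n.
# Here f_theta(z) is matrix-valued; for brevity its entries are tanh's of
# fixed random linear forms in z, standing in for a learnt network.

random.seed(1)
DIM_Z, DIM_X = 3, 2
A = [[[random.gauss(0.0, 0.3) for _ in range(DIM_Z)]
      for _ in range(DIM_X)] for _ in range(DIM_Z)]

def f_theta(z):
    """Return the DIM_Z x DIM_X matrix f_theta(z)."""
    return [[math.tanh(sum(a * zi for a, zi in zip(row, z))) for row in A_i]
            for A_i in A]

def neural_cde(z0, xs):
    """Drive z along the observed path xs: z += f_theta(z) @ (X_{n+1} - X_n)."""
    z = list(z0)
    for x_prev, x_next in zip(xs, xs[1:]):
        dX = [b - a for a, b in zip(x_prev, x_next)]
        F = f_theta(z)
        z = [zi + sum(Fij * dXj for Fij, dXj in zip(Fi, dX))
             for zi, Fi in zip(z, F)]
    return z

# Irregularly spaced observations (say, two vital signs): only the increments
# of X matter, so uneven sampling is handled naturally.
path = [(0.0, 1.0), (0.1, 1.05), (0.4, 0.8), (0.45, 0.2)]
z_T = neural_cde([1.0, 0.0, -1.0], path)
print("z(T) =", z_T)
```

Note that if the observed path is constant the increments vanish and the hidden state does not move at all, which is precisely the "driven by $\mathrm{d}X(t)$" behaviour that distinguishes a CDE from an ODE in $t$.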

Further reading:

This has been a short introduction to the nascent, fascinating field of neural differential equations. If you'd like to find out more about neural controlled differential equations, then check out [1]. For an idea of some of the things you can do with neural differential equations (like generating pictures, or modelling physics), then [2] has some nice examples. And for the paper that kickstarted the field in its modern form, check out [3].

 

[1] Kidger, Morrill, Foster, Lyons, Neural Controlled Differential Equations for Irregular Time Series, Neural Information Processing Systems 2020

[2] Kidger, Chen, Lyons, "Hey, that's not an ODE": Faster ODE Adjoints with 12 Lines of Code, arXiv 2021

[3] Chen, Rubanova, Bettencourt, Duvenaud, Neural Ordinary Differential Equations, Neural Information Processing Systems 2018
