News

Friday, 18 May 2018

Using mathematical modelling to identify future diagnoses of Alzheimer's disease

Oxford Mathematician Paul Moore talks about his application of mathematical tools to identify who will be affected by Alzheimer's.

"Alzheimer's disease is a brain disorder which progressively affects cognition and results in an impairment in the ability to perform daily activities.  It is the most common form of dementia in older people affecting about 6% of the population aged over 65 and it increases in incidence with age. The initial stage of Alzheimer's disease is characterised by memory loss, and this is the usual presenting symptom. 

Psychiatrists would like to predict which individuals will develop the condition, both to select participants for clinical trials and to discover which variables are predictive, since the latter gives insights into the disease process. These variables might be individual characteristics such as age and genetic status, or the results of brain scans and cognitive tests. The graph shows some time plots of scaled brain volumes from successive MRI scans of an individual who has Alzheimer's disease. The whole brain volume is shown as the blue markers, the hippocampus is marked in red and the entorhinal cortex in yellow. The diamonds at the foot of the graph represent diagnosis points, where the red diamonds are a diagnosis of Alzheimer's disease. The trend in time seems to be downwards, but this feature might also be found in many healthy people as they age. So our research question is: can we distinguish the changes in relative brain volume in people who are healthy from those in people who will subsequently be diagnosed with Alzheimer's disease?

One possibility is to put the data points directly into a deep learning method like a neural network. This approach might give accurate predictions, but it would not be easy to see which variables are important and how they change with respect to each other. The method we use is to think of the way the variables change against each other over time as a path in Euclidean space and to characterise that path by a vector which uniquely identifies it. This path signature was introduced by K. T. Chen in 1958 and has recently proved highly successful in machine learning applications. It generates interpretable features and it can distinguish the time ordering of events: whether variable a or variable b changes value first. Our results show that the hippocampus is shrinking abnormally fast in people who are subsequently diagnosed with Alzheimer’s disease - a finding that is already known from clinical research. We are now expanding the number of brain regions that we investigate to improve the accuracy of our models and to learn more about the underlying process of this deadly disease."
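
To illustrate the idea, here is a minimal sketch (not the study's code) of how the first two levels of a path signature can be computed for a piecewise-linear path using Chen's identity; the `brain_path` values below are invented purely for illustration.

```python
import numpy as np

def signature_levels_1_2(path):
    """Levels 1 and 2 of the path signature of a piecewise-linear path
    given as an (n_points, d) array, built up with Chen's identity.

    Level 1 holds the total increments; level 2 holds the iterated
    integrals, whose antisymmetric part (the Levy area) records which
    coordinate tends to change first."""
    path = np.asarray(path, dtype=float)
    d = path.shape[1]
    S1 = np.zeros(d)         # level-1 terms
    S2 = np.zeros((d, d))    # level-2 terms
    for k in range(1, len(path)):
        delta = path[k] - path[k - 1]
        # Chen's identity for appending one linear segment:
        # S2_new = S2 + S1 (tensor) delta + 0.5 * delta (tensor) delta
        S2 += np.outer(S1, delta) + 0.5 * np.outer(delta, delta)
        S1 += delta
    return S1, S2

# Toy path of (whole brain, hippocampus) scaled volumes at successive scans.
brain_path = [(1.00, 1.00), (0.99, 0.95), (0.98, 0.90), (0.97, 0.84)]
S1, S2 = signature_levels_1_2(brain_path)
levy_area = 0.5 * (S2[0, 1] - S2[1, 0])   # signed area: ordering of changes
print("level 1:", S1, "Levy area:", levy_area)
```

In practice the signature is computed to higher levels with a dedicated library, and the resulting features are fed into a standard classifier.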

Tuesday, 15 May 2018

What do mathematicians do on Saturday nights?

Doing anything Saturday night? Well, if you are an Oxford Mathematician you might just be rushing around London learning to ballroom dance or trying to get your head around the sound wave patterns of a theremin or perhaps cracking a safe or two.

Why? The answer is Midnight Madness, a series of challenges which lead participants on an intellectual treasure hunt around London. Starting at 8pm on Saturday (May 19th), the madness lasts until high noon on Sunday. The Oxford team, together with colleagues from University College London, will be competing against the brightest minds in the City. Will the academics prove superior or will years of mathematics have left them soft and contemplative against the sharp intellectual elbows of their opponents? The Oxford and UCL team has been selected via rigorous mathematical assessment (sort of) and features ageing professors and puzzle gurus, as well as nimble (we hope) graduate students and our brilliant Head of IT. Hours have been spent coming up with a team name and logo (see image). But will it be enough?

Midnight Madness is in aid of Raise Your Hands, which supports small, effective charities that improve the lives of children across the UK.

Monday, 14 May 2018

Following up Turing - how reaction-diffusion models generate complex patterns

In a seminal 1952 paper, Alan Turing mathematically demonstrated that two reacting chemicals in a spatially uniform mixture could give rise to patterns due to molecular movement, or diffusion. This is a particularly striking result, as diffusion is considered to be a stabilizing mechanism, driving systems towards uniformity (think of a drop of dye spreading in water). Turing's idea was that this mechanism may underlie how patterning occurs in organisms, as it provides a way for the spontaneous formation of spatial patterns even in systems without any heterogeneity at all. This is reminiscent of Darwin's closing remarks in On the Origin of Species: "[F]rom so simple a beginning endless forms most beautiful and most wonderful [...]"

Since this pioneering work on morphogenesis (how organisms develop), a considerable amount of research has explored the tremendous power of reaction-diffusion models to generate patterns. Despite the theory's success in capturing the attention of mathematicians, biologists, physicists, and chemists for many decades, there are still gaps in our understanding that must be filled before these models can be applied to processes in developmental biology. An important example of this, presaged by Turing himself, is that most patterning processes of interest do not emerge from spatial homogeneity, but instead evolve in complex environments, and especially from previous patterns. To quote Turing, "Most of an organism, most of the time is developing from one pattern into another, rather than from homogeneity into a pattern."

Recently, researchers Andrew Krause and Eamonn Gaffney from Oxford Mathematics, together with colleagues from Cardiff University and the Czech Technical University in Prague, have been using Turing's theory to try to explain certain aspects of the patterning of whiskers on mice. They ran simulations of a reaction-diffusion system known to generate spot patterns, but varied some of the model parameters in space in order to capture some of the observed variation in mouse whiskers (in terms of their size and spacing). The figure above shows an example of these spots with variation in space from this model, as well as an example of the arrangement of whiskers in a typical mouse. This work is still in progress, but it has already led to several fundamental insights into these kinds of systems.
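
As a flavour of how such simulations work, the sketch below (not the authors' model; the kinetics and parameter values are illustrative assumptions) solves the Schnakenberg reaction-diffusion system in one spatial dimension with a production parameter b(x) that varies across the domain, so the resulting spot-like peaks change character from one end of the domain to the other.

```python
import numpy as np

# Schnakenberg kinetics with a spatially varying parameter b(x):
#   u_t = Du u_xx + a - u + u^2 v
#   v_t = Dv v_xx + b(x) - u^2 v
# Explicit finite differences, periodic boundaries for simplicity.
L, N = 50.0, 100
dx = L / N
x = np.arange(N) * dx
Du, Dv, a = 1.0, 40.0, 0.1
b = 0.9 + 0.4 * x / L                    # heterogeneous environment
dt, T = 0.002, 200.0                     # dt small enough for stability

u = (a + b) + 0.01 * np.random.rand(N)           # perturb the (local)
v = b / (a + b) ** 2 + 0.01 * np.random.rand(N)  # homogeneous steady state

def laplacian(f):
    """Second difference with periodic boundaries."""
    return (np.roll(f, 1) + np.roll(f, -1) - 2 * f) / dx**2

for _ in range(int(T / dt)):
    react = u * u * v
    u = u + dt * (Du * laplacian(u) + a - u + react)
    v = v + dt * (Dv * laplacian(v) + b - react)

print("u range:", u.min(), u.max())   # peaks in u mark the 'spots'
```

The simulations in the paper are two-dimensional and use kinetics and heterogeneities chosen for the whisker problem, but the basic ingredients are the same.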

In particular, the researchers found novel patterns in space and time that can be attributed purely to the spatial heterogeneity, rather than to the mechanisms mathematicians are more familiar with that lead to oscillations. Moreover, these oscillating patterns appear robustly in a wide range of different kinds of chemical systems, leading the researchers to think they might be ubiquitous in reaction-diffusion systems. An example of these spatiotemporal oscillations is shown in the other figure above, where spikes in one spatial dimension are created, move across the spatial domain, and are destroyed, with this pattern repeating periodically in time. The way in which the oscillation period changes as parameters are varied depends crucially on the entire complicated state of the system, contrary to many important models in thermodynamics which display 'universality'.

The researchers hypothesized that it is the interaction of nonlinear reaction and diffusion in a spatially varying medium with an 'open' system that allows for this behaviour. This also calls into question much of the work that has been done using homogeneous models - as Turing himself noted, we should expect real chemical systems to sit in complicated spatial environments. Importantly, very small gradients can lead to moving patterns. Does this occur biologically? Well, there are animals that change their patterns over time, e.g. the tapir (slowly) and the flamboyant cuttlefish (rapidly), but most organisms with spots or stripes, such as zebras or tigers, do not show any kind of oscillation in their coats after birth. The recent work asks many more questions than it answers, both biologically and mathematically. While Turing's original ideas are over 60 years old, he knew that many of them would keep scientists and mathematicians busy for a long time to come, saying "We can only see a short distance ahead, but we can see plenty there that needs to be done." The findings appear in the journal Physical Review E.

Figures in rotation:

Reaction-diffusion model in two spatial dimensions which predicts varying sizes of, and wavelengths between, spots.

Typical mouse whisker arrangement (Source: Arrangement of whiskers on the rat's face. Credit: Yan S. W. Yu, Matthew M. Graff, Chris S. Bresee, Yan B. Man, Mitra J. Z. Hartmann (2016) Whiskers aid anemotaxis in rats, Science Advances.)

The chemical concentration over time and space.

Monday, 14 May 2018

Flagging corruption in Government contracting in Africa

Public procurement – or government contracting – is critical to development, accounting for as much as 50% of government spending in developing countries. The procurement process is known to be highly prone to corruption; however, corruption is difficult to detect or measure. A recent project led by the University of Oxford in collaboration with Sussex University and the Government Transparency Institute has been developing and implementing new methodologies for analysing large open public procurement datasets to detect ‘red flags’ that could indicate risks of corruption. Now, researchers from Oxford Mathematics are supporting the delivery of workshops in Africa to share these new methodologies and software tools with anti-corruption groups and researchers to enable them to analyse corruption risks in public procurement data.

Danny Parsons, of the African Maths Initiative and a Postdoctoral Research Assistant working with Prof Balázs Szendrői at the Mathematical Institute in Oxford, and Dr Elizabeth David-Barrett (Sussex University) delivered a two-day workshop at the African Institute of Mathematical Sciences (AIMS), Ghana on Analysing Public Procurement Data for Corruption Risks. This workshop came out of an earlier collaboration between Dr David-Barrett, Dr Mihály Fazekas (Government Transparency Institute), Prof Szendrői and Danny Parsons on data-driven approaches to measuring corruption risks in government contracting. During that project Danny Parsons worked on implementing new methodologies for detecting corruption risks into an open-source front end to the R statistics language, to make it easier for researchers in political science, civil society organisations and anti-corruption agencies to detect patterns of corruption risk in public procurement data.

In this latest workshop in Ghana, which brought together students and researchers in the mathematical sciences and political science as well as civil society groups, Danny showed participants how they could use these recently developed software tools to investigate "red flag" indicators of corruption risk in large open public procurement data. The event highlighted the potential impact this could have on the fight against corruption in Africa - freely available software tools tailored to public procurement data and a growing movement towards governments opening up their data. Interestingly, the workshop was picked up by local media (the Ghana News Agency and the Ghana Times), which stressed its relevance to ongoing discussions in Ghana around open government data and in particular the Right to Information Bill.
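
To give a sense of what a "red flag" indicator looks like in practice, here is a minimal sketch in Python (the workshop tools are built on R; the column names and figures below are invented for illustration). It computes one widely used indicator: the share of a buyer's contracts awarded with only a single bidder.

```python
import pandas as pd

# Hypothetical tender-level records; fields and values are made up.
tenders = pd.DataFrame({
    "buyer":       ["A", "A", "A", "B", "B", "C", "C", "C", "C"],
    "num_bidders": [1,   1,   2,   3,   4,   1,   5,   4,   3],
    "value_usd":   [1e5, 2e5, 5e4, 3e5, 1e5, 9e5, 2e5, 1e5, 4e5],
})

# Red flag: a high share of single-bidder awards for a given buyer.
flags = (tenders
         .assign(single_bid=tenders["num_bidders"] == 1)
         .groupby("buyer")
         .agg(contracts=("single_bid", "size"),
              single_bid_rate=("single_bid", "mean"),
              total_value=("value_usd", "sum")))
print(flags.sort_values("single_bid_rate", ascending=False))
```

Analyses of this kind typically combine many such indicators, and account for sector, contract size and procedure type, before treating a buyer as high risk.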

Wednesday, 9 May 2018

Andreas Sojmark awarded the Bar-Ilan Young Researcher Prize in Financial Mathematics

Oxford Mathematician Andreas Sojmark, a DPhil student in the EPSRC Centre for Doctoral Training in Partial Differential Equations, has been awarded the Bar-Ilan Young Researcher Prize in Financial Mathematics. The prize is awarded to a PhD student or early-career postdoctoral researcher for an outstanding paper in financial mathematics submitted for the Third Bar-Ilan Conference in Financial Mathematics.

Andreas' paper `An SPDE model for systemic risk with endogenous contagion' will be presented at the conference at the end of May.

Tuesday, 8 May 2018

The twists and turns of curved objects - Oxford Mathematics research investigates the stability and robustness of everted spherical caps

Everyday life tells us that curved objects may have two stable states: a contact lens (or the spherical cap obtained by cutting a tennis ball, see picture) can be turned ‘inside out’. Heuristically, this is because the act of turning the object inside out keeps the central line of the object the same length (the centreline does not stretch significantly). Such deformations are called ‘isometries’ and the ‘turning inside out’ (or everted) isometry of a thin shell is often referred to as mirror buckling.

However, mirror buckling is only strictly an isometry for objects with a vanishing thickness: an object with small, but finite thickness bends and stretches slightly at its outer edge (see second figure). Depending on its size, this bent region can even prevent the object from having two stable states – if the shell is too ‘shallow’, it will not stay in the everted shape but will ‘snap’ back to the natural state.

The rapid snapping between these two states is used to create striking children’s toys, while the Venus flytrap plant uses an analogous mechanism to catch flies unawares. Surprisingly, however, the conditions under which two stable states exist have not been characterized, even for a spherical shell. In a recent study, Oxford Mathematicians Matteo Taffetani and Dominic Vella with colleagues from Boston University investigated when a spherical shell may exist in this everted state, together with the robustness of the everted state to poking. One surprising result of their analysis is that, though bistability is possible only for shells that are ‘deep enough’, the transition can be understood quantitatively using a mathematical model that exploits the shallowness of a shell.

The study of when the everted state exists provides one perspective on mirror buckling. However, it is also known that very thin shells (which are expected to remain close to isometry) can form polygonal buckles on being poked (think of a ‘broken’ ping pong ball). To gain new understanding of this instability, and how it interacts with snap-through, the authors then studied how robust the everted state is to poking: will it buckle or snap through first? They found that even once the shell has buckled polygonally, the purely axisymmetric theory gives a good account of when snap-through occurs, suggesting that the underlying mirror-buckled solution, while not ultimately attained in this limit, heavily influences the stability of the whole shell structure.

 

Tuesday, 8 May 2018

Tricks of the Tour - optimizing the breakaway position in cycle races using mathematical modelling

Cycling science is a lucrative and competitive industry in which small advantages are often the difference between winning and losing. For example, the 2017 Tour de France was won by a margin of less than one minute for a total race time of more than 86 hours. Such incremental improvements in performance come from a wide range of specialists, including sports scientists, engineers, and dieticians. How can mathematics assist us?

Long-distance cycle races, such as a Tour de France stage, typically follow a prescribed pattern: riders cycle together as a main group, or peloton, for the majority of the race before a solo rider, or small group of riders, makes a break from the peloton, usually relatively close to the finish line. The main reason for this behaviour is that cycling in a group reduces the air resistance that is experienced by a cyclist. With energy savings of up to around a third when cycling in the peloton compared with riding solo, it is energetically favourable to stay with the main field for the majority of the race. However, if a cyclist wishes to win a race or a Tour stage then they must decide when to make a break. In doing so, the rider must provide an additional pedal force to offset the effects of air resistance that would otherwise be mitigated by riding in the peloton. However, the cyclist will not be able to sustain this extra force indefinitely, with fatigue effects coming into play. As a result, a conflict emerges: if the cyclist breaks away too soon then they risk fatigue effects kicking in before the finish line and being caught by the peloton. On the other hand, if the cyclist breaks away too late then they reduce their chance of a large winning margin.

So Oxford Mathematicians Ian Griffiths and Lewis Gaul, together with Stuart Thomson from MIT, asked the question: ‘for a given course profile and rider statistics, what is the optimum time to make a breakaway that maximizes the finish time ahead of the peloton?’

To answer the question, a mathematical model is derived for the cycling dynamics, appealing to Newton’s Second Law, which captures the advantage of riding in the peloton to reduce aerodynamic drag and the physical limitations (due to fatigue) on the force that can be provided by the leg muscles. The concentration of potassium ions in the muscle cells is also a strong factor in muscle fatigue: this is responsible for the pain you experience in your legs after a period of exertion, and is what sets a rider’s baseline level of exertion. The model derived captures the evolution of force output over time due to all of these effects and is applied to a breakaway situation to understand how the muscles respond after a rider exerts a force above their sustainable level.
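
The flavour of the trade-off can be captured in a toy calculation (this is not the authors' model: the parameter values, the simple drafting factor and the crude 'anaerobic reserve' fatigue rule below are all assumptions made for illustration). The rider obeys Newton's second law with quadratic aerodynamic drag, feels only a fraction of that drag while in the peloton, and can hold a power above their sustainable level only until a finite energy reserve is exhausted; sweeping over breakaway distances then reveals an optimum.

```python
import numpy as np

m, g = 75.0, 9.81              # rider + bike mass (kg), gravity (m/s^2)
rho, CdA, Crr = 1.2, 0.32, 0.004
drafting = 0.7                 # fraction of drag felt inside the peloton
P_sus, P_break = 300.0, 450.0  # sustainable and attacking power (W)
W_reserve = 20e3               # anaerobic work capacity (J)
course = 40e3                  # flat course length (m)
dt = 1.0

def finish_time(break_dist):
    """Time for the rider to finish if they attack break_dist metres out."""
    x, v, t, W = 0.0, 10.0, 0.0, W_reserve
    while x < course:
        solo = x > course - break_dist
        P = P_break if (solo and W > 0) else P_sus
        if solo:
            W -= (P - P_sus) * dt                  # burn the reserve
        drag = 0.5 * rho * CdA * (1.0 if solo else drafting) * v**2
        F = P / max(v, 1.0) - drag - Crr * m * g   # propulsion - resistance
        v = max(v + dt * F / m, 0.1)
        x += v * dt
        t += dt
    return t

dists = np.arange(1e3, 20e3, 1e3)
times = [finish_time(d) for d in dists]
best = dists[int(np.argmin(times))]
print("best breakaway distance (toy model): %.0f km" % (best / 1e3))
```

This sketch only minimises the attacker's own finish time on a flat course; the published model also tracks the peloton, the course gradient and a physiologically based fatigue law, and maximises the gap at the line.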

Asymptotic techniques are used that exploit the fact that the course may be divided into sections within which variations from a mean course gradient are typically small. This leads to analytical solutions that bypass the need for performing complex numerical parameter sweeps. The asymptotic solutions provide a method to draw direct relationships between the values of physical parameters and the time taken to cover a set distance.

The model serves to frame intuitive results in a quantitative way. For instance, it is expected that a breakaway is more likely to succeed on a climb stage, as speeds are lower and so the energy penalty from wind resistance when cycling alone is reduced. The theory confirms this observation while also providing a measure of precisely how much more advantageous a breakaway on a hill climb would be. For multiple stage races the theory can even identify which stages are best to make a breakaway and when it is better to stay in the peloton for the entire stage to conserve energy. The resulting theory could allow a cycle team to identify the strategy and exact breakaway position during each stage in advance of a major race, with very little effort. Such prior information could provide the necessary edge required to secure the marginal gains required to win a race.

While it is clear that winning a Tour de France stage involves a great deal of preparation, physical fitness and, ultimately, luck on the day, mathematics can provide a fundamental underpinning for the race dynamics that can guide strategies to increase the chance of such wins.

Monday, 7 May 2018

Do stochastic systems converge to a well-defined limit? Oxford Mathematics Research investigates

Oxford Mathematician Ilya Chevyrev talks about his research into using stochastic analysis to understand complex systems.

"Stochastic analysis sits on the boundary between probability theory and analysis. It is often a useful tool in studying complex systems subject to noise. Such systems appear frequently in the financial markets, statistical physics, mathematical biological, etc., and it becomes extremely important to determine their statistical properties: can the noise cause the system to blow-up or collapse? How quickly do small perturbations propagate? Is there an equilibrium state? Is it always reached? Due to the fundamental importance of such questions, mathematicians from many fields have devised methods to address them, ranging from the analysis of partial differential equations (PDEs) to game theory.

A question one often encounters is whether a family of stochastic systems converges to a well-defined limit. For example, consider the Glauber dynamics of the Ising-Kac model: we are given a two-dimensional lattice $\epsilon \mathbb{Z}^2$ with spacing $\epsilon > 0$. At each site we place a ferromagnet carrying an up or down spin (i.e. we consider a function $\sigma : \epsilon \mathbb{Z}^2 \to \{-1,1\}$). As time evolves, the ferromagnets interact according to some prescribed dynamics. Sending the lattice spacing $\epsilon \to 0$ and rescaling the dynamics appropriately, one is interested in whether the process converges to a non-trivial limit (for this exact example, see this article of Mourrat-Weber).
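
As a concrete (and deliberately simplified) illustration, the sketch below simulates Glauber heat-bath dynamics for the standard nearest-neighbour Ising model on a small periodic lattice; the Ising-Kac model above uses long-range Kac interactions and a particular space-time rescaling, which this sketch does not attempt.

```python
import numpy as np

def glauber_step(sigma, beta, rng):
    """One heat-bath update: pick a site uniformly at random and resample
    its spin from its conditional distribution given the four nearest
    neighbours (periodic boundary conditions)."""
    n = sigma.shape[0]
    i, j = rng.integers(n, size=2)
    h = (sigma[(i + 1) % n, j] + sigma[(i - 1) % n, j]
         + sigma[i, (j + 1) % n] + sigma[i, (j - 1) % n])
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))   # P(spin = +1 | neighbours)
    sigma[i, j] = 1 if rng.random() < p_up else -1

rng = np.random.default_rng(0)
n, beta = 32, 0.5                       # beta above the critical value ~0.44
sigma = rng.choice([-1, 1], size=(n, n))
for _ in range(200 * n * n):            # roughly 200 sweeps of the lattice
    glauber_step(sigma, beta, rng)
print("magnetisation per site:", sigma.mean())
```

In the Ising-Kac setting one studies how such dynamics, suitably rescaled, behave as the lattice spacing goes to zero.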

It turns out that for a wide class of models, one can describe (or at least expect to describe) the limit as a stochastic PDE of the form \[ \partial_t u = \mathcal{L} u + F(u,\nabla u, \xi) \] where $\mathcal{L}$ is an elliptic operator, $F$ is a non-linearity which can depend on the solution $u$ and its derivatives $\nabla u$, and $\xi$ is the noise term. A difficulty one often encounters in studying such equations is that they are classically ill-posed. This means that, given a typical realisation of $\xi$, there exist no function spaces in which we can solve for $u$ using e.g. a fixed point argument. Whenever this occurs, we call the equation singular. The fundamental obstacle, which is also typically encountered in quantum field theory (QFT), is that there is no canonical way to define products of distributions.

An example of a singular SPDE with motivations from QFT is the dynamical $\Phi^4_3$ model \[ \partial_t u = \Delta u - u^3 + \xi \] posed in $(1+3)$ dimensions, $u : [0,T]\times \mathbb{R}^3 \to \mathbb{R}$. Here $\xi$ is a space-time white noise on $\mathbb{R}^4$ (a random distribution). The noise is sufficiently irregular that we expect the solution $u$ to belong to a space of distributions (not functions!), rendering the cubic term $u^3$ ill-posed. This has further ramifications if one takes approximations of the equation: substituting $\xi$ by a smoothed out version $\xi_\epsilon$ so that $\xi_\epsilon \to \xi$ as $\epsilon \to 0$, the corresponding classical smooth solutions $u_\epsilon$ do not converge to a non-trivial limit as $\epsilon \to 0$.

Starting with the famous KPZ equation, the last five years have seen much progress in providing a solution theory to singular SPDEs. The theories of regularity structures and paracontrolled distributions have been particularly successful at this task. An important feature of any such solution theory is the need for renormalization: smooth/lattice approximations of the equation converge only after appropriate counterterms are added to the equation. In the $\Phi^4_3$ example, this means that there exists a diverging family of constants $(C_\epsilon)_{\epsilon > 0}$ such that solutions to the renormalised PDEs \[ \partial_t u_\epsilon = \Delta u_\epsilon - u_\epsilon^3 + \xi_\epsilon + C_\epsilon u_\epsilon \] converge to a non-trivial limit. It is this limit which one calls the solution of the original $\Phi^4_3$ SPDE.

In a recent paper with Bruned, Chandra and Hairer (Imperial College London) we developed a systematic method to determine the counterterms needed to solve a very general class of SPDEs. Combined with other recent results in regularity structures, particularly with a version of the BPHZ renormalization scheme from perturbative QFT, this essentially provides a robust method to solve general systems of semi-linear SPDEs which are subcritical (this last constraint is known in QFT as super-renormalizability). The fundamental technique behind our approach is algebraic, motivated in particular by pre-Lie algebras."

Wednesday, 2 May 2018

The ‘shear’ brilliance of low head hydropower

The generation of electricity from elevated water sources has been the subject of much scientific research over the last century. Typically, in order to produce cost-effective energy, hydropower stations require large flow rates of water across large pressure drops. Although there are many low head sites around the UK, including numerous river weirs and potential tidal sites, the pursuit of low head hydropower is often avoided because it is uneconomic. Thus the UK and other relatively flat countries miss out on hydropower due to the lack of sufficiently elevated water sources.

In his DPhil project, Oxford Mathematician Graham Benham has been studying a novel type of low head hydropower generation which uses the Venturi effect to amplify the pressure drop across a turbine. The Venturi effect is similar to a mechanical gearing system: instead of a turbine dealing with the full flow and a low head, it deals with a reduced flow and an amplified head, thereby allowing for much cheaper electricity. However, the hydropower efficiency depends on how the turbine wake mixes with the main pipe flow – so understanding that mixing process is the key.
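
As a textbook illustration of the amplification (standard Bernoulli reasoning, not the project's optimisation model): for steady, incompressible, inviscid flow through a contraction from cross-sectional area $A_1$ to $A_2$, conservation of mass and Bernoulli's equation give \[ A_1 v_1 = A_2 v_2, \qquad p_1 + \tfrac{1}{2}\rho v_1^2 = p_2 + \tfrac{1}{2}\rho v_2^2, \] so that \[ p_1 - p_2 = \tfrac{1}{2}\rho v_1^2 \left[ \left(\frac{A_1}{A_2}\right)^2 - 1 \right]. \] A modest available head at the inlet can therefore drive a much larger pressure drop across a turbine placed in the constriction, while only part of the flow passes through the turbine itself; the real design problem is then how efficiently the turbine wake and the bypass flow recombine downstream.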

Mixing occurs in a thin turbulent region of fluid called a shear layer, or a mixing layer. In their recently published research, Oxford Mathematicians Graham Benham, Ian Hewitt and Colin Please, as well as Oxford physicist Alfonso Castrejon-Pita, present a simple mathematical model for the development of such shear layers inside a pipe. The model is based on the assumption that the flow can be divided into a number of thin regions, and it agrees well with both laboratory experiments and computational turbulence modelling. Specifically, the model is used to solve a shape optimisation problem, which enables the Venturi to be designed to produce the maximum amount of electricity from low head hydropower.

The image above shows the assembly of VerdErg's Venturi-Enhanced Turbine Technology (VETT). VerdErg is a British renewable energy company that has patented VETT. The image was taken from Innovate UK.

Tuesday, 1 May 2018

Inaugural András Gács Award given to Oxford Mathematician Gergely Röst

A new mathematical award has been established in Hungary to honour the memory of the talented Hungarian mathematician András Gács (1969-2009), a man famed for his popularity among students and his capacity to inspire the young. The committee of the András Gács Award aimed to reward young mathematicians (under the age of 46) who not only excelled in research, but also motivated students to pursue mathematics. Oxford Mathematician Gergely Röst, a Research Fellow of the Wolfson Centre for Mathematical Biology, was one of the first two awardees. For nearly a decade Gergely has prepared the students of the University of Szeged for various international mathematics competitions. One of these is the National Scientific Students' Associations Conference, a biennial national contest of student research projects with more than 5000 participants. Gergely supervised a prize-winning project in applied mathematics at four consecutive conferences (2011, 2013, 2015, 2017).

The award ceremony took place in Budapest, in the Ceremonial Hall of the Eötvös Loránd University (ELTE), during the traditional yearly Mathematician’s Concert. 
