News

Monday, 7 May 2018

Do stochastic systems converge to a well-defined limit? Oxford Mathematics Research investigates

Oxford Mathematician Ilya Chevyrev talks about his research into using stochastic analysis to understand complex systems.

"Stochastic analysis sits on the boundary between probability theory and analysis. It is often a useful tool in studying complex systems subject to noise. Such systems appear frequently in the financial markets, statistical physics, mathematical biological, etc., and it becomes extremely important to determine their statistical properties: can the noise cause the system to blow-up or collapse? How quickly do small perturbations propagate? Is there an equilibrium state? Is it always reached? Due to the fundamental importance of such questions, mathematicians from many fields have devised methods to address them, ranging from the analysis of partial differential equations (PDEs) to game theory.

A question one often encounters is whether a family of stochastic systems converges to a well-defined limit. For example, consider the Glauber dynamics of the Ising-Kac model: we are given a two-dimensional lattice $\epsilon \mathbb{Z}^2$ with spacing $\epsilon > 0$. At each site we place a ferromagnet carrying an up or down spin (i.e. we consider a function $\sigma : \epsilon \mathbb{Z}^2 \to \{-1,1\}$). As time evolves, the ferromagnets interact according to some prescribed dynamics. Sending the lattice spacing $\epsilon \to 0$ and rescaling the dynamics appropriately, one is interested in whether the process converges to a non-trivial limit (for this exact example, see this article of Mourrat-Weber).
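
As a simple illustration of this kind of dynamics, here is a minimal sketch of Glauber (heat-bath) updates for the standard nearest-neighbour Ising model on a periodic lattice; the Ising-Kac model replaces the nearest-neighbour coupling with a long-range Kac kernel, and the lattice size and inverse temperature below are arbitrary illustrative choices rather than values taken from the article.

    import numpy as np

    def glauber_step(sigma, beta, rng):
        """One heat-bath (Glauber) update of a single randomly chosen spin."""
        N = sigma.shape[0]
        i, j = rng.integers(0, N, size=2)                 # pick a site uniformly at random
        local_field = (sigma[(i + 1) % N, j] + sigma[(i - 1) % N, j]
                       + sigma[i, (j + 1) % N] + sigma[i, (j - 1) % N])
        # Probability of setting the spin to +1 given its neighbours
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * local_field))
        sigma[i, j] = 1 if rng.random() < p_up else -1

    rng = np.random.default_rng(0)
    sigma = rng.choice([-1, 1], size=(64, 64))            # random initial spin configuration
    for _ in range(200_000):
        glauber_step(sigma, beta=0.4, rng=rng)
    print("mean magnetisation:", sigma.mean())

Rescaling space and time while sending the lattice spacing to zero is then the step that produces (or fails to produce) a non-trivial continuum limit.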

It turns out that for a wide class of models, one can describe (or at least expect to describe) the limit as a stochastic PDE of the form \[ \partial_t u = \mathcal{L} u + F(u,\nabla u, \xi) \] where $\mathcal{L}$ is an elliptic operator, $F$ is a non-linearity which can depend on the solution $u$ and its derivatives $\nabla u$, and $\xi$ is the noise term. A difficulty one often encounters in studying such equations is that they are classically ill-posed. This means that, given a typical realisation of $\xi$, there exist no function spaces in which we can solve for $u$ using e.g. a fixed point argument. Whenever this occurs, we call the equation singular. The fundamental obstacle, which is also typically encountered in quantum field theory (QFT), is that there is no canonical way to define products of distributions.

An example of a singular SPDE with motivations from QFT is the dynamical $\Phi^4_3$ model \[ \partial_t u = \Delta u - u^3 + \xi \] posed in $(1+3)$ dimensions, $u : [0,T]\times \mathbb{R}^3 \to \mathbb{R}$. Here $\xi$ is a space-time white noise on $ \mathbb{R}^4$ (a random distribution). The noise is sufficiently irregular that we expect the solution $u$ to belong to a space of distributions (not functions!), rendering the cubic term $u^3$ ill-posed. This has further ramifications if one takes approximations of the equation: substituting $\xi$ by a smoothed out version $\xi_\epsilon$ so that $\xi_\epsilon \to \xi$ as $\epsilon \to 0$, the corresponding classical smooth solutions $u_\epsilon$ do not converge to a non-trivial limit as $\epsilon \to 0$.
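
A standard power-counting heuristic makes the difficulty concrete: with respect to the parabolic scaling, space-time white noise on $\mathbb{R}\times\mathbb{R}^3$ has Hölder regularity $-\frac{5}{2}-\kappa$ for every $\kappa > 0$, and the heat operator improves regularity by $2$, so the solution of the linear equation $\partial_t u = \Delta u + \xi$ has regularity $-\frac{1}{2}-\kappa$. Since a product of two distributions of regularities $\alpha$ and $\beta$ is canonically defined only when $\alpha + \beta > 0$, already the square $u^2$, let alone the cube $u^3$, has no classical meaning.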

Starting with the famous KPZ equation, much progress has been made over the last five years in providing a solution theory for singular SPDEs. The theories of regularity structures and paracontrolled distributions have been particularly successful at this task. An important feature of any such solution theory is the need for renormalization: smooth/lattice approximations of the equation converge only after appropriate counterterms are added to the equation. In the $\Phi^4_3$ example, this means that there exists a diverging family of constants $(C_\epsilon)_{\epsilon > 0}$ such that solutions to the renormalised PDEs \[ \partial_t u_\epsilon = \Delta u_\epsilon - u_\epsilon^3 + \xi_\epsilon + C_\epsilon u_\epsilon \] converge to a non-trivial limit. It is this limit which one calls the solution of the original $\Phi^4_3$ SPDE.

In a recent paper with Bruned, Chandra and Hairer (Imperial College London) we developed a systematic method to determine the counterterms needed to solve a very general class of SPDEs. Combined with other recent results in regularity structures, particularly with a version of the BPHZ renormalization scheme from perturbative QFT, this essentially provides a robust method to solve general systems of semi-linear SPDEs which are subcritical (this last constraint is known in QFT as super-renormalizability). The fundamental technique behind our approach is algebraic, motivated in particular by pre-Lie algebras."

Wednesday, 2 May 2018

The ‘shear’ brilliance of low head hydropower

The generation of electricity from elevated water sources has been the subject of much scientific research over the last century. Typically, in order to produce cost-effective energy, hydropower stations require large flow rates of water across large pressure drops. Although there are many low head sites around the UK, including numerous river weirs and potential tidal sites, low head hydropower is often not pursued because it is uneconomic. Thus the UK and other relatively flat countries miss out on hydropower due to the lack of sufficiently elevated water sources.

In his DPhil project, Oxford Mathematician Graham Benham has been studying a novel type of low head hydropower generation which uses the Venturi effect to amplify the pressure drop across a turbine. The Venturi effect is similar to a mechanical gearing system: instead of dealing with the full flow and a low head, the turbine deals with a reduced flow and an amplified head, thereby allowing for much cheaper electricity. However, the hydropower efficiency depends on how the turbine wake mixes with the main pipe flow, so understanding this mixing process is the key.
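
As a rough textbook illustration of the amplification (not the detailed model developed in this project): for steady, incompressible, inviscid flow through a constriction whose cross-sectional area narrows from $A_1$ to $A_2 < A_1$, conservation of mass gives $u_2 = u_1 A_1/A_2$, and Bernoulli's equation then yields \[ p_1 - p_2 = \frac{\rho}{2}\left(u_2^2 - u_1^2\right) = \frac{\rho}{2}\,u_1^2\left[\left(\frac{A_1}{A_2}\right)^2 - 1\right], \] so a modest velocity head in the main flow is converted into a much larger pressure drop at the throat, which can be used to drive a secondary flow through a turbine.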

Mixing occurs in a thin turbulent region of fluid called a shear layer, or a mixing layer. In their recently published research, Oxford Mathematicians Graham Benham, Ian Hewitt and Colin Please, as well as Oxford physicist Alfonso Castrejon-Pita, present a simple mathematical model for the development of such shear layers inside a pipe. The model is based on the assumption that the flow can be divided into a number of thin regions, and it agrees well with both laboratory experiments and computational turbulence modelling. Specifically, the model is used to solve a shape optimisation problem, which enables the Venturi to be designed to produce the maximum amount of electricity from low head hydropower.

The image above shows the assembly of VerdErg's Venturi-Enhanced Turbine Technology (VETT). VerdErg is a British renewable energy company that has patented VETT. The image was taken from Innovate UK.

Tuesday, 1 May 2018

Inaugural András Gács Award given to Oxford Mathematician Gergely Röst

A new mathematical award has been established in Hungary to honour the memory of the talented Hungarian mathematician András Gács (1969-2009), a man famed for his popularity among students and his capacity to inspire the young. The committee of the András Gács Award aimed to reward young mathematicians (under the age of 46) who not only excelled in research, but also motivated students to pursue mathematics. Oxford Mathematician Gergely Röst, a Research Fellow of the Wolfson Centre for Mathematical Biology, was one of the first two awardees. For nearly a decade Gergely has prepared students of the University of Szeged for various international mathematics competitions. One of these is the National Scientific Students' Associations Conference, a biennial national contest of student research projects with more than 5000 participants. Gergely supervised a prize-winning project in applied mathematics at four consecutive editions (2011, 2013, 2015, 2017).

The award ceremony took place in Budapest, in the Ceremonial Hall of the Eötvös Loránd University (ELTE), during the traditional yearly Mathematician’s Concert. 

Thursday, 19 April 2018

Jochen Kursawe awarded the Reinhart Heinrich Prize

Former Oxford Mathematician Jochen Kursawe, now in the Faculty of Biology, Medicine and Health, University of Manchester, has been awarded the Reinhart Heinrich Prize for his thesis on quantitative approaches to investigating epithelial morphogenesis. Jochen worked with Oxford Mathematician Ruth Baker and former Oxford colleague Alex Fletcher, now in the University of Sheffield, on the research.

The Reinhart Heinrich Prize is awarded annually by the European Society for Mathematical and Theoretical Biology (ESMTB).

Friday, 13 April 2018

Incorporating stress-assisted diffusion in cardiac models

Oxford Mathematician Ricardo Ruiz Baier, in collaboration mainly with the biomedical engineer Alessio Gizzi from Campus Bio-Medico, Rome, has come up with a new class of models that couple diffusion and mechanical stress and which are specifically tailored to the study of cardiac electromechanics.

Cardiac tissue is a complex multiscale medium constituted by highly interconnected units (cardiomyocytes, the cardiac cells) which have remarkable structural and functional properties. Cardiomyocytes are excitable and deformable cells. Inside them, plasma membrane proteins and intracellular organelles all depend on the current mechanical state of the (macroscopic) tissue. Special structures, such as ion channels or gap junctions, rule the passage of charged particles throughout the cell as well as between different cells, and their behaviour can be described by reaction-diffusion systems. All these mechanisms work in synchronisation to produce the coordinated contraction and pumping function of the heart.

During the cardiac cycle, mechanical deformation undoubtedly affects the electrical impulses that modulate muscle contraction, and also modifies the properties of the substrate where the electrical wave propagates. These multiscale interactions are commonly referred to as the mechano-electric feedback (MEF). Theoretical and clinical studies have been contributing to the systematic investigation of MEF effects for over a century; however, several open questions remain. For example, at the cellular level, the effective contribution of stretch-activated ion channels, and the most appropriate way to describe them, are still not completely understood. At the organ scale, the clinical relevance of MEF in patients with heart disease remains an open issue, specifically in relation to how MEF mechanisms translate into ECGs.

The idea of coupling mechanical stress directly as a mechanism to modify diffusive properties has been exploited for several decades in the context of dilute solutes in a solid, but remarkable similarities exist between those fundamental processes and the propagation of the membrane voltage within cardiac tissue. Indeed, on a macroscopically rigid matrix, the propagating membrane voltage can be regarded as a continuum field undergoing slow diffusion.
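
One schematic way to write such a coupling (the precise constitutive choices made in the published model may differ) is to let the conductivity in the reaction-diffusion equation for the transmembrane voltage $v$ depend on the mechanical stress $\boldsymbol{\sigma}$: \[ \partial_t v = \nabla\cdot\big(\mathbf{D}(\boldsymbol{\sigma})\,\nabla v\big) + I_{\mathrm{ion}}(v,\mathbf{w}), \qquad \mathbf{D}(\boldsymbol{\sigma}) = D_0\,\mathbf{I} + D_1\,\boldsymbol{\sigma}, \] where $\mathbf{w}$ collects the ionic (gating) variables and $I_{\mathrm{ion}}$ is the ionic current. Taking $D_1 = 0$ recovers classical Fickian diffusion, while the stress $\boldsymbol{\sigma}$ is itself determined by the equations of nonlinear elasticity with an active contraction driven by $v$.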

The approach described above generalizes Fick's diffusion using Euler's classical axioms of continuously distributed matter. An important part of the project, now under development, deals with the stability of the governing partial differential equations, the existence and uniqueness of weak solutions, and the formulation of mixed-primal and fully mixed discretisations needed to compute numerical solutions in an accurate, robust, and efficient manner. Some of the challenges involved relate to strong nonlinearities, heterogeneity, anisotropy, and the very different spatio-temporal scales present in the model. The construction and analysis of the proposed models and methods require advanced techniques from abstract mathematics, the interpretation of the obtained solutions necessitates a clear understanding of the underlying bio-physical mechanisms, and the implementation (carried out exploiting modern computational architectures) depends on sophisticated tools from computer science.
 
Other applications of a similar framework are encountered in quite different scenarios, for instance in the modelling of lithium-ion batteries. Oxford visiting student Bryan Gomez (from Concepcion, Chile, co-supervised by Ruiz Baier and Gabriel Gatica) is currently looking at the fixed-point solvability and regularity of weak solutions, as well as the construction and analysis of finite element methods tailored to this kind of coupled problem (see also a different perspective focusing on homogenisation and asymptotic analysis, carried out by Oxford Mathematicians Jon Chapman, Alain Goriely, and Colin Please).

Friday, 13 April 2018

How do node attributes mix in large-scale networks? Oxford Mathematics Research investigates

In this collaboration with researchers from the University of Louvain, Renaud Lambiotte from Oxford Mathematics explores the mixing of node attributes in large-scale networks.

A central theme of network science is the heterogeneity present in real-life systems. Take an element, called a node, and its number of connections, called its degree, for instance. Many systems do not have a characteristic degree for their nodes, as they are made of a few highly connected nodes, i.e. hubs, and a majority of poorly connected nodes. Networks are also well known to be small-world in a majority of contexts, as a few links are typically sufficient to connect any pair of nodes. For instance, the Erdős number of Renaud Lambiotte is 3, as he co-authored a paper with Vincent D. Blondel, who co-authored with Harold S. Shapiro, who co-authored with Paul Erdős. Three links are thus sufficient to reach Paul Erdős in the co-authorship network.

Because of their small-worldness, it is often implicitly assumed that node attributes (for instance, the age or gender of an individual in a social network) are homogeneously mixed in a network and that different regions exhibit the same behaviour. The contribution of this work is to show that this is not the case in a variety of systems. Here, the authors focus on assortativity, a network analogue of correlation used to describe how the presence and absence of edges co-varies with the properties of nodes. The authors design a method to characterise the heterogeneity and local variations of assortativity within a network. The left-hand figure, for instance, illustrates an analogy to the classical Anscombe's quartet, with five networks having the same number of nodes, number of links and average assortativity, but different local mixing patterns. The method developed by the authors is based on the notion of a random walk with restart and allows them to define localised metrics of assortativity in the network. The method is tested on various biological, ecological and social networks, and reveals rich mixing patterns that would be obscured by summarising assortativity with a single statistic. As an example, the right-hand figure shows the local assortativity of gender in a sample of Facebook friendships. One observes that different regions of the graph exhibit strikingly different patterns, confirming that a single variable, e.g. global assortativity, would provide a poor description of the system.
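
To give a flavour of the idea, here is a schematic computation (not the authors' exact formulation) of a localised assortativity score for a numeric node attribute: a random walk with restart, i.e. personalised PageRank, concentrates weight around a chosen node, and those weights are then used in a weighted version of the usual edge-wise Pearson correlation. The attribute name and restart probability are illustrative assumptions.

    import networkx as nx
    import numpy as np

    def local_assortativity(G, attr, seed_node, restart_prob=0.15):
        # Random walk with restart: personalised PageRank weights concentrated
        # around seed_node (PageRank damping factor = 1 - restart probability).
        ppr = nx.pagerank(G, alpha=1.0 - restart_prob,
                          personalization={seed_node: 1.0})
        xs, ys, ws = [], [], []
        for u, v in G.edges():
            w = ppr[u] + ppr[v]               # how much the local walk "sees" this edge
            for a, b in ((u, v), (v, u)):     # count each edge in both directions
                xs.append(G.nodes[a][attr])
                ys.append(G.nodes[b][attr])
                ws.append(w)
        xs, ys, ws = map(np.asarray, (xs, ys, ws))
        mx, my = np.average(xs, weights=ws), np.average(ys, weights=ws)
        cov = np.average((xs - mx) * (ys - my), weights=ws)
        sx = np.sqrt(np.average((xs - mx) ** 2, weights=ws))
        sy = np.sqrt(np.average((ys - my) ** 2, weights=ws))
        return cov / (sx * sy)                # weighted Pearson correlation across edges

Computing such a score for every node and colouring the nodes accordingly gives a map of how mixing varies across the network, in the spirit of the Facebook example above.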

For a more detailed description of the work please click here.

Tuesday, 10 April 2018

Ada Lovelace - the Making of a Computer Scientist. The latest book from Oxford Mathematics

Our latest book tells the remarkable story of Ada Lovelace, often considered the world’s first computer programmer. It is co-written by Oxford Mathematicians Christopher Hollings and Ursula Martin together with their colleague Adrian Rice of Randolph-Macon College.

A sheet of apparent doodles of dots and lines lay unrecognised in the Bodleian Library until Ursula Martin spotted what it was - a conversation between Ada Lovelace and Charles Babbage about finding patterns in networks, a very early forerunner of the sophisticated computer techniques used today by the likes of Google and Facebook. It is just one of the remarkable mathematical images to be found in the new book, 'Ada Lovelace: The Making of a Computer Scientist'.

Ada, Countess of Lovelace (1815–1852) was the daughter of poet Lord Byron and his highly educated wife, Anne Isabella. Active in Victorian London's social and scientific elite alongside Mary Somerville, Michael Faraday and Charles Dickens, Ada Lovelace became fascinated by the computing machines devised by Charles Babbage.  A table of mathematical formulae sometimes called the ‘first programme’ occurs in her 1843 paper about his most ambitious invention, his unbuilt ‘Analytical Engine.’

Ada Lovelace had no access to formal school or university education but studied science and mathematics from a young age. This book uses previously unpublished archival material to explore her precocious childhood: her ideas for a steam-powered flying horse, pages from her mathematical notebooks, and penetrating questions about the science of rainbows. A remarkable correspondence course with the eminent mathematician Augustus De Morgan shows her developing into a gifted, perceptive and knowledgeable mathematician, not afraid to challenge her teacher over controversial ideas.

“Lovelace’s far-sighted remarks about whether the machine might think, or compose music, still resonate today,” said Professor Martin. “This book shows how Ada Lovelace, with astonishing prescience, learned the maths she needed to understand the principles behind modern computing.”

Ada Lovelace: The Making of a Computer Scientist, by Christopher Hollings, Ursula Martin and Adrian Rice will be launched on 16th April 2018 by Bodleian Library Publishing, in partnership with the Clay Mathematics Institute.  

The page of doodles is on display until February 2019 as part of the Bodleian Library’s exhibition 'Sappho to Suffrage: women who dared.'

Ursula Martin will be speaking at the Hay Festival and Edinburgh Book Festival.

Monday, 9 April 2018

The contact-free knot - Oxford Mathematics Research explains

Knots are widespread, universal physical structures, from shoelaces to Celtic decoration to the many variants familiar to sailors. They are often simple to construct and aesthetically appealing, yet remain topologically and mechanically quite complex.

Knots are also common in biopolymers such as DNA and proteins, with significant and often detrimental effects, and biological mechanisms also exist for 'unknotting'.

There are numerous types of questions when studying knots. From a topological standpoint, fundamental issues include knot classification and equivalence of different knot descriptions. In continuum mechanics and elasticity, a knot is a physical structure with finite thickness, and aspects of interest include the strength, stability, equilibrium shape, and dynamic behaviour of a knotted filament. Such aspects are strongly connected to points/regions of self-contact, at which distant points push against each other.

Consider a simple hand-held experiment: take a strip of paper or flexible wire, tie it into a standard but loose knot (an open trefoil), and you will observe two isolated points of self-contact surrounding an interval of self-contact. Now add twist by rotating the ends, change the end-to-end distance by bringing your hands closer together or further apart, and combine this with small transverse displacements, i.e. shifting the end. For certain materials, and with a little finesse, all points of contact can be removed.

Such configurations – contact-free, knotted, and mechanically stable – have never been described before, and Oxford Mathematician Derek Moulton and colleagues sought to understand and characterise them in terms of the underlying geometry and mechanics. To do so, they turned to the Kirchhoff equations for elastic rods, a set of 18 nonlinear differential equations that describe the balance of forces and moments as well as the geometrical shape of a thin and long elastic material. These equations admit an incredibly rich and non-unique solution space. A small modification to these equations yields the 'ribbon equations', more appropriate for a strip of paper and with a similarly complex solution space.
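
Written in a standard compact form (and suppressing the constitutive law relating the moment to the curvatures and twist), the static Kirchhoff equations for a rod free of external loads, parametrised by arclength $s$, read \[ \mathbf{F}' = \mathbf{0}, \qquad \mathbf{M}' + \mathbf{r}'\times\mathbf{F} = \mathbf{0}, \qquad \mathbf{r}' = \mathbf{d}_3, \qquad \mathbf{d}_i' = \mathbf{u}\times\mathbf{d}_i, \quad i = 1,2,3, \] where $\mathbf{r}(s)$ is the centreline, $(\mathbf{d}_1,\mathbf{d}_2,\mathbf{d}_3)$ is an orthonormal director frame attached to the cross-section, $\mathbf{u}$ is the curvature-twist (Darboux) vector, and $\mathbf{F}$ and $\mathbf{M}$ are the resultant force and moment. Counting the components of $\mathbf{r}$, the three directors, $\mathbf{F}$ and $\mathbf{M}$ gives the 18 scalar equations mentioned above.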

The goal was to find configurations within this solution space that satisfy the conditions of being contact-free, mechanically stable, and knotted. This was a bit like finding a needle in a haystack, but after applying some numerical tricks they showed that such configurations do in fact exist as theoretical solutions of the full nonlinear 18-dimensional system; they then categorised the space of 'good knots' in terms of the three experimental measures: end-rotation, end-displacement, and end-shift. The numerical study was complemented with an asymptotic analysis of a perturbed 'double ring' solution; the idea being that knotted solutions can be found in the neighbourhood of a planar circle that overlaps itself exactly once.

The analysis suggests that the transverse displacement is a necessary component for generating contact-free knots. While the researchers only considered the "simplest" trefoil knot, they conjecture that toroidal knots of increasing genus can be stabilised in a contact-free state.

For a fuller explanation of the team's work please click here.

Thursday, 5 April 2018

The Oxford Maths Festival 28-29 April 2018

Bringing together talks, workshops, hands-on activities and walking tours, the Oxford Maths Festival is an extravaganza of all the wonderful curiosities mathematics holds. Board games, sport, risk and the wisdom of crowds courtesy of Marcus du Sautoy are all on the menu.

Over two days you can immerse yourself in a wide range of events, with something for everyone, no matter what your age or prior mathematical experience. 

All events are free to attend. Some require pre-booking. For the entire programme, please click here.

Tuesday, 3 April 2018

Alain Goriely and Mike Giles made SIAM Fellows

Oxford Mathematicians Alain Goriely and Mike Giles have been made Fellows of the Society for Industrial and Applied Mathematics (SIAM). Alain is recognised for his "contributions to nonlinear elasticity and theories of biological growth" while Mike receives his Fellowship for his "contributions to numerical analysis and scientific computing, particularly concerning adjoint methods, stochastic simulation, and Multilevel Monte Carlo."

Alain is Professor of Mathematical Modelling in the University of Oxford where he is Director of the Oxford Centre for Industrial and Applied Mathematics (OCIAM) and Co-Director of the International Brain Mechanics and Trauma Lab (IBMTL). He is an applied mathematician with broad interests in mathematics, mechanics, sciences, and engineering. His current research also includes the modelling of new photovoltaic devices, the modelling of cancer and the mechanics of the human brain. He is the author of the recently published Applied Mathematics: A Very Short Introduction. Alain is also the founder of the successful Oxford Mathematics Public Lecture series. You can watch his recent Public Lecture, 'Can Mathematics Understand the Brain', here.

Mike is Professor of Scientific Computing in the University of Oxford. After working at MIT and the Oxford University Computing Laboratory on computational fluid dynamics applied to the analysis and design of gas turbines, he moved into computational finance and research on Monte Carlo methods for a variety of applications. His research focuses on improving the accuracy, efficiency and analysis of Monte Carlo methods. He is also interested in various aspects of scientific computing, including high performance parallel computing and has been working on the exploitation of GPUs (graphics processors) for a variety of financial, scientific and engineering applications.
