News

Tuesday, 8 May 2018

The twists and turns of curved objects - Oxford Mathematics research investigates the stability and robustness of everted spherical caps

Everyday life tells us that curved objects may have two stable states: a contact lens (or the spherical cap obtained by cutting a tennis ball, see picture) can be turned ‘inside out’. Heuristically, this is because turning the object inside out leaves the length of its centreline essentially unchanged (the centreline does not stretch significantly). Such deformations are called ‘isometries’, and the ‘turning inside out’ (or everted) isometry of a thin shell is often referred to as mirror buckling.

However, mirror buckling is only strictly an isometry for objects with a vanishing thickness: an object with small, but finite thickness bends and stretches slightly at its outer edge (see second figure). Depending on its size, this bent region can even prevent the object from having two stable states – if the shell is too ‘shallow’, it will not stay in the everted shape but will ‘snap’ back to the natural state.

The rapid snapping between these two states is used to create striking children’s toys, while the Venus flytrap plant uses an analogous mechanism to catch flies unawares. Surprisingly, however, the conditions under which two stable states exist have not been characterized, even for a spherical shell. In a recent study, Oxford Mathematicians Matteo Taffetani and Dominic Vella with colleagues from Boston University investigated when a spherical shell may exist in this everted state, together with the robustness of the everted state to poking. One surprising result of their analysis is that, though bistability is possible only for shells that are ‘deep enough’, the transition can be understood quantitatively using a mathematical model that exploits the shallowness of a shell.

The study of when the everted state exists provides one perspective on mirror buckling. However, it is also known that very thin shells (which are expected to remain close to isometry) can form polygonal buckles on being poked (think of a ‘broken’ ping pong ball). To gain new understanding of this instability, and how it interacts with snap-through, the authors then studied how robust the everted state is to poking: will it buckle or snap through first? They found that, even once buckled polygonally, the purely axisymmetric theory gives a good account of when snap-through occurs, suggesting that the underlying mirror-buckled solution, while not ultimately attained in this limit, heavily influences the stability of the whole shell structure.

 

Tuesday, 8 May 2018

Tricks of the Tour - optimizing the breakaway position in cycle races using mathematical modelling

Cycling science is a lucrative and competitive industry in which small advantages are often the difference between winning and losing. For example, the 2017 Tour de France was won by a margin of less than one minute for a total race time of more than 86 hours. Such incremental improvements in performance come from a wide range of specialists, including sports scientists, engineers, and dieticians. How can mathematics assist us?

Long-distance cycle races, such as a Tour de France stage, typically follow a prescribed pattern: riders cycle together as a main group, or peloton, for the majority of the race before a solo rider, or small group of riders, makes a break from the peloton, usually relatively close to the finish line. The main reason for this behaviour is that cycling in a group reduces the air resistance that is experienced by a cyclist. With energy savings of up to around a third when cycling in the peloton compared with riding solo, it is energetically favourable to stay with the main field for the majority of the race. However, if a cyclist wishes to win a race or a Tour stage then they must decide when to make a break. In doing so, the rider must provide an additional pedal force to offset the effects of air resistance that would otherwise be mitigated by riding in the peloton. However, the cyclist will not be able to sustain this extra force indefinitely, with fatigue effects coming into play. As a result, a conflict emerges: if the cyclist breaks away too soon then they risk fatigue effects kicking in before the finish line and being caught by the peloton. On the other hand, if the cyclist breaks too late then they reduce their chance of a large winning margin.

So Oxford Mathematicians Ian Griffiths and Lewis Gaul, together with Stuart Thomson from MIT, asked the question: ‘for a given course profile and rider statistics, what is the optimum time to make a breakaway that maximizes the finish time ahead of the peloton?’

To answer the question, a mathematical model is derived for the cycling dynamics, appealing to Newton’s Second Law, which captures the advantage of riding in the peloton to reduce aerodynamic drag as well as the physical limitations (due to fatigue) on the force that can be provided by the leg muscles. The concentration of potassium ions in the muscle cells is also a strong factor in muscle fatigue: it is responsible for the pain you experience in your legs after a period of exertion, and it sets a rider’s baseline level of exertion. The resulting model captures the evolution of force output over time due to all of these effects and is applied to a breakaway situation to understand how the muscles respond after a rider exerts a force above their sustainable level.
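
A caricature of this force balance can be put in code. The sketch below is not the authors’ model: all parameter values are assumed, and the potassium-based fatigue dynamics are replaced by a simple exponential decay of the breakaway force towards a sustainable baseline. It integrates Newton’s Second Law with reduced drag in the peloton and sweeps the breakaway time:

```python
import numpy as np

# All parameter values are illustrative assumptions, not the study's.
m, rho, g = 80.0, 1.2, 9.81   # rider + bike mass (kg), air density, gravity
CdA = 0.32                    # drag area of a solo rider (m^2)
peloton_factor = 0.6          # drag multiplier when sheltered in the peloton

def dvdt(v, F_pedal, slope, sheltered):
    """Newton's Second Law: pedal force against aerodynamic drag and gravity."""
    drag = 0.5 * rho * CdA * (peloton_factor if sheltered else 1.0) * v**2
    return (F_pedal - drag - m * g * slope) / m

def distance(t_break, F_base=28.0, F_boost=60.0, T=3600.0, dt=0.5, slope=0.0):
    """Distance covered in a race of duration T, breaking away at t_break.

    Fatigue is caricatured by an exponential decay of the breakaway force
    back towards the sustainable baseline F_base (time scale 600 s)."""
    v, x = 10.0, 0.0
    for i in range(int(T / dt)):
        t = i * dt
        if t < t_break:      # sheltered in the peloton at sustainable effort
            F, sheltered = F_base, True
        else:                # solo: boosted at first, then fatiguing
            F = F_base + (F_boost - F_base) * np.exp(-(t - t_break) / 600.0)
            sheltered = False
        v += dvdt(v, F, slope, sheltered) * dt
        x += v * dt
    return x

# Sweep the breakaway time: break too early and the fatigued solo phase
# dominates; break too late and the boosted effort is wasted.
t_candidates = np.linspace(0.0, 3600.0, 37)
best = max(t_candidates, key=distance)
print(f"optimal breakaway time in this toy model: {best:.0f} s into a 3600 s race")
```

Even this toy reproduces the trade-off described above: an interior optimum emerges between breaking away too early and too late.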

Asymptotic techniques are used that exploit the fact that the course may be divided into sections within which variations from a mean course gradient are typically small. This leads to analytical solutions that bypass the need for performing complex numerical parameter sweeps. The asymptotic solutions provide a method to draw direct relationships between the values of physical parameters and the time taken to cover a set distance.

The model serves to frame intuitive results in a quantitative way. For instance, it is expected that a breakaway is more likely to succeed on a climb stage, as speeds are lower and so the energy penalty from wind resistance when cycling alone is reduced. The theory confirms this observation while also providing a measure of precisely how much more advantageous a breakaway on a hill climb would be. For multiple stage races the theory can even identify which stages are best to make a breakaway and when it is better to stay in the peloton for the entire stage to conserve energy. The resulting theory could allow a cycle team to identify the strategy and exact breakaway position during each stage in advance of a major race, with very little effort. Such prior information could provide the necessary edge required to secure the marginal gains required to win a race.

While it is clear that winning a Tour de France stage involves a great deal of preparation, physical fitness and, ultimately, luck on the day, mathematics can provide a fundamental underpinning for the race dynamics that can guide strategies to increase the chance of such wins.

Monday, 7 May 2018

Do stochastic systems converge to a well-defined limit? Oxford Mathematics Research investigates

Oxford Mathematician Ilya Chevyrev talks about his research into using stochastic analysis to understand complex systems.

"Stochastic analysis sits on the boundary between probability theory and analysis. It is often a useful tool in studying complex systems subject to noise. Such systems appear frequently in the financial markets, statistical physics, mathematical biological, etc., and it becomes extremely important to determine their statistical properties: can the noise cause the system to blow-up or collapse? How quickly do small perturbations propagate? Is there an equilibrium state? Is it always reached? Due to the fundamental importance of such questions, mathematicians from many fields have devised methods to address them, ranging from the analysis of partial differential equations (PDEs) to game theory.

A question one often encounters is whether a family of stochastic systems converges to a well-defined limit. For example, consider the Glauber dynamics of the Ising-Kac model: we are given a two-dimensional lattice $\epsilon \mathbb{Z}^2$ with spacing $\epsilon > 0$. At each site we place a ferromagnet carrying an up or down spin (i.e. we consider a function $\sigma : \epsilon \mathbb{Z}^2 \to \{-1,1\}$). As time evolves, the ferromagnets interact according to some prescribed dynamics. Sending the lattice spacing $\epsilon \to 0$ and rescaling the dynamics appropriately, one asks whether the process converges to a non-trivial limit (for this exact example, see this article of Mourrat-Weber).
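
For readers who want to experiment, here is a minimal sketch of Glauber (heat-bath) dynamics for the nearest-neighbour Ising model on a periodic lattice. It is only a caricature of the Ising-Kac set-up (the Kac model couples each spin to all spins within a long range that diverges on the lattice scale as $\epsilon \to 0$), but it shows the kind of microscopic dynamics whose scaling limit is in question:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64        # lattice side; spins live on {0,...,N-1}^2 with periodic boundary
beta = 0.4    # inverse temperature (illustrative)
sigma = rng.choice([-1, 1], size=(N, N))   # spin field sigma: lattice -> {-1, 1}

def glauber_step(sigma):
    """One heat-bath update at a uniformly chosen site."""
    i, j = rng.integers(N, size=2)
    # Local field: sum of the four nearest-neighbour spins (periodic boundary)
    h = (sigma[(i + 1) % N, j] + sigma[(i - 1) % N, j]
         + sigma[i, (j + 1) % N] + sigma[i, (j - 1) % N])
    # Heat-bath rule: up-spin probability e^{beta h} / (e^{beta h} + e^{-beta h})
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
    sigma[i, j] = 1 if rng.random() < p_up else -1

for _ in range(200_000):
    glauber_step(sigma)
print("mean magnetisation:", sigma.mean())
```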

It turns out that for a wide class of models, one can describe (or at least expect to describe) the limit as a stochastic PDE of the form \[ \partial_t u = \mathcal{L} u + F(u,\nabla u, \xi) \] where $\mathcal{L}$ is an elliptic operator, $F$ is a non-linearity which can depend on the solution $u$ and its derivatives $\nabla u$, and $\xi$ is the noise term. A difficulty one often encounters in studying such equations is that they are classically ill-posed. This means that, given a typical realisation of $\xi$, there exist no function spaces in which we can solve for $u$ using e.g. a fixed point argument. Whenever this occurs, we call the equation singular. The fundamental obstacle, which is also typically encountered in quantum field theory (QFT), is that there is no canonical way to define products of distributions.

An example of a singular SPDE with motivations from QFT is the dynamical $\Phi^4_3$ model \[ \partial_t u = \Delta u - u^3 + \xi \] posed in $(1+3)$ dimensions, $u : [0,T]\times \mathbb{R}^3 \to \mathbb{R}$. Here $\xi$ is a space-time white noise on $\mathbb{R}^4$ (a random distribution). The noise is sufficiently irregular that we expect the solution $u$ to belong to a space of distributions (not functions!), rendering the cubic term $u^3$ ill-posed. This has further ramifications if one takes approximations of the equation: substituting $\xi$ by a smoothed out version $\xi_\epsilon$ so that $\xi_\epsilon \to \xi$ as $\epsilon \to 0$, the corresponding classical smooth solutions $u_\epsilon$ do not converge to a non-trivial limit as $\epsilon \to 0$.

Starting with the famous KPZ equation, the last five years have seen much progress in providing a solution theory to singular SPDEs. The theories of regularity structures and paracontrolled distributions have been particularly successful at this task. An important feature of any such solution theory is the need for renormalization: smooth/lattice approximations of the equation converge only after appropriate counterterms are added to the equation. In the $\Phi^4_3$ example, this means that there exists a diverging family of constants $(C_\epsilon)_{\epsilon > 0}$ such that solutions to the renormalised PDEs \[ \partial_t u_\epsilon = \Delta u_\epsilon - u_\epsilon^3 + \xi_\epsilon + C_\epsilon u_\epsilon \] converge to a non-trivial limit. It is this limit which one calls the solution of the original $\Phi^4_3$ SPDE.
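
The role of the dimension can be made concrete numerically. In one space dimension the same equation is classically well-posed, so a naive finite-difference scheme needs no counterterm; the sketch below simulates this 1D analogue (all discretisation choices are illustrative). In three dimensions the very same scheme would only converge after inserting the diverging counterterm $C_\epsilon u_\epsilon$ discussed above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Explicit scheme for du = (u_xx - u^3) dt + dW on the circle [0, 1].
M = 256                  # spatial grid points (periodic)
dx = 1.0 / M
dt = 0.1 * dx**2         # explicit-scheme stability constraint
u = np.zeros(M)

for _ in range(20_000):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # periodic Laplacian
    # Discretised space-time white noise: each cell of size dt*dx receives an
    # independent Gaussian of variance dt*dx, i.e. density N(0,1)/sqrt(dt*dx)
    xi = rng.standard_normal(M) / np.sqrt(dt * dx)
    u = u + dt * (lap - u**3 + xi)

print("max |u| at final time:", np.abs(u).max())
```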

In a recent paper with Bruned, Chandra and Hairer (Imperial College London) we developed a systematic method to determine the counterterms needed to solve a very general class of SPDEs. Combined with other recent results in regularity structures, particularly with a version of the BPHZ renormalization scheme from perturbative QFT, this essentially provides a robust method to solve general systems of semi-linear SPDEs which are subcritical (this last constraint is known in QFT as super-renormalizability). The fundamental technique behind our approach is algebraic, motivated in particular by pre-Lie algebras."

Wednesday, 2 May 2018

The ‘shear’ brilliance of low head hydropower

The generation of electricity from elevated water sources has been the subject of much scientific research over the last century. Typically, in order to produce cost-effective energy, hydropower stations require large flow rates of water across large pressure drops. Although there are many low head sites around the UK, including numerous river weirs and potential tidal sites, the pursuit of low head hydropower is often avoided because it is uneconomic. Thus the UK and other relatively flat countries miss out on hydropower due to the lack of sufficiently elevated water sources.

In his DPhil project, Oxford Mathematician Graham Benham has been studying a novel type of low head hydropower generation which uses the Venturi effect to amplify the pressure drop across a turbine. The Venturi effect is similar to a mechanical gearing system. Instead of a turbine dealing with the full flow and a low head, it deals with a reduced flow and an amplified head, thereby allowing for much cheaper electricity. However, the hydropower efficiency depends on how the turbine wake mixes with the main pipe flow, so the key is to understand this mixing process.
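
The ‘gearing’ can be quantified with the textbook lossless relations: mass conservation fixes the velocity in the contracted throat, and Bernoulli’s equation then gives a pressure drop growing with the square of the area ratio. The sketch below shows only this idealised amplification; the research itself is concerned with how much of the amplified head survives the mixing losses downstream:

```python
# Idealised (lossless) Venturi: mass conservation plus Bernoulli's equation.
rho = 1000.0   # water density (kg/m^3)

def venturi_pressure_drop(Q, A_pipe, A_throat):
    """Pressure drop (Pa) between the main pipe and the throat for flow rate Q."""
    v_pipe, v_throat = Q / A_pipe, Q / A_throat          # mass conservation
    return 0.5 * rho * (v_throat**2 - v_pipe**2)         # Bernoulli

# The pressure drop available to the turbine grows with the area ratio squared
Q, A = 1.0, 1.0   # illustrative flow rate (m^3/s) and pipe area (m^2)
for ratio in (1.5, 2.0, 3.0):
    dp = venturi_pressure_drop(Q, A, A / ratio)
    print(f"area ratio {ratio}: pressure drop {dp / 1000:.1f} kPa")
```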

Mixing occurs in a thin turbulent region of fluid called a shear layer, or a mixing layer. In their recently published research, Oxford Mathematicians Graham Benham, Ian Hewitt and Colin Please, as well as Oxford physicist Alfonso Castrejon-Pita, present a simple mathematical model for the development of such shear layers inside a pipe. The model is based on the assumption that the flow can be divided into a number of thin regions, and its predictions agree well with both laboratory experiments and computational turbulence modelling. Specifically, the model is used to solve a shape optimisation problem, enabling the Venturi to be designed so as to produce the maximum amount of electricity from low head hydropower.

The image above shows the assembly of VerdErg's Venturi-Enhanced Turbine Technology (VETT). VerdErg is a British renewable energy company that has patented VETT. The image was taken from Innovate UK.

Tuesday, 1 May 2018

Inaugural András Gács Award given to Oxford Mathematician Gergely Röst

A new mathematical award has been established in Hungary to honour the memory of the talented Hungarian mathematician András Gács (1969-2009), a man famed for his popularity among students and his capacity to inspire the young. The committee of the András Gács Award aims to reward young mathematicians (under the age of 46) who not only excel in research but also motivate students to pursue mathematics. Oxford Mathematician Gergely Röst, a Research Fellow of the Wolfson Centre for Mathematical Biology, was one of the first two awardees. For nearly a decade Gergely has prepared the students of the University of Szeged for various international mathematics competitions. One of these is the National Scientific Students' Associations Conference, a biennial national contest of student research projects with more than 5000 participants. Gergely supervised a prize-winning project in applied mathematics in four successive contests (2011, 2013, 2015, 2017).

The award ceremony took place in Budapest, in the Ceremonial Hall of the Eötvös Loránd University (ELTE), during the traditional yearly Mathematician’s Concert. 

Thursday, 19 April 2018

Jochen Kursawe awarded the Reinhart Heinrich Prize

Former Oxford Mathematician Jochen Kursawe, now in the Faculty of Biology, Medicine and Health at the University of Manchester, has been awarded the Reinhart Heinrich Prize for his thesis on quantitative approaches to investigating epithelial morphogenesis. Jochen worked on the research with Oxford Mathematician Ruth Baker and former Oxford colleague Alex Fletcher, now at the University of Sheffield.

The Reinhart Heinrich Prize is awarded annually by the European Society for Mathematical and Theoretical Biology (ESMTB).

Friday, 13 April 2018

Incorporating stress-assisted diffusion in cardiac models

Oxford Mathematician Ricardo Ruiz Baier, in collaboration mainly with the biomedical engineer Alessio Gizzi from Campus Bio-Medico, Rome, has come up with a new class of models that couple diffusion and mechanical stress and which are specifically tailored to the study of cardiac electromechanics.

Cardiac tissue is a complex multiscale medium constituted by highly interconnected units (cardiomyocytes, the cardiac cells) which have remarkable structural and functional properties. Cardiomyocytes are excitable and deformable cells. Inside them, plasma membrane proteins and intracellular organelles all depend on the current mechanical state of the (macroscopic) tissue. Special structures, such as ion channels or gap junctions, rule the passage of charged particles throughout the cell as well as between different cells, and their behaviour can be described by reaction-diffusion systems. All these mechanisms work in synchrony to produce the coordinated contraction and pumping function of the heart.

During the cardiac cycle, mechanical deformation undoubtedly affects the electrical impulses that modulate muscle contraction, and also modifies the properties of the substrate in which the electrical wave propagates. These multiscale interactions are commonly referred to as the mechano-electric feedback (MEF). Theoretical and clinical studies have been contributing to the systematic investigation of MEF effects for over a century; however, several open questions remain. For example, at the cellular level it is still not completely understood what the effective contribution of stretch-activated ion channels is, nor what the most appropriate way to describe them might be. In addition, at the organ scale, the clinical relevance of MEF in patients with heart disease remains an open issue, specifically in relation to how MEF mechanisms translate into ECGs.

The idea of coupling mechanical stress directly as a mechanism to modify diffusive properties has been exploited for several decades in the context of dilute solutes in a solid, but remarkable similarities exist between these fundamental processes and the propagation of membrane voltage within cardiac tissue. Indeed, on a macroscopically rigid matrix, the propagating membrane voltage can be regarded as a continuum field undergoing slow diffusion.

The approach described above generalises Fickian diffusion using the classical Euler axioms of continuously distributed matter. An important part of the project, now under development, deals with the stability of the governing partial differential equations, the existence and uniqueness of weak solutions, and the formulation of mixed-primal and fully mixed discretisations needed to compute numerical solutions in an accurate, robust, and efficient manner. Some of the challenges involved relate to strong nonlinearities, heterogeneity, anisotropy, and the very different spatio-temporal scales present in the model. The construction and analysis of the proposed models and methods require advanced techniques from abstract mathematics, the interpretation of the obtained solutions necessitates a clear understanding of the underlying bio-physical mechanisms, and the implementation (carried out exploiting modern computational architectures) depends on sophisticated tools from computer science.
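
As a cartoon of stress-assisted diffusion, one can simply let the diffusivity depend on a prescribed static stress field, giving a nonlinear diffusion equation amenable to a few lines of finite differences. The sketch below is a one-dimensional toy with assumed parameters, not the coupled electromechanical model itself, which requires the mixed finite-element machinery described above:

```python
import numpy as np

# 1D caricature: u_t = (D(sigma) u_x)_x with a prescribed static stress sigma.
# The stress profile and all parameter values are assumed for illustration.
M = 200
x = np.linspace(0.0, 1.0, M)
dx = x[1] - x[0]
dt = 0.2 * dx**2                          # explicit-scheme stability (D <= 1)

sigma = np.exp(-((x - 0.5) / 0.1)**2)     # assumed stress concentration
D = 0.2 + 0.8 * sigma                     # diffusivity enhanced by stress

u = np.where(x < 0.1, 1.0, 0.0)           # initial localised concentration
mass0 = u.sum() * dx

for _ in range(5000):
    interface_D = 0.5 * (D[:-1] + D[1:])                 # D at cell interfaces
    flux = interface_D * np.diff(u) / dx                 # D u_x
    flux = np.concatenate(([0.0], flux, [0.0]))          # zero-flux boundaries
    u += dt * np.diff(flux) / dx                         # u_t = (D u_x)_x

print("mass before:", mass0, "after:", u.sum() * dx)     # conserved
```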
 
Other applications of a similar framework are encountered in quite different scenarios, for instance in the modelling of lithium ion batteries. Oxford visiting student Bryan Gomez (from Concepcion, Chile, co-supervised by Ruiz Baier and Gabriel Gatica) is currently looking at the fixed-point solvability and regularity of weak solutions, as well as the construction and analysis of finite element methods tailored to this kind of coupled problem (see also a different perspective focusing on homogenisation and asymptotic analysis, carried out by Oxford Mathematicians Jon Chapman, Alain Goriely, and Colin Please).

Friday, 13 April 2018

How do node attributes mix in large-scale networks? Oxford Mathematics Research investigates

In this collaboration with researchers from the University of Louvain, Renaud Lambiotte from Oxford Mathematics explores the mixing of node attributes in large-scale networks.

A central theme of network science is the heterogeneity present in real-life systems. Take an element, called a node, and its number of connections, called its degree, for instance. Many systems do not have a characteristic degree for the nodes, as they are made of a few highly connected nodes, i.e. hubs, and a majority of poorly connected nodes. Networks are also well known to be small-world in a majority of contexts, as a few links are typically sufficient to connect any pair of nodes. For instance, the Erdős number of Renaud Lambiotte is 3, as he co-authored a paper with Vincent D. Blondel, who co-authored with Harold S. Shapiro, who co-authored with Paul Erdős: three links are sufficient to reach Paul Erdős in the co-authorship network.
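
Computationally, an Erdős number is just a shortest-path length in the co-authorship network, which a breadth-first search recovers. A minimal sketch on a toy graph containing exactly the chain quoted above (names written without diacritics in the code):

```python
from collections import deque

# Toy co-authorship graph containing exactly the chain quoted above.
coauthors = {
    "Lambiotte": ["Blondel"],
    "Blondel": ["Lambiotte", "Shapiro"],
    "Shapiro": ["Blondel", "Erdos"],
    "Erdos": ["Shapiro"],
}

def erdos_number(graph, start, target="Erdos"):
    """Breadth-first search: length of the shortest path from start to target."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None   # target not reachable

print(erdos_number(coauthors, "Lambiotte"))   # -> 3
```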

Because of their small-worldness, it is often implicitly assumed that node attributes (for instance, the age or gender of an individual in a social network) are homogeneously mixed in a network and that different regions exhibit the same behaviour. The contribution of this work is to show that this is not the case in a variety of systems. Here, the authors focus on assortativity, a network analogue of correlation used to describe how the presence and absence of edges co-varies with the properties of nodes. The authors design a method to characterise the heterogeneity and local variations of assortativity within a network. The left-hand figure, for instance, illustrates an analogy to the classical Anscombe’s quartet, with five networks having the same number of nodes, number of links and average assortativity, but different local mixing patterns. The method developed by the authors is based on the notion of a random walk with restart and allows them to define localized metrics of assortativity in the network. The method is tested on various biological, ecological and social networks, and reveals rich mixing patterns that would be obscured by summarising assortativity with a single statistic. As an example, the right-hand figure shows the local assortativity of gender in a sample of Facebook friendships. One observes that different regions of the graph exhibit strikingly different patterns, confirming that a single variable, e.g. global assortativity, would provide a poor description of the system.
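
As a rough illustration of the idea, the random walk with restart can be implemented as personalised PageRank, and its stationary weights used to average a mixing score around each node. The snippet below is a simplified stand-in for the paper’s local assortativity, run on a standard test graph rather than the datasets used in the study:

```python
import networkx as nx

# Standard test graph with a binary node attribute (illustrative only).
G = nx.karate_club_graph()
for v in G:
    G.nodes[v]["x"] = 1 if G.nodes[v]["club"] == "Mr. Hi" else 0

# Global assortativity: a single number for the whole network
print("global:", nx.attribute_assortativity_coefficient(G, "x"))

def local_mixing(G, v, alpha=0.85):
    """Average a per-node mixing score with random-walk-with-restart weights
    (personalised PageRank centred at v). A simplified stand-in for the
    paper's local assortativity, not the exact metric."""
    w = nx.pagerank(G, alpha=alpha, personalization={v: 1.0})
    def same_attribute_fraction(u):
        nbrs = list(G[u])
        return sum(G.nodes[n]["x"] == G.nodes[u]["x"] for n in nbrs) / len(nbrs)
    return sum(w[u] * same_attribute_fraction(u) for u in G)

# Different regions of the same graph can mix quite differently
print("around node 0:", local_mixing(G, 0))
print("around node 33:", local_mixing(G, 33))
```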

For a more detailed description of the work please click here.

Tuesday, 10 April 2018

Ada Lovelace - the Making of a Computer Scientist. The latest book from Oxford Mathematics

Our latest book tells the remarkable story of Ada Lovelace, often considered the world’s first computer programmer. It is co-written by Oxford Mathematicians Christopher Hollings and Ursula Martin together with their colleague Adrian Rice from Randolph-Macon College.

A sheet of apparent doodles of dots and lines lay unrecognised in the Bodleian Library until Ursula Martin spotted what it was - a conversation between Ada Lovelace and Charles Babbage about finding patterns in networks, a very early forerunner of the sophisticated computer techniques used today by the likes of Google and Facebook. It is just one of the remarkable mathematical images to be found in the new book, 'Ada Lovelace: The Making of a Computer Scientist'.

Ada, Countess of Lovelace (1815–1852) was the daughter of poet Lord Byron and his highly educated wife, Anne Isabella. Active in Victorian London's social and scientific elite alongside Mary Somerville, Michael Faraday and Charles Dickens, Ada Lovelace became fascinated by the computing machines devised by Charles Babbage.  A table of mathematical formulae sometimes called the ‘first programme’ occurs in her 1843 paper about his most ambitious invention, his unbuilt ‘Analytical Engine.’

Ada Lovelace had no access to formal school or university education but studied science and mathematics from a young age. This book uses previously unpublished archival material to explore her precocious childhood: her ideas for a steam-powered flying horse, pages from her mathematical notebooks, and penetrating questions about the science of rainbows. A remarkable correspondence course with the eminent mathematician Augustus De Morgan shows her developing into a gifted, perceptive and knowledgeable mathematician, not afraid to challenge her teacher over controversial ideas.

“Lovelace’s far-sighted remarks about whether the machine might think, or compose music, still resonate today,” said Professor Martin. “This book shows how Ada Lovelace, with astonishing prescience, learned the maths she needed to understand the principles behind modern computing.”

Ada Lovelace: The Making of a Computer Scientist, by Christopher Hollings, Ursula Martin and Adrian Rice will be launched on 16th April 2018 by Bodleian Library Publishing, in partnership with the Clay Mathematics Institute.  

The page of doodles is on display until February 2019 as part of the Bodleian Library’s exhibition 'Sappho to Suffrage: women who dared.'

Ursula Martin will be speaking at the Hay Festival and Edinburgh Book Festival.

Monday, 9 April 2018

The contact-free knot - Oxford Mathematics Research explains

Knots are widespread, universal physical structures, from shoelaces to Celtic decoration to the many variants familiar to sailors. They are often simple to construct and aesthetically appealing, yet remain topologically and mechanically quite complex.

Knots are also common in biopolymers such as DNA and proteins, with significant and often detrimental effects, and biological mechanisms also exist for 'unknotting'.

Numerous types of questions arise when studying knots. From a topological standpoint, fundamental issues include knot classification and equivalence of different knot descriptions. In continuum mechanics and elasticity, a knot is a physical structure with finite thickness, and aspects of interest include the strength, stability, equilibrium shape, and dynamic behaviour of a knotted filament. Such aspects are strongly connected to points/regions of self-contact, at which points far apart along the filament push against each other.

Consider a simple hand-held experiment: take a strip of paper or flexible wire, tie it into a standard but loose knot (an open trefoil), and you will observe two isolated points of self-contact surrounding an interval of self-contact. Now add twist by rotating the ends, change the end-to-end distance by bringing your hands closer together or further apart, and combine this with small transverse displacements, i.e. shifting the end. For certain materials and with a little finesse, all points of contact can be removed.

Such configurations – contact-free, knotted, and mechanically stable – have never been described before, and Oxford Mathematician Derek Moulton and colleagues sought to understand and characterise them in terms of the underlying geometry and mechanics. To do so, they turned to the Kirchhoff equations for elastic rods, a set of 18 nonlinear differential equations that describe the balance of forces and moments as well as the geometrical shape of a thin and long elastic material. These equations admit an incredibly rich and non-unique solution space. A small modification to these equations yields the 'ribbon equations', more appropriate for a strip of paper and with a similarly complex solution space.
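
To give a flavour of these equations without handling all 18 unknowns at once, their planar specialisation, the classical elastica, is already a nonlinear boundary value problem of the same family and can be solved with an off-the-shelf BVP solver. A minimal sketch (illustrative load and boundary conditions, not the knotted 3D problem itself):

```python
import numpy as np
from scipy.integrate import solve_bvp

# Planar elastica: theta'' + lam * sin(theta) = 0, theta(0) = theta(1) = 0,
# a buckled strut whose ends are held aligned; lam is the dimensionless load.
lam = 12.0   # just above the first buckling load, pi^2 ~ 9.87

def rhs(s, y):
    theta, dtheta = y
    return np.vstack([dtheta, -lam * np.sin(theta)])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])   # ends aligned with the axis

s = np.linspace(0.0, 1.0, 50)
# Non-trivial initial guess steers the solver away from the straight state
y_guess = np.vstack([0.3 * np.sin(np.pi * s), 0.3 * np.pi * np.cos(np.pi * s)])
sol = solve_bvp(rhs, bc, s, y_guess)
print("converged:", sol.status == 0, "| max angle:", sol.y[0].max())
```

As with the knotted configurations described above, the buckled solution here is found only because the initial guess steers the solver away from the trivial branch.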

The goal was to find configurations within this solution space that satisfy the conditions of being contact-free, mechanically stable, and knotted. This was a bit like finding a needle in a haystack, but after applying some numerical tricks they showed that in fact such configurations exist as theoretical solutions of the full nonlinear 18D system; they then categorised the space of 'good knots' in terms of the three experimental measures: end-rotation, end-displacement, and end-shift. The numerical study was complemented with an asymptotic analysis of a perturbed 'double ring' solution; the idea being that knotted solutions can be found in the neighbourhood of a planar circle that overlaps itself exactly once.

The analysis suggests that the transverse displacement is a necessary component for generating contact-free knots. While the researchers considered only the "simplest" trefoil knot, they conjecture that toroidal knots of increasing genus can be stabilised in a contact-free state.

For a fuller explanation of the team's work please click here.
