News

Monday, 14 May 2018

Flagging corruption in Government contracting in Africa

Public procurement – or government contracting – is critical to development, accounting for as much as 50% of government spending in developing countries. The procurement process is known to be highly prone to corruption; however, corruption is difficult to detect or measure. A recent project led by the University of Oxford, in collaboration with the University of Sussex and the Government Transparency Institute, has been developing and implementing new methodologies for analysing large open public procurement datasets to detect ‘red flags’ that could indicate risks of corruption. Now, researchers from Oxford Mathematics are supporting the delivery of workshops in Africa to share these new methodologies and software tools with anti-corruption groups and researchers, enabling them to analyse corruption risks in public procurement data.

Danny Parsons, of the African Maths Initiative and a Postdoctoral Research Assistant working with Prof Balazs Szendroi at the Mathematical Institute in Oxford, together with Dr Elizabeth David-Barrett (University of Sussex), delivered a two-day workshop on Analysing Public Procurement Data for Corruption Risks at the African Institute for Mathematical Sciences (AIMS), Ghana. The workshop grew out of an earlier collaboration between Dr David-Barrett, Dr Mihaly Fazekas (Government Transparency Institute), Prof Szendroi and Danny Parsons on data-driven approaches to measuring corruption risks in government contracting. During that project Danny Parsons worked on implementing new methodologies for detecting corruption risks in an open-source front end to the R statistics language, making it easier for researchers in political science, civil society organisations and anti-corruption agencies to detect patterns of corruption risk in public procurement data.

In this latest workshop in Ghana, which brought together students and researchers in the mathematical sciences and political science as well as civil society groups, Danny showed participants how they could use these recently developed software tools to investigate "red flag" indicators of corruption risk in large open public procurement datasets. The event highlighted the potential impact this could have on the fight against corruption in Africa: freely available software tools tailored to public procurement data, and a growing movement towards governments opening up their data. The workshop was also picked up by local media (the Ghana News Agency and the Ghana Times), which stressed its relevance to ongoing discussions in Ghana around open government data, and in particular the Right to Information Bill.
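To give a flavour of what such an analysis involves (the workshop tools themselves are built on an open-source front end to R), here is a minimal Python sketch of two standard red-flag indicators: the share of single-bid contracts per buyer, and the concentration of a buyer's spend on its top supplier. The dataset, column names and figures are entirely hypothetical.

```python
import pandas as pd

# Hypothetical open procurement data: one row per awarded contract.
contracts = pd.DataFrame({
    "buyer":     ["MinHealth", "MinHealth", "MinWorks", "MinWorks", "MinWorks"],
    "supplier":  ["A", "A", "B", "C", "B"],
    "n_bidders": [1, 1, 4, 3, 1],
    "value":     [1.2e6, 0.8e6, 2.0e6, 0.5e6, 1.1e6],
})

# Red flag 1: share of a buyer's contracts awarded with only a single bidder.
single_bid_share = (contracts["n_bidders"] == 1).groupby(contracts["buyer"]).mean()

# Red flag 2: share of a buyer's total spend going to its top supplier.
spend = contracts.groupby(["buyer", "supplier"])["value"].sum()
top_supplier_share = spend.groupby("buyer").max() / spend.groupby("buyer").sum()

print(pd.DataFrame({"single_bid_share": single_bid_share,
                    "top_supplier_share": top_supplier_share}))
```

High values of either indicator do not prove corruption, but they flag buyers whose contracting patterns merit closer scrutiny.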

Wednesday, 9 May 2018

Andreas Sojmark awarded the Bar-Ilan Young Researcher Prize in Financial Mathematics

Oxford Mathematician Andreas Sojmark, a DPhil student in the EPSRC Centre for Doctoral Training in Partial Differential Equations, has been awarded the Bar-Ilan Young Researcher Prize in Financial Mathematics. The prize is awarded to a PhD student or early career postdoctoral researcher for an outstanding paper in financial mathematics submitted for the Third Bar-Ilan Conference in Financial Mathematics.

Andreas' paper ‘An SPDE model for systemic risk with endogenous contagion’ will be presented at the conference at the end of May.

Tuesday, 8 May 2018

The twists and turns of curved objects - Oxford Mathematics research investigates the stability and robustness of everted spherical caps

Everyday life tells us that curved objects may have two stable states: a contact lens (or the spherical cap obtained by cutting a tennis ball, see picture) can be turned ‘inside out’. Heuristically, this is because the act of turning the object inside out keeps the central line of the object the same length (the centreline does not stretch significantly). Such deformations are called ‘isometries’ and the ‘turning inside out’ (or everted) isometry of a thin shell is often referred to as mirror buckling.

However, mirror buckling is only strictly an isometry for objects with a vanishing thickness: an object with small, but finite thickness bends and stretches slightly at its outer edge (see second figure). Depending on its size, this bent region can even prevent the object from having two stable states – if the shell is too ‘shallow’, it will not stay in the everted shape but will ‘snap’ back to the natural state.

The rapid snapping between these two states is used to create striking children’s toys, while the Venus flytrap plant uses an analogous mechanism to catch flies unawares. Surprisingly, however, the conditions under which two stable states exist have not been characterized, even for a spherical shell. In a recent study, Oxford Mathematicians Matteo Taffetani and Dominic Vella, with colleagues from Boston University, investigated when a spherical shell may exist in this everted state, together with the robustness of the everted state to poking. One surprising result of their analysis is that, though bistability is possible only for shells that are ‘deep enough’, the transition can be understood quantitatively using a mathematical model that exploits the shallowness of the shell.

The study of when the everted state exists provides one perspective on mirror buckling. However, it is also known that very thin shells (which are expected to remain close to isometry) can form polygonal buckles on being poked (think of a ‘broken’ ping pong ball). To gain new understanding of this instability, and of how it interacts with snap-through, the authors then studied how robust the everted state is to poking: will it buckle or snap through first? They found that, even once the shell has buckled polygonally, the purely axisymmetric theory gives a good account of when snap-through occurs, suggesting that the underlying mirror-buckled solution, while not ultimately attained in this limit, heavily influences the stability of the whole shell structure.

Tuesday, 8 May 2018

Tricks of the Tour - optimizing the breakaway position in cycle races using mathematical modelling

Cycling science is a lucrative and competitive industry in which small advantages are often the difference between winning and losing. For example, the 2017 Tour de France was won by a margin of less than one minute for a total race time of more than 86 hours. Such incremental improvements in performance come from a wide range of specialists, including sports scientists, engineers, and dieticians. How can mathematics assist us?

Long-distance cycle races, such as a Tour de France stage, typically follow a prescribed pattern: riders cycle together as a main group, or peloton, for the majority of the race before a solo rider, or small group of riders, makes a break from the peloton, usually relatively close to the finish line. The main reason for this behaviour is that cycling in a group reduces the air resistance experienced by a cyclist. With energy savings of up to around a third when cycling in the peloton compared with riding solo, it is energetically favourable to stay with the main field for the majority of the race. However, if a cyclist wishes to win a race or a Tour stage then they must decide when to make a break. In doing so, the rider must provide an additional pedal force to offset the effects of air resistance that would otherwise be mitigated by riding in the peloton. However, the cyclist will not be able to sustain this extra force indefinitely, as fatigue effects come into play. As a result, a conflict emerges: if the cyclist breaks away too soon then they risk fatigue effects kicking in before the finish line and being caught by the peloton; if they break too late then they reduce their chance of a large winning margin.

So Oxford Mathematicians Ian Griffiths and Lewis Gaul, together with Stuart Thomson from MIT, asked the question: ‘for a given course profile and rider statistics, what is the optimum time to make a breakaway that maximizes the finish time ahead of the peloton?’

To answer the question, a mathematical model is derived for the cycling dynamics, appealing to Newton’s Second Law, which captures both the advantage of riding in the peloton to reduce aerodynamic drag and the physical limitations (due to fatigue) on the force that can be provided by the leg muscles. The concentration of potassium ions in the muscle cells is also a strong factor in muscle fatigue: it is responsible for the pain you experience in your legs after a period of exertion, and it sets a rider’s baseline level of exertion. The resulting model captures the evolution of force output over time due to all of these effects and is applied to a breakaway situation to understand how the muscles respond after a rider exerts a force above their sustainable level.
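The following Python snippet is a minimal sketch of this kind of force-balance model, not the authors' actual formulation: it integrates Newton’s Second Law with reduced drag before the breakaway and a crude exponential decay of the pedal-force surge afterwards, standing in for the potassium-driven fatigue dynamics. All parameter values are assumed for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 80.0                     # mass of rider plus bike (kg) -- assumed
rho, Cd, A = 1.2, 0.9, 0.4   # air density, drag coefficient, frontal area -- assumed
drafting = 0.3               # fractional drag reduction while in the peloton
F_sus, F_max = 40.0, 60.0    # sustainable and breakaway pedal forces (N) -- assumed
tau = 300.0                  # fatigue timescale after the breakaway (s) -- assumed
t_break = 600.0              # breakaway time (s): the quantity one would optimise

def force(t):
    """Pedal force: sustainable in the peloton, then a surge that fatigues away."""
    if t < t_break:
        return F_sus
    return F_sus + (F_max - F_sus) * np.exp(-(t - t_break) / tau)

def rhs(t, y):
    """Newton's Second Law: m dv/dt = pedal force - aerodynamic drag."""
    x, v = y
    shelter = drafting if t < t_break else 0.0   # no shelter once clear of the peloton
    drag = 0.5 * rho * Cd * A * (1.0 - shelter) * v**2
    return [v, (force(t) - drag) / m]

sol = solve_ivp(rhs, [0.0, 1800.0], [0.0, 10.0], max_step=1.0)
print(f"distance covered in 30 min: {sol.y[0, -1] / 1000:.2f} km")
```

Sweeping `t_break` and repeating the integration for both the breakaway rider and the peloton gives the finish-time advantage as a function of breakaway position, which is the quantity the paper optimises analytically.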

Asymptotic techniques are used that exploit the fact that the course may be divided into sections within which variations from a mean course gradient are typically small. This leads to analytical solutions that bypass the need for performing complex numerical parameter sweeps. The asymptotic solutions provide a method to draw direct relationships between the values of physical parameters and the time taken to cover a set distance.

The model serves to frame intuitive results in a quantitative way. For instance, it is expected that a breakaway is more likely to succeed on a climb stage, as speeds are lower and so the energy penalty from wind resistance when cycling alone is reduced. The theory confirms this observation while also providing a measure of precisely how much more advantageous a breakaway on a hill climb would be. For multiple stage races the theory can even identify which stages are best to make a breakaway and when it is better to stay in the peloton for the entire stage to conserve energy. The resulting theory could allow a cycle team to identify the strategy and exact breakaway position during each stage in advance of a major race, with very little effort. Such prior information could provide the necessary edge required to secure the marginal gains required to win a race.

While it is clear that winning a Tour de France stage involves a great deal of preparation, physical fitness and, ultimately, luck on the day, mathematics can provide a fundamental underpinning for the race dynamics that can guide strategies to increase the chance of such wins.

Monday, 7 May 2018

Do stochastic systems converge to a well-defined limit? Oxford Mathematics Research investigates

Oxford Mathematician Ilya Chevyrev talks about his research into using stochastic analysis to understand complex systems.

"Stochastic analysis sits on the boundary between probability theory and analysis. It is often a useful tool in studying complex systems subject to noise. Such systems appear frequently in the financial markets, statistical physics, mathematical biological, etc., and it becomes extremely important to determine their statistical properties: can the noise cause the system to blow-up or collapse? How quickly do small perturbations propagate? Is there an equilibrium state? Is it always reached? Due to the fundamental importance of such questions, mathematicians from many fields have devised methods to address them, ranging from the analysis of partial differential equations (PDEs) to game theory.

A question one often encounters is whether a family of stochastic systems converges to a well-defined limit. For example, consider the Glauber dynamics of the Ising-Kac model: we are given a two-dimensional lattice $\epsilon \mathbb{Z}^2$ with spacing $\epsilon > 0$. At each site we place a ferromagnet carrying an up or down spin (i.e. we consider a function $\sigma : \epsilon \mathbb{Z}^2 \to \{-1,1\}$). As time evolves, the ferromagnets interact according to some prescribed dynamics. Sending the lattice spacing $\epsilon \to 0$ and rescaling the dynamics appropriately, one is interested in whether the process converges to a non-trivial limit (for this exact example, see this article of Mourrat-Weber).
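As a concrete (and heavily simplified) illustration, the following Python snippet runs Glauber heat-bath updates for the nearest-neighbour Ising model on a periodic lattice; the Kac version studied by Mourrat-Weber replaces the four-neighbour sum with a long-range local average, and the parameter values here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, beta = 64, 0.6                          # lattice size, inverse temperature -- assumed
sigma = rng.choice([-1, 1], size=(N, N))   # random initial spin configuration

def glauber_step(sigma):
    """Pick a random site and flip its spin with the heat-bath probability."""
    i, j = rng.integers(N, size=2)
    # Sum of the four nearest-neighbour spins (periodic boundary conditions).
    h = (sigma[(i + 1) % N, j] + sigma[(i - 1) % N, j] +
         sigma[i, (j + 1) % N] + sigma[i, (j - 1) % N])
    dE = 2 * sigma[i, j] * h               # energy change if the spin at (i, j) flips
    if rng.random() < 1.0 / (1.0 + np.exp(beta * dE)):
        sigma[i, j] *= -1

for _ in range(100_000):
    glauber_step(sigma)
print("magnetisation:", sigma.mean())
```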

It turns out that for a wide class of models, one can describe (or at least expect to describe) the limit as a stochastic PDE of the form \[ \partial_t u = \mathcal{L} u + F(u,\nabla u, \xi) \] where $\mathcal{L}$ is an elliptic operator, $F$ is a non-linearity which can depend on the solution $u$ and its derivatives $\nabla u$, and $\xi$ is the noise term. A difficulty one often encounters in studying such equations is that they are classically ill-posed. This means that, given a typical realisation of $\xi$, there exist no function spaces in which we can solve for $u$ using e.g. a fixed point argument. Whenever this occurs, we call the equation singular. The fundamental obstacle, which is also typically encountered in quantum field theory (QFT), is that there is no canonical way to define products of distributions.

An example of a singular SPDE with motivations from QFT is the dynamical $\Phi^4_3$ model \[ \partial_t u = \Delta u - u^3 + \xi \] posed in $(1+3)$ dimensions, $u : [0,T]\times \mathbb{R}^3 \to \mathbb{R}$. Here $\xi$ is a space-time white noise on $ \mathbb{R}^4$ (a random distribution). The noise is sufficiently irregular that we expect the solution $u$ to belong to a space of distributions (not functions!), rendering the cubic term $u^3$ ill-posed. This has further ramifications if one takes approximations of the equation: substituting $\xi$ by a smoothed out version $\xi_\epsilon$ so that $\xi_\epsilon \to \xi$ as $\epsilon \to 0$, the corresponding classical smooth solutions $u_\epsilon$ do not converge to a non-trivial limit as $\epsilon \to 0$.

Starting with the famous KPZ equation, the last five years have seen much progress in providing a solution theory to singular SPDEs. The theories of regularity structures and paracontrolled distributions have been particularly successful at this task. An important feature of any such solution theory is the need for renormalization: smooth/lattice approximations of the equation converge only after appropriate counterterms are added to the equation. In the $\Phi^4_3$ example, this means that there exists a diverging family of constants $(C_\epsilon)_{\epsilon > 0}$ such that solutions to the renormalised PDEs \[ \partial_t u_\epsilon = \Delta u_\epsilon - u_\epsilon^3 + \xi_\epsilon + C_\epsilon u_\epsilon \] converge to a non-trivial limit. It is this limit which one calls the solution of the original $\Phi^4_3$ SPDE.
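To make the renormalised approximation scheme concrete, here is a minimal Python sketch of an explicit finite-difference discretisation of an equation of this type in one space dimension. The caveat is important: in one dimension the equation is not singular and no diverging counterterm is needed (so $C$ can be taken to be zero below), whereas in the genuinely singular $\Phi^4_3$ case the constant must diverge as the discretisation is refined. All numerical parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
# Explicit scheme for du = (u_xx - u^3 + C u) dt + dW on a periodic interval.
n, dx = 128, 1.0 / 128
dt = 0.2 * dx**2            # explicit-scheme stability constraint: dt below dx^2 / 2
T, C = 0.1, 0.0             # in 1D no renormalisation constant is needed
u = np.zeros(n)

for _ in range(int(T / dt)):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # periodic Laplacian
    noise = np.sqrt(dt / dx) * rng.standard_normal(n)        # discretised white noise
    u = u + dt * (lap - u**3 + C * u) + noise
```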

In a recent paper with Bruned, Chandra and Hairer (Imperial College London) we developed a systematic method to determine the counterterms needed to solve a very general class of SPDEs. Combined with other recent results in regularity structures, particularly with a version of the BPHZ renormalization scheme from perturbative QFT, this essentially provides a robust method to solve general systems of semi-linear SPDEs which are subcritical (this last constraint is known in QFT as super-renormalizability). The fundamental technique behind our approach is algebraic, motivated in particular by pre-Lie algebras."

Wednesday, 2 May 2018

The ‘shear’ brilliance of low head hydropower

The generation of electricity from elevated water sources has been the subject of much scientific research over the last century. Typically, in order to produce cost-effective energy, hydropower stations require large flow rates of water across large pressure drops. Although there are many low head sites around the UK, including numerous river weirs and potential tidal sites, the pursuit of low head hydropower is often avoided because it is uneconomic. Thus the UK, and other relatively flat countries, miss out on hydropower due to the lack of sufficiently elevated water sources.

In his DPhil project, Oxford Mathematician Graham Benham has been studying a novel type of low head hydropower generation which uses the Venturi effect to amplify the pressure drop across a turbine. The Venturi effect is similar to a mechanical gearing system: instead of a turbine dealing with the full flow and a low head, it deals with a reduced flow and an amplified head, thereby allowing for much cheaper electricity. However, the hydropower efficiency depends on how the turbine wake mixes with the main pipe flow, and understanding this mixing process is the key to the problem.
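To see why the Venturi effect behaves like a gear, one can write down the idealised, loss-free textbook relations (a back-of-the-envelope sketch, not the project's full model). Bernoulli's principle and conservation of mass along a constriction of a pipe give \[ p_1 + \tfrac{1}{2}\rho v_1^2 = p_2 + \tfrac{1}{2}\rho v_2^2, \qquad A_1 v_1 = A_2 v_2, \] so the pressure drop into the narrow section is \[ \Delta p = p_1 - p_2 = \tfrac{1}{2}\rho v_1^2\left[\left(\frac{A_1}{A_2}\right)^2 - 1\right]. \] Even a modest area contraction $A_2 < A_1$ therefore amplifies the head available to a turbine connected across the narrow section, at the price of passing only part of the flow through it.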

Mixing occurs in a thin turbulent region of fluid called a shear layer, or mixing layer. In their recently published research, Oxford Mathematicians Graham Benham, Ian Hewitt and Colin Please, together with Oxford physicist Alfonso Castrejon-Pita, present a simple mathematical model for the development of such shear layers inside a pipe. The model is based on the assumption that the flow can be divided into a number of thin regions, and it agrees well with both laboratory experiments and computational turbulence modelling. Specifically, the model is used to solve a shape optimisation problem, which enables the design of the Venturi to produce the maximum amount of electricity from low head hydropower.

The image above shows the assembly of VerdErg's Venturi-Enhanced Turbine Technology (VETT). VerdErg is a British renewable energy company that has patented VETT. The image was taken from Innovate UK.

Tuesday, 1 May 2018

Inaugural András Gács Award given to Oxford Mathematician Gergely Röst

A new mathematical award has been established in Hungary to honour the memory of the talented Hungarian mathematician András Gács (1969-2009), a man famed for his popularity among students and his capacity to inspire the young. The committee of the András Gács Award aimed to reward young mathematicians (under the age of 46) who not only excelled in research, but also motivated students to pursue mathematics. Oxford Mathematician Gergely Röst, a Research Fellow of the Wolfson Centre for Mathematical Biology, was one of the first two awardees. For nearly a decade Gergely has prepared students of the University of Szeged for various international mathematics competitions. One of these is the National Scientific Students' Associations Conference, a biennial national contest of student research projects with more than 5000 participants, at which Gergely supervised a prize-winning project in applied mathematics in four consecutive editions (2011, 2013, 2015, 2017).

The award ceremony took place in Budapest, in the Ceremonial Hall of the Eötvös Loránd University (ELTE), during the traditional yearly Mathematician’s Concert. 

Thursday, 19 April 2018

Jochen Kursawe awarded the Reinhart Heinrich Prize

Former Oxford Mathematician Jochen Kursawe, now in the Faculty of Biology, Medicine and Health, University of Manchester, has been awarded the Reinhart Heinrich Prize for his thesis on quantitative approaches to investigating epithelial morphogenesis. Jochen worked with Oxford Mathematician Ruth Baker and former Oxford colleague Alex Fletcher, now in the University of Sheffield, on the research.

The Reinhart Heinrich Prize is awarded annually by the European Society for Mathematical and Theoretical Biology (ESMTB).

Friday, 13 April 2018

Incorporating stress-assisted diffusion in cardiac models

Oxford Mathematician Ricardo Ruiz Baier, in collaboration mainly with the biomedical engineer Alessio Gizzi from Campus Bio-Medico, Rome, has come up with a new class of models that couple diffusion and mechanical stress and which are specifically tailored to the study of cardiac electromechanics.

Cardiac tissue is a complex multiscale medium made up of highly interconnected units (cardiomyocytes, the cardiac cells) with remarkable structural and functional properties. Cardiomyocytes are excitable and deformable cells. Inside them, plasma membrane proteins and intracellular organelles all depend on the current mechanical state of the (macroscopic) tissue. Special structures, such as ion channels or gap junctions, govern the passage of charged particles within each cell as well as between different cells, and their behaviour can be described by reaction-diffusion systems. All these mechanisms work in synchronisation to produce the coordinated contraction and pumping function of the heart.

During the cardiac cycle, mechanical deformation undoubtedly affects the electrical impulses that modulate muscle contraction, and also modifies the properties of the substrate through which the electrical wave propagates. These multiscale interactions are commonly referred to as the mechano-electric feedback (MEF). Theoretical and clinical studies have been contributing to the systematic investigation of MEF effects for over a century; however, several open questions remain. For example, at the cellular level, it is still not completely understood what the effective contribution of stretch-activated ion channels is, nor how best to describe them. In addition, at the organ scale, the clinical relevance of MEF in patients with heart disease remains an open issue, specifically in relation to how MEF mechanisms translate into ECGs.

The idea of coupling mechanical stress directly as a mechanism to modify diffusive properties has been exploited for several decades in the context of dilute solutes in a solid, and remarkable similarities exist between those fundamental processes and the propagation of the membrane voltage within cardiac tissue. Indeed, on a macroscopically rigid matrix, the propagating membrane voltage can be regarded as a continuum field undergoing slow diffusion.
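As a schematic illustration of what such a coupling can look like (an assumed generic form, not the precise model of the paper), one can let the conductivity in a reaction-diffusion equation for the membrane voltage $v$ depend on the mechanical stress tensor $\boldsymbol{\sigma}$: \[ \partial_t v = \nabla \cdot \big( D(\boldsymbol{\sigma}) \nabla v \big) + f(v), \qquad D(\boldsymbol{\sigma}) = D_0 \big( 1 + \gamma \, \mathrm{tr}\, \boldsymbol{\sigma} \big), \] with the stress $\boldsymbol{\sigma}$ determined in turn by the equations of nonlinear elasticity for the deforming tissue; here $D_0$ is a baseline diffusivity, $f(v)$ a reaction term describing the cell's excitable kinetics, and $\gamma$ measures the strength of the stress-assisted feedback.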

The approach described above generalizes Fickian diffusion using the classical Euler axioms of continuously distributed matter. An important part of the project, now under development, deals with the stability of the governing partial differential equations, the existence and uniqueness of weak solutions, and the formulation of mixed-primal and fully mixed discretisations needed to compute numerical solutions in an accurate, robust, and efficient manner. Some of the challenges involved relate to strong nonlinearities, heterogeneity, anisotropy, and the very different spatio-temporal scales present in the model. The construction and analysis of the proposed models and methods require advanced techniques from abstract mathematics, the interpretation of the obtained solutions necessitates a clear understanding of the underlying biophysical mechanisms, and the implementation (carried out exploiting modern computational architectures) depends on sophisticated tools from computer science.
 
Other applications of a similar framework are encountered in quite different scenarios, for instance in the modelling of lithium-ion batteries. Oxford visiting student Bryan Gomez (from Concepcion, Chile, co-supervised by Ruiz Baier and Gabriel Gatica) is currently looking at the fixed-point solvability and regularity of weak solutions, as well as the construction and analysis of finite element methods tailored to this kind of coupled problem (see also a different perspective focusing on homogenisation and asymptotic analysis, carried out by Oxford Mathematicians Jon Chapman, Alain Goriely and Colin Please).

Friday, 13 April 2018

How do node attributes mix in large-scale networks? Oxford Mathematics Research investigates

In this collaboration with researchers from the University of Louvain, Renaud Lambiotte from Oxford Mathematics explores the mixing of node attributes in large-scale networks.

A central theme of network science is the heterogeneity present in real-life systems. Take an element, called a node, and its number of connections, called its degree, for instance. Many systems do not have a characteristic degree for their nodes, as they are made of a few highly connected nodes, i.e. hubs, and a majority of poorly connected nodes. Networks are also well known to be small-world in a majority of contexts, as a few links are typically sufficient to connect any pair of nodes. For instance, the Erdős number of Renaud Lambiotte is 3, as he co-authored a paper with Vincent D. Blondel, who co-authored with Harold S. Shapiro, who co-authored with Paul Erdős: three links are sufficient to reach Paul Erdős in the co-authorship network.

Because of their small-worldness, it is often implicitly assumed that node attributes (for instance, the age or gender of an individual in a social network) are homogeneously mixed in a network and that different regions exhibit the same behaviour. The contribution of this work is to show that this is not the case in a variety of systems. Here, the authors focus on assortativity, a network analogue of correlation used to describe how the presence and absence of edges co-varies with the properties of nodes. The authors design a method to characterise the heterogeneity and local variations of assortativity within a network. The left-hand figure, for instance, illustrates an analogy to the classical Anscombe’s quartet, with five networks having the same number of nodes, number of links and average assortativity, but different local mixing patterns. The method developed by the authors is based on the notion of a random walk with restart and allows them to define localised metrics of assortativity in the network. The method is tested on various biological, ecological and social networks, and reveals rich mixing patterns that would be obscured by summarising assortativity with a single statistic. As an example, the right-hand figure shows the local assortativity of gender in a sample of Facebook friendships. One observes that different regions of the graph exhibit strikingly different patterns, confirming that a single variable, e.g. global assortativity, would provide a poor description of the system.
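As an illustration of the random-walk-with-restart idea (a sketch of the concept rather than the paper's exact estimator), the following Python snippet weights each edge by a personalised PageRank rooted at a focal node and computes a locally weighted assortativity for a scalar attribute. The use of the karate-club graph, with node degree as the attribute, is purely for demonstration.

```python
import networkx as nx
import numpy as np

def local_assortativity(G, attr, node, alpha=0.85):
    """Attribute mixing around `node`, weighting edges by a personalised
    PageRank (random walk with restart) rooted at `node`."""
    ppr = nx.pagerank(G, alpha=alpha, personalization={node: 1.0})
    x, y, w = [], [], []
    for u, v in G.edges():
        for a, b in ((u, v), (v, u)):    # count each edge in both directions
            x.append(G.nodes[a][attr])
            y.append(G.nodes[b][attr])
            w.append(ppr[a])             # weight by walker occupancy at the source
    x, y, w = map(np.asarray, (x, y, w))
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
    sy = np.sqrt(np.average((y - my) ** 2, weights=w))
    return cov / (sx * sy)

G = nx.karate_club_graph()
nx.set_node_attributes(G, dict(G.degree()), "deg")
print(local_assortativity(G, "deg", node=0))   # local assortativity around node 0
```

Varying the restart node (and the teleportation parameter `alpha`, which sets the size of the neighbourhood being probed) maps out how mixing varies across the network.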

For a more detailed description of the work please click here.
