Thursday, 21 April 2016

From Birds to Bacteria: Modelling Migration at Many Scales

The use of mathematical models to describe the motion of a variety of biological organisms has been the subject of much research interest for several decades. If we can predict the future locations of bacteria, cells or animals, and we subsequently observe differences between the predictions and the experiments, we have grounds to suggest that the local environment has changed, either on a chemical or protein scale, or on a larger scale, e.g. weather patterns or changing distributions of predators/prey.

Early approaches were predominantly centred on the position jump model of motion, in which agents undergo instantaneous changes of position, drawn from a distribution kernel, interspersed with waiting periods of stochastic length. To clarify: after a random period of time, the organism in question disappears from one location and reappears in another nearby location. Equations for the probability that a particle is located at a given position in space are called drift-diffusion equations, which are usually easy to solve numerically.
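
As a quick illustration of the position jump picture, the following is a minimal sketch (illustrative parameters, not any specific model from the literature): particles wait an exponentially distributed time, then jump by a Gaussian-distributed displacement, and the mean squared displacement grows linearly in time, as the drift-diffusion description predicts.

```python
import random
import statistics

random.seed(1)

def position_jump(T=100.0, rate=1.0, jump_sd=1.0):
    """One 1D position jump trajectory up to time T: exponentially
    distributed waiting periods, then an instantaneous Gaussian jump."""
    t, x = 0.0, 0.0
    while True:
        t += random.expovariate(rate)        # stochastic waiting period
        if t > T:
            break
        x += random.gauss(0.0, jump_sd)      # jump drawn from the kernel
    return x

final_positions = [position_jump() for _ in range(5000)]
msd = statistics.fmean(x * x for x in final_positions)
# Diffusive scaling: MSD = rate * jump_sd**2 * T, i.e. about 100 here
print(round(msd, 1))
```
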
However, the position jump framework suffers from the limitation that correlations in the direction of successive runs are difficult to capture; this directional persistence is present in many types of movement. Furthermore, the diffusive nature of the position jump framework results in an unbounded distribution of movement speeds between successive steps – so, theoretically, an animal could be moving at any speed! Consequently, Oxford Mathematician Jake P. Taylor-King and colleagues have been looking at other ways to address the issue.
Some organisms, whose sizes can differ by many orders of magnitude, have been observed to switch between different modes of operation. For instance, the bacterium Escherichia coli changes the orientation of one or more of its flagella between clockwise and anticlockwise to achieve a run-and-tumble motion. As a result, during the runs we see migration-like movement, and during the tumbles we see resting or local diffusion behaviour. To add to this complexity, the directions of successive runs are correlated. On a larger scale, consider the migratory movements of vertebrates, where individuals often travel large distances with intermittent stop-overs to rest or forage. An example is the lesser black-backed gull (Larus fuscus); individuals of this species that breed in the Netherlands migrate southwards during autumn. Even though the scales involved in these two processes differ by many orders of magnitude, one can use the same mathematical framework to model the observed motion.
When considering the movement of a 'particle' as a series of straight-line trajectories, the corresponding mathematical description is known as a velocity jump process [Othmer 1988]. Organisms travel with a randomly distributed speed and angle for a finite duration, before undergoing a stochastic reorientation event. A big hurdle when using this approach is that the underlying description involves mesoscopic transport equations that must be solved in a higher-dimensional space than traditional drift-diffusion equations. Until recently [Friedrich 2006], the duration of runs has been modelled as exponentially distributed for mathematical ease; that is, it is assumed there is a constant rate at which animals reorientate.
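The velocity jump process itself is straightforward to simulate, even though its governing transport equation lives in a higher-dimensional space. Below is a minimal 2D sketch with exponentially distributed run durations, i.e. a constant reorientation rate as in the classical setting; the speed, rate and uncorrelated uniform reorientations are all illustrative assumptions.

```python
import math
import random

random.seed(2)

def velocity_jump(T=50.0, reorient_rate=1.0, speed=1.0):
    """One 2D velocity jump trajectory: straight-line runs of exponentially
    distributed duration, separated by instantaneous reorientations to a
    uniformly random heading (no directional persistence in this sketch)."""
    t, x, y = 0.0, 0.0, 0.0
    while t < T:
        tau = min(random.expovariate(reorient_rate), T - t)  # run duration
        theta = random.uniform(0.0, 2.0 * math.pi)           # new heading
        x += speed * tau * math.cos(theta)
        y += speed * tau * math.sin(theta)
        t += tau
    return x, y

ends = [velocity_jump() for _ in range(2000)]
msd = sum(x * x + y * y for x, y in ends) / len(ends)
# At large times the process looks diffusive: MSD ~ 2 * speed**2 * T / reorient_rate
print(round(msd, 1))
```
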

The researchers' new approach allows the specification of any running or waiting time distribution along with any angular and speed distributions. The resulting system of partial integro-differential equations is challenging to solve both analytically and numerically, so it is necessary both to simplify it and to derive summary statistics.
For comparison between theory and experimental data, the researchers derived expressions for the mean squared displacement, which show good agreement with experimental data from the bacterium Escherichia coli and the gull Larus fuscus. A large-time diffusive approximation is also considered via a Cattaneo approximation [Hillen 2004]. This leads to the novel result that the effective diffusion constant depends on the mean and variance of the running time distribution, but only on the mean of the waiting time distribution. Therefore, two processes with the same mean, but different variances, for how long an animal moves in the same direction can exhibit different large-scale behaviour.
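The role of the running time variance can be checked by direct simulation. The sketch below is a hedged 1D toy model with illustrative parameters, not the paper's full framework: two run-and-rest processes share the same mean run time but have different run-time variances, and the higher-variance process spreads measurably faster at large times.

```python
import random

random.seed(3)

def msd_run_and_rest(draw_run_time, T=100.0, n=4000, speed=1.0, mean_wait=1.0):
    """Mean squared displacement at time T of a 1D run-and-rest process:
    run left or right at `speed` for draw_run_time() units, then rest for
    an exponentially distributed waiting period."""
    total = 0.0
    for _ in range(n):
        t, x = 0.0, 0.0
        while t < T:
            tau = min(draw_run_time(), T - t)
            x += random.choice((-1.0, 1.0)) * speed * tau   # one straight run
            t += tau + random.expovariate(1.0 / mean_wait)  # run, then rest
        total += x * x
    return total / n

# Both run-time distributions have mean 1.0, but different variances:
msd_exp = msd_run_and_rest(lambda: random.expovariate(1.0))  # variance 1
msd_fix = msd_run_and_rest(lambda: 1.0)                      # variance 0
print(round(msd_exp), round(msd_fix))
```

In this toy setting the exponential run times roughly double the effective diffusion constant relative to fixed-length runs, even though the mean run and rest times are identical.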

Finally, this method enables us to switch between straight-line trajectory GPS (or tracking) data and some of the commonly studied differential equation models used within mathematical ecology. The main benefit of this approach is that velocity jump models can often be parameterised using smaller quantities of data than may be required when using a position jump process. All of this enables us to better predict the future locations of animals and, in turn, to better understand the reasons for the choice of those locations.

(The image above shows the pattern of seagulls above the UK.)


Wednesday, 20 April 2016

E is for Elliptic Curves

Appearing everywhere from state-of-the-art cryptosystems to the proof of Fermat's Last Theorem, elliptic curves play an important role in modern society and are the subject of much research in number theory today. Jennifer Balakrishnan, a researcher working in number theory, explains more in the latest in our Oxford Mathematics Alphabet.

Thursday, 14 April 2016

Rob Style wins 2016 Adhesion Society Young Scientist Award

Oxford Mathematician Rob Style has been awarded the 2016 Adhesion Society Young Scientist Award, sponsored by the Adhesion and Sealant Council, for his fundamental contributions to our understanding of the coupling of surface tension to elastic deformation. Rob researches the mechanics of very soft solids like gels and rubber, in particular investigating why they don’t obey the same rules as the hard materials more traditionally used by engineers.

Thursday, 14 April 2016

Jake Taylor-King wins Lee Segel Prize

Oxford Mathematician Jake Taylor-King has won the Lee Segel Prize for Best Student Paper for his paper 'From birds to bacteria: Generalised velocity jump processes with resting states.' Jake worked on his research with Professor Jon Chapman. The prize is awarded annually by the Society for Mathematical Biology. One of Jake's co-authors on the paper, Gabs Rosser, previously also studied Mathematics at Oxford in the Wolfson Centre for Mathematical Biology.

Wednesday, 13 April 2016

Linus Schumacher wins Reinhart Heinrich Doctoral Thesis Award

Oxford Mathematician Linus Schumacher has won the prestigious Reinhart Heinrich Doctoral Thesis Award. The award is presented annually to the student submitting the best doctoral thesis in any area of Mathematical and Theoretical Biology. 

In the judges' view "Linus' thesis is an outstanding example of how mathematical modelling and analysis that is kept close to the experimental system can contribute efficiently to advance the understanding of complex biological questions. The roles of cellular heterogeneity, microenvironmental cues and cell-to-cell interactions, which are common themes in the study of biomedical systems, are skillfully dissected and analysed in relevant experimental model systems, leading to significant advances in the current understanding of said systems."

The judges concluded: "the modelling aims to derive generic, theoretical insights from specific, biological questions. The work has led to a number of excellent publications."

Friday, 8 April 2016

Predicting and managing energy use in a low-carbon future

If effectively harnessed, increased uptake of renewable generation and the electrification of heating and transport will form the bedrock of a low-carbon future. Unfortunately, these technologies may have undesirable consequences for the electricity networks supplying our homes and businesses. The plethora of possible low carbon technologies, such as electric vehicles, heat pumps and photovoltaics, will lead to increased pressure on local electricity networks from larger and less predictable demands.

Stephen Haben and colleagues from the University of Oxford, together with colleagues from the University of Reading, are working with the distribution network operator (DNO) Scottish and Southern Energy Power Distribution on the £30m Thames Valley Vision project. The aim is to develop sophisticated modelling techniques to help DNOs avoid expensive network reinforcement as the UK moves toward a low carbon economy. In other words, what are some of the smart alternatives to “keeping the lights on” without simply digging up the road and laying bigger cables?

With recent advanced monitoring infrastructure (such as smart meters) we can now start using mathematical and statistical techniques to better understand, anticipate and support local electricity networks. The team has been analysing smart meter data and employing clustering methods to better understand household energy usage and discover how many different types of behaviour exist. This in turn can lead to improvements in demand modelling, tariff design and other energy efficiency strategies (e.g. demand side response). The researchers found different types of behaviour with varying degrees of intra-day demand, seasonal variability and volatility, each of which calls for different strategies for reducing energy use and costs. An important discovery is that energy use behaviour has very weak links with socio-demographics, tariffs or house size. Hence really understanding energy demand requires the monitoring data available through smart meters.
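As a sketch of the clustering step (synthetic profiles and a stdlib-only k-means; the team's actual data, features and algorithms are not specified here), one can group households by the shape of their daily demand profile:

```python
import random

random.seed(4)

def dist2(p, q):
    """Squared Euclidean distance between two equal-length profiles."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(profiles, k=2, iters=10):
    """Plain k-means with farthest-point initialisation on demand profiles."""
    centroids = [profiles[0]]
    while len(centroids) < k:
        centroids.append(max(profiles, key=lambda p: min(dist2(p, c) for c in centroids)))
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in profiles:
            clusters[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        centroids = [[sum(col) / len(c) for col in zip(*c)] if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Synthetic half-hourly profiles (48 slots): morning-peak vs evening-peak households
def profile(peak):
    return [1.0 + 3.0 * (abs(h - peak) < 4) + random.gauss(0.0, 0.2) for h in range(48)]

households = [profile(16) for _ in range(30)] + [profile(36) for _ in range(30)]
random.shuffle(households)
centroids, clusters = kmeans(households, k=2)
print([len(c) for c in clusters])
```

On this synthetic data the two behavioural types are cleanly recovered; real smart meter data would of course need more clusters and more careful feature choices.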

Forecasts can help DNOs manage and plan the networks in many ways, in particular by anticipating extremes in demand (e.g. large amounts of local generation on a sunny day). The researchers have developed a range of point and probabilistic forecasts for a wide range of relevant applications. Long-term scenario forecasts are generated using agent-based models to simulate the impact of low carbon technologies. Shorter-term forecasts have been developed to estimate daily demands and thus create appropriate plans for the charging and discharging cycles of batteries, helping to reduce peak overloads. These algorithms have been used successfully in silico and will soon be deployed and tested on real storage devices on the network.
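A toy version of the storage-scheduling idea (hypothetical numbers and a simple greedy rule, not the project's actual algorithm): given a day-ahead demand forecast, discharge the battery in the highest-demand periods and recharge it in the lowest.

```python
def peak_shave(forecast, capacity, power):
    """Greedy peak shaving: discharge `power` units of demand in the
    highest-forecast periods and recharge in the lowest, limited by the
    battery's total energy `capacity`. Returns the net demand profile."""
    n_slots = int(capacity // power)                  # periods of (dis)charge available
    order = sorted(range(len(forecast)), key=forecast.__getitem__)
    charge, discharge = set(order[:n_slots]), set(order[-n_slots:])
    return [d - power if i in discharge else d + power if i in charge else d
            for i, d in enumerate(forecast)]

forecast = [2, 2, 3, 5, 8, 9, 7, 4, 3, 2]  # hypothetical forecast demand per period
net = peak_shave(forecast, capacity=4, power=2)
print(net)
```

Here the peak seen by the network drops from 9 to 7 while the total energy delivered is unchanged; a real scheduler would also respect charge/discharge efficiency and state-of-charge constraints.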

Most recently the team has been working on understanding the limits of its models when monitoring data are unavailable or sparse. This is desirable since acquiring data and installing monitoring equipment is expensive. Can households be accurately modelled with only limited access to monitored data? If so, how much monitoring is really necessary? The researchers have found that local energy demand depends strongly on the number and proportion of commercial and domestic properties. Such insights will be used to devise workable solutions so that a DNO can choose the most appropriate (i.e. least disruptive but most cost-effective) option for different network types – whether that is installing batteries, introducing monitoring or investing in infrastructure upgrades.

In summary, the extra visibility of household-level demand through higher resolution monitoring equipment has created new opportunities for better understanding energy use behaviour and has highlighted the need for novel analytics. Demand at the individual customer level is irregular and volatile, in contrast to the high voltage demands that have traditionally been investigated, and thus current methods may not be applicable. The methods necessary to reduce energy demand and promote energy efficiency sit in many areas of applied mathematics, data science and statistics, which requires mathematicians to be at the forefront of designing and creating new methods and techniques for future energy networks.

For more information see a list of publications and the Mathematics Matters article.

Wednesday, 6 April 2016

Endre Suli and Xunyu Zhou elected SIAM Fellows

The Society for Industrial and Applied Mathematics (SIAM) has announced that Professors Xunyu Zhou and Endre Suli from Oxford Mathematics are among its newly elected Fellows for 2016.

SIAM exists to ensure the strongest interactions between mathematics and other scientific and technological communities through membership activities, publication of journals and books, and conferences.

Saturday, 26 March 2016

D is for Diophantine Equations - the latest in the Oxford Mathematics Alphabet

A diophantine equation is an algebraic equation, or system of equations, in several unknowns and with integer (or rational) coefficients, which one seeks to solve in integers (or rational numbers). The study of such equations goes back to antiquity. Their name derives from the mathematician Diophantus of Alexandria, who wrote a treatise on the subject, entitled Arithmetica.

The most famous example of a diophantine equation appears in Fermat’s Last Theorem. This is the statement, asserted by Fermat in 1637 without proof, that the diophantine equation X^n + Y^n = Z^n has no solutions in whole numbers when n is at least 3, other than the 'trivial solutions' which arise when XYZ = 0. The study of this equation stimulated many developments in number theory. A proof of the theorem was finally given by Andrew Wiles in 1995.

The basic question one would like to answer is: does a given system of equations have solutions? And if it does have solutions, how can we find or describe them? While the Fermat equation has no (non-trivial) solutions, similar equations (for example X^2 + Y^2 = Z^2) do have non-trivial solutions. One of the problems on Hilbert’s famous list from 1900 was to give an algorithm to decide whether a given system of diophantine equations has a solution in whole numbers. In effect this is asking whether solvability can be checked by a computer program. Work of Martin Davis, Yuri Matiyasevich, Hilary Putnam and Julia Robinson, culminating in 1970, showed that there is no such algorithm. It is still unknown whether the corresponding problem for rational solutions is decidable, even for plane cubic curves. This last problem is connected with one of the Millennium Problems of the Clay Mathematics Institute (with a million dollar prize): the Birch and Swinnerton-Dyer Conjecture.

To find out more about diophantine problems read Professor Jonathan Pila's latest addition to our Oxford Mathematics Alphabet.

Tuesday, 15 March 2016

Andrew Wiles awarded the Abel Prize

The Norwegian Academy of Science and Letters has decided to award the Abel Prize for 2016 to Sir Andrew J. Wiles (62), University of Oxford, “for his stunning proof of Fermat’s Last Theorem by way of the modularity conjecture for semistable elliptic curves, opening a new era in number theory.”

The President of the Norwegian Academy of Science and Letters, Ole M. Sejersted, announced the winner of the 2016 Abel Prize at the Academy in Oslo today, 15 March. Andrew J. Wiles will receive the Abel Prize from H.R.H. Crown Prince Haakon at an award ceremony in Oslo on 24 May.

The Abel Prize recognizes contributions of extraordinary depth and influence to the mathematical sciences and has been awarded annually since 2003. It carries a cash award of NOK 6,000,000 (about EUR 600,000 or USD 700,000).

Andrew J. Wiles is one of very few mathematicians – if not the only one – whose proof of a theorem has made international headline news. In 1994 he cracked Fermat’s Last Theorem, which at the time was the most famous, and long-running, unsolved problem in the subject’s history.

Wiles’ proof was not only the high point of his career – and an epochal moment for mathematics – but also the culmination of a remarkable personal journey that began three decades earlier. In 1963, when he was a ten-year-old boy growing up in Cambridge, England, Wiles found a copy of a book on Fermat’s Last Theorem in his local library. Wiles recalls that he was intrigued by the problem that he as a young boy could understand, and yet it had remained unsolved for three hundred years. “I knew from that moment that I would never let it go,” he said. “I had to solve it.”

The Abel Committee says: “Few results have as rich a mathematical history and as dramatic a proof as Fermat’s Last Theorem.”

Wednesday, 9 March 2016

Comparing the social structure of different cities

People make a city. Each city is as unique as the combination of its inhabitants. Currently, cities are generally categorised by size, but research by Oxford Mathematicians Peter Grindrod and Tamsin Lee on the social networks of different cities shows that City A, which is twice the size of City B, may not necessarily be accurately represented as an amalgamation of two City Bs.

The researchers use Twitter data from ten different UK cities, showing reciprocal tweets within each city. By defining cities in terms of these social network structures, they break each city into its constituent modular communities. Next, they build virtual cities from the actual cities. For example, Bristol has 74 communities. Randomly sampling (with replacement) from these communities 145 times builds a virtual city the same size as Manchester – but made up of modular communities actually observed in Bristol. How much does our virtual Manchester network resemble the true Manchester network? Very closely, it turns out. So if one were trying to spread a message via Twitter through Manchester, or make other social interventions, it might prove beneficial to test the same activity in Bristol first.
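The virtual-city construction itself is simple to state in code. A sketch with made-up community sizes (the paper's community detection and network-comparison statistics are not reproduced here):

```python
import random

random.seed(6)

# Hypothetical, heavy-tailed sizes for Bristol's 74 Twitter communities
bristol = [round(5 * random.paretovariate(1.5)) for _ in range(74)]

def virtual_city(communities, n_communities):
    """Build a virtual city by sampling observed communities with replacement."""
    return random.choices(communities, k=n_communities)

# Manchester's network comprises 145 communities
virtual_manchester = virtual_city(bristol, 145)
print(len(virtual_manchester), sum(virtual_manchester))
```

In the study the resulting virtual network is then compared with the real one; only the sampling step is shown here.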

However, sampling the Bristol communities to create a virtual city the same size as Leeds, which is smaller than Manchester, does not create a network of similar structure to the 'real' Leeds. This highlights that the relationship between the social structures of cities is not immediately obvious, and requires further analysis. Furthermore, this relationship is not symmetrical: a virtual city created by randomly sampling 74 communities from the Leeds network does in fact resemble the true Bristol social network. So Bristol could learn from Leeds, but not vice versa.

In summary, we may sometimes replicate one city using the communities from another. However, some cities have a very diverse range of communities, making them difficult to replicate - Leeds is a good example of this. Perhaps cities can be put into classes where those cities in the same class are socially similar and so any experience of social phenomena or reactions to interventions in one such city may be relevant to another.