Tuesday, 4 October 2016

Using geometry to choose the best mathematical model

Across the physical and biological sciences, mathematical models are formulated to capture experimental observations. Often, multiple models are developed to explore alternative hypotheses, and it then becomes necessary to choose between them.

Oxford Mathematician Heather Harrington and colleagues from the United States have explored the problem of model selection by regarding mathematical models as geometric objects in space. In general, model selection is a hard problem, but recasting it in geometric terms allows the authors to give a new methodology for selecting the best explanation of observed phenomena, thereby bringing recent groundbreaking developments in nonlinear algebra to the study of biological and other complex systems.

Specifically, their paper, published today in the Journal of the Royal Society Interface, considers polynomial models (e.g., mass-action chemical reaction networks at steady state) and describes a framework for their analysis based on optimisation using numerical algebraic geometry. The authors use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter these to recover the global optima. The approach exploits the geometric structures relating models and data, and the authors demonstrate its utility on examples from cell signalling, synthetic biology, and epidemiology.
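
The "compute all critical points, then filter" idea can be illustrated on a toy one-parameter model. The sketch below uses entirely hypothetical data and NumPy's companion-matrix root finder rather than the paper's homotopy continuation methods: because the least-squares objective for a polynomial model is itself polynomial, every critical point is a root of a polynomial equation, and the global optimum is recovered by filtering the real roots.

```python
import numpy as np

# Toy steady-state model y = k**2 * x with one unknown rate-like parameter k
# (hypothetical model and data, for illustration only).
x = np.array([1.0, 2.0, 3.0])
y = np.array([0.9, 2.1, 2.9])

# The objective f(k) = sum_i (y_i - k**2 * x_i)**2 is polynomial in k, so its
# derivative is too: f'(k) = 4k * (k**2 * sum(x**2) - sum(x*y)).
sxx, sxy = np.sum(x**2), np.sum(x * y)
crit = np.roots([4 * sxx, 0.0, -4 * sxy, 0.0])   # all roots of f'(k) = 0

# Filter: keep the (numerically) real critical points, take the global minimum.
real_crit = crit[np.abs(crit.imag) < 1e-9].real
obj = lambda k: np.sum((y - k**2 * x) ** 2)
k_best = min(real_crit, key=obj)
```

Homotopy continuation plays the same role at scale: it finds all complex solutions of the multivariate critical-point equations, after which the real solutions and the global optimum are filtered out.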

Tuesday, 4 October 2016

Frances Kirwan wins Suffrage Science award

The Clinical Sciences Centre based at Imperial College in London has launched a new initiative to celebrate women in maths and computing. As a new branch of the existing Suffrage Science scheme, it aims to encourage women into science and to support them in reaching senior leadership roles.

Women make up no more than four in ten undergraduates studying maths. Suffrage Science aims to make a difference. There are currently two sections, one for women in the Life Sciences, and one for those in Engineering and the Physical Sciences. Now there is a third specialism, for women in Maths and Computing. Twelve women will receive awards to celebrate their scientific achievements and ability to inspire others. Oxford Mathematician Frances Kirwan FRS is one of the first recipients of the award for Mathematics.

The awards themselves are pieces of jewellery, designed by students at the arts college Central Saint Martins-UAL, and inspired by science. One, a silver bangle, holds a secret. Engraved on the inside, and hidden beneath a layer of silver, is what many mathematicians consider the most beautiful equation in mathematics, Euler’s equation.


Wednesday, 21 September 2016

H is for Homology - The Oxford Mathematics Alphabet

A life belt, a coffee cup, a jumping ball, a beach ball. What do these objects have in common? What sets them apart? Questions like these come under the mathematical umbrella of topology. And the theory of homology enables us to explore and understand them. Find out more in the latest in our Oxford Mathematics Alphabet.

Saturday, 17 September 2016

Roger Heath-Brown in conversation with Ben Green

Roger Heath-Brown is one of Oxford's foremost mathematicians. His work in analytic number theory has been critical to the advances in the subject over the past thirty years and garnered Roger many prizes.

As he approached retirement, Roger gave this interview to Ben Green, Waynflete Professor of Mathematics in Oxford and himself a leading figure in the field of number theory. In the interview Roger reflects on his influences, his achievements and the pleasures that mathematics has given him.

Thursday, 8 September 2016

Marcus du Sautoy named one of London's most influential people

Oxford Mathematician and Charles Simonyi Professor for the Public Understanding of Science in the University of Oxford, Marcus du Sautoy, has been named one of London's most influential people in the London Evening Standard's Progress 1000 awards. The Progress 1000, in partnership with Citi, is an annual event celebrating the people whose influence across many spheres of London life is felt most keenly by those who live in the city.

Marcus is not only a leading mathematician in his own right, but a driving force in the promotion and popularisation of mathematics and associated sciences. He has taken mathematics and science around the world, expressing their elegance, pleasures and occasional pains through lectures, theatre (his mathematical play X & Y, written with Victoria Gould, has been widely acclaimed) and, most recently, his book "What We Cannot Know", in which he tries to identify the frontiers of our knowledge.


Sunday, 4 September 2016

Mathematics enables a better understanding of damage during brain surgery

For over a hundred years, when confronted by swelling in the brain, surgeons have more often than not resorted to decompressive craniectomy: the traditional route to reducing swelling by removing a large part of the skull. However, while this might be the standard procedure, its failure rate has been worryingly high, primarily because its consequences for the rest of the brain have been poorly understood.

This is no longer just a challenge for neurosurgeons. Mathematicians are now able to study and model the impact of surgery at a cellular level and, by doing so, develop a more specific picture of its impact across the whole brain. In particular, Oxford Mathematician Alain Goriely and colleagues Johannes Weickenmeier and Ellen Kuhl from Stanford University have looked at the issue by studying a standard physical problem: the problem of bulging in soft solids.

Bulging occurs when a material swells while constrained except at an opening, as happens when the skull is opened during surgery and the brain bulges out of the skull, potentially creating deformations in parts of the brain away from the immediate incision. To quantify possible deformations inside the brain, the team created a personalised finite element craniectomy model from high-resolution magnetic resonance imaging. Their study reveals three failure mechanisms that would suggest damage beyond the initial incision point: axonal stretch in the centre of the bulge, axonal compression at the edge of the craniectomy, and axonal shear around the opening. Strikingly, even small swelling volumes of 50 ml can induce axonal strain in excess of 30% above reported damage thresholds in patients.

Such models suggest a possible mechanism for predicting and identifying long-term damage. Indeed, this theoretical study is a first step towards gaining better insight into the complex mechanisms underlying craniectomy and opens the door for systematic personalised studies of craniectomy in patients. The next step is to combine this theoretical work with experimental and clinical work to enable surgeons to provide better-informed and more successful treatments.



A fuller explanation can be found in the following papers:

Physical Review Letters

Journal of the Mechanics and Physics of Solids

Computer Methods in Applied Mechanics and Engineering

Cornell University Library

Monday, 22 August 2016

Constructing reaction systems - an inverse problem in mathematics

There is a wide class of problems in mathematics known as inverse problems. Rather than starting with a mathematical model and analysing its properties, mathematicians start with a set of properties and try to obtain mathematical models which display them. For example, in mathematical chemistry researchers try to construct chemical reaction systems that have certain predefined behaviours. From a mathematical point of view, this can be used to create simplified chemical systems that can be used as test problems for different mathematical fields. From an experimental perspective, it is useful to create chemical systems that can be used as blueprints for constructing physical networks in synthetic biology, for example in the area of DNA computing.

Oxford Mathematicians Tomislav Plesa and Radek Erban, together with their colleague Tomáš Vejchodský of the Institute of Mathematics in the Czech Academy of Sciences, have recently published a paper in the Journal of Mathematical Chemistry developing the theory behind this. They have built on previous work to create an inverse problem framework suitable for constructing a reaction system, and have used this to construct certain two- and three-dimensional systems of particular interest. Their work concentrates primarily on networks modelled by systems of kinetic equations, a class of ordinary differential equations of a certain type. Such equations may display 'exotic phenomena', corresponding to specific biological functions in the underlying biochemical reaction networks, and so the inverse problem framework needs to be able to handle such behaviours.
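
As an illustration of the kind of 'exotic phenomenon' kinetic equations can display, consider the classic Lotka reactions (a textbook system, not one constructed in the paper): under mass-action kinetics they yield a two-dimensional ODE system whose solutions oscillate indefinitely rather than settling to a steady state. The rate constants and initial conditions below are arbitrary.

```python
import numpy as np

# Lotka reactions A + X -> 2X, X + Y -> 2Y, Y -> B give, with [A] held
# constant, the mass-action kinetic equations
#   dx/dt = k1*x - k2*x*y,   dy/dt = k2*x*y - k3*y.
k1 = k2 = k3 = 1.0

def rhs(z):
    x, y = z
    return np.array([k1 * x - k2 * x * y, k2 * x * y - k3 * y])

# Integrate with a fixed-step classical Runge-Kutta (RK4) scheme.
z, dt, steps = np.array([2.0, 1.0]), 0.01, 3000
xs = [z[0]]
for _ in range(steps):
    s1 = rhs(z)
    s2 = rhs(z + 0.5 * dt * s1)
    s3 = rhs(z + 0.5 * dt * s2)
    s4 = rhs(z + dt * s3)
    z = z + (dt / 6.0) * (s1 + 2 * s2 + 2 * s3 + s4)
    xs.append(z[0])
xs = np.array(xs)   # x repeatedly cycles above and below its equilibrium value 1
```

An inverse problem framework of the kind described above must be able to produce, rather than merely analyse, systems with this sort of behaviour.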

Sunday, 21 August 2016

The mathematics of species extinction

Correctly predicting extinction is critical in ecology. Declare a species extinct too late, and you may be diverting resources from a species that could actually be saved. Declare it extinct too early, and you may cause its true extinction by withdrawing resources, such as protection of its habitat.

There is a balance to be struck, and it's clear that we're not quite there yet, because every year several species that were thought to be extinct are rediscovered. This may seem like good news, but it undermines confidence in the International Union for Conservation of Nature (IUCN) when it categorises a species as extinct.

Rediscovered species are dubbed Lazarus species, after the biblical character who came back to life. We have data for some Lazarus mammals, and for some mammals that are presumed extinct. What can we infer from the Lazarus mammals to help us establish which presumed-extinct mammals are actually still alive?

In a paper published in Global Change Biology, Oxford Mathematician Tamsin Lee and colleagues from Australia use information about the size of each mammal, the search effort it has received, and whether it lived in dense or sparse populations. Some traits, such as body size, may affect the chances of both extinction and rediscovery: large mammals are easier to hunt, but also easier to rediscover. Other factors, such as search effort, affect only the chances of rediscovery. How can we separate and quantify these effects?

To establish which mammals are likely to be rediscovered, and when, the researchers used a model that is commonly used in medicine. Suppose you're conducting a trial of a new medicine which may cure a terminal disease. You give this medicine to 100 subjects and note the proportion who are cured; among those who are not cured, you note how long it takes each patient to die from the disease. You also record traits such as the patients' age, gender, cholesterol level and whether they smoke. From these traits you can establish which patients are most likely to be cured: perhaps young female non-smokers with low cholesterol, followed by young male non-smokers with low cholesterol, and so on. Among those who are not cured, perhaps the medicine prolonged their life, so again we need to establish which traits created a delay.
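
This medical analogy is known as a cure model: each subject either never experiences the event (is 'cured'; here, is truly extinct and will never be rediscovered) or experiences it after a random delay. The sketch below fits a minimal version on simulated data; the parameter values, the exponential rediscovery times, and the crude grid search are all hypothetical simplifications, not the model or data from the paper.

```python
import numpy as np

# Simulate 'missing' mammals: truly extinct with probability p_true, otherwise
# rediscovered after an exponential waiting time with rate lam_true, censored
# at the end of a follow-up window. (Hypothetical values throughout.)
rng = np.random.default_rng(0)
p_true, lam_true, follow_up = 0.4, 0.05, 100.0

n = 400
extinct = rng.random(n) < p_true
times = rng.exponential(1.0 / lam_true, size=n)
times[extinct] = np.inf                  # the extinct are never rediscovered
observed = times <= follow_up            # rediscovered within follow-up
t_obs = np.minimum(times, follow_up)     # censor at end of follow-up

def neg_log_lik(p, lam):
    # Rediscovered at time t: density (1-p) * lam * exp(-lam*t).
    # Still missing at follow-up: probability p + (1-p) * exp(-lam*follow_up).
    ll = np.sum(np.log((1 - p) * lam) - lam * t_obs[observed])
    ll += np.sum(~observed) * np.log(p + (1 - p) * np.exp(-lam * follow_up))
    return -ll

# Crude grid-search maximum likelihood (good enough for a sketch).
ps = np.linspace(0.05, 0.95, 91)
lams = np.linspace(0.005, 0.2, 40)
p_hat, lam_hat = min(((p, l) for p in ps for l in lams),
                     key=lambda a: neg_log_lik(*a))
```

In the full analysis, both the 'cured' probability and the rediscovery delay are allowed to depend on traits such as body size and search effort, which is exactly how those effects are separated and quantified.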

Applying this model to the mammal data set, the researchers quantified the effect of traits such as body size on extinction and rediscovery. They found that, indeed, large mammals, such as the Tasmanian Tiger, are more likely to go extinct, as are mammals that live in dense populations. The effect is compounded for mammals that are both large and live in dense populations, such as the Saudi Gazelle, which has a 95% chance of being extinct after being missing for 79 years. This chance will keep increasing until it reaches 100% in 2039.

Large mammals that experience a medium search effort (3 to 6 searches) are likely to be rediscovered less than 50 years after they were last seen, whereas small rodent-sized mammals could be missing for over a hundred years and still be rediscovered. These time limits can be shortened with higher search effort, and search effort has a stronger effect on large mammals. That is, when choosing how to allocate resources, searching for a large mammal will enable us to determine the status of the species sooner than searching for a small mammal. The Saudi Gazelle illustrates this well: it is a large mammal, yet has not reached a 100% chance of extinction despite being missing for 79 years, because it has received a low search effort.

The strong effect of search effort on large mammals bodes poorly for the Tasmanian Tiger, which was last seen in 1933. There has been a huge search effort, but it has not yielded any certain sightings. (The question of certain versus uncertain sightings is, as you can imagine, another huge topic in ecology.) The model implies that the Tasmanian Tiger has been truly extinct since 1983. The Chinese River Dolphin, however, which has also received a high search effort, has only been missing for 9 years, so it has a 72% chance of being extinct, with this chance not reaching 100% until 2034.

Ultimately this model demonstrates how even ecology, a relatively young scientific field, is advancing by capitalising on centuries' worth of mathematics.

Saturday, 20 August 2016

Alison Etheridge named Fellow of the Institute of Mathematical Statistics

Alison Etheridge FRS, Professor of Probability in the University of Oxford, has been named a Fellow of the Institute of Mathematical Statistics (IMS). Professor Etheridge received the award for outstanding research on measure-valued stochastic processes and applications to population biology, and for international leadership and impressive service to the profession.

Each Fellow nominee is assessed by a committee of their peers. In 2016, after reviewing 50 nominations, the committee selected 16 for Fellowship. Created in 1935, the Institute of Mathematical Statistics is a member organisation which fosters the development and dissemination of the theory and applications of statistics and probability. An induction ceremony took place on July 11 at the World Congress in Probability and Statistics in Toronto, Ontario, Canada.

Alison is also President Elect of the IMS.

Wednesday, 10 August 2016

Mathematics enables faster computer simulations of biology

Numerous processes across both the physical and biological sciences are driven by diffusion, for example transport of proteins within living cells, and some drug delivery mechanisms. Diffusion is an unguided process which is of great importance at small spatial scales. Partial differential equations (PDEs) are a popular tool for modelling such phenomena deterministically, but it is often necessary to use stochastic (probabilistic) models instead to capture the behaviour of a system accurately, especially when the number of diffusing particles is low, such as in gene regulation.

Exploring the underlying mathematics behind these models is an important current area of research. Mathematicians need to understand these models better so that they can be applied more meaningfully, and so that they can be made more efficient while still preserving their accuracy (as computational power and time are often limiting factors). Oxford Mathematicians Paul Taylor and Ruth Baker, working with colleagues Christian Yates of the University of Bath and Matthew Simpson of the Queensland University of Technology, have been exploring stochastic models of diffusion that are 'compartment-based'. In their paper, published in the Journal of the Royal Society Interface, the domain under consideration is discretised into compartments, with particles jumping between compartments, possibly subject to constraints such as a cap on the number of particles a compartment can hold. Previous work by these authors concentrated on situations where the compartments all have the same size, but this can be unhelpfully restrictive for applications where it is important to resolve some parts of the domain at high resolution while it is impractical to apply that resolution across the whole domain. This latest piece of work brings together a number of aspects, including allowing different compartments to have different sizes.
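
As a rough illustration of what 'compartment-based' means, the toy Gillespie-style simulation below tracks particle counts in compartments of two different widths on a one-dimensional domain. It is a sketch under simplifying assumptions (per-particle jump rates taken as D/h_i^2 throughout, reflecting boundaries, arbitrary numbers), not the authors' algorithm; handling the interface between unequal compartments correctly is precisely the subtlety their work addresses.

```python
import numpy as np

rng = np.random.default_rng(1)

D = 1.0                                      # diffusion coefficient
h = np.array([0.5, 0.5, 0.25, 0.25, 0.25])   # compartment widths (unequal)
n = np.array([100, 0, 0, 0, 0])              # particles per compartment
t, t_end = 0.0, 1.0

while t < t_end:
    # Per-particle jump rate out of compartment i, in each direction,
    # taken here as D / h_i**2 (a simplification for this sketch).
    rates = D / h**2 * n
    total = 2 * rates.sum()                  # each compartment: left + right
    if total == 0:
        break
    t += rng.exponential(1.0 / total)        # time to next jump event
    i = rng.choice(len(n), p=rates / rates.sum())
    direction = 1 if rng.random() < 0.5 else -1
    j = min(max(i + direction, 0), len(n) - 1)   # reflect at the boundaries
    n[i] -= 1
    n[j] += 1                                # particle count is conserved
```

Runs like this, averaged over many realisations, can then be compared against the corresponding PDE solution to check that accuracy is preserved where the mesh is coarse.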

Crucially, this research demonstrates that these new approaches will be of value to researchers working on multi-scale systems, as they can speed up simulations while preserving precision where needed.