Tuesday, 13 June 2017

How fast does the Greenland Ice Sheet move?

Governments around the world are seeking to address the economic and humanitarian consequences of climate change. One of the most graphic indications of warming temperatures is the melting of the large ice caps in Greenland and Antarctica.  This is a litmus test for climate change, since ice loss may contribute more than a metre to sea-level rise over the next century, and the fresh water that is dumped into the ocean will most likely affect the ocean circulation that regulates our temperature.

Melting ice is not by itself a sign that the ice sheet is shrinking, because the ice sheet is constantly being replenished by new snow falling on its surface. The net amount of ice loss in fact results from a quite delicate imbalance between the addition of new snow and the discharge of ice to the ocean in the form of icebergs. Understanding this imbalance requires an understanding of how fast the ice moves.

Oxford Mathematician Ian Hewitt has been addressing this question using fluid-dynamical models. “We model glacial ice as a non-Newtonian viscous fluid. The central difficulty in computing the ice flow is the boundary condition at the base.  In conventional fluid mechanics a no-slip condition would apply, but the presence of melt water that has penetrated through cracks in the ice acts as a lubricant and effectively allows slip - often a very significant amount of slip.”   
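Ian Hewitt's actual model couples the ice flow to the drainage of melt water beneath it; as a much simpler illustration of the ingredients involved, the sketch below evaluates the classical shallow-ice velocity profile for a non-Newtonian fluid obeying Glen's flow law, with an added basal slip speed standing in for lubrication. The parameter values and function names here are illustrative assumptions, not taken from his study.

```python
RHO = 917.0   # ice density, kg m^-3
G = 9.81      # gravitational acceleration, m s^-2
A = 2.4e-24   # Glen's-law rate factor for temperate ice, Pa^-3 s^-1
N = 3         # Glen's-law exponent

def velocity(z, H, slope, u_b=0.0):
    """Horizontal ice speed (m/s) at height z above the bed, for a slab of
    thickness H (m) on a bed of small slope (radians), with basal slip u_b.
    Internal deformation adds to whatever sliding occurs at the base."""
    tau = RHO * G * slope  # driving-stress gradient with depth, Pa per metre
    deform = (2 * A / (N + 1)) * tau**N * (H**(N + 1) - (H - z)**(N + 1))
    return u_b + deform

# Surface speed of a 1000 m thick slab on a ~1 degree slope, in metres/year
YEAR = 365.25 * 24 * 3600
u_surf = velocity(1000.0, 1000.0, 0.0175) * YEAR
```

Because the deformation term is fixed by ice thickness and slope, observed seasonal speed-ups must come through the basal slip term `u_b`, which is where the water drainage model enters.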

In a recent study Ian has combined a model of the ice flow with a model for the water drainage underneath the ice to account for this varying degree of lubrication. "This new model is able to explain seasonal variations in the flow of the ice that have been observed using GPS instruments - it moves faster during spring and early summer, and slows down slightly in autumn.  Intriguingly, the net effect on the ice motion over the course of the year can be both positive and negative, depending on which of these - the acceleration or subsequent deceleration - dominates.  This depends on the amount of melt water that is produced on the surface, which has almost doubled over the last decade.”  

Ongoing work is attempting to combine the modelled behaviour with satellite observations to better constrain what might happen in the future.  Ian Hewitt talks about this research in an Oxford Sparks podcast.

Monday, 5 June 2017

The mathematics of abnormal skull growth

Mathematics is delving into ever-wider aspects of the physical world. Here Oxford Mathematician Alain Goriely describes how mathematicians and engineers are working with medics to better understand the workings of the human brain and in particular the issue of abnormal skull growth.

"In 2013, together with Prof. Antoine Jérusalem from the Engineering Department, I opened the International Brain Mechanics and Trauma Lab (IBMTL) here in Oxford. IBMTL is a network of people interested in the many and varied problems of brain mechanics and morphogenesis. As part of our launch, in true Oxford style, we organised a workshop where I got talking to Jayaratnam Jayamohan, aka Jay Jay, MD at the John Radcliffe Hospital in Oxford and a brilliant paediatric neurosurgeon whose work has featured in BBC documentaries. Jay Jay routinely performs surgery on children to rectify abnormal skull growth (so-called “craniosynostosis”). The variety of shapes and intricacy of growth processes that he talked about immediately captured my imagination. He explained that much has been learnt about this process from a genetic and biochemical perspective and that the world expert, Prof. Andrew O. M. Wilkie, also happened to be working in Oxford. I decided to pay him a visit.

Andrew Wilkie has done groundbreaking work in identifying genetic mutations behind rare craniofacial malformations and, in my discussions with him, he was particularly helpful in explaining the mechanisms underlying this fascinating process. Yet, surprisingly, I found that very little was known about the physics and bio-mechanics of the problem. And when I was told that the problem of understanding the formation of these shapes was probably too complex to be studied using mathematical modelling tools, I realised I had a challenge I couldn’t possibly resist. What’s more I had the perfect partner in Prof. Ellen Kuhl at Stanford University. Ellen is an expert in biomechanical modelling and has developed state-of-the-art computational techniques to simulate the growth of biological tissues. We had much to work on.

The growth of the skull in harmony with the brain is an extremely complex morphogenetic process. As the brain grows, the skull must grow in response to accommodate extra volume while providing a tight fit. These are very different growth processes. The extremely soft brain increases in volume while the extremely hard bone must increase in surface area. How does this process take place?

In the spirit of mathematical modelling, we started with a very simple question: how would a given shape remain invariant during such growth processes? We know that the skull grows through two different processes: first, accretion along the suture lines (transforming soft cartilage into bone) and second, remodelling of the shape to change the curvature locally. Without remodelling, the shape cannot remain invariant: since surface addition happens mostly along a line, a point of initially high curvature away from this line would stay highly curved unless a second process reduced that curvature, so that the shape remains a dilation of the original.

Using dimensional arguments, we concluded that the three processes (volume growth, line growth, and remodelling) are inter-dependent and must necessarily be tightly regulated. But how is this process synchronized? Since the information about the shape is global, the cues that trigger the growth process must be physical as has been suggested in the biological literature. By simple physical estimates of pressure, stresses and strains, our analysis further identified strain as the main biophysical regulator of this growth process.

At this point, a natural question to ask is what happens when this process is disrupted? We decided to extract the fundamental elements of this growth process by looking at the evolution of a semi-ellipsoid (an elongated half-sphere) divided into a number of patches representing the various bones, fontanelles (soft spots), and sutures of the cranial vault. The normal growth process is obtained by allowing the bones to grow along the suture lines. However, we decided to perturb the system by fusing some of the suture lines early, as happens during craniosynostosis. To our great surprise, the various shapes obtained mirrored the ones found in craniosynostosis. We showed that idealised geometries produce good agreement between numerically predicted and clinically observed cephalic indices (defined as the cranial vault’s width divided by its length) as well as excellent qualitative consistency in skull shape – in other words the model worked. The particular geometric role of the relative arrangement of the early cranial vault bones and the sutures appears clearly in our models. What is truly remarkable is that, despite the extreme complexity of the underlying system, the shapes developed in these pathologies seem to be dictated mostly by geometry and mechanics.
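The cephalic index used to compare the model against clinical observations is simple to compute. A minimal sketch follows; the classification thresholds are rough textbook conventions included for illustration only, not clinical standards, and the function names are ours.

```python
def cephalic_index(width_mm, length_mm):
    """Cephalic index: cranial vault width divided by its length, as a percentage."""
    return 100.0 * width_mm / length_mm

def classify(ci):
    """Illustrative head-shape categories; clinical cut-offs vary between sources."""
    if ci < 75.0:
        return "dolichocephalic (long, narrow)"
    elif ci <= 85.0:
        return "mesocephalic (typical)"
    return "brachycephalic (short, broad)"

# A vault 150 mm wide and 200 mm long has a cephalic index of 75:
ci = cephalic_index(150.0, 200.0)
print(ci, classify(ci))
```

Early fusion of the sagittal suture, for instance, tends to push the index down (a long, narrow skull), while bilateral coronal fusion pushes it up.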

What's next? Our models are, of course, extremely simple from a biological standpoint. However, they can be easily coupled to biochemical processes in order to analyse several open questions in morphogenesis and clinical practice such as the impact of different bone growth rates, the relative magnitude of mechanical and biochemical stimuli during normal skull growth, and the optimal dimensions of surgically re-opened sutures. Our mechanics-based model is also a tool to explore fundamental questions in developmental biology associated with the universality and optimality of cranial design in the evolution of mammalian skulls. These questions were raised exactly a century ago by d’Arcy Thompson in his seminal book “On Growth and Form” and we now have the mathematical and computational tools to answer them. We are only at the beginning."

A fuller discussion of the issues can be found in Alain and his colleagues' recently published paper. 

Monday, 5 June 2017

The mathematics of glass sheets - how to make their thickness uniform

Oxford Mathematician Doireann O'Kiely was recently awarded the IMA's biennial Lighthill-Thwaites Prize for her work on the production of thin glass sheets. Here Doireann describes her work which was conducted in collaboration with Schott AG.

"Thin glass sheets have many modern applications, including touch-screens, cameras and thumbprint sensors for smartphones. Glass sheets with thicknesses in the range 50–100µm are flexible, and may be used in bendable devices.

In the glass sheet redraw process, a prefabricated glass sheet is fed through a heater and stretched. When the glass is hot it behaves as a viscous fluid. As the glass is stretched, it gets thinner and the edges of the sheet are pulled in. This combined response means that both the thickness and the width of the sheet decrease, and the cross-section of the sheet can change shape so that the final product may not have uniform thickness.

In industrial processes, the heater is typically short compared to the sheet width to minimize width reduction and yield desirable thin, wide glass sheets. However, sheets produced in this way are typically thicker at the edge than elsewhere (see image). Asymptotic analysis of the process in this limit indicates that the behaviour in the main part of the sheet is one-dimensional – it varies only in the direction of motion – and there is a two-dimensional boundary layer near the sheet edge.

Numerical solution of the boundary-layer problem illustrates that the glass in the path of the inward-moving edge accumulates, leading to the observed thick edges. The same numerical scheme can also be used to determine the modified input shape required for the manufacture of a uniformly thin sheet. Physically, a small region at the edge of the sheet is tapered, making it thinner to compensate for the accumulation of glass during redraw."

The image above shows that the redrawn glass sheet is extremely thin, but is also relatively thick in a localised region near the sheet edge. Photo by Dominic Vella. 
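The thinning in the one-dimensional core of the sheet follows from conservation of volume flux: the product of thickness, width and speed of the glass is the same at inlet and outlet. The sketch below is a minimal illustration of that argument, assuming steady state and (in the short-heater limit described above) negligible width change; the function name and numbers are ours, not industrial process parameters.

```python
def redrawn_thickness(h_in, u_in, u_out, w_in=1.0, w_out=None):
    """Outlet sheet thickness from conservation of volume flux:
    h_out * w_out * u_out = h_in * w_in * u_in."""
    if w_out is None:
        w_out = w_in  # short-heater limit: the width barely changes
    return h_in * (w_in * u_in) / (w_out * u_out)

# A 1 mm sheet drawn off 20x faster than it is fed in comes out
# at 50 microns, within the flexible-glass range quoted above:
h_out = redrawn_thickness(h_in=1e-3, u_in=1.0, u_out=20.0)
```

The boundary-layer analysis matters precisely because this one-dimensional balance fails near the sheet edge, where glass accumulates.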


Monday, 5 June 2017

Can Big Data root out corruption in Africa?

Many anticorruption advocates are excited about the prospects that “big data” will help detect and deter graft and other forms of malfeasance. But good data alone isn’t enough. For the data to be useful, there must be a group of interested and informed users who have both the tools and the skills to analyse it to uncover misconduct, and who can then lobby governments and donors to listen to and act on the findings. The analysis of big datasets to find evidence of corruption requires statistical skills and software, both of which are in short supply in many parts of the developing world, such as sub-Saharan Africa.

Yet some ambitious recent initiatives are trying to address this problem. Oxford Mathematician Balázs Szendrői, together with his colleague Danny Parsons and Elizabeth Dávid-Barrett from the University of Sussex, has been leading one such initiative that helps empower a group of young African mathematicians to analyse “big data” on public procurement.

As part of the British Academy/DFID-funded project (Curbing Corruption in Development Aid-funded Procurement) Elizabeth, together with Mihály Fazekas and Olli Hellmann had painstakingly collected contract-level data from three major donors covering 20 years. However, data is only the start. Elizabeth explains:

"The first step in this project was to develop software; this may seem trivial, but many cash-strapped African universities simply don’t have the resources to purchase the latest statistical software packages. The African Maths Initiative (AMI), a Kenyan NGO that works to create a stronger mathematical community and culture of mathematics across Africa, has helped to solve this problem by developing a new open-source program, R-Instat (which builds on the popular but difficult-to-learn statistics package R), funded through crowd-sourcing. Still in development, it is on track for launch in July this year. AMI has also helped develop a menu on R-Instat that can be used specifically for analysing procurement data and identifying corruption risk indicators.

Once we’ve got the data and the software to analyze it, the next and most crucial ingredient is the people. For “big data” to be useful as an anticorruption tool, we need to bring together two groups: people who understand how to analyse data, and people who understand how procurement systems can be manipulated to corrupt ends. Communication between the two is essential. So last month I tried to do my part by visiting AIMS Tanzania, an institute that offers a one-year high-level Master’s programme to some of Africa’s best math students, to help conduct a one-day workshop. After a preliminary session in which we discussed the ways in which the procurement process can be corrupted, and how that might manifest in certain red flags (such as single-bidder contracts), the students had the opportunity to use the R-Instat software to analyse the aid-funded procurement dataset that my colleagues and I had created. Students formed teams and developed their own research questions that they attempted to answer by using R-Instat to run analyses on the data.

Even the simplest analyses revealed interesting patterns. Why did one country’s receipts from the World Bank drop off a cliff one year and never recover? Discussion revealed a few possible reasons: perhaps a change of government led donors to change policy, or the country reached a stage of development where it no longer qualified for aid? Students became excited as they realised how statistical methods could be applied to identify, understand and solve real-world problems. Some teams came up with really provocative questions, such as the group who wanted to know whether Francophone or Anglophone countries were more vulnerable to corruption risks. Their initial analysis revealed that contracting in the Francophone countries was more associated with red flags. They developed the analysis to include a wider selection of countries, and maintained broadly similar results. Another group found that one-quarter of contracts in the education sector in one country had been won by just one company, and more than half of contracts by value in this sector had been won by three companies, all of which had suspiciously similar names. Again, there might be perfectly innocent reasons for this, but in just a couple of hours, we had a set of preliminary results that certainly warrant further analysis. Imagine what we might find with a little more time!

It is programs like these, that develop the tools and cultivate the skills in the next generation of analysts, that will determine whether the promise of “big data” as an anticorruption tool will be realised in the developing world."
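Red-flag indicators of the kind the students computed, such as the share of single-bidder contracts or the concentration of contract value among a few suppliers, take only a few lines to express. The sketch below uses plain Python rather than R-Instat, and its field names and data are hypothetical, not drawn from the project's dataset.

```python
from collections import defaultdict

def single_bidder_share(contracts):
    """Fraction of contracts that attracted exactly one bid."""
    flagged = sum(1 for c in contracts if c["bids"] == 1)
    return flagged / len(contracts)

def top_supplier_value_share(contracts, k=3):
    """Share of total contract value won by the k largest suppliers."""
    by_supplier = defaultdict(float)
    for c in contracts:
        by_supplier[c["supplier"]] += c["value"]
    total = sum(by_supplier.values())
    top = sorted(by_supplier.values(), reverse=True)[:k]
    return sum(top) / total

# Hypothetical data: two suspiciously similar supplier names, both
# winning contracts without competition.
contracts = [
    {"supplier": "A Ltd", "value": 40.0, "bids": 1},
    {"supplier": "A Co",  "value": 30.0, "bids": 1},
    {"supplier": "B",     "value": 20.0, "bids": 4},
    {"supplier": "C",     "value": 10.0, "bids": 3},
]
print(single_bidder_share(contracts))        # 0.5
print(top_supplier_value_share(contracts))   # 0.9
```

High values on either indicator are not proof of corruption, only a prompt for the kind of follow-up analysis described above.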

A fuller discussion of the research appears in the Sussex Centre for Corruption blog. In the image above Balázs Szendrői from Oxford Mathematics addresses the students.

Thursday, 1 June 2017

Ruth Baker and Alex Scott awarded Leverhulme Research Fellowships

Oxford Mathematicians Ruth Baker and Alex Scott have been awarded Leverhulme Research Fellowships. Ruth, a mathematical biologist, has been given her award to further her research into efficient computational methods for testing biological hypotheses while Alex, who works in the areas of combinatorics, probability, and algorithms, will be working on interactions between local and global graph structure.

The Leverhulme Research Fellowships are given to experienced researchers, particularly those who are or have been prevented by routine duties from completing a programme of original research.

Thursday, 25 May 2017

How do biomembranes form micro-structures in our cells?

The human body comprises an incredibly large number of cells. Estimates place the number somewhere in the region of 70 trillion, and that’s even before taking into account the microbes and bacteria that live in and around the body. Yet inside each cell, a myriad of complex processes occur to create and sustain these microscopic building blocks of life. One such process is the shaping of molecular membranes, known as lipid bilayers, to form protective barriers around important cellular parts and also to create spherical vessels and tubular networks to transport waste and nutrients at the microscopic level.

The mechanism believed to be responsible for the shaping of membranes involves the attachment of “curvature-inducing” proteins, whose role is to interact directly with the surface and bend it. Through the cooperation of hundreds to tens of thousands of proteins, the membrane is shaped into the variety of micro-structures seen in the cell.

Previous efforts to gain a physical understanding of the dynamic shaping process required the use of supercomputers simulating thousands of molecules; a lengthy and costly process. However, recent research by Oxford Mathematicians James Kwiecinski, Jon Chapman and Alain Goriely shows that the problem can be formulated as an elegant mathematical model combining results from statistical and continuum mechanics – the first model of its kind. Yet despite the model’s simplicity, the phenomena exhibited are quite complex. James explains: “one of the surprising results from the model is that the types of tubes that can form, and how stable they are in the face of thermodynamic fluctuations, are completely determined by the mechanical stiffness of the proteins themselves. We were also expecting that the proteins would uniformly distribute themselves around the membrane, forming a scaffold structure, almost like a mould. However, this isn’t always true; there are some instances where the proteins can aggregate, forming these complex patterns which then merge and interact.”
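A standard continuum-mechanics ingredient in membrane models of this kind is the Helfrich bending energy. The sketch below is a generic illustration (an assumption on our part, not the authors' specific model): it evaluates that energy for a spherical vesicle, where the mean curvature is 1/R everywhere, so the integral can be done by hand.

```python
import math

def helfrich_energy_sphere(kappa, R, c0=0.0):
    """Helfrich bending energy (kappa/2) * (2H - c0)^2 integrated over a
    sphere of radius R, whose mean curvature H is 1/R at every point.
    kappa is the bending rigidity; c0 is the spontaneous curvature that
    curvature-inducing proteins can impose."""
    H = 1.0 / R
    area = 4.0 * math.pi * R**2
    return 0.5 * kappa * (2.0 * H - c0)**2 * area

# With zero spontaneous curvature the energy is 8*pi*kappa for any radius;
# a nonzero c0 breaks that scale invariance and selects preferred sizes.
E = helfrich_energy_sphere(kappa=1.0, R=5.0)
```

In models with protein coats, the competition between such bending terms and protein stiffness is exactly what selects which tubes and spheres are stable.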

Asked about the future of the work, James further commented: “the research is a significant first step into a fundamental problem of cellular mechanics, and one where we’re only getting started. There are still many more interesting geometries and unanswered questions to study.”

Thursday, 25 May 2017

The Sound of Symmetry - Marcus du Sautoy Public Lecture now online

From Bach’s Goldberg Variations to Schoenberg’s Twelve-tone rows, composers have exploited symmetry to create variations on a theme. But symmetry is also embedded in the very way instruments make sound. Marcus du Sautoy shares his passion for music, mathematics and their enduring and surprising relationship. The lecture culminates in a reconstruction of nineteenth-century scientist Ernst Chladni's exhibition that famously toured the courts of Europe to reveal extraordinary symmetrical shapes in the vibrations of a metal plate. 

Marcus du Sautoy is Charles Simonyi Professor for the Public Understanding of Science at Oxford University.

Wednesday, 24 May 2017

Andrew Wiles awarded the Royal Society's Copley Medal

Oxford Mathematics Professor Andrew Wiles has been awarded the Copley Medal, the Royal Society's oldest and most prestigious award. The medal is awarded annually for outstanding achievements in research in any branch of science and alternates between the physical and biological sciences.

Andrew Wiles is one of the world's foremost mathematicians. His proof of Fermat's Last Theorem in the 1990s catapulted him to unexpected fame as both the mathematical and the wider world were gripped by the solving of a 300-year-old mystery. In 1637 Fermat had stated that there are no whole-number solutions to the equation $x^n + y^n = z^n$ when $n$ is greater than 2, unless $xyz = 0$. Fermat went on to claim that he had found a proof for the theorem, but said that the margin of the text he was making notes on was not wide enough to contain it.

After seven years of intense study in private at Princeton University, Andrew announced he had found a proof in 1993, combining three complex mathematical fields – modular forms, elliptic curves and Galois representations. However, he had not only solved the long-standing puzzle of the Theorem, but in doing so had created entirely new directions in mathematics, which have proved invaluable to other scientists in the years since his discovery. 

Educated at Merton College, Oxford and Clare College, Cambridge, where he was supervised by John Coates, Andrew made brief visits to Bonn and Paris before becoming a professor at Princeton University in 1982, where he stayed for nearly 30 years. In 2011 he moved to Oxford as a Royal Society Research Professor. Andrew has won many prizes including, in 2016, the Abel Prize, the Nobel Prize of mathematics. He is an active member of the research community at Oxford, where he is a member of the eminent number theory research group. In his current research he is developing new ideas in the context of the Langlands Program, a set of far-reaching conjectures connecting number theory to algebraic geometry and the theory of automorphic forms.

Thursday, 18 May 2017

The Real Butterfly Effect - Tim Palmer's Oxford Mathematics Public Lecture now online

Meteorologist Ed Lorenz was one of the founding fathers of chaos theory. In 1963 he showed with just three simple equations that the world around us could be both completely deterministic and yet practically unpredictable. In the 1990s, Lorenz’s work was popularised by science writer James Gleick who used the phrase “The Butterfly Effect” to describe Lorenz’s work. The notion that the flap of a butterfly’s wings could change the course of weather was an idea that Lorenz himself used. However, he used it to describe something much more radical - he didn’t know whether the Butterfly Effect was true or not.
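The “three simple equations” are what is now called the Lorenz system. The sketch below integrates two trajectories whose initial conditions differ by one part in a billion, using the classic parameter values and a simple explicit Euler scheme (our choice of integrator, for illustration only), and measures how far apart they end up: deterministic equations, practically unpredictable outcomes.

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit Euler step of Lorenz's 1963 equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def trajectory(state, dt=1e-3, steps=40000):
    """Integrate forward for steps * dt time units; return the final state."""
    for _ in range(steps):
        state = lorenz_step(state, dt)
    return state

a = trajectory((1.0, 1.0, 1.0))
b = trajectory((1.0, 1.0, 1.0 + 1e-9))  # perturbed by one part in a billion
separation = max(abs(p - q) for p, q in zip(a, b))
```

After forty time units the two trajectories bear no useful resemblance to each other, which is the "practical unpredictability" half of Lorenz's result; whether an arbitrarily small perturbation can do this in *finite* resolution is the more radical question the lecture addresses.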

In this lecture Tim Palmer discusses Ed Lorenz the man and his work, and compares the meaning of the “Butterfly Effect” as most people understand it today with the meaning Lorenz himself intended.

Tim Palmer is Royal Society Research Professor in Climate Physics at the University of Oxford.

Thursday, 18 May 2017

J is for Juggling - the latest in the Oxford Mathematics Alphabet

Juggling is the act of iteratively catching and throwing several objects. To a mathematician, a juggling pattern can be described using a mathematical notation called siteswap. The idea of siteswap notation is to keep track of the order in which the objects are thrown. The notation does not indicate what kind of objects are being juggled (balls, rings, clubs, etc.) or whether a special kind of throw is performed (e.g. under-the-leg or behind-the-back).
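The standard validity test for a siteswap is short enough to state in code: a sequence of throw heights is a legal pattern exactly when every throw lands on a distinct beat, i.e. when the values (i + sᵢ) mod n are all different, and the average throw height then equals the number of objects. A minimal sketch:

```python
def is_valid_siteswap(throws):
    """A sequence of throw heights is a valid siteswap exactly when the
    landing beats (i + throw) mod n are all distinct, so no two objects
    arrive in the same hand at the same time."""
    n = len(throws)
    landings = {(i + t) % n for i, t in enumerate(throws)}
    return len(landings) == n

def number_of_balls(throws):
    """For a valid siteswap, the number of objects is the average throw height."""
    return sum(throws) // len(throws)

print(is_valid_siteswap([5, 3, 1]))   # True: the classic "531" pattern
print(is_valid_siteswap([5, 1, 3]))   # False: two throws land on the same beat
print(number_of_balls([5, 3, 1]))     # 3
```

Reordering a valid pattern can break it, as "513" shows: validity depends on when each throw lands, not just on which heights appear.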

Want to know more? Let Data Scientist and Oxford Mathematician Ross Atkins explain all in the latest in our Oxford Mathematics Alphabet series.