News

Tuesday, 10 April 2018

Ada Lovelace: The Making of a Computer Scientist - the latest book from Oxford Mathematics

Our latest book tells the remarkable story of Ada Lovelace, often considered the world’s first computer programmer. It is co-written by Oxford Mathematicians Christopher Hollings and Ursula Martin together with their colleague Adrian Rice of Randolph-Macon College.

A sheet of apparent doodles of dots and lines lay unrecognised in the Bodleian Library until Ursula Martin spotted what it was - a conversation between Ada Lovelace and Charles Babbage about finding patterns in networks, a very early forerunner of the sophisticated computer techniques used today by the likes of Google and Facebook. It is just one of the remarkable mathematical images to be found in the new book, 'Ada Lovelace: The Making of a Computer Scientist'.

Ada, Countess of Lovelace (1815–1852) was the daughter of poet Lord Byron and his highly educated wife, Anne Isabella. Active in Victorian London's social and scientific elite alongside Mary Somerville, Michael Faraday and Charles Dickens, Ada Lovelace became fascinated by the computing machines devised by Charles Babbage.  A table of mathematical formulae sometimes called the ‘first programme’ occurs in her 1843 paper about his most ambitious invention, his unbuilt ‘Analytical Engine.’

Ada Lovelace had no access to formal school or university education but studied science and mathematics from a young age. This book uses previously unpublished archival material to explore her precocious childhood: her ideas for a steam-powered flying horse, pages from her mathematical notebooks, and penetrating questions about the science of rainbows. A remarkable correspondence course with the eminent mathematician Augustus De Morgan shows her developing into a gifted, perceptive and knowledgeable mathematician, not afraid to challenge her teacher over controversial ideas.

“Lovelace’s far-sighted remarks about whether the machine might think, or compose music, still resonate today,” said Professor Martin. “This book shows how Ada Lovelace, with astonishing prescience, learned the maths she needed to understand the principles behind modern computing.”

Ada Lovelace: The Making of a Computer Scientist, by Christopher Hollings, Ursula Martin and Adrian Rice, will be launched on 16th April 2018 by Bodleian Library Publishing, in partnership with the Clay Mathematics Institute.

The page of doodles is on display until February 2019 as part of the Bodleian Library’s exhibition 'Sappho to Suffrage: women who dared.'

Ursula Martin will be speaking at the Hay Festival and Edinburgh Book Festival.

Monday, 9 April 2018

The contact-free knot - Oxford Mathematics Research explains

Knots are widespread, universal physical structures, from shoelaces to Celtic decoration to the many variants familiar to sailors. They are often simple to construct and aesthetically appealing, yet remain topologically and mechanically quite complex.

Knots are also common in biopolymers such as DNA and proteins, with significant and often detrimental effects, and biological mechanisms also exist for 'unknotting'.

There are numerous types of questions when studying knots. From a topological standpoint, fundamental issues include knot classification and equivalence of different knot descriptions. In continuum mechanics and elasticity, a knot is a physical structure with finite thickness, and aspects of interest include the strength, stability, equilibrium shape, and dynamic behaviour of a knotted filament. Such aspects are strongly connected to points/regions of self-contact, at which distant points push against each other.

Consider a simple hand-held experiment: take a strip of paper or flexible wire, tie it into a standard but loose knot (an open trefoil), and you will observe two isolated points of self-contact surrounding an interval of self-contact. Now add twist by rotating the ends, change the end-to-end distance by bringing your hands closer together or further apart, and combine these with small transverse displacements, i.e. shifting the end. For certain materials, and with a little finesse, all points of contact can be removed.

Such configurations – contact-free, knotted, and mechanically stable – have never been described before, and Oxford Mathematician Derek Moulton and colleagues sought to understand and characterise them in terms of the underlying geometry and mechanics. To do so, they turned to the Kirchhoff equations for elastic rods, a set of 18 nonlinear differential equations that describe the balance of forces and moments as well as the geometrical shape of a thin and long elastic material. These equations admit an incredibly rich and non-unique solution space. A small modification to these equations yields the 'ribbon equations', more appropriate for a strip of paper and with a similarly complex solution space.
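
For reference, one standard textbook form of the static Kirchhoff equations (a generic parameterisation, not necessarily the exact one used in the paper) is \begin{align} \mathbf{r}'(s) &= \mathbf{d}_3, & \mathbf{d}_i'(s) &= \mathbf{u}(s) \times \mathbf{d}_i(s), \quad i = 1, 2, 3, \\ \mathbf{n}'(s) &= \mathbf{0}, & \mathbf{m}'(s) + \mathbf{r}'(s) \times \mathbf{n}(s) &= \mathbf{0}, \end{align} together with a constitutive law such as $\mathbf{m} = B_1 u_1 \mathbf{d}_1 + B_2 u_2 \mathbf{d}_2 + C u_3 \mathbf{d}_3$ relating the moment to the curvatures and twist. Here $\mathbf{r}(s)$ is the centreline of the rod, $\{\mathbf{d}_1, \mathbf{d}_2, \mathbf{d}_3\}$ is an orthonormal frame attached to the cross-section, $\mathbf{u}$ is the curvature-twist (Darboux) vector, and $\mathbf{n}$ and $\mathbf{m}$ are the resultant force and moment; counting the components of $\mathbf{r}$, the three directors, $\mathbf{n}$ and $\mathbf{m}$ gives the 18 unknowns mentioned above.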

The goal was to find configurations within this solution space that satisfy the conditions of being contact-free, mechanically stable, and knotted. This was a bit like finding a needle in a haystack, but after applying some numerical tricks they showed that in fact such configurations exist as theoretical solutions of the full nonlinear 18D system; they then categorised the space of 'good knots' in terms of the three experimental measures: end-rotation, end-displacement, and end-shift. The numerical study was complemented with an asymptotic analysis of a perturbed 'double ring' solution; the idea being that knotted solutions can be found in the neighbourhood of a planar circle that overlaps itself exactly once.

The analysis suggests that the transverse displacement is a necessary component for generating contact-free knots. While the researchers only considered the "simplest" trefoil knot, they conjecture that toroidal knots of increasing genus can be stabilised in a contact-free state.

For a fuller explanation of the team's work please click here.

Thursday, 5 April 2018

The Oxford Maths Festival 28-29 April 2018

Bringing together talks, workshops, hands-on activities and walking tours, the Oxford Maths Festival is an extravaganza of all the wonderful curiosities mathematics holds. Board games, sport, risk and the wisdom of crowds courtesy of Marcus du Sautoy are all on the menu.

Over two days you can immerse yourself in a wide range of events, with something for everyone, no matter what your age or prior mathematical experience. 

All events are free to attend. Some require pre-booking. For the entire programme, please click here.

Tuesday, 3 April 2018

Alain Goriely and Mike Giles made SIAM Fellows

Oxford Mathematicians Alain Goriely and Mike Giles have been made Fellows of the Society for Industrial and Applied Mathematics (SIAM). Alain is recognised for his "contributions to nonlinear elasticity and theories of biological growth" while Mike receives his Fellowship for his "contributions to numerical analysis and scientific computing, particularly concerning adjoint methods, stochastic simulation, and Multilevel Monte Carlo."

Alain is Professor of Mathematical Modelling in the University of Oxford where he is Director of the Oxford Centre for Industrial and Applied Mathematics (OCIAM) and Co-Director of the International Brain Mechanics and Trauma Lab (IBMTL). He is an applied mathematician with broad interests in mathematics, mechanics, sciences, and engineering. His current research also includes the modelling of new photovoltaic devices, the modelling of cancer and the mechanics of the human brain. He is the author of the recently published Applied Mathematics: A Very Short Introduction. Alain is also the founder of the successful Oxford Mathematics Public Lecture series. You can watch his recent Public Lecture, 'Can Mathematics Understand the Brain' here.

Mike is Professor of Scientific Computing in the University of Oxford. After working at MIT and the Oxford University Computing Laboratory on computational fluid dynamics applied to the analysis and design of gas turbines, he moved into computational finance and research on Monte Carlo methods for a variety of applications. His research focuses on improving the accuracy, efficiency and analysis of Monte Carlo methods. He is also interested in various aspects of scientific computing, including high-performance parallel computing, and has been working on the exploitation of GPUs (graphics processors) for a variety of financial, scientific and engineering applications.

Monday, 19 March 2018

Knots and surfaces - the fascinating topology of n-manifolds

Oxford Mathematician Andras Juhasz discusses and illustrates his latest research into knot theory.

"We can only see a small part of Space, even with the help of powerful telescopes. This looks like 3-dimensional coordinate space, but globally it might have a more complicated shape. An n-dimensional manifold, or n-manifold in short, is a space that locally looks like the standard n-dimensional coordinate space, whose points we can describe with n real coordinates. Topology considers such spaces up to continuous or smooth deformations, as if they were made out of rubber.

The only connected 1-manifolds are the real line and the circle. 2-dimensional manifolds are also called surfaces. The closed oriented (or 2-sided) surfaces are the sphere, the surface of a doughnut (the torus), or the surface of a doughnut with several holes. The number of holes is called the genus of  the surface, and is an example of a topological invariant: an algebraic object (e.g., a number, polynomial, or vector space) assigned to a space that is unchanged by deformations. We have already seen that we live in a 3-manifold, and, if we add the time dimension, in a 4-dimensional spacetime.
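
As a concrete illustration of such an invariant (a standard fact, included here for orientation), the genus $g$ of a closed oriented surface determines its Euler characteristic via $\chi = 2 - 2g$: the sphere has $\chi = 2$, the torus $\chi = 0$, and the two-holed doughnut $\chi = -2$, and two such surfaces can be deformed into one another exactly when these numbers agree.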

1-manifolds:

2-manifolds of genus 0, 1 and 2 (please view all films in Chrome, Firefox or Explorer):

The theory of 1- and 2-manifolds is classical. Surprisingly, dimensions greater than 4 are simpler than dimensions 3 and 4, because there is enough space to perform a certain topological trick that reduces the classification problem to algebra. The focus of modern topology is hence on dimensions 3 and 4. While 3-manifold topology is closely related to geometry, the theory of smooth 4-manifolds is more analytical. In dimension 4, the difference between smooth and continuous deformations becomes essential. For example, up to continuous deformation there is just one 4-manifold that looks like 4-dimensional coordinate space, but infinitely many that are pairwise distinct up to smooth deformation.

A knot is a circle embedded in 3-space, up to deformation. (Topologically, a knot on a string is always trivial, as one can just pull one end along the string itself until the knot disappears.) A link is a collection of knots that link with each other (hence the name). These play an important role in low-dimensional topology, since every 3- and 4-manifold can be described by a link whose components are each labelled by an integer.

Deformation of an unknot:

Knots:

Links:

Knot Floer homology is an invariant of links defined independently by Ozsváth-Szabó and Rasmussen in 2002. It assigns a finite-dimensional vector space to every link, and contains important geometric information.

Two links are cobordant if they can be connected by a surface in 4-space. If we think of the fourth coordinate as time, each time slice gives a (possibly singular) link. As time varies from say 0 to 1, we obtain a movie of links. In a recent paper published in Advances in Mathematics, I have shown that a link cobordism induces a linear map on knot Floer homology. This can be used to understand the possible surfaces links can bound in 4-space, which is closely related to the topology of smooth 4-manifolds".

A link cobordism:

Thursday, 15 March 2018

Understanding plasma-liquid interactions

Oxford Mathematician John Allen, Professor Emeritus of Engineering Science, talks about his work on the electrohydrodynamic stability of a plasma-liquid interface. His collaborators are Joshua Holgate and Michael Coppins at Imperial College.

"The study of plasma-liquid interactions is an increasingly important topic in the field of plasma science and technology, with applications in nanoparticle synthesis, catalysis of chemical reactions, material processing, water treatment, sterilization and plasma medicine. This particular work is motivated by the plasma-liquid interactions inherent in magnetic confinement fusion devices, such as tokamaks, either due to melt damage of the metal walls or in new liquid metal divertor concepts. The ejection of molten droplets has been observed in both cases and is of considerable concern to the operation of a successful fusion device. Understanding the stability of the liquid metal surface is a critical issue.

Previously studied instabilities of liquid metal surfaces in tokamaks include a Kelvin-Helmholtz instability due to plasma flow across the metal surface, a Rayleigh-Taylor instability driven by the j × B force due to a current flowing in the metal, a Rayleigh-Plateau instability of the liquid metal rim around a cathode arc spot crater, and droplet emission from bursting bubbles formed by liquid boiling or by absorption of gases from the plasma. However, none of these studies considers the effect of the strong electric fields and ion flows in the sheath region between the plasma and the liquid surface, despite observations of electrical effects such as arcing, which causes considerable damage to the tokamak wall, and of enhanced droplet emission rates from electrically biased surfaces. Furthermore, electrostatic breakup has been identified as an important process for liquid droplets in plasmas.

Instabilities driven by electric fields, i.e. electrohydrodynamic (EHD) instabilities, at the interface between a conducting liquid and vacuum, were originally studied by Melcher and subsequently by Taylor and McEwan. Melcher’s marginal stability criterion was invoked by Bruggeman et al. in order to explain the filamentary structure of a glow discharge over a water cathode and, additionally, to explain the instability of an electrolytic water solution cathode from an earlier experiment. Earlier evidence for EHD instabilities of the plasma-liquid interface appears in an experiment on unrelated work where an arc spot occasionally formed on an electrically-isolated mercury pool which was in contact with the plasma. Another EHD effect, the deformation of a liquid surface into a Taylor cone, has recently been used to form the cathode of a corona discharge.
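
For orientation, the classical Melcher-type linearised result for a deep, perfectly conducting liquid of density $\rho$ and surface tension $\gamma$, with a normal electric field $E_0$ in the vacuum above, gives the dispersion relation for a surface perturbation of wavenumber $k$ as \begin{equation} \omega^2 = gk + \frac{\gamma k^3}{\rho} - \frac{\varepsilon_0 E_0^2 k^2}{\rho}, \end{equation} so that the surface first becomes unstable when $\varepsilon_0 E_0^2 > 2\sqrt{\rho g \gamma}$. This is the marginal stability criterion referred to above, quoted here in its textbook form rather than with the sheath modifications considered in the present work.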

Our work investigates the EHD stability of a plasma-liquid interface with a linear perturbation analysis. Melcher's stability criterion is found to apply to short-wavelength perturbations of the surface. However, the fast-moving ions in the sheath exert a significant pressure on the liquid surface, which can overcome the electric stress for long-wavelength perturbations. This pressure has been neglected in previous studies and provides an overall increase in the critical voltage that must be applied to the surface in order to make it unstable, a result which is encouraging for the ongoing development of new plasma-liquid technologies."

Thursday, 15 March 2018

John Ball wins Leonardo da Vinci Award

Oxford Mathematician John Ball has won the European Academy of Sciences Leonardo da Vinci award. The award is given annually for outstanding lifetime scientific achievement. In the words of the Committee,  "through a research career spanning more than 45 years, Professor Ball has made groundbreaking and highly significant contributions to the mathematical theory of elasticity, the calculus of variations, and the mathematical analysis of infinite-dimensional dynamical systems."

Sir John Ball FRS is Sedleian Professor of Natural Philosophy in the University of Oxford and Director of the Oxford Centre for Nonlinear Partial Differential Equations. He is a Fellow of The Queen's College.

Monday, 12 March 2018

Oxford Summer School on Economic Networks 25-29 June 2018 - register by 15 March

The Oxford Summer School on Economic Networks, hosted by Oxford Mathematics and the Institute of New Economic Thinking, aims to bring together graduate students from a range of disciplines (maths, statistics, economics, policy, geography, development, ..) to learn about the techniques, applications and impact of network theory in economics and development. 
 
We look forward to welcoming a large number of world-renowned experts in economic networks and complexity science. Confirmed speakers for the 2018 edition include Fernando Vega-Redondo, Mihaela van der Schaar, Rama Cont, Doyne Farmer, Pete Grindrod, Renaud Lambiotte, Elsa Arcaute and Taha Yasseri. Tutorial and lecture topics include social networks, games and learning, financial networks, economic complexity and urban systems.
 
Alongside a rigorous academic schedule, the summer school also includes a walking tour of the historic university and city centre, a punting trip on the river Cherwell and a dinner in one of Oxford's historic colleges.
 
The deadline for applications is March 15th - more information is available here. Please contact us at economicnetworks@maths.ox.ac.uk with any questions.
 

Friday, 9 March 2018

How do airlines gauge unknown demand?

Oxford Mathematicians Ilan Price and Jaroslav Fowkes discuss their work on unconstraining demand with Gaussian Processes.

"One of the key revenue management challenges which airlines, hotels, cruise ships (and other industries) all share is the need to make business decisions in the face of constrained (or censored) demand data.

Airlines, for example, commonly set booking limits on the number of cheaper fare-classes that can be purchased, or make cheaper fare-classes unavailable for booking at certain times, in an attempt to divert some of that demand to the more expensive tickets still available. While a fare-class on a given flight route is available for booking, the demand for that 'product', at that price, is accurately captured by its total recorded bookings. However, once the product has been unavailable for booking for a period of time, recorded bookings no longer capture true demand, and the demand data is said to be 'constrained' or 'censored'.

Practices which constrain demand data pose a big challenge for successful revenue management. This is because many important decisions, including setting ticket prices, making changes to an airline's flight network, adding or removing capacity on a certain route, and many others, are all heavily dependent on accurate historical demand data. Moreover, precisely those decisions regarding which fare-classes to make unavailable (and for what periods of time) themselves depend on accurate demand data. Thus predicting what demand would have been had it not been constrained - known as 'unconstraining demand' - is an important research problem.

Our research proposes a new approach to this problem, using a model developed within the framework of Gaussian process (GP) regression. The general idea behind GP regression is intuitive: we start by assuming a prior Gaussian distribution over functions, and then condition that distribution on the observed data, restricting the set of likely functions to those consistent with the observations. More precisely, our goal is to infer the posterior predictive distribution $p(f^* | y, X, X^*)$, where $f^*$ are the values of the function evaluated at some prediction points $X^*$, and $y$ are the observed data at points $X$. We can then use the mean of this distribution as our prediction.
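
In the textbook case of a Gaussian observation model with noise variance $\sigma_n^2$ and a zero-mean prior (shown here for orientation; the model described below uses a Poisson likelihood, for which this closed form is not available and approximate inference is needed), the posterior predictive is Gaussian with \begin{equation} \mathbb{E}[f^* \mid y, X, X^*] = K_*^\top (K + \sigma_n^2 I)^{-1} y, \qquad \mathrm{Cov}[f^* \mid y, X, X^*] = K_{**} - K_*^\top (K + \sigma_n^2 I)^{-1} K_*, \end{equation} where $K = k(X, X)$, $K_* = k(X, X^*)$ and $K_{**} = k(X^*, X^*)$ are covariance matrices built from the covariance function $k$.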


Figure 1: Illustration of GP regression for unconstraining demand. The figure on the left shows the mean prediction and confidence interval produced by our GP method, based on the true demand observations. The dotted black line indicates when the booking limit was reached, and the red line beyond this point shows the GP's unconstrained approximations. The figure on the right shows in red the reconstruction of the cumulative demand curve over the constrained period using the daily demand values predicted with the GP.

In the course of this inference procedure, we need to specify (i) a likelihood function or observation model, and (ii) a mean and covariance function for the GP prior.

Our model uses a Poisson likelihood, based on our implicit model of the bookings process as a doubly stochastic Poisson process, i.e. where bookings are determined by a Poisson process whose rate $\lambda$ is itself a Gaussian process (and thus changes over time).

For the GP prior, we use a zero mean function and define a new 'variable degree polynomial covariance function' \begin{equation} k(x,x') = \sigma^2(x^\top x' + c)^p, \end{equation} with $\theta_c = \{\sigma , c, p\}$ as the covariance hyperparameters (a modification of the polynomial covariance function in which $p$ is a fixed positive integer).
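
As a rough illustration of these ingredients (a minimal numpy sketch for illustration only, not the code used in the paper, and using a simple Gaussian-noise approximation in place of the Poisson likelihood above, which requires approximate inference), the variable degree polynomial covariance and the resulting posterior mean prediction can be computed as follows:

import numpy as np

def var_degree_poly_kernel(X1, X2, sigma=1.0, c=1.0, p=2.0):
    # Variable-degree polynomial covariance: k(x, x') = sigma^2 (x^T x' + c)^p
    return sigma**2 * (X1 @ X2.T + c)**p

def gp_posterior_mean(X, y, Xstar, noise=1e-2, **kern):
    # Posterior mean of a zero-mean GP under a Gaussian-noise approximation
    K = var_degree_poly_kernel(X, X, **kern) + noise * np.eye(len(X))
    Ks = var_degree_poly_kernel(X, Xstar, **kern)
    return Ks.T @ np.linalg.solve(K, y)

# Toy usage: bookings observed on days 0-9, demand predicted for days 10-14.
rng = np.random.default_rng(0)
X = np.arange(10, dtype=float).reshape(-1, 1)
y = 2.0 + 0.5 * X.ravel() + rng.normal(0.0, 0.3, size=10)
Xstar = np.arange(10, 15, dtype=float).reshape(-1, 1)
print(gp_posterior_mean(X, y, Xstar, sigma=1.0, c=1.0, p=1.0))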

We have conducted a number of numerical experiments, and the results are rather promising: the method compares favourably with state-of-the-art methods when repeating experiments from recent literature. The added benefit, though, is that when these experiments are modified to have weaker assumptions on how the test data should look and be generated, our method maintains its strong performance better than its competitors. Our modifications included diversifying the shape of the demand curve on which the methods were tested, as well as allowing for the presence of changepoints - points at which the characteristics of the underlying demand trend change dramatically.

Using existing theory, we can elegantly extend our GP regression framework to cope with such situations by constructing an appropriate covariance function. For our purposes, we want to allow for the fact that the covariance before and after the changepoint might be completely different. We therefore redefine our covariance function to be \begin{align}\label{eq: Changepoint covariance} k(x,x') = \begin{cases} \sigma_1^2(x^\top x' + c_1)^{p_1} & \text{if } x,x' < x_c,\\ \sigma_2^2(x^\top x' + c_2)^{p_2} & \text{if } x,x' \geq x_c,\\ 0 & \text{otherwise}, \end{cases} \end{align} where $\theta = \{\sigma_1, \sigma_2, c_1, c_2, p_1, p_2, x_c\}$ are all hyperparameters inferred from the data. You can see an example of how well it performs in the image below."
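
As a rough sketch of how such a piecewise covariance might be implemented (hypothetical code for illustration; the function and parameter names are not taken from the paper), the two diagonal blocks can be filled with separate polynomial kernels, leaving the cross-covariances at zero:

import numpy as np

def poly_kernel(X1, X2, sigma, c, p):
    # Variable-degree polynomial covariance: sigma^2 (x^T x' + c)^p
    return sigma**2 * (X1 @ X2.T + c)**p

def changepoint_kernel(X1, X2, xc, theta1, theta2):
    # Piecewise covariance: independent polynomial kernels before and after the changepoint xc
    K = np.zeros((len(X1), len(X2)))
    b1, b2 = X1.ravel() < xc, X2.ravel() < xc
    K[np.ix_(b1, b2)] = poly_kernel(X1[b1], X2[b2], *theta1)      # both points before xc
    K[np.ix_(~b1, ~b2)] = poly_kernel(X1[~b1], X2[~b2], *theta2)  # both points at or after xc
    return K

# Toy usage: a changepoint at day 5, with arbitrarily chosen hyperparameters on each side.
X = np.arange(10, dtype=float).reshape(-1, 1)
K = changepoint_kernel(X, X, xc=5.0, theta1=(1.0, 1.0, 1.0), theta2=(0.5, 1.0, 2.0))
print(K.shape)

In the paper the changepoint location $x_c$ is itself a hyperparameter, inferred from the data along with the others.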

 
 Figure 2: Illustration of automatic changepoint detection with GPs using our piecewise-defined variable degree polynomial covariance function.

You can read the research in greater detail here.
 

Monday, 5 March 2018

The brains of the matter. Understanding the cerebral cortex

The brain is the most complicated organ of any animal, formed and sculpted over 500 million years of evolution. And the cerebral cortex is a critical component. This folded grey matter forms the outside of the brain, and is the seat of higher cognitive functions such as language, episodic memory and voluntary movement.

The cerebral cortex of mammals has a unique layered structure where different types of neuron reside. The thickness of the cortical layer is roughly the same across different species, while the cortical surface area shows a dramatic increase (1000-fold from mouse to human). This difference reflects a significant expansion in the number of cortical neurons produced in the course of embryonic development, resulting in the increased function and complexity of the adult brain. A human cortex accommodates 16 billion neurons as opposed to a mouse’s mere 14 million.

Key elements of this problem are being addressed by Oxford Mathematical Biologist Noemi Picco in a new interdisciplinary collaboration involving mathematicians Philip Maini in Oxford and Thomas Woolley in Cardiff, and biologists Zoltán Molnár from the Department of Physiology, Anatomy and Genetics in Oxford and Fernando García-Moreno at the Achucarro Basque Center for Neuroscience in Bilbao.

In particular the team are developing a mathematical model of cortical neurogenesis, the process by which neurons grow and develop in the cerebral cortex. Given that species diversity originates from the divergence of developmental programmes, understanding the cellular and molecular mechanisms regulating cell number and diversity is critical for shedding light on cortex evolution.

Many factors influence how neurogenesis in the cortex differs between species, including the types of neurons and neural progenitor cells, the different ways in which they proliferate and differentiate, and the length of the process (85 days in a human, 8 days in a mouse). This project combines mathematical modelling and experimental observations to incorporate these different factors. A key determinant of neuronal production is the balance between proliferative (self-amplifying) and differentiative (neurogenic) divisions. By modelling the temporal changes in the propensity for each type of cell division, the team can identify the developmental programme that accounts for the observed number of neurons in the cortex.
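
As a rough illustration of this kind of model (a minimal sketch under simplifying assumptions, not the team's actual model), suppose progenitor cells divide at rate $\lambda$, with a time-varying fraction $p(t)$ of divisions being proliferative (producing two progenitors) and the remainder neurogenic (producing two neurons). A hypothetical Python sketch of the resulting populations is:

import numpy as np
from scipy.integrate import solve_ivp

def neurogenesis(t, y, lam, p):
    # Minimal model: P' = (2 p(t) - 1) * lam * P,  N' = 2 (1 - p(t)) * lam * P
    P, N = y
    return [(2 * p(t) - 1) * lam * P, 2 * (1 - p(t)) * lam * P]

# Hypothetical choice: the proliferative fraction p(t) decays linearly to zero
# over a neurogenic window of length T, switching from expansion to neuron output.
T, lam = 8.0, 1.0                      # e.g. an 8-day, mouse-like neurogenic period
p = lambda t: max(0.0, 1.0 - t / T)
sol = solve_ivp(neurogenesis, (0.0, T), [1.0, 0.0], args=(lam, p))
print("progenitors and neurons per founder cell at day 8:", sol.y[:, -1])

Different choices of $p(t)$, division rate and process length then correspond to different developmental programmes, and calibrating them against species-specific cell counts is essentially the identification problem described above.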

The growing availability of species-specific experimental data will allow the researchers to map all the possible evolutionary pathways of the cortex, and to create a mathematical framework that is general enough to encompass all cortex developmental programmes, while being specific enough to describe individual species. This, in turn, has the potential to create a new way of identifying developmental brain disorders as deviations from the normal developmental programme, giving mechanistic insight into their causes and clinically actionable suggestions for correcting them.

As part of the project, Noemi has released a Neurogenesis Simulator, an app that allows experimentalists to ‘play’ with the mathematical model: choosing the species and the model, calibrating the parameters, and observing how the outcome changes, all without having to worry about the mathematical formulation. The aim is to generate even further cross-disciplinary collaboration.

Noemi’s work is supported by St John’s College Research Centre. Click here for the published article.
