News

Friday, 27 December 2019

Nick Woodhouse appointed CBE in 2020 New Year Honours List

Professor Nick Woodhouse, Emeritus Professor of Mathematics in Oxford and Emeritus Fellow of Wadham College, former Head of the Mathematical Institute and previously President of the Clay Mathematics Institute, has been appointed CBE in the 2020 New Year Honours List for services to mathematics.

Nick has had a distinguished career as both a researcher and a leading administrator in the University. His research has been at the interface between mathematics and physics, initially in relativity, and later in more general connections between geometry and physical theory, notably via twistor theory.  In parallel he led the Mathematical Institute in Oxford at a time of major expansion and was the leading figure in the Institute's move to the Andrew Wiles Building, completed in 2013. His time as President of the Clay Mathematics Institute saw its profile and influence increase and its roster of talented Clay Research Fellows grow.

Nick also played a leading role in the administration of the wider University, including a period as Deputy Head of the Mathematical, Physical and Life Sciences Division, and was a member of the North Commission, set up in 1997 to review the management and structure of the collegiate University, whose recommendations helped shape Oxford as it operates in 2020.

Monday, 9 December 2019

The Penrose Proofs: an exhibition of Roger Penrose’s Scientific Drawings

As you might expect from a man whose family included the Surrealist artist Roland Penrose, Roger Penrose has always thought visually. That thinking is captured brilliantly in this selection of Roger’s drawings that he produced for his published works and papers.

From quasi-symmetric patterns to graphic illustrations of the paradoxical three versions of reality via twistor theory and the brain, this selection captures the stunning range of Roger’s scientific work and the visual thinking that inspires and describes it.

Mezzanine Level
Mathematical Institute
Oxford

10 December 2019 – 31 March 2020

Friday, 22 November 2019

Oxford Mathematics London Public Lecture with Tim Gowers and Hannah Fry now online

Oxford Mathematics London Public Lecture: Timothy Gowers - Productive generalization: one reason we will never run out of interesting mathematical questions

In our Oxford Mathematics London Public Lecture held at the Science Museum, Fields Medallist Tim Gowers uses the principle of generalization to show how mathematics progresses in its relentless pursuit of problems.

After the lecture, in a fascinating Q&A with Hannah Fry, Tim discusses how he approaches problems, both mathematical and personal.

Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Friday, 22 November 2019

Oxford Mathematics 2nd Year Student Lecture on Quantum Theory now online

Our latest online student lecture is the first in the Quantum Theory course for Second Year Students. Fernando Alday reflects on the breakdown of the deterministic world and describes some of the experiments that defined the new Quantum Reality.

This is the sixth lecture in our series of Oxford Mathematics Student Lectures. The lectures aim to shed light on the student experience and how we teach. All lectures are followed by tutorials in which pairs of students spend an hour with their tutor going through the lectures and accompanying worksheets.

An overview of the course and the relevant materials are available here:

Tuesday, 19 November 2019

Exploring wrinkling in thin membranes

For centuries, engineers have sought to prevent structures from buckling under heavy loads or large impacts, constructing ever larger buildings and safer vehicles. However, recent advances in soft matter are redefining the way we manipulate materials. In particular, an age-old aversion to buckling is being recast in a new light as researchers find that structural instabilities can be harnessed for functionality. This paradigm shift, from buckliphobia to buckliphilia, permits re-evaluation of the potential of soft, deformable structures, opening up methods of exploiting buckling to tune material characteristics or develop metamaterials.

Elastic instabilities provide a means of generating regular topographies with a well-defined wavelength. For example, a thin elastic film attached to a softer substrate buckles into an array of regular wrinkles under quasi-static compression. The wrinkle wavelength is selected by the mechanical properties of the system, so that different wavelengths are typically attained by varying the film thickness. In an article recently published in the Proceedings of the National Academy of Sciences (PNAS), Oxford Mathematicians Finn Box, Doireann O’Kiely, Ousmane Kodio, Maxime Inizan and Dominic Vella, together with Alfonso A. Castrejón-Pita from Oxford’s Fluid Dynamics Laboratory in the Department of Engineering Science, show that, for a film of given thickness, variation in the wrinkle wavelength can instead be achieved via impact.
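For context, the wavelength selection mentioned above can be made concrete with the classical static result for a thin film floating on a liquid (a standard balance from the wrinkling literature, not a formula taken from the PNAS paper): the film's bending stiffness competes with the hydrostatic restoring force of the liquid substrate.

```latex
% Classical static wrinkle wavelength for a thin elastic film floating on a
% liquid (standard result, not from the PNAS paper): bending stiffness B of the
% film balances the hydrostatic stiffness rho_l g of the liquid substrate.
\[
  \lambda = 2\pi \left( \frac{B}{\rho_\ell\, g} \right)^{1/4},
  \qquad
  B = \frac{E h^{3}}{12\,(1-\nu^{2})} .
\]
% Since B grows like h^3, the wavelength scales as lambda ~ h^{3/4}: in static
% experiments the wavelength is changed by changing the film thickness h.
```

Here E is the film's Young's modulus, h its thickness, ν its Poisson ratio, ρ_ℓ the liquid density and g gravity. The impact experiments described below achieve different wavelengths dynamically, for a film of fixed thickness, rather than by varying h.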

The researchers dropped steel spheres onto ultra-thin sheets of polystyrene, floating on water, and filmed what happened with a high-speed video camera. They found that ballistic impact caused the floating sheet to retract inwards, and the compression associated with this retraction induced buckling – resulting in a striking pattern of radial wrinkles.

Importantly, the distance between neighbouring wrinkles was found to evolve in time (equivalently, the number of wrinkles decreased in time), in stark contrast to previous observations in both static indentation experiments and dynamic impact experiments. Through mathematical modelling and systematic experimentation, the researchers found that the inertial response of the liquid substrate (i.e. the water on which the sheet floats) controls the evolution of the wrinkle pattern.

This demonstration of wrinkle coarsening suggests that a dynamic substrate stiffness may provide a means of breaking away from the single, static wavelength that is selected by material properties alone, opening the route towards dynamic tuning of wrinkle-patterned topographies. This novel method for tunable wrinkle formation may prove to be a useful fabrication technique in a range of engineering applications that require regular, patterned topographies.

In their work, the researchers demonstrated that rapid coarsening of the wrinkle wavelength occurs at wavelengths on the order of 100 microns, making it readily observable. The uncovered mechanism for wrinkle formation is scale-independent, however, which indicates that this dynamic method of altering surface structure could be reproduced at the nanoscale, where the lengthscale of the wrinkles would be small enough for use in optical applications such as photonic materials, which require periodic structures with period comparable to the wavelength of visible light. The reported dynamic wrinkling of thin, floating sheets is fast though, occurring within tens of milliseconds – blink twice and you’ll miss it.

A film showing the dynamic wrinkling of thin sheets

Tuesday, 19 November 2019

How is the global energy challenge related to chaos and machine learning?

Energy production is arguably one of the most important factors underlying modern civilisation. Energy allows us to inhabit inhospitable parts of the Earth in relative comfort (using heating and air conditioning), create large cities (by efficiently transporting food and pumping water), or maintain our health (providing the energy for water purification). It also connects people by allowing long-distance travel and facilitating digital communication.

But energy is a sensitive subject at the moment, mainly for two reasons. Firstly, the way we currently produce energy is not sustainable: the Earth’s oil, coal, gas, and uranium reserves are finite, and we are tearing through them. Secondly, it is widely acknowledged that burning fossil fuels is affecting the Earth’s climate as we release greenhouse gases into the atmosphere. How we deal with these issues is a vital, but challenging problem.

Tokamaks are nuclear fusion reactors designed to prove the feasibility of fusion as a large-scale and carbon-free source of energy, and they are suggested as one of the potential solutions to the global energy challenge. Nuclear fusion involves controlling plasmas at temperatures of 100 million degrees Celsius, roughly ten times the temperature at the centre of the Sun. At certain plasma parameters, however, the huge temperature gradients produce unwanted turbulence in the tokamak. One of the challenges for the Culham Centre for Fusion Energy (CCFE) is to identify such chaotic scenarios in order to avoid damage to the facility and to optimise the efficiency of energy production by stabilising the plasma.

Attracted by the recent spectacular successes of machine learning techniques for image classification, Debasmita Samaddar, a computational plasma physicist from CCFE, approached Oxford Mathematicians Nicolas Boullé, Vassilios Dallas, and Yuji Nakatsukasa to investigate whether machine learning can be used to help control fusion reactors. The research focused on using machine learning to classify time series as chaotic or not (see Fig. 1).

Figure 1: A non-chaotic (left) and a chaotic (right) time series generated by the Lorenz system.

Contrary to standard machine learning practice, the neural network was trained on a different, simpler set than the testing set of interest, in order to demonstrate the generalisation ability of neural networks in this classification problem. The main challenge is to learn the chaotic features of the training set without overfitting, and then to generalise to the testing data set, which behaves differently. Using a neural network trained on the Lorenz system, a system of three coupled nonlinear ordinary differential equations (ODEs), the researchers were able to classify time series of the Kuramoto-Sivashinsky (KS) equation (see Fig. 2) as chaotic or not with high accuracy. The KS equation arises in a wide range of physical problems, including instabilities in plasmas, and is a characteristic example of a nonlinear PDE that exhibits spatiotemporal chaos.
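As a rough illustration of this pipeline (a minimal sketch, not the authors' code, data or network architecture), one can generate Lorenz trajectories in a chaotic and a non-chaotic parameter regime, label them by regime, and train a small neural network to tell them apart. The use of scikit-learn's MLPClassifier, the parameter values and the trajectory lengths below are illustrative assumptions.

```python
# A minimal sketch (not the authors' code): label Lorenz time series by the
# parameter regime that generated them, then train a small neural network
# classifier. Assumes numpy, scipy and scikit-learn are installed.
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def lorenz_series(rho, sigma=10.0, beta=8.0 / 3.0, t_max=40.0, n_points=400, seed=0):
    """Integrate the Lorenz system and return the x-component time series."""
    rng = np.random.default_rng(seed)
    x0 = rng.normal(size=3)  # random initial condition
    def rhs(t, u):
        x, y, z = u
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]
    t_eval = np.linspace(0.0, t_max, n_points)
    sol = solve_ivp(rhs, (0.0, t_max), x0, t_eval=t_eval, rtol=1e-8)
    return sol.y[0]

# Build a labelled data set: rho = 28 lies in the chaotic regime, while
# rho = 10 gives trajectories that settle onto a stable fixed point.
series, labels = [], []
for i in range(200):
    rho = 28.0 if i % 2 == 0 else 10.0
    s = lorenz_series(rho, seed=i)
    series.append((s - s.mean()) / s.std())  # normalise each series
    labels.append(1 if rho == 28.0 else 0)
X, y = np.array(series), np.array(labels)

# Train a small feed-forward network directly on the normalised time series.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

The harder step reported in the article – generalising from a classifier trained only on Lorenz data to time series of the Kuramoto-Sivashinsky equation – is not reproduced in this toy example.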

 

Figure 2: A spatiotemporal chaotic solution of the Kuramoto-Sivashinsky equation (left) and its corresponding chaotic energy time series (right).

This important scientific result, from a cross-disciplinary collaboration facilitated by the Industrially Focused Mathematical Modelling Centre for Doctoral Training in Oxford, suggests that neural networks are able to identify the critical regimes that a fusion reactor might exhibit, paving the way to resolving central problems about the stability of CCFE's fusion reactors. It will be of great interest to see whether this work proves vital for the design of the next generation of fusion reactors, helping them provide a sustainable energy solution.

Monday, 11 November 2019

When models are wrong, but useful

Applied mathematics provides a collection of methods that allow scientists and engineers to make the most of experimental data, in order to answer their scientific questions and make predictions. The key link between experiments, understanding, and predictions is a mathematical model: you can find many examples in our case studies. Experimental data can be used to calibrate a model by inferring the parameters of a real-world system from its observed behaviour. The calibrated model then enables scientists to make quantitative predictions and to quantify the uncertainty in those predictions.

There are two important caveats. The first is that, for any one scientific phenomenon, there can be as many models as there are scientists (or, probably, more). So, which should you choose? The second is that “all models are wrong, but some are useful". Often, those that include a high level of detail, and have the potential to make accurate quantitative predictions, are far too complicated to efficiently simulate, or for mathematicians to analyse. On the other hand, simple models are often amenable to analysis, but may not include all the important mechanisms and so cannot make accurate predictions. So how does one go from experimental observations to an understanding of the underlying science, when many different models of variable accuracy and tractability are available to reason with?

In a forthcoming paper, accepted for publication in the SIAM/ASA Journal on Uncertainty Quantification, Oxford Mathematicians Thomas Prescott and Ruth Baker present a multifidelity approach to model calibration. The calibration procedure often requires a very large number of model simulations to ensure that the parameter estimate is reliable (i.e. has a low variance). Because of this, it may not be feasible to calibrate the accurate model within a reasonable timeframe. Suppose there exists a second model which is much quicker to simulate, but inaccurate. The increased simulation speed means that it is more practical to calibrate this second model to the data. However, although the inaccurate model can be calibrated more quickly, its inaccuracy means that the resulting estimate of the system’s parameters is likely to be biased (see Fig. 1).

The model calibration task aims to produce an unbiased estimate of the model’s parameters while balancing two conflicting goals: the resulting estimates should have reasonably small variance, and they should be produced in a reasonably short time. The key result of this project combines each model’s strengths (the short simulation time of one and the accuracy of the other) to answer the following question: how can the inaccurate model be used to help calibrate the accurate model? In particular, how much simulation time should be given to each of the models, and how should those simulations be combined? The result is a model calibration algorithm, tuned according to a formula that determines how to optimally share the computational effort between the two models. This algorithm enables the accurate model to be calibrated with an unbiased parameter estimate and with a significantly improved trade-off between variance and speed.
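The paper's algorithm is specific to model calibration, but the underlying idea of splitting a simulation budget between a cheap, biased model and a small number of paired runs of the expensive model can be illustrated with a toy two-fidelity Monte Carlo estimator. This is a simplified sketch under invented models, not the algorithm from the paper; the functions hi_fi and lo_fi and the budget split below are made up for illustration.

```python
# Illustrative sketch only (not the paper's algorithm): an unbiased two-fidelity
# Monte Carlo estimate. Many cheap, biased simulations are combined with a small
# number of paired expensive simulations that correct the bias.
import numpy as np

rng = np.random.default_rng(1)

def hi_fi(theta, xi):
    """Expensive, accurate model output for parameter theta and noise xi."""
    return np.sin(theta) + 0.1 * xi

def lo_fi(theta, xi):
    """Cheap approximation: captures the trend but is systematically biased."""
    return theta - theta**3 / 6 + 0.1 * xi

theta = 0.8
n_cheap, n_paired = 10_000, 200  # budget split between the two fidelities

xi_cheap = rng.normal(size=n_cheap)
xi_paired = rng.normal(size=n_paired)

cheap_mean = lo_fi(theta, xi_cheap).mean()
# Bias correction from paired runs; feeding the same noise to both models
# keeps the variance of the correction term small.
correction = (hi_fi(theta, xi_paired) - lo_fi(theta, xi_paired)).mean()

estimate = cheap_mean + correction  # unbiased for the accurate model's mean
print("two-fidelity estimate:", estimate, " exact mean:", np.sin(theta))
```

The trade-off described above appears here directly: spending more of the budget on paired runs reduces the variance of the correction but costs more time, and the paper's contribution is a formula for making that split optimally in the calibration setting.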

The results can be applied to speed up the calibration of many types of complicated mathematical models against experimental data, whenever a simpler alternative model can be used to help. These applications exist in fields as varied as ecology, physiology, biochemistry, genetics, engineering, physics, and mathematical finance.

 

Fig 1. In blue are 500 estimates of a parameter n, each generated from 10,000 simulations of a slow, accurate model and taking around 40 minutes to produce. In orange are 500 estimates that each took only around 7 minutes to generate by simulating a fast, inaccurate model instead; these estimates are biased (i.e. centred around the wrong value). The estimates in green are where, for every 10 simulations of the inaccurate model, we also produced one simulation of the accurate model. This allows us to remove the bias. But here we see the effect of the trade-off: while the total simulation time is greatly reduced relative to the accurate model (to 10 minutes), this comes at the cost of an increased variance (i.e. spread) of the estimate.

Thursday, 7 November 2019

Oxford Mathematics 2nd Year Student Lecture on Differential Equations now online

We continue with our series of Student Lectures with this first lecture in the 2nd year Course on Differential Equations. Professor Philip Maini begins with a recap of the previous year's work before moving on to give examples of ordinary differential equations which exhibit either unique, non-unique, or no solutions. This leads us to Picard's Existence and Uniqueness Theorem...
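As a flavour of the kind of example discussed (a standard illustrative case, not necessarily the one used in the lecture), non-uniqueness can already occur for a very simple initial value problem whose right-hand side is continuous but not Lipschitz, which is precisely the hypothesis that Picard's theorem adds.

```latex
% A standard example of non-uniqueness (illustrative; not necessarily the
% lecture's example): the right-hand side is continuous but not Lipschitz in y
% at y = 0, so Picard's theorem does not apply there.
\[
  \frac{\mathrm{d}y}{\mathrm{d}x} = 3\, y^{2/3}, \qquad y(0) = 0 ,
\]
% is solved both by y(x) = 0 and by y(x) = x^{3}. When f(x, y) is continuous
% and Lipschitz in y near the initial data, Picard's theorem guarantees a
% unique local solution.
```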

This latest student lecture is the fifth in our series shining a light on the student experience in Oxford Mathematics. We look forward to your feedback. The full course overview and materials can be found here:

https://courses.maths.ox.ac.uk/node/44002

Wednesday, 6 November 2019

James Maynard awarded the 2020 Cole Prize in Number Theory

Oxford Mathematician James Maynard has been awarded the 2020 Cole Prize in Number Theory by the American Mathematical Society (AMS) "for his many contributions to prime number theory."

James is one of the leading lights in world mathematics, having made dramatic advances in analytic number theory in the years immediately following his 2013 doctorate. These advances have brought him worldwide attention in mathematics and beyond and many prizes including the European Mathematical Society Prize, the Ramanujan Prize and the Whitehead Prize. In 2017 he was appointed Research Professor in Oxford.

James paid tribute to the many people whose work laid the foundations for his own discoveries and the people who have guided him in his career, from his parents to school teachers and university supervisors. He added: "the field of analytic number theory feels revitalised and exciting at the moment with new ideas coming from many different people, and hopefully this prize might inspire younger mathematicians to continue this momentum and make new discoveries about the primes."

The Cole Prize in Number Theory recognizes a notable research work in number theory that has appeared in the last six years. The work must be published in a recognized, peer-reviewed journal.

Friday, 1 November 2019

Ehud Hrushovski awarded the Heinz Hopf Prize

Oxford Mathematician Ehud Hrushovski has been awarded the 2019 Heinz Hopf Prize for his outstanding contributions to model theory and its applications to algebra and geometry.

The Heinz Hopf Prize at ETH Zurich honours outstanding scientific achievements in the field of pure mathematics. The prize is awarded every two years with the recipient giving the Heinz Hopf Lecture. This year Ehud spoke on 'Logic and geometry: the model theory of finite fields and difference fields.'
