News

Monday, 11 November 2019

When models are wrong, but useful

Applied mathematics provides a collection of methods that allow scientists and engineers to make the most of experimental data, in order to answer their scientific questions and make predictions. The key link between experiments, understanding, and predictions is a mathematical model: you can find many examples in our case studies. Experimental data can be used to calibrate a model by inferring the parameters of a real-world system from its observed behaviour. The calibrated model then enables scientists to make quantitative predictions and to quantify the uncertainty in those predictions.

There are two important caveats. The first is that, for any one scientific phenomenon, there can be as many models as there are scientists (or, probably, more). So, which should you choose? The second is that “all models are wrong, but some are useful”. Often, models that include a high level of detail, and so have the potential to make accurate quantitative predictions, are far too complicated to simulate efficiently or to analyse mathematically. On the other hand, simple models are often amenable to analysis, but may not include all the important mechanisms and so cannot make accurate predictions. So how does one go from experimental observations to an understanding of the underlying science, when many different models of varying accuracy and tractability are available to reason with?

In a forthcoming paper, accepted for publication in the SIAM/ASA Journal on Uncertainty Quantification, Oxford Mathematicians Thomas Prescott and Ruth Baker present a multifidelity approach to model calibration. The model calibration procedure often requires a very large number of model simulations to ensure that the parameter estimate is reliable (i.e. has a low variance). Because of this, it may not be feasible to calibrate the accurate model within a reasonable timeframe. Suppose there exists a second model which is much quicker to simulate, but inaccurate. The increased simulation speed means that it is more practical to calibrate this second model to the data. However, although the inaccurate model can be calibrated more quickly, its inaccuracy means that the resulting estimate of the system’s parameters is likely to be biased (see Fig. 1).

The model calibration task aims to produce an unbiased estimate of the model’s parameters while balancing two conflicting aims: ensuring that the resulting estimates have reasonably small variance, and that they are produced reasonably quickly. The key result of this project addresses the following question: how can the inaccurate model be used to help calibrate the accurate model, combining the strengths of each (the short simulation time of one, and the accuracy of the other)? In particular, how much simulation time should be devoted to each model, and how should the resulting simulations be combined? The answer is a model calibration algorithm, tuned according to a formula that determines how to share the computational effort optimally between the two models. This algorithm enables the accurate model to be calibrated with an unbiased parameter estimate and with a significantly improved trade-off between variance and speed.
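
To give a flavour of the idea, here is a minimal sketch in Python (illustrative only, and not the algorithm from the paper: the toy "models", the bias, and the sample sizes are all invented). A large number of cheap, biased simulations does most of the work, while a small number of paired accurate simulations estimates and removes the bias, in the spirit of a control variate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "models": each returns a noisy summary statistic for a given parameter
# theta. The accurate model is assumed slow and unbiased; the inaccurate
# model is fast but biased. Both are invented purely for illustration.
def simulate_accurate(theta, size):
    return theta + rng.normal(0.0, 1.0, size)

def simulate_inaccurate(theta, size):
    return theta + 0.3 + rng.normal(0.0, 1.0, size)   # constant bias of 0.3

theta = 2.0                     # "true" parameter we are trying to recover
n_fast, n_slow = 10_000, 1_000  # e.g. ten fast runs for every slow run

# Cheap estimate: low variance, but biased.
cheap = simulate_inaccurate(theta, n_fast).mean()

# Bias correction: a modest number of paired accurate/inaccurate runs
# estimates the discrepancy between the two models. (In practice the pairs
# would share random inputs to keep the variance of this correction small.)
correction = (simulate_accurate(theta, n_slow) - simulate_inaccurate(theta, n_slow)).mean()

multifidelity = cheap + correction   # unbiased for the accurate model's value
print(f"inaccurate model only: {cheap:.3f} (biased)")
print(f"multifidelity        : {multifidelity:.3f} (bias removed)")
```

Most of the computational budget is spent on the fast model, so the combined estimate is much cheaper than using the accurate model alone, at the cost of some extra variance in the correction term, which is exactly the trade-off visible in Fig. 1.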

The results can be applied to speed up the calibration of many types of complicated mathematical models against experimental data, whenever a simpler alternative model can be used to help. These applications exist in fields as varied as ecology, physiology, biochemistry, genetics, engineering, physics, and mathematical finance.

 

Fig 1. In blue are 500 estimates of a parameter n, each generated from 10,000 simulations of a slow, accurate model and taking around 40 minutes to produce. In orange are 500 estimates that each took only around 7 minutes to generate, by simulating a fast, inaccurate model instead. These estimates are biased (i.e. centred around the wrong value). The estimates in green combine the two models: for every 10 simulations of the inaccurate model, we also produced one simulation of the accurate model, which allows us to remove the bias. But here we see the effect of the trade-off: while the total simulation time is greatly reduced relative to the accurate model (to 10 minutes), this comes at the cost of an increased variance (i.e. spread) of the estimates.

Thursday, 7 November 2019

Oxford Mathematics 2nd Year Student Lecture on Differential Equations now online

We continue with our series of Student Lectures with this first lecture in the 2nd year Course on Differential Equations. Professor Philip Maini begins with a recap of the previous year's work before moving on to give examples of ordinary differential equations which exhibit either unique, non-unique, or no solutions. This leads us to Picard's Existence and Uniqueness Theorem...
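
As a taste of the kind of example discussed (a standard textbook illustration, not necessarily the one used in the lecture): the initial value problem $$\frac{dy}{dt}=y^{2/3}, \qquad y(0)=0$$ is solved both by $y(t)\equiv 0$ and by $y(t)=(t/3)^3$. The right-hand side fails to be Lipschitz continuous at $y=0$, so Picard's theorem does not guarantee a unique solution there.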

This latest student lecture is the fifth in our series shining a light on the student experience in Oxford Mathematics. We look forward to your feedback. The full course overview and materials can be found here:

https://courses.maths.ox.ac.uk/node/44002


Wednesday, 6 November 2019

James Maynard awarded the 2020 Cole Prize in Number Theory

Oxford Mathematician James Maynard has been awarded the 2020 Cole Prize in Number Theory by the American Mathematical Society (AMS) "for his many contributions to prime number theory."

James is one of the leading lights in world mathematics, having made dramatic advances in analytic number theory in the years immediately following his 2013 doctorate. These advances have brought him worldwide attention in mathematics and beyond and many prizes including the European Mathematical Society Prize, the Ramanujan Prize and the Whitehead Prize. In 2017 he was appointed Research Professor in Oxford.

James paid tribute to the many people whose work laid the foundations for his own discoveries and the people who have guided him in his career, from his parents to school teachers and university supervisors. He added: "the field of analytic number theory feels revitalised and exciting at the moment with new ideas coming from many different people, and hopefully this prize might inspire younger mathematicians to continue this momentum and make new discoveries about the primes."

The Cole Prize in Number Theory recognizes a notable research work in number theory that has appeared in the last six years. The work must be published in a recognized, peer-reviewed journal.

Friday, 1 November 2019

Ehud Hrushovski awarded the Heinz Hopf Prize

Oxford Mathematician Ehud Hrushovski has been awarded the 2019 Heinz Hopf Prize for his outstanding contributions to model theory and their application to algebra and geometry.

The Heinz Hopf Prize at ETH Zurich honours outstanding scientific achievements in the field of pure mathematics. The prize is awarded every two years with the recipient giving the Heinz Hopf Lecture. This year Ehud spoke on 'Logic and geometry: the model theory of finite fields and difference fields.'

Thursday, 31 October 2019

Applied Pure at the Mathematical Institute, Oxford: Music & Light Symbiosis no.3 - An Art Exhibition and a Light & Music Concert


Katharine Beaugié - Light Sculpture
Medea Bindewald - Harpsichord
Curated by Balázs Szendrői

Concert: 18 November, 6.45pm followed by a reception
Exhibition: 18th November – 6th December 2019, Mon-Fri, 8am-6pm

Applied Pure is a unique collaboration between light sculptor Katharine Beaugié and international concert harpsichordist Medea Bindewald, combining the patterns made by water and light with the sound of harpsichord music in a mathematical environment.

Katharine Beaugié will also be exhibiting a new series of large-scale photograms (photographic shadows), displaying the patterns of the natural phenomena of human relationship with water and light.

The Programme of music for harpsichord and water includes the composers: Domenico Scarlatti (1685-1757), Johann Jakob Froberger (1616-1667), Enno Kastens (b 1967) and Johann Sebastian Bach (1685-1750).

The concert and exhibition are free. For more information, please click this link.

Image of Drop | God 2018

 

Friday, 25 October 2019

Martin Bridson wins Leroy P. Steele Prize for Mathematical Exposition from the American Mathematical Society

Oxford Mathematician Martin Bridson together with co-author André Haefliger has won the 2020 Steele Prize for Mathematical Exposition awarded by the American Mathematical Society for the book 'Metric Spaces of Non-positive Curvature', published by Springer-Verlag in 1999. 

In the words of the citation "Metric Spaces of Non-positive Curvature is the authoritative reference for a huge swath of modern geometric group theory. It realizes Mikhail Gromov's vision of group theory studied via geometry, has been the fundamental textbook for many graduate students learning the subject, and has paved the way for the developments of the subsequent decades."

Professor Martin Bridson is Whitehead Professor of Pure Mathematics in Oxford, a Fellow of Magdalen College and President of the Clay Mathematics Institute. His research interests lie in geometric group theory, low-dimensional topology, and spaces of non-positive curvature. Born on the Isle of Man, in 2016 he became only the second Manxman ever to be elected to the Royal Society, after Edward Forbes.

Wednesday, 23 October 2019

Introductory Calculus - watch an Oxford Mathematics 1st year Student Lecture

As part of our 'going behind the scenes' at Oxford Mathematics, we offer the fourth in our series of real student lectures. In our latest lecture we give you a taste of the Oxford Mathematics Student experience as it begins in its very first week.

This is the first lecture in the Introductory Calculus course. Dan Ciubotaru summarises how the course works and what we expect the new students to already know in order to ensure all of them are prepared for the more complex work ahead. We will be filming two more lectures for second year students very shortly. 

An overview of the course and the course materials are here:
https://courses.maths.ox.ac.uk/node/43879

Tuesday, 22 October 2019

Can mathematical modelling help make lithium-ion batteries better than “good enough”?

Have you ever wished that the battery on your phone would last longer? That you could charge it up more rapidly? Maybe you have thought about buying an electric vehicle, but were filled with range anxiety – the overwhelming fear that the battery will run out before you reach your destination, leaving you stranded? Oxford Mathematicians are hard at work demonstrating that mathematics may provide the key to help tackle problems faced by the battery industry. Robert Timms talks about the battery research going on in Oxford.

"There is a long history of battery research at Oxford: this month Professor John B Goodenough received the Nobel Prize in Chemistry for his work at Oxford University that made possible the development of lithium-ion batteries. His identification and development of Lithium Cobalt Oxide as a cathode material paved the way for the rechargeable devices such as smartphones, laptops and tablets that are now ubiquitous in today’s society. Given that Oxford can be viewed as the birthplace of rechargeable lithium-ion batteries, it is natural that the Oxford Mathematical Institute, with its long association with doing industrial mathematics, is now home to a vibrant battery research community focussed on the mathematical modelling of batteries.

Mathematical models of batteries can be broadly categorised into two groups: equivalent circuit models, which make analogies with traditional circuit components such as resistors and capacitors; and electrochemical models, which describe the physical processes of mass and charge transport within the cell. Equivalent circuit models can be solved rapidly on cheap computing hardware, making them the ideal choice for real-time battery management applications. However, they provide limited physical insight into battery behaviour. On the other hand, electrochemical models are computationally expensive, but provide a much more detailed description of the internal physics of battery operation which can be used for improving cell design.
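
As an illustration of the first category, here is a minimal sketch of a first-order Thevenin-style equivalent circuit model in Python (the structure is standard, but the parameter values are invented and this is not a model used by the Oxford group): an open-circuit voltage in series with a resistor and a single resistor-capacitor pair.

```python
import numpy as np

def ecm_voltage(current, dt, ocv=3.7, R0=0.01, R1=0.02, C1=2000.0):
    """Terminal voltage (V) for a discharge current profile (A), sampled
    every dt seconds. Positive current means discharge. Illustrative values:
    open-circuit voltage ocv, series resistance R0, RC pair (R1, C1)."""
    v_rc = 0.0        # voltage across the resistor-capacitor pair
    voltages = []
    for i in current:
        # Explicit Euler step for the RC branch: C1 dv/dt = i - v / R1
        v_rc += dt * (i - v_rc / R1) / C1
        voltages.append(ocv - R0 * i - v_rc)
    return np.array(voltages)

# One hour of constant 1 A discharge, sampled every second.
t = np.arange(0.0, 3600.0, 1.0)
v = ecm_voltage(np.ones_like(t), dt=1.0)
print(f"terminal voltage after one hour: {v[-1]:.3f} V")
```

A realistic equivalent circuit model would also let the open-circuit voltage and resistances depend on the state of charge, with all parameters fitted to data; the point is simply that the whole model amounts to a handful of cheap updates, which is why it suits real-time battery management.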

Figure 1: Equivalent circuit models describe battery behaviour using standard circuit components, and are used in real-time applications such as estimating State of Charge (how much battery life you have left). They are easy to interpret and computationally cheap to solve, but offer limited physical insight.

In order to develop electrochemical models that describe the physical processes that underpin battery operation it is necessary to account for effects that vary over length scales on the order of microns – this is similar to the breadth of a human hair! However, understanding how batteries operate as part of a device requires modelling on the length scale of centimetres. In order to bridge the length scale gap, Oxford Mathematicians use a technique called homogenisation which allows the description of the physics at the microscale to be systematically upscaled into effective equations on the macroscale.
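
A textbook illustration of the idea (not the battery equations themselves): for one-dimensional diffusion through a medium whose diffusivity $D$ oscillates on a fine scale $\varepsilon$, $$\frac{d}{dx}\left(D\!\left(\frac{x}{\varepsilon}\right)\frac{du_\varepsilon}{dx}\right)=f(x),$$ the solutions converge as $\varepsilon\to 0$ to those of the effective macroscale equation $$D_{\mathrm{eff}}\,\frac{d^2u}{dx^2}=f(x), \qquad D_{\mathrm{eff}}=\left(\int_0^1\frac{dy}{D(y)}\right)^{-1},$$ where $D$ has period 1. The fine-scale structure survives only through a single homogenised coefficient, here the harmonic mean of $D$ over one period; the battery models are upscaled in the same spirit, although the resulting effective equations are considerably more involved.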

 

Figure 2: A typical single-layer pouch cell design (not to scale). Lithium-ion batteries are made up of a number of components: a negative current collector, porous negative electrode, separator, porous positive electrode and a positive current collector. The porous electrodes are made up of solid particles, which can be modelled as spheres whose radius is of the order of tens of microns (much smaller than shown in this sketch). Physical processes on the particle scale must be upscaled to give effective equations for the behaviour of the cell as a whole. The dimensions of the width and height of the pouch cell, labelled here as Ly and Lz, are of the order of tens of centimetres.

Even after the electrochemical models have been upscaled to the cell level they still comprise a large collection of partial differential equations, so can be computationally expensive to solve and difficult to interpret directly. Starting with complicated electrochemical models and exploiting techniques such as asymptotic analysis, we systematically derive simplified physics-based models, which provide a useful theoretical middle ground between electrochemical and equivalent circuit models to support battery management, on-line diagnostics, and cell design. Using these simplified models we can better understand the underlying principles of battery operation and help to inform the design of new and improved lithium-ion batteries.
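
To give a feel for how such simplified physics-based models are used, here is a short sketch using the open-source PyBaMM package, following its standard quick-start pattern. The article does not name a specific software tool, so the choice of package here is an assumption made purely for illustration.

```python
import pybamm  # open-source battery modelling package (assumed choice)

# A simplified physics-based model: the Single Particle Model with
# electrolyte (SPMe), one of the reduced lithium-ion models.
model = pybamm.lithium_ion.SPMe()

# Simulate one hour of operation (times in seconds) with default parameters
# and plot terminal voltage and other internal variables.
sim = pybamm.Simulation(model)
sim.solve([0, 3600])
sim.plot()
```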

So, next time you are using your phone, think about all of the interesting mathematics being used to make your battery last longer."

Notes:
Battery research at the Mathematical Institute is conducted in collaboration with engineering groups across Oxford. The University of Oxford is a founding partner of the Faraday Institution – the UK’s independent institute for electrochemical energy storage research. This partnership has allowed us to develop exciting research links with a number of universities and industrial bodies across the UK. We also benefit from a number of industrial links, working with national and international partners BBOX, Nexeon and Siemens.

For more information about battery research at Oxford and its partners please visit the following links:

Oxford Mathematics Battery Modelling

Dave Howey Group

Charles Monroe Group 

Patrick Grant Group

Oxford Research Software Engineering

Open Source Battery Modelling Software 

Faraday Institution 

Friday, 18 October 2019

Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Algorithms opens its doors

This autumn we welcomed the first students on the EPSRC CDT in Mathematics of Random Systems: Analysis, Modelling and Algorithms. The CDT (Centre for Doctoral Training) is a partnership between the Mathematical Institute and the Department of Statistics here in Oxford, and the Department of Mathematics, Imperial College London. Its ambition is to train the next generation of academic and industry experts in stochastic modelling, advanced computational methods and Data Science. 

In the first year, students follow four core courses on Foundation areas as well as three elective courses, and undertake a supervised research project, which then evolves into a PhD thesis. Our first cohort of 16 students joined in September for an introductory week of intensive courses in Oxford on stochastic analysis, data science, function spaces and programming. Course director Rama Cont (Oxford), and co-directors Thomas Cass (Imperial) and Ben Hambly (Oxford) put the students through their paces with the first week ending with a round of junk yard golf - a perfect tool for applying mathematics skills to the world around us.

Over the year the students will spend some of their days on courses at Oxford and some at Imperial, take part in residential courses in the UK and overseas while all the time firming up their research plans with supervisors at their home department.

In addition to our main funding from EPSRC, we have received support from our industrial partners, including Deutsche Bank, JP Morgan and InstaDeep. We are excited to see our first cohort of students start their 4-year journeys. Applications are now open for fully funded studentships to start in Autumn 2020. Find out more.

 

Wednesday, 16 October 2019

Iterated integrals on elliptic and modular curves

Oxford Mathematician Ma Luo talks about his work on constructing iterated integrals, which generalize the usual integral, to study elliptic and modular curves.

Usual integrals
Given a path $\gamma$ and a differential 1-form $\omega$ on a space $M$, we can parametrize the path $$\gamma:[0,1]\to M, \qquad t\mapsto\gamma(t)$$ and write $\omega$ as $f(t)dt$, then define the usual integral $$\int_\gamma \omega=\int_0^1 f(t)dt.$$ If we have two loops $\alpha$ and $\beta$ based at the same point $x$ on $M$, then $$\int_{\alpha\beta}\omega=\int_\alpha \omega+\int_\beta \omega=\int_{\beta\alpha} \omega.$$ The order of the loops from which we integrate does not affect the result. Therefore, the usual integral can only detect commutative, i.e. abelian, information in the fundamental group $\pi_1(M,x)$.

Iterated integrals
Kuo-Tsai Chen discovered a generalization of the usual integral as follows. Given a path $\gamma$ and differential 1-forms $\omega_1,\cdots,\omega_r$ on $M$, write each $\omega_j$ as $f_j(t)dt$ on the parametrized path $\gamma(t)$, and define an iterated integral by \begin{equation}\label{def} \int_\gamma \omega_1\cdots\omega_r=\idotsint\limits_{0\le t_1\le \cdots \le t_r\le 1} f_1(t_1)f_2(t_2)\cdots f_r(t_r) dt_1\cdots dt_r. \end{equation} It is a time-ordered integral. Now for the two loops $\alpha$ and $\beta$, we have $$\int_{\alpha\beta}\omega_1\omega_2-\int_{\beta\alpha}\omega_1\omega_2= \begin{vmatrix} \int_\alpha\omega_1 & \int_\beta\omega_1\\ \int_\alpha\omega_2 & \int_\beta\omega_2 \end{vmatrix},$$ which is often nonzero. Therefore, iterated integrals are sensitive to the order, and they must capture some non-abelian information. But what kind of non-abelian information?
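
A quick numerical illustration (not from the article): take the path $\gamma(t)=t$ on $[0,1]$ with $\omega_1=dt$ (so $f_1=1$) and $\omega_2=t\,dt$ (so $f_2=t$). The two orderings of the time-ordered integral give different values, $1/3$ and $1/6$, and their sum recovers the product of the usual integrals $\int_\gamma\omega_1\int_\gamma\omega_2=1/2$.

```python
from scipy import integrate

# Illustrative 1-forms on the path gamma(t) = t, 0 <= t <= 1:
# omega_1 = dt (f1 = 1) and omega_2 = t dt (f2 = t).
f1 = lambda t: 1.0
f2 = lambda t: t

def iterated(fa, fb):
    """Time-ordered integral of fa(t1) fb(t2) over 0 <= t1 <= t2 <= 1."""
    inner = lambda t2: integrate.quad(fa, 0.0, t2)[0] * fb(t2)
    return integrate.quad(inner, 0.0, 1.0)[0]

print(iterated(f1, f2))   # ~ 1/3
print(iterated(f2, f1))   # ~ 1/6: the order genuinely matters
```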

Differential equations and nilpotence
We can reformulate the definition of the iterated integral as the element $y_r$ in the solution of a system of differential equations: \begin{align*} dy_0/dt &=0\\ dy_1/dt &=f_1\cdot y_0\\ dy_2/dt &=f_2\cdot y_1\\ \cdots & \\ dy_r/dt &=f_r\cdot y_{r-1} \end{align*} where we insist $y_0(t)\equiv 1$ so that $y_r$ agrees with our previous definition. The auxiliary functions $\{y_0,y_1,\cdots,y_{r-1}\}$ allow us to rewrite the system in the following way: $$ \frac{d}{dt}(y_0,y_1,\cdots,y_r)=(y_0,y_1,\cdots,y_r) \begin{pmatrix} 0 & f_1 & 0 & \cdots & 0 \\ 0 & 0 & f_2 & \ddots & \vdots \\ \vdots & \vdots & 0 & \ddots & 0 \\ \vdots & \vdots & \vdots & \ddots & f_r \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix} $$ where the matrix on the right is nilpotent (some power of it is 0). In general, the solutions of such a system exist locally, for a short time; as time progresses, this local information is transferred globally, and the global behaviour of the solutions is dictated by the system. In our case, iterated integrals are limited by the nilpotence property. Perhaps surprisingly, even with this limited non-abelian information, one finds they have interesting applications to number theory, most notably in Minhyong Kim's work, which uses $p$-adic iterated integrals (local) to help find rational points on curves (global).
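
The reformulation can be checked numerically in the same illustrative example as above: solving the triangular system with $f_1=1$ and $f_2=t$ reproduces the value $1/3$ of the time-ordered integral $\int_\gamma\omega_1\omega_2$.

```python
from scipy.integrate import solve_ivp

# dy0/dt = 0, dy1/dt = f1*y0, dy2/dt = f2*y1 with f1 = 1, f2 = t,
# y0(0) = 1 and y1(0) = y2(0) = 0; each equation feeds only on the
# previous component, reflecting the nilpotent structure.
def rhs(t, y):
    y0, y1, y2 = y
    return [0.0, 1.0 * y0, t * y1]

sol = solve_ivp(rhs, [0.0, 1.0], [1.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[2, -1])   # ~ 1/3, matching the iterated integral above
```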

Algebraic iterated integrals and beyond nilpotence
Elliptic curves and modular curves both feature prominently in the proof of Fermat's Last Theorem by Andrew Wiles and are extensively studied objects in number theory. My recent work (PhD thesis) constructs algebraic iterated integrals on elliptic curves and the modular curve (of level one). The construction proceeds in a similar fashion to iteratively solving the system of differential equations above. In the case of elliptic curves, my work is based on previous work of Levin--Racinet. The algebraic iterated integrals on elliptic curves lead naturally to elliptic polylogarithms, which generalize the classical polylogarithms \begin{align*} \mathrm{Li}_k(x):&=\sum_{n=1}^\infty\frac{x^n}{n^k},\qquad k\ge 1 \\ &=\int_0^x \frac{dz}{1-z}\underbrace{\frac{dz}{z}\cdots\frac{dz}{z}}_{(k-1)\text{ times}}. \end{align*}
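
As a small numerical sanity check of the classical series (illustrative only), the partial sums of $\mathrm{Li}_2(x)$ at $x=1/2$ converge to the well-known closed form $\mathrm{Li}_2(1/2)=\pi^2/12-(\log 2)^2/2$.

```python
import math

# Partial sum of the dilogarithm series Li_2(x) at x = 1/2, compared with
# the known closed form pi^2/12 - (log 2)^2 / 2.
x = 0.5
series = sum(x**n / n**2 for n in range(1, 200))
closed_form = math.pi**2 / 12 - math.log(2)**2 / 2
print(series, closed_form)   # agree to many decimal places
```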

In the case of the modular curve, one needs to go beyond nilpotence, by adding some prescribed reductive (more complicated non-abelian) data, thereby constructing iterated integrals with coefficients. Specifically, algebraic iterated integrals of modular forms are constructed. They provide multiple modular values, which belong to a special class of numbers called periods. These periods appear not only in number theory, but also in quantum field theory, and in the study of motives. Francis Brown has proposed a framework where Galois theory of periods can be studied. Just as symmetries of algebraic numbers can be deduced from their defining equations, many relations between these periods result from structural properties of their defining iterated integrals. Our goal is to understand these structures and then connect them back to relations between periods.
