News

Friday, 25 October 2019

Martin Bridson wins Leroy P. Steele Prize for Mathematical Exposition from the American Mathematical Society

Oxford Mathematician Martin Bridson, together with co-author André Haefliger, has won the 2020 Steele Prize for Mathematical Exposition awarded by the American Mathematical Society for the book 'Metric Spaces of Non-positive Curvature', published by Springer-Verlag in 1999.

In the words of the citation "Metric Spaces of Non-positive Curvature is the authoritative reference for a huge swath of modern geometric group theory. It realizes Mikhail Gromov's vision of group theory studied via geometry, has been the fundamental textbook for many graduate students learning the subject, and has paved the way for the developments of the subsequent decades."

Professor Martin Bridson is Whitehead Professor of Pure Mathematics in Oxford, a Fellow of Magdalen College and President of the Clay Mathematics Institute. His research interests lie in geometric group theory, low-dimensional topology, and spaces of non-positive curvature. Martin was born on the Isle of Man; in 2016 he became only the second Manxman ever to be elected to the Royal Society, after Edward Forbes.

Wednesday, 23 October 2019

Introductory Calculus - watch an Oxford Mathematics 1st year Student Lecture

As part of our 'going behind the scenes' at Oxford Mathematics, we offer the fourth in our series of real student lectures. In our latest lecture we give you a taste of the Oxford Mathematics student experience as it begins in its very first week.

This is the first lecture in the Introductory Calculus course. Dan Ciubotaru summarises how the course works and what we expect the new students to already know in order to ensure all of them are prepared for the more complex work ahead. We will be filming two more lectures for second year students very shortly. 

An overview of the course and the course materials are here:
https://courses.maths.ox.ac.uk/node/43879

Tuesday, 22 October 2019

Can mathematical modelling help make lithium-ion batteries better than “good enough”?

Have you ever wished that the battery on your phone would last longer? That you could charge it up more rapidly? Maybe you have thought about buying an electric vehicle, but were filled with range anxiety – the overwhelming fear that the battery will run out before you reach your destination, leaving you stranded? Oxford Mathematicians are hard at work demonstrating that mathematics may hold the key to tackling problems faced by the battery industry. Robert Timms talks about the battery research going on in Oxford.

"There is a long history of battery research at Oxford: this month Professor John B Goodenough received the Nobel Prize in Chemistry for his work at Oxford University that made possible the development of lithium-ion batteries. His identification and development of Lithium Cobalt Oxide as a cathode material paved the way for the rechargeable devices such as smartphones, laptops and tablets that are now ubiquitous in today’s society. Given that Oxford can be viewed as the birthplace of rechargeable lithium-ion batteries, it is natural that the Oxford Mathematical Institute, with its long association with doing industrial mathematics, is now home to a vibrant battery research community focussed on the mathematical modelling of batteries.

Mathematical models of batteries can be broadly categorised into two groups: equivalent circuit models, which make analogies with traditional circuit components such as resistors and capacitors; and electrochemical models, which describe the physical processes of mass and charge transport within the cell. Equivalent circuit models can be solved rapidly on cheap computing hardware, making them the ideal choice for real-time battery management applications. However, they provide limited physical insight into battery behaviour. On the other hand, electrochemical models are computationally expensive, but provide a much more detailed description of the internal physics of battery operation which can be used for improving cell design.
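
To give a flavour of the first category, here is a toy first-order 'Thevenin-type' equivalent circuit model (our sketch, with illustrative parameter values; not one of the models used in the research described here): an open-circuit voltage source in series with a resistor and a single resistor-capacitor pair.

```python
# A toy first-order equivalent circuit model (illustrative parameter values
# only): an open-circuit voltage source in series with a resistor R0 and a
# single resistor-capacitor pair, plus charge conservation for the state of
# charge (SoC).
import numpy as np

Q = 3600.0                          # capacity [A s] (i.e. 1 Ah)
R0, R1, C1 = 0.05, 0.02, 2000.0     # resistances [ohm] and capacitance [F]
ocv = lambda soc: 3.0 + 1.2 * soc   # crude linear open-circuit voltage [V]

def terminal_voltage(current, dt=1.0, soc0=1.0):
    """Voltage trace under a constant discharge current [A]."""
    soc, v1, trace = soc0, 0.0, []
    while soc > 0.0:
        soc -= current * dt / Q                       # charge conservation
        v1 += (current / C1 - v1 / (R1 * C1)) * dt    # RC-pair relaxation
        trace.append(ocv(soc) - current * R0 - v1)    # Kirchhoff voltage law
    return np.array(trace)

v = terminal_voltage(current=1.0)   # a 1 A discharge empties 1 Ah in ~1 hour
print(len(v), "time steps; final voltage", round(v[-1], 3), "V")
```

Models of this type cost almost nothing per time step, which is why battery management systems favour them; but nothing in the circuit tells you about, say, lithium transport within the electrode particles.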

Figure 1: Equivalent circuit models describe battery behaviour using standard circuit components, and are used in real-time applications such as estimating State of Charge (how much battery life you have left). They are easy to interpret and computationally cheap to solve, but offer limited physical insight.

In order to develop electrochemical models that describe the physical processes underpinning battery operation, it is necessary to account for effects that vary over length scales of the order of microns – similar to the breadth of a human hair! However, understanding how batteries operate as part of a device requires modelling on the length scale of centimetres. To bridge this gap, Oxford Mathematicians use a technique called homogenisation, which allows the description of the physics at the microscale to be systematically upscaled into effective equations on the macroscale.
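
Schematically, for a model diffusion problem (a caricature of the full battery equations, with $\varepsilon \ll 1$ the ratio of micro to macro length scales), homogenisation replaces a coefficient varying on the fine scale with an effective one: $$\nabla\cdot\left(D\!\left(\tfrac{x}{\varepsilon}\right)\nabla u^\varepsilon\right)=f, \qquad u^\varepsilon \approx u_0(x) + \varepsilon\, u_1\!\left(x,\tfrac{x}{\varepsilon}\right)+\cdots \quad\Longrightarrow\quad \nabla\cdot\left(D_{\mathrm{eff}}\,\nabla u_0\right)=f,$$ where the effective coefficient $D_{\mathrm{eff}}$ is obtained by solving a 'cell problem' posed on a single representative piece of the periodic microstructure.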

Figure 2: A typical single-layer pouch cell design (not to scale). Lithium-ion batteries are made up of a number of components: a negative current collector, porous negative electrode, separator, porous positive electrode and a positive current collector. The porous electrodes are made up of solid particles that can be modelled as spheres, whose radius is of the order of tens of microns (much smaller than shown in this sketch). Physical processes on the particle scale must be upscaled to give effective equations for the behaviour of the cell as a whole. The width and height of the pouch cell, labelled here as Ly and Lz, are of the order of tens of centimetres.

Even after the electrochemical models have been upscaled to the cell level they still comprise a large collection of partial differential equations, so can be computationally expensive to solve and difficult to interpret directly. Starting with complicated electrochemical models and exploiting techniques such as asymptotic analysis, we systematically derive simplified physics-based models, which provide a useful theoretical middle ground between electrochemical and equivalent circuit models to support battery management, on-line diagnostics, and cell design. Using these simplified models we can better understand the underlying principles of battery operation and help to inform the design of new and improved lithium-ion batteries.
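
For readers who want to experiment, simplified physics-based models of this kind are implemented in open-source software (see the links below). A minimal sketch, assuming the PyBaMM package and its pybamm.lithium_ion.SPM and pybamm.Simulation interface:

```python
# A minimal sketch (assumes the open-source PyBaMM package): solve the
# Single Particle Model, an asymptotically reduced electrochemical model,
# over a one-hour window with default parameters.
import pybamm

model = pybamm.lithium_ion.SPM()   # simplified physics-based model
sim = pybamm.Simulation(model)     # default parameters and discretisation
sim.solve([0, 3600])               # simulate from t = 0 to t = 3600 seconds
sim.plot()                         # plot voltage and internal variables
```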

So, next time you are using your phone, think about all of the interesting mathematics being used to make your battery last longer."

Notes:
Battery research at the Mathematical Institute is conducted in collaboration with engineering groups across Oxford. The University of Oxford is a founding partner of the Faraday Institution – the UK’s independent institute for electrochemical energy storage research. This partnership has allowed us to develop exciting research links with a number of universities and industrial bodies across the UK. We also benefit from industrial links, working with national and international partners BBOX, Nexeon and Siemens.

For more information about battery research at Oxford and its partners please visit the following links:

Oxford Mathematics Battery Modelling

Dave Howey Group

Charles Monroe Group 

Patrick Grant Group

Oxford Research Software Engineering

Open Source Battery Modelling Software 

Faraday Institution 

Friday, 18 October 2019

Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Algorithms opens its doors

This autumn we welcomed the first students on the EPSRC CDT in Mathematics of Random Systems: Analysis, Modelling and Algorithms. The CDT (Centre for Doctoral Training) is a partnership between the Mathematical Institute and the Department of Statistics here in Oxford, and the Department of Mathematics, Imperial College London. Its ambition is to train the next generation of academic and industry experts in stochastic modelling, advanced computational methods and Data Science. 

In the first year, students follow four core courses on foundation areas as well as three elective courses, and undertake a supervised research project, which then evolves into a PhD thesis. Our first cohort of 16 students joined in September for an introductory week of intensive courses in Oxford on stochastic analysis, data science, function spaces and programming. Course director Rama Cont (Oxford) and co-directors Thomas Cass (Imperial) and Ben Hambly (Oxford) put the students through their paces, with the first week ending in a round of junkyard golf - a perfect opportunity to apply mathematical skills to the world around us.

Over the year the students will spend some of their days on courses at Oxford and some at Imperial, and take part in residential courses in the UK and overseas, all the while firming up their research plans with supervisors at their home department.

In addition to our main funding from EPSRC, we have received support from our industrial partners including Deutsche Bank, JP Morgan and InstaDeep. We are excited to see our first cohort of students start their four-year journeys. Applications are now open for fully funded studentships to start in Autumn 2020. Find out more.


Wednesday, 16 October 2019

Iterated integrals on elliptic and modular curves

Oxford Mathematician Ma Luo talks about his work on constructing iterated integrals, which generalize usual integrals, to study elliptic and modular curves.

Usual integrals
Given a path $\gamma$ and a differential 1-form $\omega$ on a space $M$, we can parametrize the path $$\gamma:[0,1]\to M, \qquad t\mapsto\gamma(t)$$ and write $\omega$ as $f(t)dt$, then define the usual integral $$\int_\gamma \omega=\int_0^1 f(t)dt.$$ If we have two loops $\alpha$ and $\beta$ based at the same point $x$ on $M$, then $$\int_{\alpha\beta}\omega=\int_\alpha \omega+\int_\beta \omega=\int_{\beta\alpha} \omega.$$ The order of the loops over which we integrate does not affect the result. Therefore, the usual integral can only detect commutative, i.e. abelian, information in the fundamental group $\pi_1(M,x)$.

Iterated integrals
Kuo-Tsai Chen discovered a generalization of the usual integral as follows. Given a path $\gamma$ and differential 1-forms $\omega_1,\cdots,\omega_r$ on $M$, write each $\omega_j$ as $f_j(t)dt$ on the parametrized path $\gamma(t)$, and define an iterated integral by \begin{equation}\label{def} \int_\gamma \omega_1\cdots\omega_r=\idotsint\limits_{0\le t_1\le \cdots \le t_r\le 1} f_1(t_1)f_2(t_2)\cdots f_r(t_r) dt_1\cdots dt_r. \end{equation} It is a time-ordered integral. Now for the two loops $\alpha$ and $\beta$, we have $$\int_{\alpha\beta}\omega_1\omega_2-\int_{\beta\alpha}\omega_1\omega_2= \begin{vmatrix} \int_\alpha\omega_1 & \int_\beta\omega_1\\ \int_\alpha\omega_2 & \int_\beta\omega_2 \end{vmatrix},$$ which is often nonzero. Therefore, iterated integrals are sensitive to the order of the loops, and they must capture some non-abelian information. But what kind of non-abelian information?
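
As a concrete sanity check of these formulas (our own numerical illustration, not part of the article), take loops in the twice-punctured plane: $\alpha$ around $z=0$ and $\beta$ around $z=1$, both based at $z=1/2$, with $\omega_1 = dz/z$ and $\omega_2 = dz/(1-z)$. Ordinary integrals over $\alpha\beta$ and $\beta\alpha$ agree, while the double iterated integrals differ by exactly the determinant above (here $4\pi^2$).

```python
# A numerical check of the identity above (our illustration): loops alpha
# (around z = 0) and beta (around z = 1) in the twice-punctured plane, both
# based at z = 1/2, with omega_1 = dz/z and omega_2 = dz/(1 - z).
import numpy as np

N = 100_000
t = np.linspace(0.0, 1.0, N)

alpha = lambda s: 0.5 * np.exp(2j * np.pi * s)       # loop around 0
beta = lambda s: 1 - 0.5 * np.exp(2j * np.pi * s)    # loop around 1

def concat(p1, p2):
    # composite loop: traverse p1 on [0, 1/2], then p2 on [1/2, 1]
    return lambda s: np.where(s < 0.5, p1(2 * s), p2(2 * s - 1))

def iterated(path, forms):
    # time-ordered integral via the recursion y_j(t) = int_0^t f_j * y_{j-1}
    z = path(t)
    dz = np.gradient(z, t)                           # d(gamma)/dt along the path
    y = np.ones(N, dtype=complex)                    # y_0 = 1
    for w in forms:
        g = w(z) * dz * y                            # pullback f_j(t) * y_{j-1}(t)
        steps = 0.5 * (g[1:] + g[:-1]) * np.diff(t)
        y = np.concatenate(([0], np.cumsum(steps)))  # cumulative trapezoid rule
    return y[-1]

w1 = lambda z: 1 / z
w2 = lambda z: 1 / (1 - z)

# single integrals are insensitive to the order of the loops...
print(iterated(concat(alpha, beta), [w1]), iterated(concat(beta, alpha), [w1]))
# ...but double iterated integrals are not: the difference is the determinant
lhs = iterated(concat(alpha, beta), [w1, w2]) - iterated(concat(beta, alpha), [w1, w2])
det = (iterated(alpha, [w1]) * iterated(beta, [w2])
       - iterated(beta, [w1]) * iterated(alpha, [w2]))
print(lhs, det)   # both approximately 4*pi^2 = 39.478...
```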

Differential equations and nilpotence
We can reformulate the definition of the iterated integral as the element $y_r$ in the solution of a system of differential equations: \begin{align*} dy_0/dt &=0\\ dy_1/dt &=f_1\cdot y_0\\ dy_2/dt &=f_2\cdot y_1\\ \cdots & \\ dy_r/dt &=f_r\cdot y_{r-1} \end{align*} where we insist $y_0(t)\equiv 1$ so that $y_r$ agrees with our previous definition. The auxiliary functions $\{y_0,y_1,\cdots,y_{r-1}\}$ allow us to rewrite the system in the following way: $$ \frac{d}{dt}(y_0,y_1,\cdots,y_r)=(y_0,y_1,\cdots,y_r) \begin{pmatrix} 0 & f_1 & 0 & \cdots & 0 \\ 0 & 0 & f_2 & \ddots & \vdots \\ \vdots & \vdots & 0 & \ddots & 0 \\ \vdots & \vdots & \vdots & \ddots & f_r \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix} $$ where the matrix on the right is nilpotent (some power of it is 0). In general, the solutions to such a system exist locally, for short times; as time progresses, this local information is transferred globally, and the global behaviour of the solutions is dictated by the system. In our case, iterated integrals are limited by the nilpotence property. Perhaps surprisingly, even with this limited non-abelian information, one finds they have interesting applications to number theory, most notably in Minhyong Kim's work, which uses $p$-adic iterated integrals (local) to help find rational points on curves (global).
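
The nilpotent ODE reformulation is easy to test numerically. A minimal sketch (our illustration; the function $f$ is arbitrary): for a single 1-form taken twice, solving the system reproduces the classical identity $\int_\gamma \omega\,\omega = \tfrac{1}{2}\big(\int_\gamma\omega\big)^2$.

```python
# A minimal check of the ODE reformulation (our illustration): for a single
# 1-form with pullback f taken twice, y_2 from the nilpotent system equals
# the iterated integral int f f = (1/2) (int f)^2.
import numpy as np
from scipy.integrate import quad, solve_ivp

f = lambda t: np.cos(3 * t)   # any integrable pullback will do

def rhs(t, y):
    # d/dt (y0, y1, y2) = (y0, y1, y2) N(t), with N strictly upper triangular
    y0, y1, y2 = y
    return [0.0, f(t) * y0, f(t) * y1]

sol = solve_ivp(rhs, [0.0, 1.0], [1.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
I1, _ = quad(f, 0.0, 1.0)
print(sol.y[2, -1], 0.5 * I1**2)   # the two values agree
```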

Algebraic iterated integrals and beyond nilpotence
Elliptic curves and modular curves both feature prominently in the proof of Fermat's Last Theorem by Andrew Wiles and are extensively studied objects in number theory. My recent work (PhD thesis) constructs algebraic iterated integrals on elliptic curves and the modular curve (of level one). The construction proceeds in a similar fashion, by iteratively solving a system of differential equations like the one above. In the case of elliptic curves, my work is based on previous work of Levin--Racinet. The algebraic iterated integrals on elliptic curves lead naturally to elliptic polylogarithms, which generalize the classical polylogarithms \begin{align*} \mathrm{Li}_k(x):&=\sum_{n=1}^\infty\frac{x^n}{n^k},\qquad k\ge 1 \\ &=\int_0^x \frac{dz}{1-z}\underbrace{\frac{dz}{z}\cdots\frac{dz}{z}}_{(k-1)\text{ times}} \end{align*}
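
The equality of the series and the iterated integral representation can be checked directly; for $k=2$ the iterated integral collapses to $\mathrm{Li}_2(x) = -\int_0^x \log(1-z)\,dz/z$. A quick numerical sketch (ours, with $x = 1/2$):

```python
# A quick numerical check (our illustration) that the series and the
# iterated integral representation of the dilogarithm agree: for k = 2 the
# iterated integral collapses to Li_2(x) = -int_0^x log(1 - z) dz / z.
import numpy as np
from scipy.integrate import quad

x = 0.5
series = sum(x**n / n**2 for n in range(1, 200))            # Li_2(x) as a sum
integral, _ = quad(lambda z: -np.log(1 - z) / z, 0, x)      # collapsed integral
print(series, integral)   # both equal pi^2/12 - log(2)**2/2 = 0.58224...
```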

In the case of the modular curve, one needs to go beyond nilpotence, by adding some prescribed reductive (more complicated non-abelian) data, thereby constructing iterated integrals with coefficients. Specifically, algebraic iterated integrals of modular forms are constructed. They provide multiple modular values, which belong to a special class of numbers called periods. These periods appear not only in number theory, but also in quantum field theory, and in the study of motives. Francis Brown has proposed a framework where Galois theory of periods can be studied. Just as symmetries of algebraic numbers can be deduced from their defining equations, many relations between these periods result from structural properties of their defining iterated integrals. Our goal is to understand these structures and then connect them back to relations between periods.

Thursday, 10 October 2019

Oxford Mathematics London Public Lecture: Timothy Gowers - Productive generalization: one reason we will never run out of interesting mathematical questions. 18 November.

We are delighted that Tim Gowers will be giving this year's Oxford Mathematics London Public Lecture followed by a question and answer session with Hannah Fry (and the audience!).

Tim Gowers is one of the world's leading mathematicians. He is a Royal Society Research Professor at the Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge, where he also holds the Rouse Ball chair, and is a Fellow of Trinity College, Cambridge. In 1998, he received the Fields Medal for research connecting the fields of functional analysis and combinatorics.

After his lecture Tim will be in conversation with Hannah Fry. Hannah is a lecturer in the Mathematics of Cities at the Centre for Advanced Spatial Analysis at UCL. She is also a well-respected broadcaster and the author of several books including the recently published 'Hello World: How to be Human in the Age of the Machine.'

This lecture is in partnership with the Science Museum in London where it will take place.

Please email external-relations@maths.ox.ac.uk to register.

Watch live:
https://facebook.com/OxfordMathematics
https://livestream.com/oxuni/gowers

The Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Wednesday, 9 October 2019

Filtering under Uncertainty

Oxford Mathematicians Andy Allan and Sam Cohen talk about their recent work on estimating with uncertainty.

"In many examples, we need to use indirect observations to estimate the state of an unseen object. A classic example of this comes from the Apollo missions: how could the crew use observations of the relative position of the sun, moon and earth to determine their position and velocity, and so correct their trajectory? The same ideas have been used in many problems since, from speech recognition to animal tracking to estimating the risk in financial markets. The problem of estimating the current state of such a hidden process from noisy observations is known as stochastic filtering.

To solve this problem, we often begin by assuming that we know 1) how the unseen quantity (the spacecraft's position and velocity) is changing and 2) how our observations relate to the unseen quantity. Both of these can have random errors, but we assume we know what the errors 'typically' look like. For a rocket, these are well understood, but in other applications, we may only have a rough estimate of how these should be modelled, and want to take that uncertainty into account in our calculations.

In our recent work, we investigate how to do this estimation in a way which is robust to errors in our model - instead of directly estimating the hidden state, we first calculate how 'reasonable' each possible model is, then try to ensure our estimates work well for all reasonable models. This is difficult, as random observations make our problem too 'rough' for the usual mathematical approaches to work.

In order to give a more precise description of the problem, suppose that we are interested in the position of some 'signal' process $S$, but we are only able to observe some separate 'observation' process $Y$. A classic example is when $S$ and $Y$ satisfy the linear equations \begin{align*} d S_t &= (\theta_t + \alpha S_t)\,d t + \sigma\,d B^1_t,\\ d Y_t &= cS_t\,d t + d B^2_t, \end{align*} where $\theta, \alpha, \sigma$ and $c$ are various (in general time-dependent) parameters, and $B^1, B^2$ are some random noise (which we assume to be independent Brownian motions).

Let us write $\mathcal{Y}_t$ for our observations of $Y$ up to time $t$. In this setting the posterior distribution of the signal $S_t$ at time $t$, given our observations $\mathcal{Y}_t$, can be shown to be Gaussian, i.e. $S_t|\mathcal{Y}_t \sim N(x_t,V_t)$. Moreover, the conditional mean $x_t = \mathbb{E}[S_t\,|\,\mathcal{Y}_t]$ and variance $V_t = \mathbb{E}[(S_t - x_t)^2\,|\,\mathcal{Y}_t]$ satisfy the Kalman-Bucy filtering equations: \begin{align*} d x_t &= (\theta_t + \alpha x_t)\,d t + cV_t(d Y_t - cx_t\,d t),\\ \frac{d V_t}{d t} &= \sigma^2 + 2\alpha V_t - c^2V_t^2. \end{align*}
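
To make the filtering equations concrete, here is a minimal numerical sketch (our illustration, with made-up constant parameters; not the authors' code): an Euler-Maruyama simulation of the signal and observation, with the Kalman-Bucy recursion run alongside.

```python
# A minimal numerical sketch (our illustration, with made-up constant
# parameters): Euler-Maruyama simulation of the signal and observation,
# with the Kalman-Bucy recursion run alongside.
import numpy as np

rng = np.random.default_rng(0)
T, N = 10.0, 10_000
dt = T / N
theta, alpha, sigma, c = 0.5, -0.3, 0.5, 1.0

S = np.zeros(N + 1)   # hidden signal
x = np.zeros(N + 1)   # posterior mean
V = np.zeros(N + 1)   # posterior variance

for k in range(N):
    dB1, dB2 = rng.normal(0.0, np.sqrt(dt), 2)
    S[k + 1] = S[k] + (theta + alpha * S[k]) * dt + sigma * dB1
    dY = c * S[k] * dt + dB2                     # observation increment
    # Kalman-Bucy filter driven by the observation increment dY
    x[k + 1] = x[k] + (theta + alpha * x[k]) * dt + c * V[k] * (dY - c * x[k] * dt)
    V[k + 1] = V[k] + (sigma**2 + 2 * alpha * V[k] - c**2 * V[k]**2) * dt

print("true signal:", S[-1], "estimate:", x[-1], "+/-", np.sqrt(V[-1]))
```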

This works provided that we know the exact values of all the parameters of the model. However, let us now suppose for instance that the parameter $\theta$ in the above is unknown, or we are worried that it may have been miscalibrated. It might also be the case that $\theta$ itself varies through time, but we are unsure of its dynamics. In either case, this means that there is uncertainty in the dynamics of the posterior mean $x_t$.

At a given time $t$, we are therefore uncertain of the 'true' posterior distribution of $S_t$. However, we know that it must be a Gaussian distribution with some mean $x_t = \mu \in \mathbb{R}$. What we would like to know is, given our observations, how 'reasonable' is each choice of posterior mean $\mu$?

Given a fixed $\mu \in \mathbb{R}$, there are infinitely many different choices of the parameter $\theta$ (each with a corresponding initial value $x_0$) which would have resulted in the terminal value $x_t = \mu$ for the posterior mean. That is, for each parameter choice $\theta \colon [0,t] \to \mathbb{R}$ we obtain a trajectory $x^\theta \colon [0,t] \to \mathbb{R}$ satisfying the filtering equation with parameter $\theta$, and the terminal condition $x^\theta_t = \mu$.

We can then consider penalising each such trajectory $x^\theta$ according to how 'unreasonable' we consider it to be. For example, if we believe the 'true' parameter $\theta$ should be fairly close to some specified value, then the severity of the penalisation for a particular $\theta$ can depend on how far it strays from this value. Moreover, we can consider penalising trajectories according to the likelihood of our observations under each parameter.

For a given posterior mean $\mu$, we then wish to find the most 'reasonable' trajectory, i.e. the one with the least associated penalty. In other words, we wish to minimise a 'cost functional' subject to a constraint: an optimal control problem. Although we are in a stochastic setting, the optimisation here should be performed separately for each possible realisation of the observation process. (There is no point averaging over all possible realisations of the observation process when it has already been observed!) This is therefore an instance of pathwise stochastic optimal control, and thus in general requires a pathwise approach to stochastic calculus (rough path theory).

The value function of this control problem characterises how reasonable we consider different choices of the posterior mean $x_t = \mu$ and the unknown parameter $\theta_t$ to be. In particular, the minimum point of this function tells us the 'most reasonable' mean $\mu$ and parameter value $\theta_t$. We establish this function as the unique solution of a (rough) Hamilton-Jacobi equation.

We illustrate our results with a numerical example. Here we take $\alpha = -0.3$, $\sigma = 0.5$ and $c = 1$. We suppose the (unknown) true parameter $\theta$ is a sine function, but that our best estimate of $\theta$ is the constant value $0$ (i.e. the long-time average of the true $\theta$); see Figure 1. We then simulate a realisation of the signal and observation processes.

Solving the Hamilton-Jacobi equation driven by this simulated observation path and evaluating the minimum point of the solution (i.e. the value function of our control problem), we obtain robust estimates of $S_t$ and $\theta_t$. In Figure 1 we plot the most reasonable value of the parameter $\theta_t$, given our observations, and compare it with the true and estimated parameter values.

Figure 1: Learning θ.

In particular, the solution is able to 'learn' the true parameter value $\theta_t$ without assuming precise knowledge of the dynamics of $\theta$; we just assume that it is 'not too far from zero'.

In Figure 2 we plot the $\mu$-component of the value function, and see how this function evolves in time. The minimum point of this function, i.e. the most reasonable value of the posterior mean $\mu$, is then shown in Figure 3. We also compare this 'robust filter' to the signal $S_t$ itself, as well as the standard Kalman-Bucy filter $x_t = \mathbb{E}[S_t\,|\,\mathcal{Y}_t]$, calculated using both the true and 'estimated' parameters.

Figure 2: ‘Unreasonableness’ of each possible posterior mean µ.

Figure 3: Estimates of $S_t$.

For all the details on this approach see:
A.L. Allan and S.N. Cohen, Parameter Uncertainty in the Kalman-Bucy Filter
A.L. Allan and S.N. Cohen, Pathwise Stochastic Control with Applications to Robust Filtering

Monday, 7 October 2019

Multiple zeta values and modular forms

Oxford Mathematician Nils Matthes talks about trying to understand old numbers using new techniques.

"The Riemann zeta function is arguably one of the most important objects in arithmetic. It encodes deep information about the whole numbers; for example the celebrated Riemann hypothesis, which gives a precise location of its zeros, predicts deep information about the prime numbers. In my research, I am mostly interested in the special values of the Riemann zeta function at integers $k\geq 2$,

$$\zeta(k)=\sum_{n=1}^{\infty}\frac{1}{n^k}, \qquad\qquad (1)$$

called zeta values. Their appearance is by no means confined to arithmetic and they are known to appear in a multitude of other mathematical areas (hyperbolic geometry, algebraic K-theory, knot theory, ...) and even in amplitude computations in high energy physics. A better understanding of these numbers therefore has potentially important consequences for other fields as well.

One of the earliest results in this direction is due to Euler, who showed in 1734 that for even values of $k$ the zeta value $\zeta(k)$ is an explicit rational multiple of $\pi^k$. Therefore, the even zeta values satisfy a rather simple algebraic relation; up to a rational factor they are all powers of $\zeta(2)$. On the other hand, Euler was unable to show the analogous result for odd zeta values and moreover did not find any algebraic relations between $\pi$ and the $\zeta(2k-1)$. It is nowadays widely believed that $\zeta(2k-1)$ cannot be written as a rational multiple of $\pi^{2k-1}$ and that in fact no such algebraic relations exist.

In the 1920s, a new point of view on the odd zeta values emerged from work of Ramanujan who was able to express the difference between $\zeta(2k-1)$ and a certain rational multiple of $\pi^{2k-1}$ as a Lambert series. For example, in the case of $\zeta(3)$, his formula reads

$$\frac{7\pi^3}{180}-\zeta(3)=2\sum_{n=1}^{\infty}\frac{1}{n^3(e^{2\pi n}-1)}. \qquad\qquad (2)$$

As it turns out, Ramanujan's formula is much better suited for numerical computation of the odd zeta values than (1). Indeed, the first $10$ terms on the right hand side of (2) are already sufficient to compute the decimal expansion of $\zeta(3)$ correctly up to $30$ digits. This is because the series in (2) converges exponentially quickly while the series in (1) has a much slower rate of convergence.
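
This convergence claim is easy to check with arbitrary-precision arithmetic; a short sketch (ours), assuming the mpmath library:

```python
# Checking the convergence claim with arbitrary-precision arithmetic
# (assumes the mpmath library): ten terms of Ramanujan's series already
# pin down zeta(3) to about 30 decimal places.
from mpmath import mp, mpf, pi, exp, zeta

mp.dps = 40   # work with 40 decimal digits
tail = 2 * sum(1 / (mpf(n)**3 * (exp(2 * pi * n) - 1)) for n in range(1, 11))
approx = 7 * pi**3 / 180 - tail
print(approx)
print(zeta(3))
print(abs(approx - zeta(3)))   # roughly 1e-31
```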

Somewhat later, it was realised that Ramanujan essentially discovered that $\zeta(2k-1)$ is a period integral of the Eisenstein series $G_{2k}$. These are particularly simple examples of modular forms, a certain class of holomorphic functions on the complex upper half-plane which transform in a rather simple way under Möbius transformations. Modular forms are ubiquitous in modern number theory; for example, they play a key role in Andrew Wiles' proof of Fermat's last theorem. Moreover, since the even zeta value $\zeta(2k)$ appears as the constant term of the Fourier expansion of $G_{2k}$, all zeta values naturally arise from Eisenstein series.

In the course of his studies, Euler also introduced multiple zeta values \begin{equation} \zeta(k_1,\ldots,k_r)=\sum_{n_1>\ldots>n_r>0}\frac{1}{n_1^{k_1}\ldots n_r^{k_r}}, \quad \mbox{for } k_1,\ldots,k_r \geq 1, \, k_1 \geq 2, \end{equation} which generalize (1) to several arguments. Again, these numbers appear in a variety of mathematical contexts and a basic problem is to understand their algebraic relations completely. It turns out that multiple zeta values are also related to modular forms but that here the relationship is much more mysterious. It was uncovered only in 2006 by Herbert Gangl, Masanobu Kaneko and Don Zagier who found the following relation among double zeta values: \begin{equation} \label{eqn:GKZ} 28\zeta(9,3)+150\zeta(7,5)+168\zeta(5,7)=\frac{5197}{691}\zeta(12). \end{equation} Moreover, they interpreted the coefficients $28,150$ and $168$ on the left hand side in terms of period integrals of a certain modular form of weight $12$. More generally, to every modular form for $\operatorname{PSL}_2(\mathbb Z)$ they associated a $\mathbb Q$-linear relation among certain $\zeta(r,s)$ whose coefficients can again be interpreted using period integrals of modular forms. Their observation is subsumed in the far-reaching Broadhurst-Kreimer conjecture which describes the algebraic structure of multiple zeta values entirely in terms of modular forms.
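
The Gangl-Kaneko-Zagier relation can likewise be verified numerically straight from the defining double sums, since with $k_1 \ge 5$ the truncation error decays rapidly. A sketch (our illustration, again assuming mpmath):

```python
# Numerically verifying the Gangl-Kaneko-Zagier relation (our illustration,
# assuming mpmath) from truncated double sums; since k_1 >= 5 throughout,
# the truncation error is around 1e-11 for N = 2000.
from mpmath import mp, mpf, zeta

mp.dps = 30

def double_zeta(s1, s2, N=2000):
    # truncated sum over n1 > n2 > 0, accumulating the inner sum as we go
    total, inner = mpf(0), mpf(0)
    for n1 in range(2, N + 1):
        inner += mpf(1) / mpf(n1 - 1)**s2   # adds the new term n2 = n1 - 1
        total += inner / mpf(n1)**s1
    return total

lhs = 28 * double_zeta(9, 3) + 150 * double_zeta(7, 5) + 168 * double_zeta(5, 7)
rhs = mpf(5197) / 691 * zeta(12)
print(lhs)
print(rhs)   # agree to about 11 decimal places
```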

In light of the seemingly mysterious relationship between multiple zeta values and period integrals of modular forms, it is natural to ask for a common framework accommodating both objects. In my PhD thesis at the University of Hamburg, building on foundational work of Francis Brown, Andrey Levin and Benjamin Enriquez, I studied elliptic multiple zeta values, which bridge these two seemingly very different worlds. More precisely, elliptic multiple zeta values are holomorphic functions on the upper half plane which, on one hand, satisfy a first-order differential equation involving Eisenstein series and, on the other, degenerate to multiple zeta values at the cusp $i\infty$. It turns out that some of the rather surprising modular phenomena occurring in the study of multiple zeta values, such as the relations found by Gangl-Kaneko-Zagier, have more transparent analogues in the elliptic setting, which helps to elucidate the algebraic structure of multiple zeta values. In particular, the study of elliptic multiple zeta values should offer a more conceptual explanation of the Broadhurst-Kreimer conjecture.

To conclude, it should be mentioned that the Riemann zeta function is only the simplest example of a so-called Hasse-Weil L-function. More generally, to any algebraic variety over a number field, one can attach such an L-function which encapsulates deep arithmetic-geometric information in a single analytic object. On the other hand, multiple zeta values are only very special examples of periods in the sense of Maxim Kontsevich and Don Zagier which are defined as the values of certain algebraic integrals. Important and far-reaching conjectures due to Pierre Deligne and Alexander Beilinson predict that the special values of Hasse-Weil L-functions are periods. In the case of L-functions associated with elliptic curves, their conjectures are closely related to the Birch and Swinnerton-Dyer conjecture. In this context, the putative relationship between multiple zeta values and modular forms is part of a much more general program of Francis Brown's which, among other things, aims at expressing periods in the sense of Kontsevich-Zagier as (iterated) integrals of modular forms, with the hope of gaining new insights into both objects."

Thursday, 3 October 2019

Early Prediction of Sepsis from Clinical Data - Oxford Mathematicians win the PhysioNet Computing in Cardiology Challenge 2019

Sepsis is a life-threatening condition caused by the body’s response to an infection. In the US alone, there are over 970,000 reported cases of sepsis each year, accounting for between 6% and 30% of all Intensive Care Unit (ICU) admissions and over 50% of hospital deaths. It has been reported that in cases of septic shock, the risk of dying increases by approximately 10% for every hour of delay in receiving antibiotics. Early detection of sepsis events is essential to improving sepsis management and mortality rates in the ICU.

Since 2000, PhysioNet has hosted an annual challenge on clinically important problems involving data, whereby participants are invited to submit solutions that are run and scored on hidden test sets to give overall rankings. This year’s challenge was the “Early prediction of Sepsis from Clinical data.”
    
A team from Oxford Mathematics and Oxford Psychiatry consisting of James Morrill, Andrey Kormilitzin, Alejo Nevado-Holgado, Sam Howison and Terry Lyons ranked first out of 105 entries. The team built a method based on feature extraction using the Signature method, and showed how the model predictions could be used to provide an early warning system for high-risk patients, who can then be given additional treatment or subject to closer monitoring.
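
The team's actual pipeline is not reproduced here, but as an illustration of what signature-based feature extraction looks like, the sketch below (assuming the open-source iisignature package, with synthetic data standing in for vital signs) maps a multivariate time series to a fixed-length feature vector that a downstream classifier can consume.

```python
# Illustrative only (assumes the open-source iisignature package; this is
# not the team's pipeline): map a short multivariate stream, standing in
# for vital-sign recordings, to a fixed-length signature feature vector.
import numpy as np
import iisignature

rng = np.random.default_rng(0)
path = rng.normal(size=(24, 3)).cumsum(axis=0)   # e.g. 24 hourly readings of 3 vitals
features = iisignature.sig(path, 3)              # truncated signature, depth 3
print(features.shape)                            # (3 + 3**2 + 3**3,) = (39,)
```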

Their work was made possible by support from the Engineering and Physical Sciences Research Council (EPSRC) and the Alan Turing Institute.

Friday, 27 September 2019

Oxford Mathematics NEWCASTLE Public Lecture: Vicky Neale - 😊🤔😔😁😕😮😍 in Maths?

Mathematics is the pursuit of truth. But it is a pursuit carried out by human beings with human emotions. Join Vicky Neale as she travels the mathematical rollercoaster.

Oxford Mathematics is delighted to announce that in partnership with Northumbria University we shall be hosting our first Newcastle Public Lecture on 13 November. Everybody is welcome as we demonstrate the range, beauty and challenges of mathematics. Vicky Neale, Whitehead Lecturer here in Oxford, will be our speaker. Vicky has given a range of Public Lectures in Oxford and beyond and has made numerous radio and television appearances.

Wednesday 13 November
5.00pm-6.00pm
Northumbria University
Lecture Theatre 002, Business & Law Building, City Campus East
Newcastle upon Tyne, NE1 2SU

Please email external-relations@maths.ox.ac.uk to register

Watch live:
https://facebook.com/OxfordMathematics
https://livestream.com/oxuni/neale

Oxford Mathematics Public Lectures are generously supported by XTX Markets.
