Thursday, 15 March 2018 
Oxford Mathematician John Ball has won the European Academy of Sciences Leonardo da Vinci award. The award is given annually for outstanding lifetime scientific achievement. In the words of the Committee, "through a research career spanning more than 45 years, Professor Ball has made groundbreaking and highly significant contributions to the mathematical theory of elasticity, the calculus of variations, and the mathematical analysis of infinite-dimensional dynamical systems."
Sir John Ball FRS is Sedleian Professor of Natural Philosophy in the University of Oxford and Director of the Oxford Centre for Nonlinear Partial Differential Equations. He is a Fellow of The Queen's College.

Monday, 12 March 2018 
The Oxford Summer School on Economic Networks, hosted by Oxford Mathematics and the Institute for New Economic Thinking, aims to bring together graduate students from a range of disciplines (maths, statistics, economics, policy, geography, development, ...) to learn about the techniques, applications and impact of network theory in economics and development.
We look forward to welcoming a large number of world-renowned experts in economic networks and complexity science. Confirmed speakers for the 2018 edition include Fernando Vega-Redondo, Mihaela van der Schaar, Rama Cont, Doyne Farmer, Pete Grindrod, Renaud Lambiotte, Elsa Arcaute and Taha Yasseri. Tutorial and lecture topics include social networks, games and learning, financial networks, economic complexity and urban systems.
Alongside a rigorous academic schedule, the summer school also includes a walking tour of the historic university and city centre, a punting trip on the river Cherwell and a dinner in one of Oxford's historic colleges.
The deadline for applications is 15 March - more information is available here. Please contact us at economicnetworks@maths.ox.ac.uk with any questions.

Friday, 9 March 2018 
Oxford Mathematicians Ilan Price and Jaroslav Fowkes discuss their work on unconstraining demand with Gaussian Processes.
"One of the key revenue management challenges which airlines, hotels, cruise ships (and other industries) all share is the need to make business decisions in the face of constrained (or censored) demand data.
Airlines, for example, commonly set booking limits on the number of cheaper fare-classes that can be purchased, or make cheaper fare-classes unavailable for booking at certain times, in an attempt to divert some of that demand to the more expensive tickets still available. While a fare-class on a given flight route is available for booking, the demand for that 'product', at that price, is accurately captured by its total recorded bookings. However, once the product has been unavailable for booking for a period of time, recorded bookings no longer capture true demand, and the demand data is said to be 'constrained' or 'censored'.
Practices which constrain demand data pose a big challenge for successful revenue management. This is because many important decisions, including setting ticket prices, making changes to an airline's flight network, adding or removing capacity on a certain route, and many others, are all heavily dependent on accurate historical demand data. Moreover, precisely those decisions regarding which fare-classes to make unavailable (and for what periods of time) themselves depend on accurate demand data. Thus predicting what demand would have been had it not been constrained - known as 'unconstraining demand' - is an important research problem.
Our research proposes a new approach to this problem, using a model developed within the framework of Gaussian process (GP) regression. The general idea behind Gaussian process regression is very intuitive: we start by assuming a prior Gaussian distribution over functions, and then we condition that distribution on the observed data, so as to restrict the set of likely functions to only those functions which make sense given the observed data. More precisely, our goal is to infer the posterior predictive distribution $p(f^* \mid y, X, X^*)$, where $f^*$ are the values of the function evaluated at some prediction points $X^*$, and $y$ are the observed data at points $X$. We can then use the mean of this distribution as our predictions.
Figure 1: Illustration of GP regression for unconstraining demand. The figure on the left shows the mean prediction and confidence interval produced by our GP method, based on the true demand observations. The dotted black line indicates when the booking limit was reached, and the red line beyond this point shows the GP's unconstrained approximations. The figure on the right shows in red the reconstruction of the cumulative demand curve over the constrained period using the daily demand values predicted with the GP.
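The conditioning step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the squared-exponential kernel and the toy sine data are stand-ins for the covariance function and bookings data used in the actual model.

```python
import numpy as np

def gp_posterior_mean(X, y, X_star, kernel, noise=1e-2):
    """Posterior predictive mean of a zero-mean GP conditioned on (X, y)."""
    K = kernel(X, X)            # covariance between training inputs
    K_star = kernel(X, X_star)  # covariance between training and test inputs
    # Condition the Gaussian prior on the observations y (jittered for stability).
    alpha = np.linalg.solve(K + noise * np.eye(len(X)), y)
    return K_star.T @ alpha

def rbf(X1, X2, length=1.0):
    """Squared-exponential kernel - an illustrative stand-in choice."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

X = np.array([0.0, 1.0, 2.0, 3.0])  # observation points
y = np.sin(X)                       # toy observations
X_star = np.array([1.5])            # prediction point
mean = gp_posterior_mean(X, y, X_star, rbf)
```

The posterior mean at $X^*$ is a smooth interpolation of the observations, which is what the left panel of Figure 1 shows for the real demand data.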
In the course of this inference procedure, we need to specify (i) a likelihood function or observation model, and (ii) a mean and covariance function for the GP prior.
Our model uses a Poisson likelihood, based on our implicit model of the bookings process as a doubly stochastic Poisson process, i.e. where bookings are determined by a Poisson process whose rate $\lambda$ is itself a Gaussian process (and thus changes over time).
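A minimal simulation of this observation model is shown below. The sine curve standing in for a GP sample path of the rate, and the log link used to keep the rate positive, are illustrative choices of ours, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Doubly stochastic (Cox) process: daily bookings are Poisson draws whose
# rate lambda(t) is itself a random smooth function of time. A fixed sine
# curve stands in for a GP sample path of the log-rate here.
days = np.arange(30)
log_rate = 1.0 + 0.5 * np.sin(days / 5.0)
lam = np.exp(log_rate)           # positive booking rate per day
bookings = rng.poisson(lam)      # simulated daily bookings
```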
For the GP prior, we use a zero mean function and define a new 'variable degree polynomial covariance function' \begin{equation} k(x,x') = \sigma^2(x^\top x' + c)^p, \end{equation} with $\theta_c = \{\sigma , c, p\}$ as the covariance hyperparameters (a modification of the polynomial covariance function in which $p$ is a fixed positive integer).
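In code, the covariance function is a one-liner. Note that for non-integer $p$ the base $x^\top x' + c$ must stay positive, so this sketch assumes scalar non-negative inputs and $c > 0$:

```python
import numpy as np

def var_degree_poly_kernel(X1, X2, sigma=1.0, c=1.0, p=2.0):
    """Variable degree polynomial covariance k(x, x') = sigma^2 (x x' + c)^p.

    Unlike the standard polynomial kernel, the degree p is a continuous
    hyperparameter inferred from data rather than a fixed positive integer.
    Assumes scalar non-negative inputs and c > 0 so the base stays positive.
    """
    return sigma**2 * (np.outer(X1, X2) + c) ** p

K = var_degree_poly_kernel(np.array([1.0, 2.0]), np.array([1.0, 2.0]), p=1.5)
```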
Our numerical experiments are rather promising: the method compares favourably with state-of-the-art methods when repeating experiments from the recent literature. The added benefit, though, is that when these experiments are modified to have weaker assumptions on how the test data should look and be generated, our method maintains its strong performance better than its competitors. Our modifications included diversifying the shape of the demand curve on which the methods were tested, as well as allowing for the presence of changepoints - points at which the characteristics of the underlying demand trend change dramatically. Using existing theory, we can elegantly extend our GP regression framework to cope with such situations by constructing an appropriate covariance function. For our purposes, we want to allow for the fact that the covariance before and after the changepoint might be completely different. We therefore redefine our covariance function to be \begin{align}\label{eq: Changepoint covariance} k(x,x') = \begin{cases} \sigma_1^2(x^\top x' + c_1)^{p_1} & \text{if } x,x' < x_c,\\ \sigma_2^2(x^\top x' + c_2)^{p_2} & \text{if } x,x' \geq x_c,\\ 0 & \text{otherwise}, \end{cases} \end{align} where $\theta = \{\sigma_1, \sigma_2, c_1, c_2, p_1, p_2, x_c\}$ are all hyperparameters inferred from the data. You can see an example of how well it performs in the image below."
Figure 2: Illustration of automatic changepoint detection with GPs using our piecewise-defined variable degree polynomial covariance function.
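The piecewise covariance above can be evaluated pointwise for scalar inputs as a short sketch; the hyperparameter names mirror $\theta$, and the example values below are illustrative only (in the model they are all inferred from data).

```python
def changepoint_kernel(x1, x2, theta):
    """Piecewise variable degree polynomial covariance for scalar inputs.

    Points on opposite sides of the changepoint theta["xc"] are modelled
    as uncorrelated (covariance zero).
    """
    if x1 < theta["xc"] and x2 < theta["xc"]:
        return theta["s1"] ** 2 * (x1 * x2 + theta["c1"]) ** theta["p1"]
    if x1 >= theta["xc"] and x2 >= theta["xc"]:
        return theta["s2"] ** 2 * (x1 * x2 + theta["c2"]) ** theta["p2"]
    return 0.0

# Example hyperparameters (in practice all inferred from the data).
theta = {"s1": 1.0, "c1": 1.0, "p1": 1.0,
         "s2": 1.0, "c2": 1.0, "p2": 1.0, "xc": 5.0}
k_same_side = changepoint_kernel(1.0, 2.0, theta)  # 1*(1*2 + 1)^1 = 3
k_across = changepoint_kernel(1.0, 6.0, theta)     # 0: opposite sides of xc
```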
You can read the research in greater detail here.

Monday, 5 March 2018 
The brain is the most complicated organ of any animal, formed and sculpted over 500 million years of evolution. And the cerebral cortex is a critical component. This folded grey matter forms the outside of the brain, and is the seat of higher cognitive functions such as language, episodic memory and voluntary movement.
The cerebral cortex of mammals has a unique layered structure where different types of neuron reside. The thickness of the cortical layer is roughly the same across different species, while the cortical surface area shows a dramatic increase (1000-fold from mouse to human). This difference underlies a significant expansion in the number of cortical neurons produced in the course of embryonic development, resulting in the increased function and complexity of the adult brain. A human cortex accommodates 16 billion neurons as opposed to a mouse’s mere 14 million.
Key elements of this problem are being addressed by Oxford Mathematical Biologist Noemi Picco in a new collaboration involving an interdisciplinary team of mathematicians, including Philip Maini in Oxford and Thomas Woolley in Cardiff, and biologists Zoltán Molnár from the Department of Physiology, Anatomy and Genetics in Oxford and Fernando García-Moreno at the Achucarro Basque Center for Neuroscience in Bilbao.
In particular the team are developing a mathematical model of cortical neurogenesis, the process by which neurons grow and develop in the cerebral cortex. Given that species diversity originates from the divergence of developmental programmes, understanding the cellular and molecular mechanisms regulating cell number and diversity is critical for shedding light on cortex evolution.
Many factors influence how neurogenesis in the cortex differs between species, including the types of neurons and neural progenitor cells, the different ways in which they proliferate and differentiate, and the length of the process (85 days in a human, 8 days in a mouse). This project combines mathematical modelling and experimental observations to incorporate these different factors. A key determinant of neuronal production is the modulation of proliferative (self-amplifying) and differentiative (neurogenic) divisions. By modelling the temporal changes in the propensity of different cell division types, we are able to identify the developmental programme that can account for the observed number of neurons in the cortex.
The growing availability of species-specific experimental data will allow the researchers to map all the possible evolutionary pathways of the cortex, and create a mathematical framework that is general enough to encompass all cortex developmental programmes, while being specific enough to describe individual species. This, in turn, has the potential to create a new way to identify developmental brain disorders as deviations from the normal developmental programme, giving a mechanistic insight into their cause and clinically actionable suggestions to correct them.
As part of the project, Noemi has released a Neurogenesis Simulator, an app that allows experimentalists to ‘play’ with the mathematical model, choosing the species and the model and calibrating the parameters, to observe how the model outcome changes without having to worry about the mathematical formulation, thereby generating further cross-disciplinary collaboration.
Noemi’s work is supported by St John’s College Research Centre. Click here for the published article.

Monday, 5 March 2018 
'Euler's Pioneering Equation' has been compared to a Shakespearean sonnet. But even if you don't buy that, Robin Wilson's book does much to show how an 18th-century Swiss mathematician managed to bring together the five key constants in the subject: the number 1, the basis of our counting system; the concept of zero, which was a major development in mathematics, and opened up the idea of negative numbers; π, an irrational number, the basis for the measurement of circles; the exponential e, associated with exponential growth and logarithms; and the imaginary number i, the square root of -1, the basis of complex numbers. Some achievement.
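The equation that ties those five constants together, e^(iπ) + 1 = 0, can even be checked numerically in a couple of lines (exact up to floating-point rounding):

```python
import cmath
import math

# Euler's identity: e^(i*pi) + 1 = 0, tying together e, i, pi, 1 and 0.
value = cmath.exp(1j * math.pi) + 1
# The magnitude of `value` is zero up to floating-point rounding error.
```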
We are always being told that mathematics impacts every corner of our lives - our security, our climate, even our very selves. Want a quick summary of how? Alain Goriely's Applied Mathematics: A Very Short Introduction does just that, laying out the basics of the subject and exploring its range and potential. If you want to know how cooking a turkey and medical imaging are best explained by mathematics (or even if you don't) this is an excellent read.
By contrast Yves Capdeboscq, together with colleague Giovanni S. Alberti from Genoa, has published 'Lectures on Elliptic Methods for Hybrid Inverse Problems', based on a series of 2014 lectures. Targeting a graduate audience, this work tackles one of the most important aspects of the mathematical sciences: the inverse problem. In the words of the authors, "Inverse problems correspond to the opposite (of a direct problem), namely to find the cause which generated the observed, measured result."
Click here for our last literary selection including Prime Numbers, Networks and Russian Mathematicians.

Friday, 2 March 2018 
Oxford Mathematician Robin Wilson has been awarded the 2017 Stanton Medal. The medal is awarded every two years by the Institute of Combinatorics and its Applications (ICA) for outreach activities in combinatorial mathematics.
In the words of the ICA citation, "Robin Wilson has, for fifty years, been an outstanding ambassador for graph theory to the general public. He has lectured widely (giving some 1500 public lectures), and extended the reach of his lectures through television, radio, and videotape. He has also published extensively on combinatorial ideas, written in a style that is engaging and accessible. He has provided direction, encouragement, and support to colleagues and students at all levels. His superb talents at conveying the beauty of graph-theoretic ideas, and inviting his readers and listeners to join in, have enthused many students, teachers, and researchers. Professor Wilson’s advocacy and outreach for combinatorics continue to yield many positive impacts that are enjoyed by researchers and non-specialists alike."
Robin Wilson is an Emeritus Professor of Pure Mathematics at the Open University, Emeritus Professor of Geometry at Gresham College, London, and a former Fellow of Keble College, Oxford. He is the author of many books including 'Combinatorics: A Very Short Introduction', 'Four Colours Suffice: How the Map Problem Was Solved,' 'Lewis Carroll in Numberland: His Fantastical Mathematical Logical Life' and his textbook ‘Introduction to Graph Theory.’ His latest Oxford Mathematics Public Lecture on Euler's pioneering equation can be watched here.

Monday, 26 February 2018 
We have two contrasting Oxford Mathematics Public Lectures coming up in the next ten days. One features a genius from the eighteenth century whose work is still pertinent today. The other is very much from the 21st century and illuminates the direction mathematics is currently travelling. Please email external-relations@maths.ox.ac.uk to register or follow our Twitter account for details on how to watch live.
Euler’s pioneering equation: ‘the most beautiful theorem in mathematics’ - Robin Wilson. 28 February 2018, 5-6pm
Can Mathematics Understand the Brain? - Alain Goriely. 8 March 2018, 5.15-6.15pm

Euler’s pioneering equation: ‘the most beautiful theorem in mathematics’ - Robin Wilson
Euler’s equation, the ‘most beautiful equation in mathematics’, startlingly connects the five most important constants in the subject: 1, 0, π, e and i. Central to both mathematics and physics, it has also featured in a criminal court case and on a postage stamp, and has appeared twice in The Simpsons. So what is this equation – and why is it pioneering?
Robin Wilson is an Emeritus Professor of Pure Mathematics at the Open University, Emeritus Professor of Geometry at Gresham College, London, and a former Fellow of Keble College, Oxford.
28 February 2018, 5-6pm, Mathematical Institute, Oxford. Please email external-relations@maths.ox.ac.uk to register
Can Mathematics Understand the Brain? - Alain Goriely
The human brain is the object of the ultimate intellectual egocentrism. It is also a source of endless scientific problems and an organ of such complexity that it is not clear that a mathematical approach is even possible, despite many attempts.
In this talk Alain will use the brain to showcase how applied mathematics thrives on such challenges. Through mathematical modelling, we will see how we can gain insight into how the brain acquires its convoluted shape and what happens during trauma. We will also consider the dramatic but fascinating progression of neurodegenerative diseases, and, eventually, hope to learn a bit about who we are before it is too late.
Alain Goriely is Professor of Mathematical Modelling, University of Oxford and author of 'Applied Mathematics: A Very Short Introduction.'
8 March 2018, 5.15-6.15pm, Mathematical Institute, Oxford. Please email external-relations@maths.ox.ac.uk to register

Monday, 19 February 2018 
Oxford Mathematician Katherine Staden provides a fascinating snapshot of the field of combinatorics, and in particular extremal combinatorics, and the progress that she and her collaborators are making in answering one of its central questions posed by Paul Erdős over sixty years ago.
"Combinatorics is the study of combinatorial structures such as graphs (also called networks), set systems and permutations. A graph is an encoding of relations between objects, so many practical problems can be represented in graph theoretic terms; graphs and their mathematical properties have therefore been very useful in the sciences, linguistics and sociology. But mathematicians are generally concerned with theoretical questions about graphs, which are fascinating objects for their own sake. One of the attractions of combinatorics is the fact that many of its central problems have simple and elegant formulations, requiring only a few basic definitions to be understood. In contrast, the solutions to these problems can require deep insight and the development of novel tools.
A graph $G$ is a collection $V$ of vertices together with a collection $E$ of edges. An edge consists of two vertices. We can represent $G$ graphically by drawing the vertices as points in the plane and drawing a (straight) line between vertices $x$ and $y$ if $\{x,y\}$ is an edge.
Extremal graph theory concerns itself with how big or small a graph can be, given that it satisfies certain restrictions. Perhaps the first theorem in this area is due to W. Mantel from 1907, concerning triangles in graphs. A triangle is what you expect it to be: three vertices $x,y,z$ such that every pair $x,y$ and $y,z$ and $z,x$ is an edge. Consider a graph which has some number $n$ of vertices, and these are split into two sets $A$ and $B$ of size $\lfloor n/2\rfloor$, $\lceil n/2\rceil$ respectively. Now add every edge with one vertex in $A$ and one vertex in $B$. This graph, which we call $T_2(n)$, has $|A||B|=\lfloor n^2/4\rfloor$ edges. Also, it does not contain any triangles, because at least two of its vertices would have to both lie in $A$ or in $B$, and there is no edge between such pairs. Mantel proved that if any graph other than $T_2(n)$ has $n$ vertices and at least $\lfloor n^2/4\rfloor$ edges, it must contain a triangle. In other words, $T_2(n)$ is the unique `largest' triangle-free graph on $n$ vertices.
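Both claims about $T_2(n)$, the edge count and the absence of triangles, are easy to verify computationally for small $n$. This brute-force sketch (ours, purely illustrative) represents edges as sets of vertex pairs:

```python
from itertools import combinations

def turan_T2(n):
    """Edge set of T_2(n): split the vertices into parts of size
    floor(n/2) and ceil(n/2), and join every cross-pair."""
    A = range(n // 2)
    B = range(n // 2, n)
    return {frozenset((a, b)) for a in A for b in B}

def count_triangles(n, edges):
    """Count vertex triples whose three pairs are all edges."""
    return sum(1 for x, y, z in combinations(range(n), 3)
               if {frozenset((x, y)), frozenset((y, z)), frozenset((x, z))} <= edges)

n = 7
E = turan_T2(n)
# T_2(7) has floor(49/4) = 12 edges and contains no triangles.
```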
Following generalisations by P. Turán and H. Rademacher in the 1940s, Hungarian mathematician Paul Erdős thought about quantitatively extending Mantel's theorem in the 1950s. He asked the following: among all graphs with $n$ vertices and some number $e$ of edges, which one has the fewest triangles? Call this quantity $t(n,e)$. (One can also think about graphs with the most triangles, but this turns out to be less interesting).
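For tiny $n$, the quantity $t(n,e)$ can be computed by exhaustive search over every graph with $n$ vertices and $e$ edges. The helper below is our own illustration of the definition, not a practical method; the search space explodes far too quickly for it to scale:

```python
from itertools import combinations

def min_triangles(n, e):
    """Brute-force t(n, e): the minimum triangle count over all graphs
    with n vertices and e edges. Feasible only for very small n."""
    pairs = list(combinations(range(n), 2))
    best = None
    for edge_tuple in combinations(pairs, e):
        edges = set(edge_tuple)
        # Triples x < y < z: all three pairs are sorted, matching `pairs`.
        t = sum(1 for x, y, z in combinations(range(n), 3)
                if (x, y) in edges and (y, z) in edges and (x, z) in edges)
        best = t if best is None else min(best, t)
    return best
```

For example, on 5 vertices a triangle-free graph can have at most $\lfloor 25/4\rfloor = 6$ edges (Mantel), so $t(5,6) = 0$ while $t(5,7) > 0$.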
Astoundingly, this seemingly simple question has yet to be fully resolved, 60 years later. Still, in every intervening decade, progress has been made, by Erdős, Goodman, Moon-Moser, Nordhaus-Stewart, Bollobás, Lovász-Simonovits, Fisher and others. Finally, in 2008, Russian mathematician A. Razborov managed to solve the problem asymptotically (meaning to find an approximation $g(e/\binom{n}{2})$ to $t(n,e)$ which is arbitrarily accurate as $n$ gets larger). Razborov showed that, for large $n$, $g(e/\binom{n}{2})$ has a scalloped shape: it is concave between the special points $\frac{1}{2}\binom{n}{2}, \frac{2}{3}\binom{n}{2}, \frac{3}{4}\binom{n}{2}, \ldots$. His solution required him to develop the new method of flag algebras, part of the emerging area of graph limits, which has led to the solution of many longstanding problems in extremal combinatorics.
The remaining piece of the puzzle was to obtain an exact (rather than asymptotic) solution. In recent work with Hong Liu and Oleg Pikhurko at the University of Warwick, I addressed a conjecture of Lovász and Simonovits, the solution of which would answer Erdős's question in a strong form. The conjecture put forward a certain family of $n$-vertex, $e$-edge graphs which are extremal, in the sense that they should each contain the fewest triangles. So in general there is more than one such graph, one aspect which makes the problem hard. Building on ideas of Razborov and Pikhurko-Razborov, we were able to solve the conjecture whenever $e/\binom{n}{2}$ is bounded away from $1$; in other words, as long as $e$ is not too close to its maximum possible value $\binom{n}{2}$.
Our proof spans almost 100 pages and (in contrast to Razborov's analytic proof) is combinatorial in nature, involving a type of stability argument. It would be extremely interesting to close the gap left by our work and thus fully answer Erdős's question."
Revolving captions:
The graph $T_2(n)$, which is the unique largest triangle-free graph on $n$ vertices.
The minimum number of triangles $t(n,e)$ in an $n$-vertex, $e$-edge graph plotted against $e/\binom{n}{2}$. This was proved in the pioneering work of A. Razborov.
Making new graphs from old: an illustration of a step in the proof of the exact result by Liu-Pikhurko-Staden.

Thursday, 15 February 2018 
What does boiling water have in common with magnets and the horizon of black holes? They are all described by conformal field theories (CFTs)! We are used to physical systems that are invariant under translations and rotations. Imagine a system which is also invariant under scale transformations. Such a system is described by a conformal field theory. Remarkably, many physical systems admit such a description and conformal field theory is ubiquitous in our current theoretical understanding of nature.
Two-dimensional CFTs are very special: in this case the conformal group is actually infinite-dimensional! This has led to remarkable progress over the last four decades. In contrast, CFTs in higher dimensions are notoriously difficult to deal with. First of all, in this case symmetries are much less powerful. Furthermore, the standard textbook methods to compute observables in physical theories, called perturbative methods, only apply to theories with small coupling constants (giving an expansion in powers of these couplings), but generic CFTs do not have small parameters! For these reasons, progress in higher-dimensional CFTs proved to be much harder than in the two-dimensional case.
The breakthrough came in 2008 when it was shown how to attack higher dimensional CFTs with the conformal bootstrap program. The basic idea is to resort to symmetries and consistency conditions and ask the question: which values of the observables I am interested in would lead to a consistent CFT? The basic observables in a CFT are correlators of local operators $\langle {\cal O}(x_1) \cdots {\cal O}(x_n) \rangle$, and one of the consistency conditions is that these correlators have the right properties under permutation of the insertion points, for instance: $$\langle {\cal O}(x_1){\cal O}(x_2){\cal O}(x_3){\cal O}(x_4)\rangle =\langle {\cal O}(x_3){\cal O}(x_2){\cal O}(x_1){\cal O}(x_4)\rangle$$ Believe it or not, when combined with conformal symmetry these conditions are incredibly constraining! The conformal bootstrap proved to be very powerful, and its range of applicability is very vast: it basically applies to every CFT.
While the original proposal, and most of the work which followed it, was numeric in nature, my collaborators and I have been developing a method to obtain analytic results in CFTs, following the bootstrap philosophy. Although still in its infancy, this method has already produced remarkable results for vast families of CFTs, including both perturbative and non-perturbative theories. For perturbative theories it offers an alternative to standard diagrammatic methods, but without many of the problems associated with them. An important conformal theory that has been studied for many decades, relevant for boiling water and a generalisation of the Ising Model, is the Wilson-Fisher model. For this model, and for certain families of observables, the results from our methods have already surpassed the available results from diagrammatic techniques! It will be very exciting to see where all this leads.

Monday, 29 January 2018 
Prof. Ursula Martin and Dr Ian Griffiths have each been awarded an MPLS Impact Award for 2017-18. The MPLS (Mathematical, Physical, Engineering and Life Sciences Division at the University of Oxford) Impact Awards scheme aims to foster and raise awareness of impact by rewarding it at a local level.
Ursula's award is for Public Engagement in connection with the 2015 celebrations of the 200th anniversary of Ada Lovelace's birth. This included exhibits at many museums (including the National Museum of Computing, Bletchley Park, the Science Museum and the Computer History Museum in Silicon Valley) as well as an issue of a children's computing magazine developed in collaboration with QMUL (Queen Mary University of London) and distributed to UK schools to encourage programming.
Ian's award is for Non-Commercial Impact, and is in recognition of his work with researchers at IIT Kharagpur on the modelling and improvement of filters to remove arsenic from water supplies in India. This work is funded by GCRF (the UK Global Challenges Research Fund) and also supported by UNICEF, which is now installing community-scale filters in India. Although it falls outside the definition of the category, Ian is also working with three companies (Dyson, Gore and Pall Corporation) to improve their filters for various purposes.
These awards, which include a £1000 payment, will be presented at the MPLS Winter Reception on February 6th.
