Friday, 11 January 2019 
We are very sorry to hear of the death of Michael Atiyah. Michael was a giant of mathematics. He held many positions, including Savilian Professor of Geometry here in Oxford, President of the Royal Society, Master of Trinity College, Cambridge, founding Director of the Isaac Newton Institute and Chancellor of the University of Leicester. From 1997 he was an honorary professor at the University of Edinburgh. He was awarded the Fields Medal in 1966 and the Abel Prize in 2004.
Michael's work spanned many fields. Together with Hirzebruch, he laid the foundations for topological K-theory, an important tool in algebraic topology which describes ways in which spaces can be twisted. The Atiyah–Singer index theorem, proved with Singer in 1963, vastly generalised classical results from the 19th century such as the Riemann–Roch theorem and the Gauss–Bonnet theorem, the work of his teacher Hodge in the 1930s on harmonic integrals, and Hirzebruch's own work. It also provided an entirely new bridge between analysis and topology, one which could act as a mechanism for giving structure to identities in fields as far apart as number theory and group representations.
His more recent work was inspired by theoretical physics and coincided with the arrival of Roger Penrose in Oxford. The two exchanged ideas and realised how modern ideas in algebraic geometry formed the appropriate framework for Penrose's approach to the equations of mathematical physics. This activity came to a head, following a visit of Singer in 1977, when a problem posed by the physicists on the Yang–Mills equations was solved by a mixture of Penrose's techniques and some recent sophisticated pure mathematics in the theory of holomorphic vector bundles. As his ideas developed, Michael, at the urging of Ed Witten, began to consider quantum field theory more seriously, and ultimately he became one of the founders of what is loosely called "quantum mathematics".
Michael gave his time generously in the promotion of his subject. In May 2018 he gave a very entertaining Public Lecture here in Oxford. His title? 'Numbers are serious but they are also fun.'

Tuesday, 8 January 2019 
When mathematicians solve a differential equation, they are usually converting unbounded operators (such as differentiation) which are represented in the equation into bounded operators (such as integration) which represent the solutions. It is rarely possible to give a solution explicitly, but general theory can often show whether a solution exists, whether it is unique, and what properties it has. For this, one often needs to apply suitable (bounded) functions $f$ to unbounded operators $A$ and obtain bounded operators $f(A)$ with good properties. This is the rationale behind the theory of (bounded) functional calculus of (unbounded) operators. Applications include the ability to find the precise rate of decay of energy of damped waves and many systems of similar type.
Oxford Mathematician Charles Batty and collaborators have recently developed a bounded functional calculus which provides a unified and direct approach to various general results. They extend the scope of functional calculus to more functions and provide improved estimates for some functions which have already been considered. To apply the results, one only has to check whether a given function $f$ lies in the appropriate class of functions by checking a simple condition on the first derivative.
The calculus is a natural (and strict) extension of the classical Hille–Phillips functional calculus, and it is compatible with the other well-known functional calculi. It satisfies the standard properties of functional calculi, provides a unified and direct approach to a number of norm estimates in the literature, and allows improvements of some of them.
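As a finite-dimensional illustration of the general idea (not of this particular calculus), one can apply a bounded function $f$ to a symmetric matrix via the spectral theorem; the matrix and the choice $f(z) = e^{-z}$ below are arbitrary examples:

```python
import numpy as np
from scipy.linalg import expm

def apply_function(f, A):
    # Spectral-theorem functional calculus for symmetric A: f(A) = V f(D) V^T
    w, V = np.linalg.eigh(A)
    return V @ np.diag(f(w)) @ V.T

# A toy positive definite "generator" standing in for an unbounded operator
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5.0 * np.eye(5)

# The bounded function f(z) = exp(-z) maps A to the semigroup operator e^{-A}
E = apply_function(lambda z: np.exp(-z), A)

print(np.allclose(E, expm(-A)))     # agrees with scipy's matrix exponential
print(np.linalg.norm(E, 2) < 1.0)   # e^{-A} is a contraction since A > 0
```

For genuinely unbounded operators (differentiation, say) the eigendecomposition is replaced by spectral theory on an infinite-dimensional space, which is where the estimates discussed above come in.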

Thursday, 3 January 2019 
Oxford Mathematician Tsou Sheung Tsun talks about her work on building the Framed Standard Model and the exciting directions it has taken her.
"I have been working, in collaboration with José Bordes (Valencia) and Chan Hong-Mo (Rutherford Appleton Laboratory), to build the Framed Standard Model (FSM) for some time now. The initial aim of the FSM is to give geometric meaning to (fermion) generations and the Higgs field(s). The surprise is that doing so has enabled one not only to reproduce some details of the standard model with fewer parameters but also to make testable new predictions, possibly even for dark matter. I find this really quite exciting.
It is well known that general relativity is solidly based on geometry. It would be nice if one could say the same for particle physics. The first steps are hopeful, since gauge theory has geometric significance as a fibre bundle over spacetime, and the gauge bosons are components of its connection.
The standard model (SM) of particle physics is a gauge theory based on the gauge group $SU(3) \times SU(2) \times U(1)$, ignoring discrete identifications for simplicity. The gauge bosons are the photon $\gamma$, $W^\pm, Z$ and the colour gluons. To these are added, however, the scalar Higgs fields, and the leptons and quarks, for which no geometric significance is usually sought or given. Besides, the latter two have to come in three generations, as represented by the three rows of the table below.
In a gauge theory the field variables transform under the gauge group, and to describe their transformation we need to specify a local (spacetime dependent) frame, with reference to a global (fixed) frame, via a transformation matrix. We suggest therefore that it is natural to incorporate both the local and global symmetries, and to introduce the elements of the transformation matrix as dynamical variables, which we call framons. Consequently, the global $SU(3)$ symmetry can play the role of generations, and the $SU(2)$ framons can give rise to the Higgs field.
The FSM takes the basic structure of the SM, without supersymmetry, in four spacetime dimensions, adds to it a naturally occurring global symmetry, and uses 't Hooft's confinement picture instead of the usual symmetry breaking picture.
As a result of this suggestion, we find that many details of the SM can be explained. Indeed, already at one-loop renormalization, we are able to reproduce the masses of the quarks and leptons, and their mixing parameters including the neutrino oscillation angles, using 7 parameters, as compared to the 17 free parameters of the SM. It also gives an explanation of the strong CP problem without postulating the existence of axions.
In addition, the FSM has predictions which are testable in the near future. What is special about the FSM is the presence of the colour $SU(3)$ framons. They will have effects which make the FSM deviate from the SM. Let me mention one of these which is of immediate relevance to LHC experiments. The FSM has a slightly larger prediction for the mass of the $W$ boson than the SM (see figure). It would be very interesting to see if future improved measurements of the $W$ mass agree better with the FSM.
(Figure 1: Measurements of the W boson mass compared to the SM prediction (mauve) and the FSM predictions (green) at two different vacuum expectation values).
One other possible implication is that some of the bound states involving the $SU(3)$ framons might be suitable candidates for dark matter. Now dark matter is one of the astrophysical mysteries still largely unresolved. We know that constituents of dark matter, whatever they may be, have mass but hardly interact with our world of quarks and leptons. These FSM candidates have exactly these properties, but we need more study to see if they are valid candidates for dark matter. If they were, then it would be a very interesting implication for the FSM."
For a fuller explanation of the work click here.

Wednesday, 12 December 2018 
Statistical mechanics (or thermodynamics) is a way of understanding large systems of interacting objects, such as particles in fluids and gases, chemicals in solution, or people meandering through a crowded street. Large macroscopic systems require prohibitively large systems of equations, and so equilibrium thermodynamics gives us a way to average out all of these details and understand the typical behaviour of the large scale system. Typical quantities of liquid, for instance, contain more than $10^{23}$ molecules, and this is far too many equations for modern supercomputers, or even Oxford students, to handle. This theory of averaging, which has been refined and extended for over a century, gives us a way to take equations modelling individuals, and derive equations governing bulk behaviour which are fundamentally easier to analyze. But it is not applicable to so-called fluctuating or nonequilibrium systems, where a net flow of mass or energy keeps the system away from equilibrium. Such systems are increasingly of interest to modern day scientists and engineers. Essentially all biological systems, for instance, are kept away from thermodynamic equilibrium, which in biology is also known as death.
Unfortunately, while mathematicians and physicists have struggled for many years to develop a theory capable of understanding nonequilibrium systems in general, many fundamental problems remain, and there is no consensus in the community about what approaches to take. This has led to several nonequivalent formulations which can give different predictions of physical phenomena, and far away from thermodynamic equilibrium these theories become increasingly difficult to reconcile. Near to equilibrium, on the other hand, most of these theories become compatible with Linear Nonequilibrium Thermodynamics (LNET), which is attained in the limit of small (but nonzero) fluxes. Essentially this is just applying Taylor's Theorem from calculus: equilibrium thermodynamics is the first term in an expansion of most plausible theories of thermodynamics, and LNET is the first contribution of fluctuations to the system.
LNET can be characterised by an entropy production involving a matrix of "phenomenological coefficients," $L_{ij}$, relating thermodynamic fluxes and forces, which can often be determined experimentally. These coefficients give great insight into how to build a consistent macroscopic description of many nonequilibrium systems, such as the thermochemistry in lithium-ion batteries, or the dynamics of protein folding. While LNET has proven useful in many such applications, it is still far from a complete theory, especially regarding the coupling of different physical processes. An important tool for understanding such coupled processes are the Onsager–Casimir reciprocal relations, which essentially state that the phenomenological matrix described above has to have a certain symmetry property, so that (for state variables which are not changed by time reversal), $L_{ij}=L_{ji}$. These relations are still hotly contested in the community, though they have proven to be very powerful and seemingly consistent with many physical systems. They are useful both in constraining plausible models and in making the determination of these coefficients easier (as one would only need to find a subset of them experimentally in order to construct the entire matrix).
Contemporary nonequilibrium thermodynamics has become heavily invested in studying multiple coupled processes, and in this case the matrix $L_{ij}$ plays a central role in the theory. These coupled processes are of utmost interest as they bring insight into otherwise perhaps non-intuitive phenomena. For example, one chemical reaction can run against its natural direction (negative Gibbs energy) by using a positive source of entropy from another process, such as heat flow or another chemical reaction. These ideas are behind explanations of the classical Duncan–Toor experiment, which falls outside the classical Fickian concept of diffusion but fits closely with the Maxwell–Stefan model (essentially the coupled analogue of Fick's diffusion law). Further illustrations include the important electrophoretic drag in PE fuel cell membranes, thermodiffusion coupling (Soret's effect), which was used for isotope separation in the Manhattan Project, and similarly Seebeck's and Peltier's thermoelectric effects, which have found many applications. All of these phenomena are surprising in that they run against classical intuition, and cannot be explained by equilibrium thermodynamics.
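A minimal numerical sketch of these flux-force relations, with made-up coefficients: Onsager–Casimir symmetry together with positive semidefiniteness of $L_{ij}$ guarantees non-negative entropy production, while off-diagonal coupling can drive one flux against its own conjugate force, as in the coupled-reaction example above.

```python
import numpy as np

# Hypothetical phenomenological matrix for two coupled processes
# (say, heat flow and diffusion); the numbers are illustrative only.
L = np.array([[2.0, 0.7],
              [0.7, 1.5]])          # Onsager-Casimir symmetry: L_ij = L_ji

assert np.allclose(L, L.T)                   # reciprocal relations hold
assert np.all(np.linalg.eigvalsh(L) >= 0.0)  # entropy production >= 0 for all X

X = np.array([0.3, -1.2])           # imposed thermodynamic forces
J = L @ X                           # resulting fluxes, J_i = sum_j L_ij X_j
sigma = X @ J                       # total entropy production, X . L X

# The first flux runs *against* its own force (J_1 X_1 < 0), paid for by
# the entropy produced by the second process; the total sigma stays positive.
print(J, sigma)
```

Symmetry cuts the number of independent coefficients from $n^2$ to $n(n+1)/2$, which is why the reciprocal relations make experimental determination of $L$ so much easier.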
Recently, Vaclav Klika from the Czech Technical University in Prague, and Oxford Mathematician Andrew Krause, have further contributed to our understanding of LNET by finding functional constraints that these coefficients must satisfy. The research, published in the Journal of Physical Chemistry Letters, shows that any dependence of these coefficients on state variables (such as temperature) must be shared among some or all of the phenomenological coefficients, and they cannot vary independently of one another. Additionally, while the Onsager–Casimir relations require certain assumptions to derive which are presently debated in the community, some version of these functional constraints must hold in general systems with conservative state variables. If the Onsager–Casimir relations are applicable to a system, then these functional constraints are even more powerful, showing that any dependence the phenomenological coefficients have on state variables must be the same for the whole matrix, and hence can be determined much more easily through experiments. More provocatively, these constraints suggest that many well-studied models in the literature have employed constitutive relations which are not thermodynamically consistent, and so would need to be revisited in light of these results.
While there is still a tremendous amount of work left to do in extending these tools more generally, this research has shown the power of mathematics in the physical and life sciences, and will hopefully prove a useful step toward developing a more complete understanding of nonequilibrium systems.

Thursday, 6 December 2018 
Knots are isotopy classes of smooth embeddings of $S^1$ into $S^3$. Intuitively, a knot can be thought of as an elastic closed curve in space that can be deformed without tearing. Oxford Mathematician Daniele Celoria explains.
"Knots are ubiquitous in the study of the topological and geometrical properties of manifolds with dimension $3$ and $4$. This is due to the fact that they can be used to prescribe the attachment instructions for the "building blocks" of these spaces, through a process known as surgery.
Figure 1. The connected sum of two knots. Strictly speaking this operation is defined only for oriented knots, but this distinction is irrelevant in the following.
There is a well-defined notion of addition for two knots, called the connected sum and denoted by $\#$, described in Figure 1. However, the resulting algebraic structure is quite uninteresting: the set of knots with the operation $\#$ is just an infinitely generated monoid.
If instead we consider a coarser equivalence relation on the set of embeddings $S^1 \hookrightarrow S^3$, we obtain a group called the smooth concordance group $\mathcal{C}$. Two knots are said to be concordant if there exists a smooth, properly embedded annulus $A \cong S^1 \times [0,1]$ in the product $S^3 \times [0,1]$ interpolating between the knots, as schematically shown in Figure 2.
Figure 2. A schematic picture for a properly embedded annulus connecting two knots in $S^3 \times [0,1]$.
Knots representing the identity in $\mathcal{C}$ are those bounding a properly embedded disc in the 4-ball. The inverse of the class represented by a knot $K$ is given by the class containing its mirror $\overline{K}$, which is just the reflection of $K$ (see Figure 3).
It is possible to define the equivalence relation of concordance for knots in a 3-manifold $Y$ other than the 3-sphere; we denote the resulting set by $\mathcal{C}_Y$. Note that $\mathcal{C}_Y$ cannot be a group, since connected sums do not preserve the ambient 3-manifold. It is easy to see that $\mathcal{C}_Y$ splits along free homotopy classes of loops in $Y$.
Figure 3. A knot and its mirror. The mirror can be obtained by taking any diagram of $K$ and switching all crossings.
There is a well-defined and splitting-preserving action of $\mathcal{C}$ (that is, of concordance classes of knots in the 3-sphere) on $\mathcal{C}_Y$, induced by connected sum with a local (i.e. contained in a 3-ball) knot. An equivalence class of concordance classes under this action is called an almost-concordance class.
So we can partition the set $\mathcal{K}(Y)$ of knots in a 3-manifold $Y$ into (first) homology, free homotopy, almost-concordance and smooth concordance classes, as described in Figure 4.
Figure 4. Nested partitions of $\mathcal{K}(Y)$.
In my paper I defined an invariant of almost-concordance extracted from knot Floer homology, and proved that all 3-manifolds with non-abelian fundamental group, and most lens spaces, admit infinitely many distinct almost-concordance classes. Moreover, each element of all of these classes represents the trivial first homology class in its ambient 3-manifold. This result has subsequently been generalised to all 3-manifolds by Friedl, Nagel, Orson and Powell."

Tuesday, 4 December 2018 
Our Oxford Mathematics Public Lectures have been a huge success both in Oxford and London, and across the world through our live broadcasts. Speakers such as Roger Penrose, Stephen Hawking and Hannah Fry have shared the pleasures and challenges of their subject while not downplaying its most significant element, namely the maths. But this is maths for the curious. And all of us can be curious.
On the back of this success we now want to take the lectures farther afield. On 9th January our first Oxford Mathematics Midlands Public Lecture will take place at Solihull School. With topics ranging from prime numbers to the lottery, from lemmings to bending balls like Beckham, Professor Marcus du Sautoy will provide an entertaining and, perhaps, unexpected approach to explain how mathematics can be used to predict the future.
Please email external-relations@maths.ox.ac.uk to register
Watch live:
https://facebook.com/OxfordMathematics
https://livestream.com/oxuni/duSautoy
We are very grateful to Solihull School for hosting this lecture.
The Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Thursday, 29 November 2018 
Homogenization theory aims to understand the properties of materials with complicated microstructures, such as those arising from flaws in a manufacturing process or from randomly deposited impurities. The goal is to identify an effective model that provides an accurate approximation of the original material. Oxford Mathematician Benjamin Fehrman discusses his research.
"The practical considerations for identifying a simplified model are twofold:
(1) Approximation cost: Some bulk properties of materials, like the conductivity of a metallic composite, are strongly influenced by the material's composition at the microscale. This means that, in order to effectively simulate the behavior of such materials numerically, it is necessary to use prohibitively expensive approximation schemes.
(2) Randomness: Since the material's composition at the microscale is oftentimes the result of imperfections, it may be impossible to specify its small-scale structure exactly. It will be, at best, possible to obtain a statistical description of its flaws or impurities. That is, to our eyes, the material is effectively a random environment.
The simplest random environment is a periodic material, which is essentially deterministic, like a periodic composite of metals. In general, however, the randomness can be remarkably diverse, such as a composition of multiple materials distributed like a random tiling or as impurities distributed like a random point process.
The identification of the effective model is based on the intuition that, on small scales, the microscopic effects average out, provided the random distribution of imperfections is (i) stationary and (ii) ergodic. These assumptions roughly guarantee that (i) imperfections are equally likely to occur at every point in the material, and that (ii) each fixed realization of the material is representative of the random family as a whole. They are the minimal assumptions necessary to prove the existence of an effective model for the material, but they are by no means sufficient in general.
In the case of the conductance of a periodic composite of metals, homogenization asserts that, whenever the periodic scale is sufficiently small, the conductance of the black and white composite behaves as though the material consisted of a single shade of grey.
The random environment is indexed by a probability space $(\Omega,\mathcal{F},\mathbb{P})$, where elements $\omega\in\Omega$ index the realizations of the environment. The microscopic scale of the flaws or impurities is quantified by $\epsilon\in(0,1)$. The properties of the random material are then characterized, for instance, by solutions to partial differential equations of the type:$$F\left(\nabla^2u^\epsilon, \nabla u^\epsilon, \frac{x}{\epsilon}, \omega\right)=0,$$
such as the linear elliptic equation in divergence form: $$\nabla\cdot a\left(\frac{x}{\epsilon},\omega\right)\nabla u^\epsilon=0.$$
The aim of homogenization theory is to identify a deterministic, effective environment whose properties are described by equations of the type: $$\overline{F}\left(\nabla^2\overline{u}, \nabla\overline{u}\right)=0,$$
such that, for almost every $\omega\in\Omega$, as $\epsilon\rightarrow 0,$ $$u^\epsilon\rightarrow\overline{u}.$$
In terms of the example, this amounts to identifying a constant coefficient field $\overline{a}$ such that, for almost every $\omega\in\Omega$, as $\epsilon\rightarrow 0$, $$u^\epsilon\rightarrow \overline{u},$$
for the solution $\overline{u}$ of the equation: $$\nabla\cdot\overline{a}\nabla\overline{u}=0.$$
Observe that these equations are homogeneous in the sense that they have no explicit dependence on the spatial variable.
The two fundamental objectives of the field are therefore the following:
(1) Identifying the effective environment: The identification of $\overline{F}$ generally involves a complicated nonlinear averaging even for linear equations. In particular, it is very much not the case that $\overline{F}$ is simply the expectation of the original equation.
(2) Quantifying the convergence: In terms of practical applications, it is important to quantify the convergence of the $\{u^\epsilon\}_{\epsilon\in(0,1)}$ to $\overline{u}$. This quantification will tell us for what scales $\epsilon\in(0,1)$ we can expect the effective model to be a good approximation for the original material."
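The one-dimensional version of the divergence-form example can be worked out by hand and makes a good sanity check: for $\nabla\cdot a(x/\epsilon)\nabla u^\epsilon = 0$ on $[0,1]$, the effective coefficient $\overline{a}$ is the harmonic mean of $a$ over one period, not the arithmetic mean, which already illustrates the nonlinear averaging of objective (1). A short sketch, with an arbitrary smooth periodic coefficient:

```python
import numpy as np

def a(y):
    # arbitrary 1-periodic coefficient, chosen purely for illustration
    return 2.0 + np.sin(2.0 * np.pi * y)

def u_eps(x, eps):
    # (a(x/eps) u')' = 0 with u(0)=0, u(1)=1 has the exact solution
    # u(x) = F(x)/F(1), where F(x) = int_0^x dy / a(y/eps); integrate by trapezoid
    inv = 1.0 / a(x / eps)
    h = x[1] - x[0]
    F = np.concatenate(([0.0], np.cumsum(h * (inv[1:] + inv[:-1]) / 2.0)))
    return F / F[-1]

# Effective coefficient: the harmonic mean of a over one period
y = (np.arange(100000) + 0.5) / 100000
a_harm = 1.0 / np.mean(1.0 / a(y))      # = sqrt(3) ~ 1.732 for this a
a_arith = np.mean(a(y))                 # = 2: the *wrong* effective value

# As eps -> 0, u^eps converges to the homogenized solution u(x) = x,
# i.e. the solution of (a_harm u')' = 0 with the same boundary values
x = np.linspace(0.0, 1.0, 200001)
err = np.max(np.abs(u_eps(x, eps=1e-3) - x))
print(a_harm, a_arith, err)
```

The error is of order $\epsilon$, a concrete instance of objective (2): quantifying how small the microscale must be before the effective model is trustworthy.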
Click below for more on Ben's research:
'On the existence of an invariant measure for isotropic diffusions in random environment'
'On the exit time and stochastic homogenization of isotropic diffusions in large domains'
'A Liouville theorem for stationary and ergodic ensembles of parabolic systems'

Monday, 26 November 2018 
Oxford Mathematician Xenia de la Ossa has been awarded the Dean’s Distinguished Visiting Professorship by the Fields Institute in Toronto and the Mathematics Department of Toronto University for the Fall of 2019. Xenia will be associated with the thematic programme on Homological algebra of mirror symmetry.
Xenia's research interests are in Mathematical Physics, Geometry and Theoretical Physics, specifically in the mathematical structures arising in String Theory.

Friday, 23 November 2018 
The discomfort experienced when a kidney stone passes through the ureter is often compared to the pain of childbirth. Severe pain can indicate that the stone is too large to dislodge naturally, and surgical intervention may be required. In a procedure called ureteroscopy, a ureteroscope is inserted into the ureter (passing first through the urethra and the bladder). Via a minuscule light and a camera on the scope tip, the urologist views the patient's ureter and kidney. The field of view is obstructed by blood and stone particles, so a saline solution flows from a bag hanging above the patient, through a long, thin channel (the working channel) that runs through the shaft of the ureteroscope. The fluid flows out of the scope tip, clearing the area in front of the camera, and exits the body by flowing in the opposite direction along the outside of the scope, through an access sheath: a rigid tube that surrounds the scope. Stones are removed with auxiliary working tools of varying sizes, which are introduced through the working channel and provide an undesirable resistance to the flow.
The flow of saline solution is vital for successful ureteroscopy, and understanding and improving this process, known as irrigation, is the subject of this research, carried out by a team comprising Boston Scientific, a medical manufacturing company, Ben Turney, a urologist based at the Nuffield Department of Surgical Sciences in Oxford, and Oxford Mathematicians Sarah Waters, Derek Moulton, and Jessica Williams.
The team apply mathematical modelling techniques to ureteroscope irrigation, based on systematic reductions of the Navier–Stokes equations. Due to the resistance to flow created by working tools, there is a complex relationship between driving pressure, scope geometry, and flow rate properties. The objective has been to understand and exploit that relationship to increase flow for a given driving pressure drop. The team have shown that increased flow and decreased kidney pressures can be accomplished through the use of non-circular cross-sectional shapes for the working channel and the access sheath. These results have led to the filing of a joint patent with Boston Scientific. To complement the reduced analytical models, the team are performing numerical simulations to gain further insight into the flow patterns and resulting pressures within the kidney for a given operating setup.
Due to the realworld application of this modelling, it is vital that the predictions are validated via experiments. The researchers have performed benchtop flow tests to confirm their analytical models, and particle imaging velocimetry (PIV) to compare against their numerical simulations for the flow within the kidney. This work in constructing a mathematical framework to describe ureteroscope irrigation has significant potential in quantifying irrigation flow and improving scope design.
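The flavour of such reduced models can be conveyed with classical pipe-flow formulas. This is a textbook sketch, not the team's model, and the dimensions below are invented for illustration, not taken from any real scope:

```python
import numpy as np

def q_circular(dp, L, mu, R):
    # Hagen-Poiseuille law: flow rate through a circular channel of radius R
    # and length L, driven by pressure drop dp in a fluid of viscosity mu
    return np.pi * dp * R**4 / (8.0 * mu * L)

def q_annulus(dp, L, mu, R, r):
    # Fully developed flow through a concentric annulus: a tool of radius r
    # centred in the channel. (The concentric position is the worst case;
    # moving the tool to the wall increases the flow, as in the figure.)
    return (np.pi * dp / (8.0 * mu * L)) * (
        R**4 - r**4 - (R**2 - r**2) ** 2 / np.log(R / r)
    )

# Illustrative numbers: 1.2 mm diameter channel, 0.6 mm diameter tool,
# 70 cm channel length, saline viscosity ~1e-3 Pa s, ~100 mmHg pressure drop
dp, L, mu = 100.0 * 133.3, 0.7, 1.0e-3
Q0 = q_circular(dp, L, mu, R=0.6e-3)
Q1 = q_annulus(dp, L, mu, R=0.6e-3, r=0.3e-3)
print(Q0 * 6e7, Q1 * 6e7)   # in ml/min: the tool sharply reduces the flow
```

Even this crude calculation shows why tool placement and cross-sectional shape matter: inserting a tool of half the channel radius cuts the flow by far more than the lost area alone would suggest.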
Images:
Left: A diagram of the urinary system. The ureteroscope is inserted into the urethra, passing through the bladder, ureter, and into the kidney.
Right: An idealised ureteroscopy setup. The bag of saline solution is at a height above the patient. The ureteroscope shaft, containing a working channel, is inserted into the patient. The fluid is driven through the working channel by the applied pressure drop, and returns back through an access sheath.
Left: The predicted flow rate through a working channel of circular cross-section containing a working tool of circular cross-section (shaded region). The upper black line is for the working tool at the edge of the channel, the lower line for the tool in the centre. This is compared with experimental data from benchtop experiments (red data points). The dashed and dotted lines are for working channels of elliptical cross-section with eccentricity values 0.53 and 0.71, with the working tool in the position that optimises the flow (at the edge of the channel).
Right: Streamlines for simulated flow exiting the working channel into the kidney and returning back through the access sheath. Computed using the open-source finite element library oomph-lib.

Thursday, 8 November 2018 
The Sun has been emitting light and illuminating the Earth for more than four billion years. By analyzing the properties of solar light we can infer a wealth of information about what happens on the Sun. A particularly fascinating (and often overlooked) property of light is its polarization state, which characterizes the orientation of the oscillation in a transverse wave. By measuring light polarization, we can gather precious information about the physical conditions of the solar atmosphere and the magnetic fields present therein. To infer this information, it is important to confront observations with numerical simulations. In this brief article we will focus on the latter.
The transfer of partially polarized light is described by the following linear system of firstorder coupled inhomogeneous ordinary differential equations (ODEs) \begin{equation} \frac{\rm d}{{\rm d} s}\mathbf I(s) = \mathbf K(s)\mathbf I(s) + \boldsymbol{\epsilon}(s)\,. \label{eq:RTE} \end{equation}
In this equation, the symbol $s$ is the spatial coordinate measured along the ray under consideration, $\mathbf{I}$ is the Stokes vector, $\mathbf{K}$ is the propagation matrix, and $\boldsymbol{\epsilon}$ is the emission vector.
The analytic solution of this system of ODEs is known only for very simple atmospheric models, and in practice it is necessary to solve the above equation by means of numerical methods.
Although the system of ODEs in the equation above is linear, which simplifies the analysis, the propagation matrix $\mathbf{K}$ depends on the spatial coordinate $s$, which implies that the system is nonautonomous. Additionally, it exhibits stiffness, which means that extra care must be taken when computing a numerical solution: numerical instabilities are just around the corner, and they have the potential to completely invalidate the computation.
In their work Oxford Mathematician Alberto Paganini and Gioele Janett from the solar research institute IRSOL in Locarno, Switzerland, have developed a new algorithm to solve the equation above. This algorithm is based on a switching mechanism that is capable of noticing when stiffness kicks in. This allows stable methods, which are computationally expensive and are used in the presence of stiffness, to be combined with explicit methods, which are computationally inexpensive but of no use when stiffness arises.
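A toy version of such a switching scheme (not the published algorithm) can be sketched for a generic linear system $\mathbf I'(s) = \mathbf K(s)\mathbf I(s) + \boldsymbol{\epsilon}(s)$: monitor $h\,\lVert\mathbf K\rVert$ and fall back from cheap forward Euler to A-stable backward Euler when it grows. The test problem and switching threshold below are invented for illustration:

```python
import numpy as np

def step_explicit(I, K, eps, h):
    # forward Euler: cheap, but unstable once h*||K|| gets large
    return I + h * (K @ I + eps)

def step_implicit(I, K, eps, h):
    # backward Euler: needs a linear solve per step, but stable when stiff
    return np.linalg.solve(np.eye(len(I)) - h * K, I + h * eps)

def integrate(K_of_s, eps_of_s, I0, s, threshold=1.0):
    """Switching integrator: explicit step while h*||K|| < threshold,
    stable implicit step otherwise."""
    I = np.asarray(I0, dtype=float)
    for s0, s1 in zip(s[:-1], s[1:]):
        h = s1 - s0
        K, e = K_of_s(s0), eps_of_s(s0)
        stiff = h * np.linalg.norm(K, 2) >= threshold
        I = step_implicit(I, K, e, h) if stiff else step_explicit(I, K, e, h)
    return I

# Scalar test problem I' = -k(s) I + 1 with a sharp stiff layer near s = 0.5
k = lambda s: 1.0 + 1.0e4 * np.exp(-((s - 0.5) / 0.05) ** 2)
K_of_s = lambda s: np.array([[-k(s)]])
eps_of_s = lambda s: np.array([1.0])

I_end = integrate(K_of_s, eps_of_s, [0.0], np.linspace(0.0, 1.0, 2001))
print(I_end)   # finite: forward Euler alone would blow up in the stiff layer
```

The payoff is the same as in the published method: the expensive linear solves are only triggered on the (typically few) steps where stiffness actually bites.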
The following plots display the evolution of the Stokes components along the vertical direction for the Fe I line at 6301.50 Å in the proximity of the line core frequency (the Stokes profiles have been computed considering a onedimensional semiempirical model of the solar atmosphere, discretized on a sequence of increasingly refined grids). The black line depicts the reference solution, while the dots denote the numerical solution obtained with the new algorithm. Different dot colors correspond to different methods: Blue dots indicate the use of an explicit method, whereas yellow, orange, and purple dots indicate the use of three variants of stable methods (each triggered by a different degree of instability). These pictures (below) show that the algorithm is capable of switching and choosing the appropriate method whenever necessary and of delivering good approximations of the equation above.
This research has been published in The Astrophysical Journal, Vol 857, Number 2, p. 91 (2018).
