News

Thursday, 6 December 2018

Knots and almost concordance

Knots are isotopy classes of smooth embeddings of $S^1$ into $S^3$. Intuitively, a knot can be thought of as an elastic closed curve in space that can be deformed without tearing.

Knots are ubiquitous in the study of the topological and geometrical properties of manifolds with dimension $3$ and $4$. This is due to the fact that they can be used to prescribe the attachment instructions for the "building blocks" of these spaces, through a process known as surgery.

 

Figure 1. The connected sum of two knots. Strictly speaking this operation is defined only for oriented knots, but this distinction is irrelevant in the following.

 

There is a well-defined notion of addition for two knots, called the connected sum and denoted by $\#$, described in Figure 1. However, the resulting algebraic structure is quite uninteresting: the set of knots with the operation $\#$ is just an infinitely generated monoid.

If instead we consider a coarser equivalence relation on the set of embeddings $S^1 \hookrightarrow S^3$, we obtain a group called the smooth concordance group $\mathcal{C}$. Two knots are said to be concordant if there exists a smooth and properly embedded annulus $A \cong S^1 \times [0,1]$ in the product $S^3 \times [0,1]$ interpolating between the knots, as schematically shown in Figure 2.

Figure 2. A schematic picture for a properly embedded annulus connecting two knots in $S^3 \times [0,1]$.
 

 

Knots representing the identity in $\mathcal{C}$ are those bounding a properly embedded disc in the 4-ball. The inverse of the class represented by a knot $K$ is given by the class containing its mirror $-K$, which is just the reflection of $K$ (see Figure 3).

It is possible to define the equivalence relation of concordance for knots in a 3-manifold $Y$ other than the 3-sphere; we denote the resulting set by $\mathcal{C}_Y$. Note that $\mathcal{C}_Y$ cannot be a group, since connected sums do not preserve the ambient 3-manifold. It is easy to see that $\mathcal{C}_Y$ splits along free homotopy classes of loops in $Y$.

Figure 3. A knot $K$ and its mirror $-K$. The mirror can be obtained by taking any diagram of $K$ and switching all crossings.

 

There is a well-defined and splitting-preserving action of $\mathcal{C}$ (that is, of concordance classes of knots in the 3-sphere) on $\mathcal{C}_Y$, induced by connected sum with a local knot, i.e. one contained in a 3-ball. An equivalence class of concordance classes under this action is called an almost-concordance class.

So we can partition the set $\mathcal{K}(Y)$ of knots in a $3$-manifold $Y$ into (first) homology, free homotopy, almost-concordance and smooth concordance classes, as described in Figure 4.

Figure 4. Nested partitions of $\mathcal{K}(Y)$.

 

In my paper I defined an invariant of almost-concordance extracted from knot Floer homology, and proved that all 3-manifolds with non-abelian fundamental group, as well as most lens spaces, admit infinitely many distinct almost-concordance classes. Moreover, every element of each of these classes represents the trivial first homology class in its ambient $3$-manifold. This result has subsequently been generalised to all 3-manifolds by Friedl-Nagel-Orson-Powell.

Tuesday, 4 December 2018

Oxford Mathematics Public Lectures on the Road - Solihull, 9th January with Marcus du Sautoy

Our Oxford Mathematics Public Lectures have been a huge success both in Oxford and London, and across the world through our live broadcasts. Speakers such as Roger Penrose, Stephen Hawking and Hannah Fry have shared the pleasures and challenges of their subject while not downplaying its most significant element, namely the maths. But this is maths for the curious. And all of us can be curious.

On the back of this success we now want to take the lectures farther afield. On 9th January our first Oxford Mathematics Midlands Public Lecture will take place at Solihull School. With topics ranging from prime numbers to the lottery, from lemmings to bending balls like Beckham, Professor Marcus du Sautoy will provide an entertaining and, perhaps, unexpected approach to explain how mathematics can be used to predict the future. 

Please email external-relations@maths.ox.ac.uk to register

Watch live:
https://facebook.com/OxfordMathematics
https://livestream.com/oxuni/du-Sautoy

We are very grateful to Solihull School for hosting this lecture.

The Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Thursday, 29 November 2018

Stochastic homogenization: Deterministic models of random environments

Homogenization theory aims to understand the properties of materials with complicated microstructures, such as those arising from flaws in a manufacturing process or from randomly deposited impurities. The goal is to identify an effective model that provides an accurate approximation of the original material. Oxford Mathematician Benjamin Fehrman discusses his research. 

"The practical considerations for identifying a simplified model are twofold:

(1) Approximation cost: Some bulk properties of materials, like the conductivity of a metallic composite, are strongly influenced by the material's composition at the microscale. This means that, in order to effectively simulate the behavior of such materials numerically, it is necessary to use prohibitively expensive approximation schemes.

(2) Randomness: Since the material's composition at the microscale is oftentimes the result of imperfections, it may be impossible to specify its small-scale structure exactly. It will be, at best, possible to obtain a statistical description of its flaws or impurities. That is, to our eyes, the material is effectively a random environment.

The simplest random environment is a periodic material, which is essentially deterministic, like a periodic composite of metals. In general, however, the randomness can be remarkably diverse, such as a composition of multiple materials distributed like a random tiling or as impurities distributed like a random point process.


The identification of the effective model is based on the intuition that, on small scales, the microscopic effects average out provided the random distribution of imperfections is (i) stationary and (ii) ergodic - assumptions which roughly guarantee that (i) imperfections are equally likely to occur at every point in the material and that (ii) each fixed realization of the material is representative of the random family as a whole. These are the minimal assumptions necessary to prove the existence of an effective model for the material, but they are by no means sufficient in general.

Visually speaking, in terms of the conductance of a periodic composite of metals, homogenization asserts that, whenever the periodic scale is sufficiently small, the black and white composite conducts as though the material consisted of a single shade of grey.

The random environment is indexed by a probability space $(\Omega,\mathcal{F},\mathbb{P})$, where elements $\omega\in\Omega$ index the realizations of the environment. The microscopic scale of the flaws or impurities is quantified by $\epsilon\in(0,1)$. The properties of the random material are then characterized, for instance, by solutions to partial differential equations of the type:$$F\left(\nabla^2u^\epsilon, \nabla u^\epsilon, \frac{x}{\epsilon}, \omega\right)=0,$$

such as the linear elliptic equation in divergence form: $$-\nabla\cdot a\left(\frac{x}{\epsilon},\omega\right)\nabla u^\epsilon=0.$$

The aim of homogenization theory is to identify a deterministic, effective environment whose properties are described by equations of the type: $$\overline{F}\left(\nabla^2\overline{u}, \nabla\overline{u}\right)=0,$$

such that, for almost every $\omega\in\Omega$, as $\epsilon\rightarrow 0,$ $$u^\epsilon\rightarrow\overline{u}.$$

In terms of the example, this amounts to identifying a constant coefficient field $\overline{a}$ such that, for almost every $\omega\in\Omega$, as $\epsilon\rightarrow 0$, $$u^\epsilon\rightarrow \overline{u},$$

for the solution $\overline{u}$ of the equation: $$-\nabla\cdot\overline{a}\nabla\overline{u}=0.$$

Observe that these equations are homogeneous in the sense that they have no explicit dependence on the spatial variable.

The two fundamental objectives of the field are therefore the following:

(1) Identifying the effective environment: The identification of $\overline{F}$ generally involves a complicated nonlinear averaging even for linear equations. In particular, it is very much not the case that $\overline{F}$ is simply the expectation of the original equation.

(2) Quantifying the convergence: In terms of practical applications, it is important to quantify the convergence of the $\{u^\epsilon\}_{\epsilon\in(0,1)}$ to $\overline{u}$. This quantification will tell us for what scales $\epsilon\in(0,1)$ we can expect the effective model to be a good approximation for the original material."
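In one space dimension both points can be made concrete: for the divergence-form example above, the effective coefficient $\overline{a}$ is known to be the harmonic mean of $a$, not its expectation, and $\overline{u}$ solves the constant-coefficient problem. The following minimal sketch (an illustration with made-up parameters, not Ben's actual computations) solves the one-dimensional problem with a piecewise-constant random coefficient and checks the convergence $u^\epsilon \to \overline{u}$ as $\epsilon \to 0$:

```python
import numpy as np

# Minimal 1D illustration (illustrative parameters only):
# solve  -(a(x/eps, omega) u_eps')' = 0  on (0,1),  u_eps(0)=0, u_eps(1)=1,
# with a coefficient that is constant on cells of length eps and i.i.d.
# across cells (a stationary, ergodic environment).  In one dimension the
# exact solution is u_eps(x) = (int_0^x 1/a) / (int_0^1 1/a), the effective
# coefficient a_bar is the harmonic mean of a (not its expectation), and the
# effective solution is simply u_bar(x) = x.

rng = np.random.default_rng(0)

def u_eps(x, a_cells, eps):
    """Exact solution of the 1D problem for a cellwise-constant coefficient."""
    n_full = (x // eps).astype(int)            # number of complete cells below x
    partial = x - n_full * eps                 # remaining length in the current cell
    cum = np.concatenate(([0.0], np.cumsum(eps / a_cells)))
    idx = np.minimum(n_full, len(a_cells) - 1)
    integral = cum[n_full] + partial / a_cells[idx]
    return integral / cum[-1]

x = np.linspace(0.0, 1.0, 1001)
u_bar = x                                      # solution of the effective equation

for eps in [0.1, 0.01, 0.001]:
    n_cells = int(np.ceil(1.0 / eps))
    a_cells = rng.uniform(1.0, 10.0, size=n_cells)   # i.i.d. coefficient per cell
    err = np.max(np.abs(u_eps(x, a_cells, eps) - u_bar))
    a_harm = 1.0 / np.mean(1.0 / a_cells)      # effective coefficient (harmonic mean)
    a_mean = np.mean(a_cells)                  # naive average -- the wrong guess
    print(f"eps = {eps:6.3f}   max|u_eps - u_bar| = {err:.3e}   "
          f"a_harm = {a_harm:.2f}   a_mean = {a_mean:.2f}")
```

The printed errors shrink as $\epsilon$ decreases, while the gap between the harmonic and arithmetic means illustrates why $\overline{F}$ is not simply the expectation of the original equation.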

Click below for more on Ben's research:

'On the existence of an invariant measure for isotropic diffusions in random environment'
'On the exit time and stochastic homogenization of isotropic diffusions in large domains'
'A Liouville theorem for stationary and ergodic ensembles of parabolic systems'
 

Monday, 26 November 2018

Xenia de la Ossa awarded the Dean’s Distinguished Visiting Professorship by the Fields Institute in Toronto

Oxford Mathematician Xenia de la Ossa has been awarded the Dean’s Distinguished Visiting Professorship by the Fields Institute in Toronto and the Mathematics Department of Toronto University for the Fall of 2019.  Xenia will be associated with the thematic programme on Homological algebra of mirror symmetry.

Xenia's research interests are in Mathematical Physics, Geometry and Theoretical Physics, specifically in the mathematical structures arising in String Theory. 

Friday, 23 November 2018

The fluid mechanics of kidney stone removal

The discomfort experienced when a kidney stone passes through the ureter is often compared to the pain of childbirth. Severe pain can indicate that the stone is too large to dislodge naturally, and surgical intervention may be required. A ureteroscope is inserted into the ureter (passing first through the urethra and the bladder) in a procedure called ureteroscopy. Via a minuscule light and a camera on the scope tip, the patient’s ureter and kidney are viewed by a urologist. The field of view is obstructed by blood and stone particles, so a saline solution flows from a bag hanging above the patient through a long, thin channel (the working channel) that runs through the shaft of the ureteroscope. The fluid flows out of the scope tip, clearing the area in front of the camera, and exits the body by flowing in the opposite direction along the outside of the scope through an access sheath, a rigid tube that surrounds the scope. Stones are removed by auxiliary working tools of varying sizes, which are introduced through the working channel and provide an undesirable resistance to the flow.

The flow of saline solution is vital for successful ureteroscopy, and understanding and improving this process, known as irrigation, is the subject of this research, carried out by a team comprising Boston Scientific, a medical manufacturing company, Ben Turney, a urologist based at the Nuffield Department of Surgical Sciences in Oxford, and Oxford Mathematicians Sarah Waters, Derek Moulton, and Jessica Williams.

The team apply mathematical modelling techniques to ureteroscope irrigation, based on systematic reductions of the Navier-Stokes equations. Due to the resistance to flow created by working tools, there is a complex relationship between driving pressure, scope geometry, and flow rate properties. The objective has been to understand and exploit that relationship to increase flow for a given driving pressure drop. The team have shown that increased flow and decreased kidney pressures can be accomplished through the use of non-circular cross-sectional shapes for the working channel and the access sheath. These results have led to the filing of a joint patent with Boston Scientific. To complement the reduced analytical models, the team are performing numerical simulations to gain further insight into the flow patterns and resulting pressures within the kidney for a given operating set-up.
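As a rough indication of how driving pressure, channel geometry and flow rate interact, the classical Poiseuille-type formulas for laminar flow in circular and elliptical ducts can be evaluated directly. The sketch below uses made-up, illustrative parameter values and models an empty channel only; the benefit of non-circular shapes reported above arises from the interaction with the working tool, which is not captured here.

```python
import math

# Classical formulas for fully developed laminar (Poiseuille) flow through a
# straight duct of length L, driven by a pressure drop dp, for a fluid of
# viscosity mu.  All numbers below are made-up, illustrative values; this is
# an empty-channel baseline only and does not include the working tool.

def flow_rate_circle(dp, mu, L, R):
    """Hagen-Poiseuille flow rate through a circular channel of radius R."""
    return math.pi * dp * R**4 / (8.0 * mu * L)

def flow_rate_ellipse(dp, mu, L, a, b):
    """Flow rate through an elliptical channel with semi-axes a and b."""
    return math.pi * dp * a**3 * b**3 / (4.0 * mu * L * (a**2 + b**2))

mu = 1.0e-3        # Pa s, saline-like viscosity
L = 1.0            # m, length of the working channel
dp = 2.0e4         # Pa, applied pressure drop
R = 0.6e-3         # m, radius of a circular working channel
a = 0.8e-3         # m, semi-major axis of an elliptical channel...
b = R**2 / a       # ...with the same cross-sectional area (pi*a*b = pi*R^2)

Q_circle = flow_rate_circle(dp, mu, L, R)
Q_ellipse = flow_rate_ellipse(dp, mu, L, a, b)
print(f"circle : {Q_circle * 6e7:.1f} mL/min")   # m^3/s -> mL/min
print(f"ellipse: {Q_ellipse * 6e7:.1f} mL/min (same area, less flow when empty)")
```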

Due to the real-world application of this modelling, it is vital that the predictions are validated via experiments. The researchers have performed bench-top flow tests to confirm their analytical models, and particle imaging velocimetry (PIV) to compare against their numerical simulations for the flow within the kidney. This work in constructing a mathematical framework to describe ureteroscope irrigation has significant potential in quantifying irrigation flow and improving scope design.

 

Images:

Left: A diagram of the urinary system. The ureteroscope is inserted into the urethra, passing through the bladder, ureter, and into the kidney.

Right: An idealised ureteroscopy set-up. The bag of saline solution hangs at a height above the patient. The ureteroscope shaft, containing a working channel, is inserted into the patient. The fluid is driven through the working channel by the applied pressure drop, and returns back through an access sheath.

 

Left: The predicted flow rate through a working channel of circular cross-section containing a working tool of circular cross-section (shaded region). The upper black line corresponds to the working tool positioned at the edge of the channel, the lower line to the tool in the centre. This is compared with experimental data from bench-top experiments (red data points). The dashed and dotted lines are for working channels of elliptical cross-section with eccentricity values 0.53 and 0.71, with the working tool in the position that optimises the flow (at the edge of the channel).

Right: Streamlines for simulated flow exiting the working channel into the kidney and returning back through the access sheath. Computed using the open-source finite element library oomph-lib.

Thursday, 8 November 2018

Simulating polarized light

The Sun has been emitting light and illuminating the Earth for more than four billion years. By analyzing the properties of solar light we can infer a wealth of information about what happens on the Sun. A particularly fascinating (and often overlooked) property of light is its polarization state, which characterizes the orientation of the oscillation in a transverse wave. By measuring light polarization, we can gather precious information about the physical conditions of the solar atmosphere and the magnetic fields present therein. To infer this information, it is important to confront observations with numerical simulations. In this brief article we will focus on the latter.

The transfer of partially polarized light is described by the following linear system of first-order coupled inhomogeneous ordinary differential equations (ODEs) \begin{equation} \frac{\rm d}{{\rm d} s}\mathbf I(s) = -\mathbf K(s)\mathbf I(s) + \boldsymbol{\epsilon}(s)\,. \label{eq:RTE} \end{equation}

In this equation, the symbol $s$ is the spatial coordinate measured along the ray under consideration, $\mathbf{I}$ is the Stokes vector, $\mathbf{K}$ is the propagation matrix, and $\boldsymbol{\epsilon}$ is the emission vector.

The analytic solution of this system of ODEs is known only for very simple atmospheric models, and in practice it is necessary to solve the above equation by means of numerical methods.

Although the system of ODEs in the equation above is linear, which simplifies the analysis, the propagation matrix $\mathbf{K}$ depends on the spatial coordinate $s$, which implies that this system is nonautonomous. Additionally, it exhibits stiffness, which means that extra care must be taken when computing a numerical solution: numerical instabilities are just around the corner, and these have the potential to completely invalidate the computation.

In their work Oxford Mathematician Alberto Paganini and Gioele Janett from the solar research institute IRSOL in Locarno, Switzerland, have developed a new algorithm to solve the equation above. This algorithm is based on a switching mechanism that is capable of noticing when stiffness kicks in. This allows combining stable methods, which are computationally expensive and are used in the presence of stiffness, with explicit methods, which are computationally inexpensive but of no use when stiffness arises.
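To give a flavour of the idea, here is a toy sketch of a stiffness-switching integrator for a linear system of the same form. This is not the authors' algorithm: the test problem, the step size and the switching criterion are all made up for illustration.

```python
import numpy as np

# Toy stiffness-switching integrator for dI/ds = -K(s) I + eps(s).
# It monitors a crude stiffness indicator (spectral norm of K times the step
# size) and switches between a cheap explicit Euler step and a stable
# implicit (backward) Euler step.

def solve_switching(K, eps, s0, s1, I0, h, threshold=1.0):
    """Integrate dI/ds = -K(s) I + eps(s) from s0 to s1 with step h."""
    n = len(I0)
    s, I = s0, np.array(I0, dtype=float)
    path = [(s, I.copy(), "start")]
    Id = np.eye(n)
    while s < s1 - 1e-12:
        h_step = min(h, s1 - s)
        Ks, es = K(s), eps(s)
        stiffness = np.linalg.norm(Ks, 2) * h_step        # crude stiffness indicator
        if stiffness < threshold:
            # explicit (forward) Euler: cheap, fine when the problem is non-stiff
            I = I + h_step * (-Ks @ I + es)
            method = "explicit"
        else:
            # implicit (backward) Euler: solve (Id + h K) I_new = I + h eps
            Kn, en = K(s + h_step), eps(s + h_step)
            I = np.linalg.solve(Id + h_step * Kn, I + h_step * en)
            method = "implicit"
        s += h_step
        path.append((s, I.copy(), method))
    return path

# Example: a 2x2 system whose "absorption" spikes in a thin layer near s = 0.5,
# mimicking a localized region of stiffness along the ray.
K = lambda s: np.array([[1.0 + 500.0 * np.exp(-((s - 0.5) / 0.02) ** 2), 0.2],
                        [0.2, 1.0]])
eps = lambda s: np.array([1.0, 0.5])

for s, I, method in solve_switching(K, eps, 0.0, 1.0, [0.0, 0.0], h=0.01)[::20]:
    print(f"s = {s:4.2f}   I = {I.round(3)}   ({method})")
```

Away from the spike the explicit step is used; inside it the indicator exceeds the threshold and the stable implicit step takes over, exactly the kind of behaviour the switching mechanism is designed to exploit.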

The following plots display the evolution of the Stokes components along the vertical direction for the Fe I line at 6301.50 Å in the proximity of the line core frequency (the Stokes profiles have been computed considering a one-dimensional semi-empirical model of the solar atmosphere, discretized on a sequence of increasingly refined grids). The black line depicts the reference solution, while the dots denote the numerical solution obtained with the new algorithm. Different dot colors correspond to different methods: blue dots indicate the use of an explicit method, whereas yellow, orange, and purple dots indicate the use of three variants of stable methods (each triggered by a different degree of instability). These pictures (below) show that the algorithm is capable of switching and choosing the appropriate method whenever necessary, and of delivering good approximations of the solution of the equation above.

                                                             


                                                                 

This research has been published in The Astrophysical Journal, Vol 857, Number 2, p. 91 (2018).

Tuesday, 6 November 2018

Conformal Cyclic Cosmology. Roger Penrose and Hannah Fry - Oxford Mathematics London Public Lecture now online

He calls it a "crazy idea." Then again, he points out, so is the idea of inflation as a way of explaining the beginnings of our Universe.

In our Oxford Mathematics London Public Lecture at the Science Museum in London, Roger Penrose revealed his latest research. In both his talk and his subsequent conversation with fellow mathematician and broadcaster Hannah Fry, Roger speculated on a veritable chain reaction of universes, which he says has been backed by evidence of events that took place before the Big Bang. With Conformal Cyclic Cosmology he argues that, instead of a single Big Bang, the universe cycles from one aeon to the next. Each universe leaves subtle imprints on the next when it pops into being.  Energy can 'burst through' from one universe to the next, at what he calls ‘Hawking points.’

In addition to his latest research Roger also reflects on his own approach to his subject ("big-headedness") and his own time at school where he was actually dropped down a maths class. So we are not alone, universally or personally speaking.

The Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Photos courtesy of the Science Museum Group.

 

 

Tuesday, 6 November 2018

Improving techniques for optimising noisy functions

The problem of optimisation – that is, finding the maximum or minimum of an ‘objective’ function – is one of the most important problems in computational mathematics. Optimisation problems are ubiquitous: traders might optimise their portfolio to maximise (expected) revenue, engineers may optimise the design of a product to maximise efficiency, data scientists minimise the prediction error of machine learning models, and scientists may want to estimate parameters using experimental data. In real-world settings, uncertainties and errors are unavoidable, and this can cause stochastic noise to be present in the objective.

Most methods for optimisation rely on being able to evaluate both the objective and its derivatives.  Access to first derivatives is important for finding uphill or downhill directions, which tell us where to search next for optima, and when to terminate the method. However, when the objective has stochastic noise, it is no longer differentiable, and standard optimisation methods do not work. Instead, we must develop ‘derivative-free’ optimisation methods; that is, we have to answer the question “how do you get to the top of a hill when you don’t know which way is up?”. We achieve this by constructing models of the landscape based on sampling objective values – this approach is based on rigorous mathematical principles, and has provable guarantees of success. The figure above shows a noisy landscape, and the points tested by a derivative-free method searching for the true minimum (bottom centre, in green).
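A minimal sketch of the model-based idea (not the solver described below; the objective, the sampling stencil and the trust-region rules are made up for illustration) fits a linear model to sampled objective values and steps towards its minimiser within a shrinking trust region:

```python
import numpy as np

# Toy model-based derivative-free method on a noisy objective.
# At each iteration we sample the objective on a small stencil, fit a linear
# model by least squares (no derivatives are ever evaluated), step to the
# model minimiser on the trust-region boundary, and shrink the region
# whenever the step fails to make progress -- the usual sign that the model
# is dominated by noise.

rng = np.random.default_rng(1)

def noisy_objective(x, sigma=0.01):
    """A smooth bowl plus random noise; the true minimiser is (1, -2)."""
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + sigma * rng.standard_normal()

def dfo_minimise(f, x0, radius=0.5, radius_min=1e-3, max_iter=100):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        if radius <= radius_min:
            break
        # sample f at 2n points around x (a coordinate stencil of size radius)
        directions = np.vstack([np.eye(n), -np.eye(n)]) * radius
        samples = np.array([f(x + d) for d in directions])
        # least-squares fit of the linear model  m(d) = fx + g . d
        g, *_ = np.linalg.lstsq(directions, samples - fx, rcond=None)
        if np.linalg.norm(g) == 0.0:
            radius *= 0.5
            continue
        x_trial = x - radius * g / np.linalg.norm(g)   # model minimiser on the boundary
        f_trial = f(x_trial)
        if f_trial < fx:        # success: accept the step
            x, fx = x_trial, f_trial
        else:                   # failure: shrink the trust region
            radius *= 0.5
    return x, fx

x_best, f_best = dfo_minimise(noisy_objective, [4.0, 3.0])
print("estimated minimiser:", np.round(x_best, 2), "  objective value:", round(f_best, 3))
```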

Oxford Mathematicians Lindon Roberts and Coralia Cartis, together with Jan Fiala and Benjamin Marteau from Numerical Algorithms Group Ltd (NAG), a British scientific computing company, have developed a new derivative-free method for optimising noisy and expensive objectives. The method automatically detects when the information in the objective value is overwhelmed by noise, and kick-starts the method to bring more information into the models of the landscape. This approach requires fewer evaluations of the (possibly expensive) objective, runs faster and is more scalable, while producing solutions as good as those of other state-of-the-art methods. Their ideas are being commercialised by NAG and will soon be available in their widely-used software library. This technique is also being applied to parameter estimation for noisy climate simulations, to help scientists find optimal parameters that fit observational climate data, thus helping quantify the sensitivity of our climate to CO2 emissions.

This work is supported by the EPSRC Centre for Doctoral Training in Industrially-Focused Mathematical Modelling. 

Monday, 5 November 2018

Structure or randomness in metric diophantine approximation?

Diophantine approximation is about how well real numbers can be approximated by rationals. Say I give you a real number $\alpha$, and I ask you to approximate it by a rational number $a/q$, where $q$ is not too large. A naive strategy would be to first choose $q$ arbitrarily, and to then choose the nearest integer $a$ to $q \alpha$. This would give $| \alpha - a/q| \le 1/(2q)$, and $\pi \approx 3.14$. Dirichlet, introducing the pigeonhole principle, showed non-constructively that there are infinitely many solutions to $| \alpha - a/q| \le 1/q^2$, and one can use continued fractions to find such approximations, for instance $\pi \approx 22/7$. 
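Both strategies are easy to try numerically. The sketch below (a small illustration using the standard continued-fraction recurrence) computes the naive approximation for $q = 100$ and the first few convergents of $\pi$, checking Dirichlet's bound $|\alpha - a/q| \le 1/q^2$:

```python
from fractions import Fraction
import math

# naive_approx picks a denominator q and rounds q*alpha to the nearest
# integer, guaranteeing |alpha - a/q| <= 1/(2q); convergents computes the
# continued-fraction convergents, which satisfy |alpha - a/q| <= 1/q^2.

def naive_approx(alpha, q):
    return Fraction(round(q * alpha), q)

def convergents(alpha, n):
    """First n continued-fraction convergents of alpha."""
    out, x = [], alpha
    h0, k0, h1, k1 = 0, 1, 1, 0            # standard recurrence initialisation
    for _ in range(n):
        a = math.floor(x)
        h0, k0, h1, k1 = h1, k1, a * h1 + h0, a * k1 + k0
        out.append(Fraction(h1, k1))
        x = 1.0 / (x - a)
    return out

alpha = math.pi
print("naive, q = 100 :", naive_approx(alpha, 100))      # 157/50 = 3.14
for c in convergents(alpha, 4):                          # 3, 22/7, 333/106, 355/113
    err = abs(alpha - c)
    print(f"{c}   error = {err:.2e}   1/q^2 = {1 / c.denominator**2:.2e}")
```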

Metric diophantine approximation is about the typical rate of approximation. There are values of $\alpha$, such as the golden ratio, for which one can't do much better than Dirichlet's theorem. However, for all $\alpha$ away from a set of Lebesgue measure zero, one can beat it by a factor of $\log q$ and more. Khintchine's theorem is prototypical, asserting that if $\psi: \mathbb N \to [0, \infty)$ is decreasing then \[ \mathrm{meas} \{ \alpha \in [0,1]: \exists^\infty (q,a) \in \mathbb N \times \mathbb Z \quad | \alpha - a/q| < \psi(q)/q \} = \begin{cases} 1, & \text{if } \sum_{q=1}^\infty \psi(q) = \infty \\ 0,&\text{if } \sum_{q=1}^\infty \psi(q) < \infty. \end{cases} \] One can prove these sorts of results using the Borel-Cantelli lemmas, from probability theory: making a ball of radius $\psi(q)/q$ around each $a/q$, and grouping together the ones with the same $q$, the idea is to show that pairs of groups overlap more or less independently.

According to my mathematical upbringing, all phenomena are explained by the dichotomy between structure and randomness: either there is structure present, or else there is (pseudo)randomness. The probabilistic considerations above had initially led me to believe that randomness was the key to understanding metric diophantine approximation, but after working in the area for a while my opinion is closer to the opposite! The denominators of the good approximations to $\alpha$ lie in Bohr sets (after Harald Bohr, brother of the eminent physicist Niels Bohr) \[ B_N(\alpha, \delta) := \{ n \le N: \| n \alpha \| \le \delta \} \subset \mathbb N, \] where $\| \cdot \|$ denotes distance to the nearest integer. A central tenet of additive combinatorics is that Bohr sets look like generalised arithmetic progressions (GAPs).
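The structure of Bohr sets is easy to glimpse numerically. The following small sketch (parameters chosen arbitrarily for illustration) lists $B_N(\alpha, \delta)$ for the golden ratio and prints the gaps between consecutive elements; only a few distinct gap lengths occur, consistent with the generalised-arithmetic-progression picture described above.

```python
# A quick numerical look at a Bohr set B_N(alpha, delta); the parameters are
# chosen arbitrarily for illustration.
alpha = (1 + 5 ** 0.5) / 2                          # the golden ratio
N, delta = 200, 0.05
dist = lambda t: abs(t - round(t))                  # distance to the nearest integer
bohr = [n for n in range(1, N + 1) if dist(n * alpha) <= delta]
gaps = sorted({b - a for a, b in zip(bohr, bohr[1:])})
print("B_N(alpha, delta) =", bohr)
print("distinct gaps between consecutive elements:", gaps)
```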

I built the GAPs using continued fractions, enabling me to make progress towards the infamous Littlewood (c. 1930) and Duffin-Schaeffer (1941) conjectures. The former is about approximating two numbers at once in a multiplicative sense, that is to find approximations $a/q, b/q$ to $\alpha,\beta$ for which \[ \Bigl | \alpha - \frac a q \Bigr | \cdot \Bigl |\beta - \frac b q \Bigr| < \frac {10^{-100}}{q^3}, \] and the latter is about approximation by reduced fractions. With Niclas Technau, we have since developed a higher-dimensional structural theory using the geometry of numbers. Going forward, I hope to establish a Khintchine-type law for multiplicative approximation on planar curves.

Sam Chow, Oxford Mathematics
 

Monday, 29 October 2018

Nick Trefethen awarded honorary degrees by Fribourg and Stellenbosch Universities

Oxford Mathematician Professor Nick Trefethen, Professor of Numerical Analysis and Head of Oxford's Numerical Analysis Group, has been awarded honorary degrees by the University of Fribourg in Switzerland and Stellenbosch University in South Africa, where Nick was cited for his work in helping to cultivate a new generation of mathematical scientists on the African continent.

Nick's research spans a wide range within numerical analysis and applied mathematics, in particular the numerical solution of differential equations, fluid mechanics and numerical linear algebra. He is also the author of several very successful books which, as the Fribourg award acknowledges, have widened interest and nourished scientific discussion well beyond mathematics. 
