News

Sunday, 27 January 2019

Music & Mathematics - Villiers Quartet in concert in the Mathematical Institute

We often need mathematics and science to understand our lives. But we also need the Arts. And especially music. In fact they often work best together.

The Villiers Quartet are Quartet in Residence at Oxford University and on February 8th we welcome them for the first time to the Andrew Wiles Building, home of Oxford Mathematics for an evening of Haydn, Beethoven and Mozart. 

Haydn - Quartet in G, Op. 77 No.1

Mozart - Quartet in G, K. 387

Beethoven - Quartet in C# minor, Op. 131

For more information and how to book click here.

Thursday, 17 January 2019

Multiple zeta values in deformation quantization

Oxford Mathematician Erik Panzer talks about his and colleagues' work on devising an algorithm to compute Kontsevich's star-product formula explicitly, solving a problem open for more than 20 years.

"The transition from classical mechanics to quantum mechanics is marked by the introduction of non-commutativity. For example, let us consider the case of a particle moving on the real line.

From commutative classical mechanics...

Classically, the state of the particle is described by its position $x$ and its momentum $p$. These coordinates parametrize the phase space, which is the cotangent space $M=T^* \mathbb{R} \cong \mathbb{R}^2$. One can view $x,p\colon M \rightarrow \mathbb{R}$ as smooth functions on the phase space, and the set $A=C^{\infty}(M)$ of all smooth functions on the phase space is an algebra with respect to the (commutative) multiplication of functions, e.g. $x\cdot p = p \cdot x$. The dynamics of the system is determined by a Hamiltonian $\mathcal{H} \in A$, which dictates the time evolution of a state according to \begin{equation*} x'(t) = \{x(t), \mathcal{H}\} \quad\text{and}\quad p'(t) = \{p(t), \mathcal{H}\}, \qquad\qquad(1) \end{equation*} where the Poisson bracket on the phase space is given by \begin{equation*} \{\cdot,\cdot\}\colon A\times A \longrightarrow A, \qquad \{f,g\} = \frac{\partial f}{\partial x} \frac{\partial g}{\partial p} - \frac{\partial f}{\partial p} \frac{\partial g}{\partial x}. \end{equation*}
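
As a small illustration of equation (1) (a sketch in Python with SymPy, not from the original article; the harmonic-oscillator Hamiltonian is just an assumed example), the Poisson bracket reproduces Hamilton's equations $x'=p$ and $p'=-x$:

```python
# Illustrative sketch: Hamilton's equations from the Poisson bracket, eq. (1).
# The Hamiltonian H = p**2/2 + x**2/2 (harmonic oscillator) is an assumed example.
import sympy as sp

x, p = sp.symbols('x p', real=True)

def poisson(f, g):
    """Canonical Poisson bracket {f, g} = df/dx dg/dp - df/dp dg/dx."""
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

H = p**2 / 2 + x**2 / 2

print(poisson(x, H))   # p   -> x'(t) = {x, H} = p
print(poisson(p, H))   # -x  -> p'(t) = {p, H} = -x
```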

...to non-commutative quantum mechanics.

In the quantum world, the state is described by a wave function $\psi$ that lives in a Hilbert space $L^2(\mathbb{R})$ of square-integrable functions on $\mathbb{R}$. Position and momentum are now described by operators $\hat{x},\hat{p}$ that act on this Hilbert space, namely \begin{equation*} \hat{x} \psi(x) = x \cdot \psi(x) \quad\text{and}\quad \hat{p} \psi(x) = -\mathrm{i}\hbar\frac{\partial}{\partial x} \psi(x) \end{equation*} where $\hbar$ is the (very small) reduced Planck constant. Note that these operators on the Hilbert space do not commute, $\hat{x}\hat{p} \neq \hat{p}\hat{x}$, and the precise commutator turns out to be \begin{equation*} [\hat{x},\hat{p}] = \hat{x}\hat{p}-\hat{p}\hat{x} = \mathrm{i}\hbar = \mathrm{i}\hbar \{x,p\}. \end{equation*} The observable quantity associated to an operator $\hat{f}$ is its expectation value $\langle {\psi}| \hat{f} |\psi \rangle$, which is common notation in physics for the scalar product of $\psi$ with $\hat{f}\psi$. The time evolution of such an observable is determined by the Hamiltonian operator $\hat{\mathcal{H}}$ in a way very similar to (1): \begin{equation*} \mathrm{i}\hbar \frac{\partial \langle \psi(t) | \hat{f}| \psi(t)\rangle}{\partial t} = \langle \psi(t)| \big[\hat{f},\hat{\mathcal{H}} \big] |\psi(t)\rangle. \qquad\qquad(2) \end{equation*}
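
The canonical commutation relation can also be checked symbolically. The following sketch (illustrative only, not from the article) applies $\hat{x}\hat{p}-\hat{p}\hat{x}$ to a generic wave function and recovers $\mathrm{i}\hbar\,\psi$:

```python
# Illustrative sketch: verify [x̂, p̂] ψ = i ħ ψ for the operators defined above.
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
psi = sp.Function('psi')(x)

x_hat = lambda f: x * f                          # (x̂ ψ)(x) = x ψ(x)
p_hat = lambda f: -sp.I * hbar * sp.diff(f, x)   # (p̂ ψ)(x) = -i ħ ψ'(x)

commutator = x_hat(p_hat(psi)) - p_hat(x_hat(psi))
print(sp.simplify(commutator))                   # I*hbar*psi(x)
```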

Deformation quantization...

Let us now generalise the example: For each classical quantity $f\in A=C^{\infty}(M)$, there should be a quantum analogue $\hat{f}$ that acts on some Hilbert space. We can think of this quantization $f\mapsto \hat{f}$ as an embedding of the commutative algebra $A$ into a non-commutative algebra of operators. Comparing (1) and (2), we see that quantization identifies the commutator $[\cdot,\cdot]$ with $i\hbar$ times the Poisson bracket $\{\cdot,\cdot\}$. The goal of deformation quantization is to describe this non-commutative structure algebraically on $A$, without reference to the Hilbert space. Namely, can we find a non-commutative product $f\star g$ of smooth functions such that \begin{equation*} \hat{f} \hat{g} = \widehat{f\star g} \quad\text{?} \end{equation*} Since the non-commutativity comes in only at order $\hbar$, it follows from the above that we should have \begin{equation*} f \star g = f\cdot g + \frac{\mathrm{i}\hbar}{2} \{f,g\} + \mathcal{O}(\hbar^2), \qquad\qquad(3)\end{equation*} i.e. the star-product is commutative to leading order (and therefore called a deformation of the commutative product), and the first order correction in $\hbar$ is determined by the Poisson structure.
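
Equation (3) can be tried out directly once the star-product is truncated after its first-order term. The sketch below (an illustration for the canonical bracket on $\mathbb{R}^2$ only, not the full Kontsevich formula) checks that the star-commutator equals $\mathrm{i}\hbar\{f,g\}$ at this order:

```python
# Illustrative sketch of eq. (3): the star-product truncated after the O(hbar) term,
# for the canonical Poisson bracket on the (x, p) phase plane. The higher-order
# graph corrections of the full Kontsevich formula are not included.
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True)

def poisson(f, g):
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

def star_first_order(f, g):
    """f*g + (i hbar / 2) {f, g}: eq. (3) with the O(hbar^2) tail dropped."""
    return f * g + sp.I * hbar / 2 * poisson(f, g)

f, g = x**2 * p, sp.sin(x)   # arbitrary smooth test functions
commutator = sp.expand(star_first_order(f, g) - star_first_order(g, f))
print(sp.simplify(commutator - sp.I * hbar * poisson(f, g)))   # 0 at this order
```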

...has a universal solution. 

Maxim Kontsevich proved in 1997 that such quantizations always exist: We can allow for an arbitrary smooth manifold $M$ with a Poisson structure - that means a Lie bracket $\{\cdot,\cdot\}$ on $A=C^{\infty}(M)$ which acts in both slots as a derivation (Leibniz rule).

Theorem (Kontsevich). Given an arbitrary Poisson manifold $(M,\{\cdot,\cdot\})$, there does indeed exist an associative product $\star$ on the algebra $A[[\hbar]]$ of formal power series in $\hbar$, such that equation (3) holds.

In fact, Kontsevich gives an explicit formula for this star-product in the form \begin{equation*} f \star g = f \cdot g + \sum_{n=1}^{\infty} \frac{(\mathrm{i}\hbar)^n}{n!} \sum_{\Gamma \in G_n} c(\Gamma) \cdot B_{\Gamma}(f,g), \end{equation*} where at each order $n$ in $\hbar$, the sum is over a finite set $G_n$ of certain directed graphs, like

and so on. The term $B_{\Gamma}(f,g)$ is a bidifferential operator acting on $f$ and $g$, which is defined in terms of the Poisson structure and can be read off directly from the graph. The remaining ingredients in the formula are the real numbers $c(\Gamma) \in \mathbb{R}$. These are universal constants, which means that they do not depend on the Poisson structure or on $f$ or $g$. Once these constants are known, the star-product for any given Poisson structure can be written down explicitly.

What are the weights?

These weights are defined as integrals over configurations of points in the upper half-plane $\mathbb{H}$. The simplest example is the unique graph in $G_1$, whose weight works out (exercise!) to $c(\Gamma) = \tfrac{1}{2}$.

This is precisely the factor $1/2$ in front of the Poisson bracket in (3). But for higher orders, these integrals become much more complicated, and until very recently it was not known how they could be computed. Due to this problem, the star product was explicitly known only up to order $\hbar^{\leq 3}$.

It was conjectured, however, that the weights should be expressible in terms of multiple zeta values, which are sums of the form
\begin{equation*}
\zeta(n_1,\ldots,n_d)
=
\sum_{1\leq k_1<\cdots<k_d} \frac{1}{k_1^{n_1} \cdots k_d^{n_d}} \in \mathbb{R},
\end{equation*}
indexed by integers $n_1,\ldots,n_d$. They generalize the Riemann zeta function $\zeta(n)$, and they play an important role in the theory of periods and motives. In particular, there is a (conjectural) Galois theory for multiple zeta values.
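
As a quick numerical sanity check of this definition (an illustrative sketch, not the authors' code), one can truncate the double sum; with the convention above, Euler's classical identity $\zeta(1,2)=\zeta(3)$ is visible in the first few digits:

```python
# Illustrative sketch: truncate the double sum defining zeta(n1, n2) and compare
# zeta(1, 2) with the Riemann zeta value zeta(3) = 1.2020569... (Euler's identity).
# The truncation converges slowly, so only a few digits agree.

def mzv_truncated(n1, n2, N):
    """Sum over 1 <= k1 < k2 <= N of 1 / (k1**n1 * k2**n2)."""
    total = 0.0
    inner = 0.0                      # running value of sum_{k1 < k2} 1/k1**n1
    for k2 in range(2, N + 1):
        inner += 1.0 / (k2 - 1) ** n1
        total += inner / k2 ** n2
    return total

print(mzv_truncated(1, 2, 10**6))    # ~1.2020, slowly approaching zeta(3)
print(1.2020569031595943)            # Apery's constant zeta(3), for comparison
```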

In recent work with Peter Banks and Brent Pym, we developed an algorithm that can evaluate the weight integrals $c(\Gamma)$ for arbitrary graphs $\Gamma$:

Theorem (Banks, Panzer, Pym). For an arbitrary graph $\Gamma\in G_n$, the number $c(\Gamma)$ is a rational linear combination of multiple zeta values $\zeta(n_1,\ldots,n_d)$, each normalized by $(2\mathrm{i}\pi)^{n_1+\cdots+n_d}$.

We calculated and tabulated the weights of all 50821 graphs that appear in $G_n$ for $n\leq 6$, which gives the star-product for all Poisson structures up to order $\hbar^{\leq 6}$. This allows us to study the quantizations in explicit examples and compare them to other quantization methods. Furthermore, our result shows that the Galois group of multiple zeta values acts on the space of star-products.

This research started off with a project by Peter Banks over the summer of 2016, when Brent Pym was still in Oxford. Since then we have developed our proof and implemented all steps in computer programs, starproducts (publicly available), which automate the computation of the star product and the Kontsevich weights. The underlying techniques make heavy use of the theory of multiple polylogarithms on the moduli space $\mathfrak{M}_{0,n}$ of marked genus 0 curves, a generalization of Stokes' theorem to manifolds with corners, and a theory of single-valued integration, influenced by work of Francis Brown (Oxford) and Oliver Schnetz (Erlangen-Nürnberg)."


Tuesday, 15 January 2019

Oxford Mathematics to LIVE STREAM an Undergraduate lecture

For the first time, on February 14th at 10am Oxford Mathematics will be LIVE STREAMING a 1st Year undergraduate lecture. In addition we will film (not live) a real tutorial based on that lecture.

After the huge success of making an undergraduate lecture widely available via social media last term, we know there is an appetite to better understand Oxford teaching. In turn we want to demystify what we do, showing that it is both familiar and distinctive.

The details:
LIVE Oxford Mathematics Student Lecture - James Sparks: 1st Year Undergraduate lecture on 'Dynamics', the mathematics of how things change with time
14th February, 10am-11am UK time

Watch live and ask questions of our mathematicians as you watch

https://www.facebook.com/OxfordMathematics
https://livestream.com/oxuni/undergraduate-lecture

For more information about the 'Dynamics' course: https://courses.maths.ox.ac.uk/node/37555

The lecture will remain available if you can't watch live.

Interviews with students:
We shall also be filming short interviews with the students as they leave the lecture, asking them to explain what happens next. These will be posted on our social media pages.

Watch a Tutorial:
The real tutorial based on the lecture (with a tutor and two students) will be filmed the following week and made available shortly afterwards
https://www.youtube.com/channel/UCLnGGRG__uGSPLBLzyhg8dQ

For more information and updates:
https://www.maths.ox.ac.uk
https://twitter.com/OxUniMaths
https://facebook.com/OxfordMathematics

Friday, 11 January 2019

Michael Atiyah 1929-2019

We are very sorry to hear of the death of Michael Atiyah. Michael was a giant of mathematics. He held many positions including Savilian Professor of Geometry here in Oxford, President of the Royal Society, Master of Trinity College, Cambridge, founding Director of the Isaac Newton Institute, and Chancellor of the University of Leicester. From 1997, he was an honorary professor at the University of Edinburgh. He was awarded the Fields Medal in 1966 and the Abel Prize in 2004. 

Michael's work spanned many fields. Together with Hirzebruch, he laid the foundations for topological K-theory, an important tool in algebraic topology which describes ways in which spaces can be twisted. The Atiyah–Singer index theorem, proved with Singer in 1963, not only vastly generalised classical results from the 19th century such as the Riemann-Roch and Gauss-Bonnet theorems, the 1930s work of his teacher Hodge on harmonic integrals, and Hirzebruch's own work, but also provided an entirely new bridge between analysis and topology, one which could give structure to identities in fields as far apart as number theory and group representations.

His more recent work was inspired by theoretical physics and coincided with the arrival of Roger Penrose in Oxford. The two exchanged ideas and realised how modern ideas in algebraic geometry formed the appropriate framework for Penrose’s approach to the equations of mathematical physics. This activity came to a head, following a visit of Singer in 1977, when a problem posed by the physicists on the Yang-Mills equations was solved by a mixture of Penrose’s techniques and some recent sophisticated pure mathematics in the theory of holomorphic vector bundles. As his ideas developed Michael, at the urging of Ed Witten, began to consider quantum field theory more seriously and ultimately he became one of the founders of what is loosely called “quantum mathematics”.

Michael gave his time generously in the promotion of his subject. In May 2018 he gave a very entertaining Public Lecture here in Oxford. His title? 'Numbers are serious but they are also fun.'

 

Tuesday, 8 January 2019

Functional calculus for operators

When mathematicians solve a differential equation, they are usually converting unbounded operators (such as differentiation) which are represented in the equation into bounded operators (such as integration) which represent the solutions.  It is rarely possible to give a solution explicitly, but general theory can often show whether a solution exists, whether it is unique, and what properties it has.  For this, one often needs to apply suitable (bounded) functions $f$ to unbounded operators $A$ and obtain bounded operators $f(A)$ with good properties.  This is the rationale behind the theory of (bounded) functional calculus of (unbounded) operators.   Applications include the ability to find the precise rate of decay of energy of damped waves and many systems of similar type.   
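
A finite-dimensional toy makes the idea concrete (an illustrative sketch only; the operators in this research are genuinely unbounded and act on infinite-dimensional spaces): applying a bounded function to a symmetric matrix through its spectral decomposition.

```python
# Toy spectral functional calculus for a real symmetric matrix A:
# f(A) = V diag(f(lambda)) V^T, where A = V diag(lambda) V^T.
import numpy as np

def apply_function(f, A):
    """Apply a scalar function f to a real symmetric matrix A."""
    lam, V = np.linalg.eigh(A)
    return V @ np.diag(f(lam)) @ V.T

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# The bounded function f(z) = 1/(1+z) sends A to the bounded operator (I + A)^{-1}.
fA = apply_function(lambda lam: 1.0 / (1.0 + lam), A)
print(np.allclose(fA, np.linalg.inv(np.eye(2) + A)))   # True
```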

Oxford Mathematician Charles Batty and collaborators have recently developed a bounded functional calculus which provides a unified and direct approach to various general results.  They extend the scope of functional calculus to more functions and provide improved estimates for some functions which have already been considered.  To apply the results, one only has to check whether a given function $f$ lies in the appropriate class of functions by checking a simple condition on the first derivative.

The calculus is a natural (and strict) extension of the classical Hille-Phillips functional calculus, and it is compatible with the other well-known functional calculi.   It satisfies the standard properties of functional calculi, provides a unified and direct approach to a number of norm-estimates in the literature, and allows improvements of some of them.

 

Thursday, 3 January 2019

The Framed Standard Model for particles with possible implications for dark matter

Oxford Mathematician Tsou Sheung Tsun talks about her work on building the Framed Standard Model and the exciting directions it has taken her.

"I have been working, in collaboration with José Bordes (Valencia) and Chan Hong-Mo (Rutherford-Appleton Laboratory), to build the Framed Standard Model (FSM) for some time now. The initial aim of the FSM is to give geometric meaning to (fermion) generations and the Higgs field(s). The surprise is that doing so has enabled one not only to reproduce some details of the standard model with fewer parameters but also to make testable new predictions, possibly even for dark matter. I find this really quite exciting.

It is well known that general relativity is solidly based on geometry. It would be nice if one can say the same for particle physics. The first steps are hopeful, since gauge theory has geometric significance as a fibre bundle over spacetime, and the gauge bosons are components of its connection.

The standard model (SM) of particle physics is a gauge theory based on the gauge group $SU(3) \times SU(2) \times U(1)$, ignoring discrete identifications for simplicity. The gauge bosons are the photon $\gamma$, $W^\pm, Z$ and the colour gluons. To these are added, however, the scalar Higgs fields, and the leptons and quarks, for which no geometric significance is usually sought nor given. Besides, the latter two have to come in three generations, as represented by the three rows of the table below.

In a gauge theory the field variables transform under the gauge group, and to describe their transformation, we need to specify a local (spacetime dependent) frame, with reference to a global (fixed) frame via a transformation matrix. We suggest therefore that it is natural to incorporate both the local and global symmetries, and to introduce the elements of the transformation matrix as dynamical variables, which we call framons. Consequently, the global $SU(3)$ symmetry can play the role of generations, and the $SU(2)$ framons can give rise to the Higgs field.

The FSM takes the basic structure of the SM, without supersymmetry, in four spacetime dimensions, adds to it a naturally occurring global symmetry, and uses 't Hooft's confinement picture instead of the usual symmetry breaking picture.

As a result of this suggestion, we find that many details of the SM can be explained. Indeed, already by one-loop renormalization, we are able to reproduce the masses of the quarks and leptons, and their mixing parameters including the neutrino oscillation angles, using 7 parameters, as compared to the 17 free parameters of the SM. It also gives an explanation of the strong CP problem without postulating the existence of axions.

In addition, the FSM has predictions which are testable in the near future. What is special about the FSM is the presence of the colour $SU(3)$ framons. They will have effects which make the FSM deviate from the SM. Let me mention one of these which is of immediate relevance to LHC experiments. The FSM has a slightly larger prediction for the mass of the $W$ boson than the SM (see figure). It would be very interesting to see if future improved measurements of the $W$ mass would agree better with FSM.

(Figure 1: Measurements of the W boson mass compared to the SM prediction (mauve) and the FSM predictions (green) at two different vacuum expectation values).

One other possible implication is that some of the bound states involving the $SU(3)$ framons might be suitable candidates for dark matter. Now dark matter is one of the astrophysical mysteries still largely unresolved. We know that constituents of dark matter, whatever they may be, have mass but hardly interact with our world of quarks and leptons. These FSM candidates have exactly these properties, but we need more study to see if they are valid candidates for dark matter. If they were, then it would be a very interesting implication for the FSM." 

For a fuller explanation of the work click here.

Wednesday, 12 December 2018

Constraining Nonequilibrium Physics

Statistical mechanics (or thermodynamics) is a way of understanding large systems of interacting objects, such as particles in fluids and gases, chemicals in solution, or people meandering through a crowded street. Large macroscopic systems require prohibitively large systems of equations, and so equilibrium thermodynamics gives us a way to average out all of these details and understand the typical behaviour of the large scale system. Typical quantities of liquid, for instance, contain more than $10^{23}$ molecules, and this is far too many equations for modern supercomputers, or even Oxford students, to handle. This theory of averaging, which has been refined and extended for over a century, gives us a way to take equations modelling individuals, and derive equations governing bulk behaviour which are fundamentally easier to analyze. But it is not applicable to so-called fluctuating or nonequilibrium systems, where a net flow of mass or energy keeps the system away from equilibrium. Such systems are increasingly of interest to modern day scientists and engineers. Essentially all biological systems, for instance, are kept away from thermodynamic equilibrium, which in biology is also known as death.

Unfortunately, while mathematicians and physicists have struggled for many years to develop a theory capable of understanding nonequilibrium systems in general, many fundamental problems remain, and there is no consensus in the community about what approaches to take. This has led to several non-equivalent formulations which can give different predictions of physical phenomena, and far away from thermodynamic equilibrium these theories become increasingly difficult to reconcile. Near to equilibrium, on the other hand, most of these theories become compatible with Linear Nonequilibrium Thermodynamics (LNET), which is attained in the limit of small (but nonzero) fluxes. Essentially this is just applying Taylor's Theorem from calculus: equilibrium thermodynamics is the first term in an expansion of most plausible theories of thermodynamics, and LNET is the first contribution of fluctuations to the system.

LNET can be characterised by an entropy production involving a matrix of "phenomenological coefficients" $L_{ij}$, which relates thermodynamic fluxes to thermodynamic forces and whose entries can often be determined experimentally. These coefficients give great insight into how to build a consistent macroscopic description of many nonequilibrium systems, such as the thermochemistry in lithium-ion batteries or the dynamics of protein folding. While LNET has proven useful in many such applications, it is still far from a complete theory, especially regarding the coupling of different physical processes. An important tool for understanding such coupled processes is the set of Onsager-Casimir Reciprocal Relations, which essentially state that the phenomenological matrix has a certain symmetry property, so that (for state variables which are not changed by time reversal) $L_{ij}=L_{ji}$. These relations are still hotly contested in the community, though they have proven to be very powerful and seemingly consistent with many physical systems. They are useful both for constraining plausible models and for making the determination of the coefficients easier, since one then only needs to find a subset of them experimentally in order to construct the entire matrix.

Contemporary nonequilibrium thermodynamics has become heavily invested in studying multiple coupled processes, and in this case the matrix $L_{ij}$ plays a central role in the theory. These coupled processes are of utmost interest as they bring insight into otherwise perhaps non-intuitive phenomena. For example, one chemical reaction can run against its natural direction (the direction of negative Gibbs energy) by using a positive source of entropy from another process, such as heat flow or another chemical reaction. These ideas are behind explanations of the classical experiment of Duncan and Toor, which lies outside the classical Fickian concept of diffusion but fits closely with the Maxwell-Stefan model (essentially the coupled analogue of Fick's diffusion law). Further illustrations include the important electrophoretic drag in PE fuel cell membranes, thermodiffusion coupling (Soret's effect), which was used in isotope separation in the Manhattan Project, and Seebeck's and Peltier's thermoelectric effects, which have found many applications. All of these phenomena are surprising in that they run against classical intuition, and cannot be explained by equilibrium thermodynamics.
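
The role of the phenomenological matrix can be made concrete with a schematic numerical sketch (the coefficients below are invented purely for illustration and are not taken from the paper): with a symmetric, positive semi-definite $L_{ij}$, one flux can run "uphill" against its own force while the total entropy production stays non-negative.

```python
# Schematic LNET illustration with made-up coefficients: fluxes J = L X,
# Onsager-Casimir symmetry L_ij = L_ji, entropy production sigma = J . X >= 0.
import numpy as np

L = np.array([[2.0, 0.6],    # hypothetical coupled processes (e.g. heat flow
              [0.6, 1.0]])   # and diffusion, with a Soret-like cross term)

assert np.allclose(L, L.T)                    # reciprocal relations
assert np.all(np.linalg.eigvalsh(L) >= 0)     # positive semi-definite

X = np.array([0.1, -0.8])    # hypothetical thermodynamic forces
J = L @ X                    # coupled fluxes: J = [-0.28, -0.74]
sigma = J @ X                # entropy production: 0.564

print(J, sigma)              # J[0] opposes X[0] ("uphill"), yet sigma >= 0
```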

Recently, Vaclav Klika from the Czech Technical University in Prague, and Oxford Mathematician Andrew Krause, have further contributed to our understanding of LNET by finding functional constraints that these coefficients must satisfy. The research, published in the Journal of Physical Chemistry Letters, shows that any dependence of these coefficients on state variables (such as temperature) must be shared among some or all of the phenomenological coefficients, and they cannot vary independently of one another. Additionally, while the Onsager-Casimir relations need certain assumptions to derive which are presently debated in the community, some version of these functional constraints must hold in general systems with conservative state variables. If the Onsager-Casimir relations are applicable to a system, then these functional constraints are even more powerful, showing that any dependence the phenomenological coefficients have on state variables must be the same for the whole matrix, and hence can be determined much more easily through experiments. More provocatively, these constraints suggest that many well-studied models in the literature have employed constitutive relations which are not thermodynamically consistent, and so would need to be revisited in light of these results.

While there is still a tremendous amount of work left to do in extending these tools more generally, this research has shown the power of mathematics in the physical and life sciences, and will hopefully prove a useful step toward developing a more complete understanding of nonequilibrium systems.

Thursday, 6 December 2018

Knots and almost concordance

Knots are isotopy classes of smooth embeddings of $S^1$ into $S^3$. Intuitively a knot can be thought of as an elastic closed curve in space that can be deformed without tearing. Oxford Mathematician Daniele Celoria explains.

"Knots are ubiquitous in the study of the topological and geometrical properties of manifolds with dimension $3$ and $4$. This is due to the fact that they can be used to prescribe the attachment instructions for the "building blocks" of these spaces, through a process known as surgery.

 

Figure 1. The connected sum of two knots. Strictly speaking this operation is defined only for oriented knots, but this distinction is irrelevant in the following.

 

There is a well-defined notion of addition for two knots, called the connected sum and denoted by $\#$, described in Figure 1. However, the resulting algebraic structure is quite uninteresting. Namely, the set of knots with the operation $\#$ is just an infinitely generated monoid.

If instead we consider a coarser equivalence relation on the set of embeddings $S^1 \hookrightarrow S^3$, we obtain a group called the smooth concordance group $\mathcal{C}$. Two knots are said to be concordant if there exists a smooth and properly embedded annulus $A \cong S^1 \times [0,1]$ in the product $S^3 \times [0,1]$ interpolating between the knots, as schematically shown in Figure 2.

Figure 2. A schematic picture for a properly embedded annulus connecting two knots in $S^3 \times [0,1]$.
 

 

Knots representing the identity in $\mathcal{C}$ are those bounding a properly embedded disc in the 4-ball. The inverse of the class represented by a knot $K$ is given by the class containing its mirror $-K$, which is just the reflection of $K$ (see Figure 3).

It is possible to define the equivalence relation of concordance for knots in a 3-manifold $Y$ other than the 3-sphere; we denote the resulting set by $\mathcal{C}_Y$. Note that $\mathcal{C}_Y$ cannot be a group, since connected sums do not preserve the ambient 3-manifold. It is easy to realise that $\mathcal{C}_Y$ splits along free homotopy classes of loops in $Y$.

Figure 3. A knot and its mirror. It can be obtained by taking any diagram of $K$, and switching all crossings.

 

There is a well-defined and splitting-preserving action of $\mathcal{C}$ (so concordance classes of knots in the 3-sphere) on $\mathcal{C}_Y$, induced by connected sum with a local (i.e. contained in a 3-ball) knot. An equivalence class of concordances up to this action is called an almost-concordance class.

So we can partition the set $\mathcal{K}(Y)$ of knots in a $3$-manifold $Y$ into (first) homology, free homotopy, almost-concordance and smooth concordance classes, as described in Figure 4.

Figure 4. Nested partitions of $\mathcal{K}(Y)$.

 

In my paper I defined an invariant of almost-concordance extracted from knot Floer homology, and proved that all 3-manifolds with non-abelian fundamental group and most lens spaces admit infinitely many non almost-concordant classes. Moreover each element in all of these classes represents the trivial first homology class in its ambient $3$-manifold. This result has been subsequently generalised to all 3-manifolds by Friedl-Nagel-Orson-Powell."

Tuesday, 4 December 2018

Oxford Mathematics Public Lectures on the Road - Solihull, 9th January with Marcus du Sautoy

Our Oxford Mathematics Public Lectures have been a huge success both in Oxford and London, and across the world through our live broadcasts. Speakers such as Roger Penrose, Stephen Hawking and Hannah Fry have shared the pleasures and challenges of their subject while not downplaying its most significant element, namely the maths. But this is maths for the curious. And all of us can be curious.

On the back of this success we now want to take the lectures farther afield. On 9th January our first Oxford Mathematics Midlands Public Lecture will take place at Solihull School. With topics ranging from prime numbers to the lottery, from lemmings to bending balls like Beckham, Professor Marcus du Sautoy will provide an entertaining and, perhaps, unexpected approach to explain how mathematics can be used to predict the future. 

Please email external-relations@maths.ox.ac.uk to register

Watch live:
https://facebook.com/OxfordMathematics
https://livestream.com/oxuni/du-Sautoy

We are very grateful to Solihull School for hosting this lecture.

The Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Thursday, 29 November 2018

Stochastic homogenization: Deterministic models of random environments

Homogenization theory aims to understand the properties of materials with complicated microstructures, such as those arising from flaws in a manufacturing process or from randomly deposited impurities. The goal is to identify an effective model that provides an accurate approximation of the original material. Oxford Mathematician Benjamin Fehrman discusses his research. 

"The practical considerations for identifying a simplified model are twofold:

(1) Approximation cost: Some bulk properties of materials, like the conductivity of a metallic composite, are strongly influenced by the material's composition at the microscale. This means that, in order to effectively simulate the behavior of such materials numerically, it is necessary to use prohibitively expensive approximation schemes.

(2) Randomness: Since the material's composition at the microscale is oftentimes the result of imperfections, it may be impossible to specify its small-scale structure exactly. It will be, at best, possible to obtain a statistical description of its flaws or impurities. That is, to our eyes, the material is effectively a random environment.

The simplest random environment is a periodic material, which is essentially deterministic, like a periodic composite of metals. In general, however, the randomness can be remarkably diverse, such as a composition of multiple materials distributed like a random tiling or as impurities distributed like a random point process.


The identification of the effective model is based on the intuition that, on small scales, the microscopic effects average out provided the random distribution of imperfections is (i) stationary and (ii) ergodic - assumptions which roughly guarantee that (i) imperfections are equally likely to occur at every point in the material and that (ii) each fixed realization of the material is representative of the random family as a whole. These assumptions are the minimal necessary to prove the existence of an effective model for the material, but they are by no means sufficient in general.

In terms of the conductance of a periodic composite of metals, homogenization asserts, visually speaking, that whenever the periodic scale is sufficiently small, the conductance of the black and white composite behaves as though the material consists of a single shade of grey.

The random environment is indexed by a probability space $(\Omega,\mathcal{F},\mathbb{P})$, where elements $\omega\in\Omega$ index the realizations of the environment. The microscopic scale of the flaws or impurities is quantified by $\epsilon\in(0,1)$. The properties of the random material are then characterized, for instance, by solutions to partial differential equations of the type:$$F\left(\nabla^2u^\epsilon, \nabla u^\epsilon, \frac{x}{\epsilon}, \omega\right)=0,$$

such as the linear elliptic equation in divergence form: $$-\nabla\cdot a\left(\frac{x}{\epsilon},\omega\right)\nabla u^\epsilon=0.$$

The aim of homogenization theory is to identify a deterministic, effective environment whose properties are described by equations of the type: $$\overline{F}\left(\nabla^2\overline{u}, \nabla\overline{u}\right)=0,$$

such that, for almost every $\omega\in\Omega$, as $\epsilon\rightarrow 0,$ $$u^\epsilon\rightarrow\overline{u}.$$

In terms of the example, this amounts to identifying a constant coefficient field $\overline{a}$ such that, for almost every $\omega\in\Omega$, as $\epsilon\rightarrow 0$, $$u^\epsilon\rightarrow \overline{u},$$

for the solution $\overline{u}$ of the equation: $$-\nabla\cdot\overline{a}\nabla\overline{u}=0.$$

Observe that these equations are homogeneous in the sense that they have no explicit dependence on the spatial variable.

The two fundamental objectives of the field are therefore the following:

(1) Identifying the effective environment: The identification of $\overline{F}$ generally involves a complicated nonlinear averaging even for linear equations. In particular, it is very much not the case that $\overline{F}$ is simply the expectation of the original equation.

(2) Quantifying the convergence: In terms of practical applications, it is important to quantify the convergence of the $\{u^\epsilon\}_{\epsilon\in(0,1)}$ to $\overline{u}$. This quantification will tell us for what scales $\epsilon\in(0,1)$ we can expect the effective model to be a good approximation for the original material."
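
A one-dimensional toy computation (an illustrative sketch, not part of this research) makes point (1) concrete: for $-\frac{d}{dx}\big(a(x/\epsilon)\,u_\epsilon'\big)=0$ on $(0,1)$ with $u_\epsilon(0)=0$ and $u_\epsilon(1)=1$, and a periodic two-phase coefficient, the effective coefficient is the harmonic mean of $a$ over a period rather than its arithmetic mean (its expectation).

```python
# 1D toy homogenization: the exact solution has constant flux
# a(x/eps) u_eps'(x) = 1 / \int_0^1 a(y/eps)^{-1} dy, which converges to the
# harmonic mean of the coefficient -- not the arithmetic mean.
import numpy as np

def coefficient(y, a1=1.0, a2=10.0):
    """Periodic two-phase coefficient: a1 on the first half-period, a2 on the second."""
    return np.where((y % 1.0) < 0.5, a1, a2)

eps = 1e-3
x = np.linspace(0.0, 1.0, 200_001)
dx = x[1] - x[0]

inv_integral = np.sum(1.0 / coefficient(x[:-1] / eps)) * dx   # int_0^1 1/a(x/eps) dx
flux = 1.0 / inv_integral

harmonic_mean = 1.0 / (0.5 / 1.0 + 0.5 / 10.0)   # 20/11, about 1.818
arithmetic_mean = 0.5 * (1.0 + 10.0)             # 5.5

print(flux, harmonic_mean, arithmetic_mean)       # flux matches the harmonic mean
```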

Click below for more on Ben's research:

'On the existence of an invariant measure for isotropic diffusions in random environment'
'On the exit time and stochastic homogenization of isotropic diffusions in large domains'
'A Liouville theorem for stationary and ergodic ensembles of parabolic systems'
 
