Monday, 4 February 2019

Classification of geometric spaces in F-theory

Oxford Mathematician Yinan Wang talks about his and colleagues' work on classification of elliptic Calabi-Yau manifolds and geometric solutions of F-theory.

"In the past century, the unification of gravitational force and particle physics was the ultimate dream for many theoretical physicists. String theory is currently the most well-established example of such a grand unification theory. In the past decades, research in this field has produced many fruitful applications in quantum field theory, condensed matter physics, quantum information theory and pure mathematics.

Nonetheless, the string theory framework is complicated, since there are many different versions of string theory: type I, type IIA, type IIB, heterotic and M-theory. Moreover, string theory lives in a very high-dimensional spacetime: 10 or 11 dimensions, including the time direction. To get a description of our real-world four-dimensional physics, we need to put this higher-dimensional theory on a very small space (a procedure called "compactification''). There could be a zillion such geometric spaces, and their total number was completely unknown.

In our recent work, we studied the compactification of F-theory, which is a geometric description of IIB string theory. This framework unifies the M-theory solutions in many cases as well. In particular, the geometric spaces in this approach are elliptic Calabi-Yau manifolds. They can be thought of as having an additional torus over each point on a "base'' space.


Figure 1: A picture of elliptic Calabi-Yau manifold

We partially classified the sets of four-dimensional and six-dimensional bases. They equivalently have two or three complex dimensions if one describes them using complex numbers. In particular, we probed the huge connected network of complex 3D bases, which was estimated to contain more than $10^{3,000}$ nodes. The F-theory compactification on an elliptic Calabi-Yau manifold over such bases will give rise to different 4D physics. Interestingly, we found that the 4D physical model on a typical geometric space is quite different from our known particle physics. The gauge groups in the F-theory models are usually $SU(2)$, $F_4$, $G_2$ and $E_8$ in terms of the Lie algebra classification, while our real-world particle physics has $SU(3)\times SU(2)\times U(1)$ gauge group. Moreover, there are a number of mysterious "strongly coupled'' sectors in a typical F-theory model, without any known gauge theory description. There are many things to be explored about these strongly coupled sectors in the future, which will require novel quantum field theory and geometric techniques. Finally, we hope to figure out whether our particle physics standard model can be realized on such a typical F-theory construction."


Figure 2: A part of the network of complex 3D base geometries

For more on the probing of networks click here
For the work on strongly coupled sector of a typical F-theory model click here

Friday, 1 February 2019

Urban Geometry: Looking for shapes and patterns in an urban setting. Photography Exhibition 4-21 February

Looking for shapes and patterns isn't only a mathematical pursuit of course. Artists are also drawn to geometry. Our latest Oxford Mathematics photography exhibition is 'Urban Geometry' by Ania Ready & Magda Wolna. Ania and Magda describe their work: 

"Human eyes are naturally drawn to shapes and patterns, regardless of whether they look at modern buildings or vast landscapes. We decided to focus on the geometrical beauty of the urban environment. We explore various aspects of it, or to borrow from the shared photographic and geometrical vocabulary, various “angles” of it. We play with lines, focal points, repetitions and also with our Polish heritage."

Ania and Magda are Oxfordshire-based photographers and members of the Oxford Photographic Society. The exhibition runs from 4-21 February 2019.

Thursday, 31 January 2019

Reconstructing the number of edges from a partial deck

Oxford Mathematician Carla Groenland talks about her and Oxford colleagues' work on graph reconstruction.

A graph $G$ consists of a set of vertices $V(G)$ and a set of edges $E(G)$ which may connect two (distinct) vertices. (There are no self-loops or multiple edges.)

A very basic question about graphs is: Is a graph determined by its induced subgraphs? For a graph $G$ and a vertex $v$, a card of the graph is an induced subgraph $G-v$ obtained by removing the vertex $v$ and all adjacent edges. The deck of the graph is the collection of cards $\{G-v:v\in V(G)\}$, counted with multiplicity. An example of a deck of cards is given below.

Below the cards, we see how the cards can be obtained by removing vertices from a single graph, removing one at a time and removing each vertex exactly once. Can two non-isomorphic graphs have the same deck of cards? Yes, if the graphs have two vertices.

If we see a single vertex twice, then we know the original graph had two vertices but there is no way for us to know whether there is an edge between them.
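The card and deck operations are easy to experiment with in code. Below is a minimal sketch (plain Python, brute-force isomorphism checking, so only sensible for tiny graphs; all names are my own) confirming the two-vertex counterexample just described:

```python
from collections import Counter
from itertools import permutations

def card(n, edges, v):
    """Induced subgraph G - v, relabelled to vertex set 0..n-2."""
    keep = [u for u in range(n) if u != v]
    relabel = {u: i for i, u in enumerate(keep)}
    new_edges = {
        frozenset({relabel[a], relabel[b]})
        for a, b in map(tuple, edges) if v not in (a, b)
    }
    return n - 1, new_edges

def canonical(n, edges):
    """Smallest labelled copy over all relabellings (fine for tiny graphs)."""
    return min(
        tuple(sorted(tuple(sorted((p[a], p[b]))) for a, b in map(tuple, edges)))
        for p in permutations(range(n))
    )

def deck(n, edges):
    """The deck: multiset of cards, each taken up to isomorphism."""
    return Counter(canonical(*card(n, edges, v)) for v in range(n))

# Two non-isomorphic graphs on two vertices with the same deck:
g1 = (2, {frozenset({0, 1})})   # a single edge
g2 = (2, set())                 # two isolated vertices
print(deck(*g1) == deck(*g2))   # True: both decks are two 1-vertex cards
```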

More than 60 years ago, Kelly and Ulam made the beautiful conjecture that if two graphs on at least three vertices have the same deck, they must be isomorphic. In 1977, Bondy and Hemminger wrote the following about this conjecture:

"The Reconstruction Conjecture is generally regarded as one of the foremost unsolved problems in graph theory. Indeed, Harary (1969) has even classified it as a "graphical disease'' because of its contagious nature. According to reliable sources, it was discovered in Wisconsin in 1941 by Kelly and Ulam, and claimed its first victim (P. J. Kelly) in 1942. There are now more than sixty recorded cases, and relapses occur frequently (this article being a case in point)."

There are many subtleties in the problem which might not be apparent at first sight; for example, if two vertices $u$ and $v$ have the same card (i.e. $G-u\cong G-v$) then they are not necessarily "similar'' in the graph, that is, there does not necessarily exist a graph automorphism mapping $u$ to $v$.

Rather than reconstructing the entire graph, can we at least read off some information about the graph? If a graph has vertices $v_1,\dots,v_n$ then \[ \sum_{i=1}^n |E(G-v_i)|=(n-2)|E(G)| \] since each edge appears in all cards except for the two corresponding to vertices it is adjacent to. So we can read off the number of edges of the graph (the size) if we are given a full deck of cards. What if we only have access to a subset of the cards? In Size reconstructibility of graphs, Alex Scott, Hannah Guggiari and I describe a way to reconstruct the size of the graph if at most $\frac1{20}\sqrt{n}$ cards are missing from the deck (for $n$ large). The best previous result in this direction was the case in which two cards are missing.
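The counting identity above is easy to check numerically. A small sketch (plain Python on a seeded random graph; the function name and parameters are mine):

```python
import random

def edge_counts_identity(n, p=0.5, seed=0):
    """Compare sum_v |E(G - v)| with (n - 2)|E(G)| on a random graph G."""
    rng = random.Random(seed)
    edges = [(a, b) for a in range(n) for b in range(a + 1, n)
             if rng.random() < p]
    total = sum(
        sum(1 for a, b in edges if v not in (a, b))   # |E(G - v)|
        for v in range(n)
    )
    return total, (n - 2) * len(edges)

lhs, rhs = edge_counts_identity(10)
print(lhs == rhs)   # True: each edge survives on exactly n - 2 cards
```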

Our proof works as follows:

* We first note that if we have been given the cards $G-v_1,\dots,G-v_{n-k}$, we can still compute \[ \widetilde{|E(G)|}=\frac{\sum_{i=1}^{n-k} |E(G-v_i)|}{n-2-k}\approx|E(G)| \] which is an (over)estimate on the number of edges.

* The degree $d(v)$ of a vertex is the number of edges adjacent to it. Note that on the card $G-v$, exactly the edges adjacent to $v$ are missing. Hence $|E(G)|-|E(G-v)|=d(v)$. We will try to approximate the degree sequence $(d_t)$, where $d_t$ gives the number of vertices of degree $t$, in two different ways.

* Firstly, using our estimate on the number of edges, we can still approximate the degree of the vertices of the cards that have been given: \[ \widetilde{d(v)}=\widetilde{|E(G)|}-|E(G-v)|.\] Since we overestimate the number of edges, we also overestimate $d(v)$ for every vertex, but always by the same amount $\alpha = \widetilde{|E(G)|}-|E(G)|.$ This means our approximated degree sequence $(\widetilde{d}_t)$ is the actual degree sequence $(d_t),$ but shifted to the right, and moreover some values have been underestimated (since some of the cards are missing). If we could recover the shift $\alpha,$ we could find $|E(G)|$ since we know $\widetilde{|E(G)|}.$ 

* Secondly, we discover many small values $d_t$ exactly. We use these to either reconstruct the entire degree sequence (then we can read off the number of edges from this) or discover some large values. Suppose that we find a segment as in the picture below: one large value, and to one side of it a bunch of known values, many of which are small. We "match up'' the known values to various shifts of $(\widetilde{d}_t).$ For the correct shift, the error will be small, whereas for any other shift the error is lower bounded by the difference between large and small in the figure below. This allows us to recover $\alpha$ and then the number of edges.
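The first and third steps above can be illustrated numerically: the estimate never undershoots, and every computed degree is shifted by the same amount $\alpha$. A sketch (plain Python on a seeded random graph; all names are mine):

```python
import random

rng = random.Random(1)
n, k = 12, 2                                    # n vertices, k missing cards
edges = [(a, b) for a in range(n) for b in range(a + 1, n)
         if rng.random() < 0.4]
deg = [sum(v in e for e in edges) for v in range(n)]
card_sizes = [len(edges) - d for d in deg]      # |E(G - v)| = |E(G)| - d(v)

given = card_sizes[:n - k]                      # pretend the last k cards are lost
est = sum(given) / (n - 2 - k)                  # the estimate from the first step
alpha = est - len(edges)                        # the (a priori unknown) shift

assert est >= len(edges)                        # never an underestimate
# each surviving card's estimated degree is the true degree shifted by alpha
assert all(abs((est - card_sizes[v]) - (deg[v] + alpha)) < 1e-9
           for v in range(n - k))
```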

Some interesting open problems include:

* The original graph reconstruction conjecture, for which the edge variant is also unknown. (The directed graph, infinite graph and hypergraph analogues are false.) This is also unknown for "nice'' graph classes such as bipartite graphs or planar graphs.

* Improving our result beyond $\sqrt{n}$ or extending it to recovering different information about the graph, such as the degree sequence or the number of triangles.

Thursday, 31 January 2019

Jon Keating appointed to the Sedleian Professorship of Natural Philosophy

Oxford Mathematics is delighted to announce that Prof. Jon Keating FRS, the Henry Overton Wills Professor of Mathematics in Bristol, and Chair of the Heilbronn Institute for Mathematical Research, has been appointed to the Sedleian Professorship of Natural Philosophy in the University of Oxford.

Jon has wide-ranging interests but is best known for his research in random matrix theory and its applications to quantum chaos, number theory and the Riemann zeta function.  In November, he will be the next President of the LMS.

The Sedleian is regarded as the oldest of Oxford's scientific chairs and holders are simultaneously elected to fellowships at Queen's College, Oxford. Recent holders have included Brooke Benjamin (1979-1995) who did highly influential work in the areas of mathematical analysis and fluid mechanics and most recently Sir John Ball (1996-2018), who is distinguished for his work in the mathematical theory of elasticity, materials science, the calculus of variations, and infinite-dimensional dynamical systems.

Sunday, 27 January 2019

Music & Mathematics - Villiers Quartet in concert in the Mathematical Institute

We often need mathematics and science to understand our lives. But we also need the Arts. And especially music. In fact they often work best together.

The Villiers Quartet are Quartet in Residence at Oxford University and on February 8th we welcome them for the first time to the Andrew Wiles Building, home of Oxford Mathematics for an evening of Haydn, Beethoven and Mozart. 

Haydn - Quartet in G, Op. 77 No.1

Mozart - Quartet in G, K. 387

Beethoven - Quartet in C# minor, Op. 131

For more information and how to book click here.

Thursday, 17 January 2019

Multiple zeta values in deformation quantization

Oxford Mathematician Erik Panzer talks about his and colleagues' work on devising an algorithm to compute Kontsevich's star-product formula explicitly, solving a problem open for more than 20 years.

"The transition from classical mechanics to quantum mechanics is marked by the introduction of non-commutativity. For example, let us consider the case of a particle moving on the real line.

From commutative classical mechanics...

Classically, the state of the particle is described by its position $x$ and its momentum $p$. These coordinates parametrize the phase space, which is the cotangent space $M=T^* \mathbb{R} \cong \mathbb{R}^2$. One can view $x,p\colon M \rightarrow \mathbb{R}$ as smooth functions on the phase space, and the set $A=C^{\infty}(M)$ of all smooth functions on the phase space is an algebra with respect to the (commutative) multiplication of functions, e.g. $x\cdot p = p \cdot x$. The dynamics of the system is determined by a Hamiltonian $\mathcal{H} \in A$, which dictates the time evolution of a state according to \begin{equation*} x'(t) = \{x(t), \mathcal{H}\} \quad\text{and}\quad p'(t) = \{p(t), \mathcal{H}\}, \qquad\qquad(1) \end{equation*} where the Poisson bracket on the phase space is given by \begin{equation*} \{\cdot,\cdot\}\colon A\times A \longrightarrow A, \qquad \{f,g\} = \frac{\partial f}{\partial x} \frac{\partial g}{\partial p} - \frac{\partial f}{\partial p} \frac{\partial g}{\partial x}. \end{equation*}

...to non-commutative quantum mechanics.

In the quantum world, the state is described by a wave function $\psi$ that lives in a Hilbert space $L^2(\mathbb{R})$ of square-integrable functions on $\mathbb{R}$. Position and momentum now are described by operators $\hat{x},\hat{p}$ that act on this Hilbert space, namely \begin{equation*} \hat{x} \psi(x) = x \cdot \psi(x) \quad\text{and}\quad \hat{p} \psi(x) = -\mathrm{i}\hbar\frac{\partial}{\partial x} \psi(x) \end{equation*} where $\hbar$ is the (very small) reduced Planck constant. Note that these operators on the Hilbert space do not commute, $\hat{x}\hat{p} \neq \hat{p}\hat{x}$, and the precise commutator turns out to be \begin{equation*} [\hat{x},\hat{p}] = \hat{x}\hat{p}-\hat{p}\hat{x} = \mathrm{i}\hbar = \mathrm{i}\hbar \{x,p\}. \end{equation*} The observable quantity associated to an operator $\hat{f}$ is its expectation value $\langle {\psi}| \hat{f} |\psi \rangle$, which is common notation in physics for the scalar product of $\psi$ with $\hat{f}\psi$. The time evolution of such an observable is determined by the Hamiltonian operator $\hat{\mathcal{H}}$ in a way very similar to (1): \begin{equation*} \mathrm{i}\hbar \frac{\partial \langle \psi(t) | \hat{f}| \psi(t)\rangle}{\partial t} = \langle \psi(t)| \big[\hat{f},\hat{\mathcal{H}} \big] |\psi(t)\rangle. \qquad\qquad(2) \end{equation*}
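The canonical commutation relation can be checked symbolically. A minimal sketch using sympy (an editorial illustration, not part of the original article), with $\hat{x}$ and $\hat{p}$ as operators on functions of $x$:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
psi = sp.Function('psi')(x)

x_hat = lambda f: x * f                          # position operator
p_hat = lambda f: -sp.I * hbar * sp.diff(f, x)   # momentum operator

comm = sp.simplify(x_hat(p_hat(psi)) - p_hat(x_hat(psi)))
print(comm)   # I*hbar*psi(x): the canonical commutation relation
```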

Deformation quantization...

Let us now generalise the example: For each classical quantity $f\in A=C^{\infty}(M)$, there should be a quantum analogue $\hat{f}$ that acts on some Hilbert space. We can think of this quantization $f\mapsto \hat{f}$ as an embedding of the commutative algebra $A$ into a non-commutative algebra of operators. Comparing (1) and (2), we see that quantization identifies the commutator $[\cdot,\cdot]$ with $\mathrm{i}\hbar$ times the Poisson bracket $\{\cdot,\cdot\}$. The goal of deformation quantization is to describe this non-commutative structure algebraically on $A$, without reference to the Hilbert space. Namely, can we find a non-commutative product $f\star g$ of smooth functions such that \begin{equation*} \hat{f} \hat{g} = \widehat{f\star g} \quad\text{?} \end{equation*} Since the non-commutativity comes in only at order $\hbar$, it follows from the above that we should have \begin{equation*} f \star g = f\cdot g + \frac{\mathrm{i}\hbar}{2} \{f,g\} + \mathcal{O}(\hbar^2), \qquad\qquad(3)\end{equation*} i.e. the star-product is commutative to leading order (and therefore called a deformation of the commutative product), and the first order correction in $\hbar$ is determined by the Poisson structure.
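Equation (3) can be explored symbolically at first order. A sketch with sympy for the canonical bracket on $\mathbb{R}^2$ (my own minimal illustration, truncating the star-product at order $\hbar$, so its properties only hold up to $\mathcal{O}(\hbar^2)$):

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar')

def star(f, g):
    """f ⋆ g truncated at first order in hbar, canonical bracket on R^2."""
    bracket = sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)
    return sp.expand(f * g + sp.I * hbar / 2 * bracket)

comm = sp.simplify(star(x, p) - star(p, x))
print(comm)   # I*hbar: the star-commutator reproduces i*hbar*{x, p}
```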

...has a universal solution. 

Maxim Kontsevich proved in 1997 that such quantizations always exist: We can allow for an arbitrary smooth manifold $M$ with a Poisson structure - that means a Lie bracket $\{\cdot,\cdot\}$ on $A=C^{\infty}(M)$ which acts in both slots as a derivation (Leibniz rule).

Theorem (Kontsevich). Given an arbitrary Poisson manifold $(M,\{\cdot,\cdot\})$, there does indeed exist an associative product $\star$ on the algebra $A[[\hbar]]$ of formal power series in $\hbar$, such that equation (3) holds.

In fact, Kontsevich gives an explicit formula for this star-product in the form \begin{equation*} f \star g = f \cdot g + \sum_{n=1}^{\infty} \frac{(\mathrm{i}\hbar)^n}{n!} \sum_{\Gamma \in G_n} c(\Gamma) \cdot B_{\Gamma}(f,g), \end{equation*} where at each order $n$ in $\hbar$, the sum is over a finite set $G_n$ of certain directed graphs, like

and so on. The term $B_{\Gamma}(f,g)$ is a bidifferential operator acting on $f$ and $g$, which is defined in terms of the Poisson structure. It can be written down directly in terms of the graph. The remaining ingredients in the formula are the real numbers $c(\Gamma) \in \mathbb{R}$. These are universal constants, which means that they do not depend on the Poisson structure or $f$ or $g$. Once these constants are known, the star-product for any given Poisson structure can be written down explicitly.

What are the weights?

These weights are defined as integrals over configurations of points in the upper half-plane $\mathbb{H}$. The simplest example is the unique graph in $G_1$, whose weight evaluates (exercise!) to $c(\Gamma)=\tfrac{1}{2}$.

This is precisely the factor $1/2$ in front of the Poisson bracket in (3). But for higher orders, these integrals become much more complicated, and until very recently it was not known how they could be computed. Due to this problem, the star product was explicitly known only up to order $\hbar^{\leq 3}$.

It was conjectured, however, that the weights should be expressible in terms of multiple zeta values, which are sums of the form \[ \sum_{1\leq k_1<\cdots<k_d} \frac{1}{k_1^{n_1} \cdots k_d^{n_d}} \in \mathbb{R}, \] indexed by integers $n_1,\ldots,n_d$ (with $n_d\geq 2$ for convergence). They generalize the Riemann zeta function $\zeta(n)$, and they play an important role in the theory of periods and motives. In particular, there is a (conjectural) Galois theory for multiple zeta values.
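The defining sum is easy to approximate by truncation. A small sketch (plain Python; the function name and truncation cutoff are mine) checking $\zeta(2)=\pi^2/6$ and, in this convention, Euler's relation $\zeta(1,2)=\zeta(3)$:

```python
import math

def mzv(ns, N=2000):
    """Truncated multiple zeta value: sum over 1 <= k_1 < ... < k_d <= N."""
    exact = [0.0] * (N + 2)    # exact[k]: chains whose smallest index equals k
    first = True
    for n in reversed(ns):
        new = [0.0] * (N + 2)
        above = 0.0            # running sum of exact[m] for m > k
        for k in range(N, 0, -1):
            new[k] = k ** (-n) * (1.0 if first else above)
            above += exact[k]
        exact, first = new, False
    return sum(exact[1:N + 1])

print(abs(mzv([2]) - math.pi ** 2 / 6))   # small: truncation error of order 1/N
print(abs(mzv([1, 2]) - mzv([3])))        # small: Euler's zeta(1,2) = zeta(3)
```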

In recent work with Peter Banks and Brent Pym, we developed an algorithm that can evaluate the weight integrals $c(\Gamma)$ for arbitrary graphs $\Gamma$:

Theorem (Banks, Panzer, Pym). For an arbitrary graph $\Gamma\in G_n$, the number $c(\Gamma)$ is a rational linear combination of multiple zeta values, normalized by $(2\mathrm{i}\pi)^{n_1+\ldots+n_d}$.

We calculated and tabulated the weights of all 50821 graphs that appear in $G_n$ for $n\leq 6$, which gives the star-product for all Poisson structures up to order $\hbar^{\leq 6}$. This allows us to study the quantizations in explicit examples and compare them to other quantization methods. Furthermore, our result shows that the Galois group of multiple zeta values acts on the space of star-products.

This research started off with a project by Peter Banks over the summer of 2016, when Brent Pym was still in Oxford. Since then we have developed our proof and implemented all steps in a publicly available computer program, starproducts, which automates the computation of the star product and the Kontsevich weights. The underlying techniques make heavy use of the theory of multiple polylogarithms on the moduli space $\mathfrak{M}_{0,n}$ of marked genus 0 curves, a generalization of Stokes' theorem to manifolds with corners, and a theory of single-valued integration, influenced by work of Francis Brown (Oxford) and Oliver Schnetz (Erlangen-Nürnberg)."


Tuesday, 15 January 2019

Oxford Mathematics to LIVE STREAM an Undergraduate lecture

For the first time, on February 14th at 10am Oxford Mathematics will be LIVE STREAMING a 1st Year undergraduate lecture. In addition we will film (not live) a real tutorial based on that lecture.

After the huge success of making an undergraduate lecture widely available via social media last term, we know there is an appetite to better understand Oxford teaching. In turn we want to demystify what we do, showing that it is both familiar and distinctive.

The details:
LIVE Oxford Mathematics Student Lecture - James Sparks: 1st Year Undergraduate lecture on 'Dynamics', the mathematics of how things change with time
14th February, 10am-11am UK time

Watch live and ask questions of our mathematicians as you watch

For more information about the 'Dynamics' course:

The lecture will remain available if you can't watch live.

Interviews with students:
We shall also be filming short interviews with the students as they leave the lecture, asking them to explain what happens next. These will be posted on our social media pages.

Watch a Tutorial:
The real tutorial based on the lecture (with a tutor and two students) will be filmed the following week and made available shortly afterwards.

For more information and updates:

Friday, 11 January 2019

Michael Atiyah 1929-2019

We are very sorry to hear of the death of Michael Atiyah. Michael was a giant of mathematics. He held many positions including Savilian Professor of Geometry here in Oxford, President of the Royal Society, Master of Trinity College, Cambridge, the founding Directorship of the Isaac Newton Institute and Chancellor of the University of Leicester. From 1997, he was an honorary professor in the University of Edinburgh. He was awarded the Fields Medal in 1966 and the Abel Prize in 2004. 

Michael's work spanned many fields. Together with Hirzebruch, he laid the foundations for topological K-theory, an important tool in algebraic topology which describes ways in which spaces can be twisted. His Atiyah–Singer index theorem, proved with Singer in 1963, vastly generalised classical results of the 19th century such as the Riemann-Roch theorem and the Gauss-Bonnet theorem, the work of his teacher Hodge in the 1930s on harmonic integrals, and Hirzebruch’s work. It also provided an entirely new bridge between analysis and topology, one which could act as a mechanism for giving structure to identities in fields as far apart as number theory and group representations.

His more recent work was inspired by theoretical physics and coincided with the arrival of Roger Penrose in Oxford. The two exchanged ideas and realised how modern ideas in algebraic geometry formed the appropriate framework for Penrose’s approach to the equations of mathematical physics. This activity came to a head, following a visit of Singer in 1977, when a problem posed by the physicists on the Yang-Mills equations was solved by a mixture of Penrose’s techniques and some recent sophisticated pure mathematics in the theory of holomorphic vector bundles. As his ideas developed, Michael, at the urging of Ed Witten, began to consider quantum field theory more seriously and ultimately he became one of the founders of what is loosely called “quantum mathematics”.

Michael gave his time generously in the promotion of his subject. In May 2018 he gave a very entertaining Public Lecture here in Oxford. His title? 'Numbers are serious but they are also fun.'


Tuesday, 8 January 2019

Functional calculus for operators

When mathematicians solve a differential equation, they are usually converting unbounded operators (such as differentiation) which are represented in the equation into bounded operators (such as integration) which represent the solutions.  It is rarely possible to give a solution explicitly, but general theory can often show whether a solution exists, whether it is unique, and what properties it has.  For this, one often needs to apply suitable (bounded) functions $f$ to unbounded operators $A$ and obtain bounded operators $f(A)$ with good properties.  This is the rationale behind the theory of (bounded) functional calculus of (unbounded) operators.   Applications include the ability to find the precise rate of decay of energy of damped waves and many systems of similar type.   

Oxford Mathematician Charles Batty and collaborators have recently developed a bounded functional calculus which provides a unified and direct approach to various general results.  They extend the scope of functional calculus to more functions and provide improved estimates for some functions which have already been considered.  To apply the results, one only has to check whether a given function $f$ lies in the appropriate class of functions by checking a simple condition on the first derivative.

The calculus is a natural (and strict) extension of the classical Hille-Phillips functional calculus, and it is compatible with the other well-known functional calculi.   It satisfies the standard properties of functional calculi, provides a unified and direct approach to a number of norm-estimates in the literature, and allows improvements of some of them.


Thursday, 3 January 2019

The Framed Standard Model for particles with possible implications for dark matter

Oxford Mathematician Tsou Sheung Tsun talks about her work on building the Framed Standard Model and the exciting directions it has taken her.

"I have been working, in collaboration with José Bordes (Valencia) and Chan Hong-Mo (Rutherford-Appleton Laboratory), to build the Framed Standard Model (FSM) for some time now. The initial aim of the FSM is to give geometric meaning to (fermion) generations and the Higgs field(s). The surprise is that doing so has enabled one not only to reproduce some details of the standard model with fewer parameters but also to make testable new predictions, possibly even for dark matter. I find this really quite exciting.

It is well known that general relativity is solidly based on geometry. It would be nice if one could say the same for particle physics. The first steps are hopeful, since gauge theory has geometric significance as a fibre bundle over spacetime, and the gauge bosons are components of its connection.

The standard model (SM) of particle physics is a gauge theory based on the gauge group $SU(3) \times SU(2) \times U(1)$, ignoring discrete identifications for simplicity. The gauge bosons are the photon $\gamma$, $W^\pm, Z$ and the colour gluons. To these are added, however, the scalar Higgs fields, and the leptons and quarks, for which no geometric significance is usually sought or given. Besides, the latter two have to come in three generations, as represented by the three rows of the table below.

In a gauge theory the field variables transform under the gauge group, and to describe their transformation we need to specify a local (spacetime dependent) frame, with reference to a global (fixed) frame via a transformation matrix. We suggest therefore that it is natural to incorporate both the local and global symmetries, and to introduce the elements of the transformation matrix as dynamical variables, which we call framons. Consequently, the global $SU(3)$ symmetry can play the role of generations, and the $SU(2)$ framons can give rise to the Higgs field.

The FSM takes the basic structure of the SM, without supersymmetry, in four spacetime dimensions, adds to it a naturally occurring global symmetry, and uses 't Hooft's confinement picture instead of the usual symmetry breaking picture.

As a result of this suggestion, we find that many details of the SM can be explained. Indeed, already by one-loop renormalization, we are able to reproduce the masses of the quarks and leptons, and their mixing parameters including the neutrino oscillation angles, using 7 parameters, as compared to the 17 free parameters of the SM. It also gives an explanation of the strong CP problem without postulating the existence of axions.

In addition, the FSM has predictions which are testable in the near future. What is special about the FSM is the presence of the colour $SU(3)$ framons. They will have effects which make the FSM deviate from the SM. Let me mention one of these which is of immediate relevance to LHC experiments. The FSM has a slightly larger prediction for the mass of the $W$ boson than the SM (see figure). It would be very interesting to see if future improved measurements of the $W$ mass would agree better with the FSM.

(Figure 1: Measurements of the W boson mass compared to the SM prediction (mauve) and the FSM predictions (green) at two different vacuum expectation values).

One other possible implication is that some of the bound states involving the $SU(3)$ framons might be suitable candidates for dark matter. Now dark matter is one of the astrophysical mysteries still largely unresolved. We know that constituents of dark matter, whatever they may be, have mass but hardly interact with our world of quarks and leptons. These FSM candidates have exactly these properties, but we need more study to see if they are valid candidates for dark matter. If they were, then it would be a very interesting implication for the FSM." 

For a fuller explanation of the work click here.