News

Sunday, 22 November 2020

Our latest Online Student Lecture - 2nd Year Linear Algebra

The latest in our Autumn 2020 series of lectures is the first lecture in Alan Lauder's Second Year Linear Algebra Course. In this lecture Alan (with help from Cosi) explains to students how the course will unfold before going on to talk specifically about Vector Spaces and Linear Maps.

All lectures are followed by tutorials where students meet their tutor in pairs to go through the lecture and associated worksheet. The course materials and worksheets can be found here.

That's Cosi on the left.

Friday, 20 November 2020

Numerically solving parametric PDEs with deep learning to break the curse of dimensionality

Oxford Mathematician Markus Dablander talks about his collaboration with Julius Berner and Philipp Grohs from the University of Vienna. Together they developed a new deep-learning-based method for the computationally efficient solution of high-dimensional parametric Kolmogorov PDEs.

"Kolmogorov PDEs are linear parabolic partial differential equations of the form \begin{equation*} \label{kol_pde_intro} \tfrac{\partial u_\gamma }{\partial t} = \tfrac{1}{2} \text{Trace}\big(\sigma_\gamma [\sigma_\gamma ]^{*}\nabla_x^2 u_\gamma \big) + \langle \mu_\gamma , \nabla_x u_\gamma \rangle, \quad u_\gamma (x,0) = \varphi_\gamma(x). \end{equation*} The functions \begin{equation*}\varphi_\gamma : \mathbb{R}^d \rightarrow \mathbb{R} \quad \text{(initial condition)}, \quad \sigma_\gamma : \mathbb{R}^d \rightarrow \mathbb{R}^{d \times d}, \quad \mu_\gamma : \mathbb{R}^d \rightarrow \mathbb{R}^{d} \quad \text{(coefficient maps)}, \end{equation*} are continuous, and are implicitly determined by a real parameter vector $\gamma \in D $ whereby $D$ is a compact set in Euclidean space. Equations of this type represent a broad class of problems and frequently appear in practical applications in physics and financial engineering. In particular, the heat equation from physical modelling as well as the widely-known Black-Scholes equation from computational finance are important special cases of Kolmogorov PDEs.

Typically, one is interested in finding the (viscosity) solution \begin{equation*} u_\gamma : [v,w]^d \times [0,T] \rightarrow \mathbb{R} \end{equation*} of a given Kolmogorov PDE on a predefined space-time region of the form $[v,w]^{d} \times [0,T]$. In almost all cases, however, Kolmogorov PDEs cannot be solved explicitly. Furthermore, standard numerical solution algorithms for PDEs, in particular those based on a discretisation of the considered domain, are known to suffer from the so-called curse of dimensionality. This means that their computational cost grows exponentially in the dimension of the domain, which makes these techniques unusable in high dimensions. The development of new, computationally efficient methods for the numerical solution of Kolmogorov PDEs is therefore of great interest to applied scientists.
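As a back-of-the-envelope illustration of this curse (a generic estimate, not a figure from our paper): a grid-based scheme that places $N$ points along each coordinate direction needs \begin{equation*} N^d \text{ grid points overall, e.g. } N = 10, \ d = 100 \ \Longrightarrow \ 10^{100} \text{ points}, \end{equation*} which is far beyond what any computer could ever store, whereas a method whose cost grows only polynomially in $d$ remains feasible.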

We were able to develop a novel deep learning algorithm capable of efficiently approximating the solutions $(u_\gamma )_{\gamma \in D}$ of a whole family of potentially very high-dimensional $\gamma$-parametrised Kolmogorov PDEs on a full space-time region. Specifically, our proposed method allows us to train a single deep neural network \begin{equation*} \Phi\colon D \times [v,w]^d \times [0,T] \rightarrow \mathbb{R} \end{equation*} to approximate the parametric solution map \begin{align*} \bar{u} : D \times [v,w]^d \times [0,T] \rightarrow \mathbb{R}, \quad (\gamma, x, t) \mapsto \bar{u}(\gamma, x, t) := u_\gamma (x,t), \end{align*} of a family of $\gamma$-parametrised Kolmogorov PDEs on the generalised domain $D \times [v,w]^d \times [0,T]$. The key idea of our algorithm is to reformulate the parametric solution map $\bar{u}$ of the $\gamma$-parametrised Kolmogorov PDE as the regression function of an appropriately chosen supervised statistical learning problem. This reformulation is based on an application of the so-called Feynman-Kac formula, which links the theory of linear parabolic PDEs to the theory of stochastic differential equations via $$ \bar{u}(\gamma,x,t) = \mathbb{E}[\varphi_{\gamma}(S_{\gamma,x,t})]. $$ Here $S_{\gamma,x,t}$ is the time-$t$ value of the solution of an associated $\gamma$-parametrised stochastic differential equation started at $x$: $$ dS_{\gamma,x,t} = \mu_\gamma (S_{\gamma,x,t})\, dt + \sigma_\gamma (S_{\gamma,x,t})\, dB_t, \quad S_{\gamma,x,0} = x. $$ The resulting stochastic framework can be exploited to prove that $\bar{u}$ must be the solution of a specifically constructed statistical learning problem. Realisations of the predictor variable of this learning problem can be simulated by drawing uniformly distributed samples $(\gamma_i, x_i, t_i) \in D \times [v,w]^d \times [0,T]$ from the domain, while realisations of the dependent target variable can be generated by simulating realisations $\varphi_{\gamma_i}(s_{\gamma_i, x_i, t_i})$ of $\varphi_{\gamma_i}(S_{\gamma_i, x_i, t_i})$. Thus, a potentially infinite number of independent and identically distributed training points can be simulated by solving for $S_{\gamma_i, x_i, t_i}$ via simple standard numerical techniques such as the Euler-Maruyama scheme. The simulated predictor-target pairs $((\gamma_i, x_i, t_i), \varphi_{\gamma_i}(s_{\gamma_i, x_i, t_i}))$ can then be used to train the deep network $\Phi$ to approximate the solution $\bar{u}$. An illustration of this training workflow is depicted in the figure below.
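Before that figure, here is a minimal code sketch of the same workflow (a toy PyTorch implementation written for illustration; the network size, the one-dimensional parameter set and the heat-equation-type example with $\mu_\gamma = 0$, $\sigma_\gamma = \gamma I_d$ and $\varphi(x) = \|x\|^2$ are all assumptions of this sketch, not choices from our paper). Because each training target is a single noisy sample of $\varphi(S_{\gamma,x,t})$, minimising the squared loss pushes the network towards the conditional expectation $\mathbb{E}[\varphi(S_{\gamma,x,t})] = u_\gamma(x,t)$.

```python
# Sketch: learn the parametric solution map u_gamma(x, t) of a Kolmogorov PDE
# by regressing a network Phi(gamma, x, t) onto simulated values of
# phi(S_{gamma,x,t}), generated with an Euler-Maruyama scheme.
import torch
import torch.nn as nn

d, T = 5, 1.0                      # spatial dimension and time horizon
v, w = -1.0, 1.0                   # spatial box [v, w]^d
gamma_lo, gamma_hi = 0.5, 2.0      # parameter set D (one-dimensional here)

def phi(x):
    """Initial condition phi(x) = ||x||^2 (independent of gamma for simplicity)."""
    return (x ** 2).sum(dim=-1, keepdim=True)

def simulate_sde(gamma, x, t, n_steps=50):
    """Euler-Maruyama for dS = gamma dB (zero drift), started at x, run to time t."""
    s = x.clone()
    dt = t / n_steps                               # per-sample step size, shape (n, 1)
    for _ in range(n_steps):
        dB = torch.sqrt(dt) * torch.randn_like(s)  # Brownian increment
        s = s + gamma * dB                         # drift term is zero in this example
    return s

net = nn.Sequential(                               # Phi: (gamma, x, t) -> R
    nn.Linear(1 + d + 1, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    n = 1024
    # Predictor variables: uniform samples (gamma_i, x_i, t_i) from D x [v,w]^d x [0,T].
    gamma = gamma_lo + (gamma_hi - gamma_lo) * torch.rand(n, 1)
    x = v + (w - v) * torch.rand(n, d)
    t = T * torch.rand(n, 1)
    with torch.no_grad():                          # targets come from simulation only
        y = phi(simulate_sde(gamma, x, t))
    pred = net(torch.cat([gamma, x, t], dim=1))
    loss = ((pred - y) ** 2).mean()                # L2 regression loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this toy case the exact solution is known, $u_\gamma(x,t) = \|x\|^2 + \gamma^2 t d$, so the trained network can be sanity-checked directly.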

[Figure: illustration of the training workflow]

We mathematically investigated the approximation and generalisation errors of our constructed learning algorithm for important special cases and discovered that, remarkably, it does not suffer from the curse of dimensionality. In addition to our theoretical findings, we were also able to observe numerically that the computational cost of our proposed technique does indeed grow polynomially rather than exponentially in the problem dimension; this makes very high-dimensional settings computationally accessible for our method.

Our work was accepted for publication at the 2020 Conference on Neural Information Processing Systems (NeurIPS). A preprint can be accessed here.

Below is a picture of the multilevel architecture we used for our deep network $\Phi$."

[Image: multilevel architecture of the deep network $\Phi$]

Monday, 16 November 2020

Generalising Leighton's Graph Covering Theorem

Oxford Mathematician Daniel Woodhouse talks about the theorem that motivates much of his research.

"According to lore, denizens of the Prussian city of Königsberg would spend their Sunday afternoons wandering the streets. From this pastime came a puzzle - could a wanderer cross each of the city's seven bridges without crossing the same bridge twice? The path of the River Pregel divides the city into four distinct regions, including a central isolated island, so the question was more complicated than a matter of crossing back and forth from one side of a river to the other.

[Image: map of the river Pregel and the seven bridges of Königsberg]

If a positive solution existed, then someone would surely have found it, so it could be guessed that the answer was negative. In principle, you could write down an exhaustive list of all possible sequences of bridge crossings, but this would be a very crude kind of proof. The problem eventually came to the attention of Leonhard Euler, who set about finding an explanation that offered more insight into the nature of the problem. In 1736 Euler wrote up his solution, providing arguments that not only applied to the bridges of Königsberg, but to any arrangement of rivers and bridges. Initially, it seems from his correspondence that Euler was inclined to dismiss the intellectual merits of the problem, not regarding it as mathematical at all. But in the introduction of his paper he declares it to belong to a kind of mathematics that had previously only been speculated about: the geometry of position ("Geometriam situs"). Today this kind of mathematics is better known as topology.

[Image: Euler's diagram of the bridges from his 1736 paper]

The kind of mathematics that Euler would have been familiar with - geometry - concerned notions of distance, area, and angles. Yet considering the problem of the bridges from a purely mathematical point of view, such quantities become irrelevant. The exact shape that the river takes and the distance between bridges are relevant to the individual planning on spending an afternoon walking over them, but less so to the mathematician. Indeed, the relevant information can be reduced to a list of the distinct land masses, and the bridges they connect. In modern terminology the problem can be redrawn as a graph:

[Figure: the Königsberg bridges redrawn as a graph]

Each land mass is now represented by a vertex and each bridge is now an edge joining the corresponding vertices. We might say that the graph is topologically equivalent to the original diagram of the river and the bridges. I have labelled the vertices according to how Euler labelled them in his previous diagram. I have also added arrows indicating an orientation of the edges. Graphs are now ubiquitous in mathematics and science, and topology is only one perspective of study. You should not equate the graph with the way that it has been drawn above. The same graph could be drawn in many different ways.
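As an aside, once the problem is encoded as a multigraph, Euler's criterion - a connected multigraph admits a walk using every edge exactly once if and only if at most two of its vertices have odd degree - becomes something a machine can check instantly. A minimal sketch (the vertex labels follow Euler's A-D, and the edge list encodes the standard description of the seven bridges):

```python
from collections import Counter

# The Königsberg multigraph: one (land mass, land mass) pair per bridge.
# Two bridges join A and B, two join A and C, and one each joins A-D, B-D, C-D.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

# Euler's criterion: a walk crossing every bridge exactly once exists
# (in a connected multigraph) only if at most two vertices have odd degree.
odd_vertices = sorted(v for v, deg in degree.items() if deg % 2 == 1)
print(dict(degree))      # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print(odd_vertices)      # ['A', 'B', 'C', 'D'] -> four odd vertices
```

Since all four land masses have odd degree, no walk crossing each bridge exactly once can exist, which is exactly the negative answer Euler established.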

Let us now move on from Euler's problem and take the topological perspective one step further. We imagine the wanderer of the city is now a wanderer of our graph instead. They travel from vertex to vertex via the edges. Suppose that our wanderer has no map of the world they are living in. That is to say, the wanderer has no idea what graph they are wandering around; whichever vertex they have arrived at, all they can see are the edges immediately connected to the vertex on which they stand. They can remember the route that they travelled to get there, but no more. Frustratingly, since we are now in a graph, and not the city of Königsberg, there are no landmarks with which the wanderer can orient themselves. If they return to a vertex they have no way of knowing. As we look down on our wanderer we can see exactly where they are, while they remain completely unaware. We can also take a look at the graph as they imagine it:

[Figure: the graph as the wanderer imagines it]

In the wanderer's imagination, how do they envision the graph? Since they never know if they have arrived at a vertex they have previously visited (without doubling back on themselves), this graph cannot contain any closed cycles - a path that begins and ends at the same vertex without repeating any other edge or vertex. A graph that does not contain any closed cycles is called a tree.

[Figure: the labelled tree $\widetilde X$]

This tree, let's call it $\widetilde X$, is labelled in the picture according to which edge in $X$ it really corresponds to. These labels are invisible to our wanderer - if the labels were known then they could deduce what graph they were wandering about on. There is one particularly important vertex in $X$ and $\widetilde{X}$: the point at which our wanderer begins their journey, which we call the basepoint. The tree $\widetilde X$ is called the universal cover of $X$, and the mapping between them is usually written as a function $\widetilde X \rightarrow X$. The important feature here is that given a path that our wanderer takes in $X$, we can track the path they imagine they are taking in $\widetilde X$. Conversely, given a path in $\widetilde X$, using the labelling that only we can see, we can deduce the path that our wanderer has taken in $X$.

It is impossible for our wanderer to deduce what the graph $X$ is from $\widetilde X$. This is because $X$ isn't the only graph with universal cover $\widetilde X$. Setting aside Königsberg for a moment, consider the following graphs:

[Figure: the 3-valent graphs $X_1$, $X_2$ and $X_3$]

These graphs are obviously distinct, just by counting the vertices. But to a wanderer of these graphs, all they know is that whatever edge they choose to wander down, they will arrive at a vertex incident to precisely three edges. Wherever they go, the world to them looks exactly the same. There is no way for them to deduce if they are living in $X_1$, $X_2$, or $X_3$. Another way to say this is that $X_1$, $X_2$, and $X_3$ have the same universal cover $T_3$ - the 3-valent tree.

[Figure: the 3-valent tree $T_3$]

This isn't all that our wanderer could imagine. In principle they could try imagining that they are wandering around any graph we like. But that imagined graph should at least be consistent with what they actually experience. You can't arrive at a vertex with three incident edges but imagine there are four incident edges - that would present a contradiction to our wanderer and the illusion would be ruined. As a simple example, consider the reverse situation to the above: suppose our wanderer imagines they are navigating the Königsberg graph $X$, but in reality they are inside $\widetilde X$:

[Figure: a wanderer on $\widetilde X$ imagining the Königsberg graph $X$]

Provided that the basepoints match up as before, the wanderer would encounter no contradiction between what they are imagining and what they actually experience. But the situation would be particularly deceptive: as depicted above, our wanderer could imagine that they had walked a closed loop, when in fact they had not. This misconception is particularly bad for topologists, so we rule out imagined graphs that cause it. Imagined graphs that do not cause the misconception are called covers, and there exist many of them aside from the universal cover. This definition of a cover is quite confusing, which is why we usually work with an equivalent and more practical condition in terms of labelling the edges of the cover according to the corresponding edges in the original graph. For example:

[Figure: a finite cover with edges labelled by the corresponding edges of the original graph]

Note also that the cover given above is finite, while the universal cover is infinite. Unless the graph $X$ is itself a tree, the universal cover will always be infinite. Indeed, if $X$ is not a tree, then there is a closed cycle that our wanderer can walk around indefinitely, quite unaware that they are trapped in a closed loop.

A large part of my research is focused on the following theorem:

Theorem: Let $X_1$ and $X_2$ be finite graphs that have the same universal cover. Then they have a common finite cover.

Here is a picture of two wanderers living on two graphs with the same universal cover, imagining that they are living on a common cover:

[Figure: two wanderers on two graphs with the same universal cover, each imagining a common finite cover]

This theorem was conjectured in a 1980 paper by the computer scientist Dana Angluin. Angluin was interested in distributed systems, which you can imagine as being a graph with a computer at each vertex and each edge corresponding to a 'port' connecting the neighbouring computers, allowing them to communicate. If this picture seems familiar that is because the internet is an example. The first complete proof of this theorem was given in a 1982 paper by Frank Thomas Leighton. Leighton is also a computer scientist and most of his research in graph theory was closely related to similar applications in computer science. The kinds of problems that concerned Angluin and Leighton resemble the kind of problem Euler was considering more closely than they resemble my own research. On the strength of the insights graph theory offers, Leighton co-founded Akamai Technologies in 1998, and today it is one of the internet's largest content delivery networks. He also retains an affiliation as a professor of applied mathematics at MIT and I have been told he still teaches an occasional course (I have the same teaching obligations as a billionaire, it turns out).

My own research has involved applying Leighton's theorem to the field of geometry and topology. Leighton's original proof was very much the argument of a computer scientist, and finding a new proof that offered a different theoretical framework has allowed me to solve previously unapproachable problems. There are two goals: the first is to generalise the theorem to higher dimensional topological spaces; and the second is to apply the generalisations to topological spaces or groups that decompose into a graph-like structure. For example, in a recent preprint with fellow Oxford Mathematician Sam Shepherd we are able to prove quasi-isometric rigidity for a large family of hyperbolic groups. There are close connections to the theory of $3$-manifolds, their JSJ decompositions, and the recent resolution of the virtual Haken conjecture. In particular I believe that the right setting to generalise Leighton's theorem is special cube complexes."

Sunday, 15 November 2020

Vicky Neale's 'Analysis' Student Lecture now on YouTube

The second in the series of Student Lectures that we are making publicly available this Autumn is from Vicky Neale. Vicky is one of our most popular lecturers and this lecture is from her First Year Analysis course. 

The course introduces students to a rigorous definition of convergence, allowing them to develop their previous understanding of sequences and series and to prove key results about convergence, leading on to subsequent Analysis courses addressing continuity, differentiability and integrability of functions.

All lectures are followed by tutorials where students meet their tutor in pairs to go through the lecture and associated worksheet.

Saturday, 14 November 2020

Oxford Mathematics Online Public Lecture: Anna Seigal - Ideas for a Complex World

Thursday 19 November, 5-6pm. No need to register; watching details are below (and the talk will stay up afterwards).

Humans have been processing information in the world for a long time, finding patterns and learning from our surroundings to solve problems. Today, scientists make sense of complex problems by gathering vast amounts of data, and analysing them with quantitative methods. These methods are important tools to understand the issues facing us: the spread of disease, climate change, or even political movements. But this quantitative toolbox can seem far removed from our individual approaches for processing information in our day-to-day lives. This disconnect and inaccessibility leads to the scientific tools becoming entangled in politics and questions of trust.

In this talk, Anna will describe how some of the ideas at the heart of science’s quantitative tools are familiar to us all. We’ll see how mathematics enables us to turn the ideas into tools. As a society, if we can better connect with the ideas driving this toolbox, we can see when to use (and not to use) the available tools, what’s missing from the toolbox, and how we might come up with new ideas to drive our future understanding of the world around us.

Anna Seigal is a Hooke Research Fellow in the Mathematical Institute at the University of Oxford and a Junior Research Fellow at The Queen's College.

Watch live (no need to register):
Oxford Mathematics Twitter
Oxford Mathematics Facebook
Oxford Mathematics Livestream
Oxford Mathematics YouTube

The Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Thursday, 12 November 2020

Fernando Alday appointed Rouse Ball Professor of Mathematics

Fernando Alday has been appointed Rouse Ball Professor of Mathematics in the University of Oxford. The Rouse Ball Professorship of Mathematics is one of the senior chairs in the Mathematics Department in Oxford (and also in Cambridge). The two positions were founded in 1927 by a bequest from the mathematician W. W. Rouse Ball. Previous Rouse Ball Professors include Charles Coulson, Philip Candelas (the retiring Rouse Ball Professor, best known for his work on String Theory) and recent Nobel Laureate Roger Penrose. The Rouse Ball Professor in Oxford is a Fellow of Wadham College.

Fernando Alday is an Argentinean Theoretical Physicist and Mathematician. He did his undergraduate studies at Centro Atomico Bariloche, Argentina, and his DPhil at SISSA, Italy, under the supervision of Edi Gava and Kumar Narain. He joined Oxford in 2010, after postdocs at Utrecht University in the Netherlands and at the Institute for Advanced Study in the US.

Following the tradition of Rouse Ball chairs at Oxford, Fernando is well known for the development of mathematical tools to understand fundamental questions in Quantum Field Theory and Quantum Gravity. Fernando's most important contributions involve surprising dualities among different theories and observables in high energy theoretical physics. One of these dualities relates scattering amplitudes to minimal surfaces/soap bubbles in anti-de-Sitter space, while another, known as the AGT correspondence, relates correlation functions in a two-dimensional theory to the spectrum of four-dimensional gauge theories. More recently, Fernando has been developing mathematical tools to compute string and M-theory amplitudes in curved space-time, a subject still in its infancy.

Tuesday, 10 November 2020

Product Replacement algorithm and property (T)

Oxford Mathematician Dawid Kielak talks about his joint work with Marek Kaluba and Piotr Nowak in which they used a computer* to prove a theorem about the behaviour of other computers.

"Here is a practical problem: imagine being given a very large but finite object $X$. Using your bare hands, you construct a small number of elements $g_1, \dots, g_n$ of the automorphism group $\mathrm{Aut}(X)$ (or maybe you were given those; perhaps you even know that they generate $\mathrm{Aut}(X)$). Now, let us say that given two elements of $\mathrm{Aut}(X)$ you can check how they act on $X$, and in particular you can check if these elements are the same. Your goal is to figure out which finite group $G = \langle g_1, \dots, g_n \rangle$ is.

Since you can check which words in the generators $g_1, \dots, g_n$ coincide (you can solve the word problem in $\mathrm{Aut}(X)$), you could construct the Cayley graph of $G$. But now imagine that $X$ was a $10$-dimensional vector space over the finite field $\mathbb{F}_p$ where $p = 101$. The group $\mathrm{GL}_{10}(\mathbb{F}_{101})$ has more elements than there are particles in the observable universe... so good luck with the Cayley graph!
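To get a feel for the size (a standard counting formula, included for illustration): \begin{equation*} |\mathrm{GL}_{10}(\mathbb{F}_{101})| = \prod_{k=0}^{9} \big(101^{10} - 101^{k}\big) \approx 101^{100} \approx 10^{200}, \end{equation*} whereas the number of particles in the observable universe is usually estimated at around $10^{80}$.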

We need a smarter trick. We will quickly construct a bunch of random elements of $G$, and try to guess what $G$ is from what we can say about these random elements. And here the adjective quickly is key. One way to construct these random elements is to use a Monte Carlo type algorithm of Laszlo Babai from 1991. This algorithm requires $(\log |G|)^5$ operations to generate elements that are more-or-less random. Unfortunately, $(\log |G|)^5$ is still too large for practical purposes. Another way of generating random elements was proposed by Charles Leedham-Green and Leonard Soicher around 1995; this is the Product Replacement algorithm, which works as follows: instead of trying to randomly move around the group $G$, you try to walk around its generating sets. More specifically, what you do is you start with your tuple $(g_1, \dots, g_n)$, and you randomly multiply one of the entries on either the left or the right by another entry (or its inverse). You repeat this process many times, and at the end you select one of the entries of the last tuple at random, and this is the output element. The magic of the Product Replacement algorithm is that in practice it works extremely well. It is implemented in both GAP and MAGMA, so if you ever tried to construct a random element of a finite group using any of these programmes, you probably used Product Replacement.
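A generic sketch of those moves (an illustrative Python version written for this article, not the GAP or MAGMA implementation; the choice of a small permutation group for the demonstration is also just an assumption):

```python
import random

def product_replacement(generators, multiply, invert, n_steps=100):
    """Generic Product Replacement walk.

    `generators` is a tuple of group elements; `multiply` and `invert`
    implement the group operations.  At each step one entry g_i is replaced
    by g_i * g_j^{+-1} or g_j^{+-1} * g_i for a random j != i; at the end a
    random entry of the final tuple is returned as the 'random' element.
    """
    gens = list(generators)
    for _ in range(n_steps):
        i, j = random.sample(range(len(gens)), 2)   # two distinct slots
        h = gens[j] if random.random() < 0.5 else invert(gens[j])
        if random.random() < 0.5:
            gens[i] = multiply(gens[i], h)          # multiply on the right
        else:
            gens[i] = multiply(h, gens[i])          # multiply on the left
    return random.choice(gens)

# Toy usage: the symmetric group S_4, with permutations stored as tuples.
def compose(p, q):            # (p*q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for x, y in enumerate(p):
        inv[y] = x
    return tuple(inv)

transposition = (1, 0, 2, 3)  # swaps 0 and 1
cycle = (1, 2, 3, 0)          # the 4-cycle (0 1 2 3)
print(product_replacement((transposition, cycle), compose, inverse))
```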

And now we start to do natural science: we spend weeks sitting on a tree top observing algorithms in their natural habitat, and we notice that this Product Replacement is very efficient. What is the reason for this? A first putative answer came in 2000: Alexander Lubotzky and Igor Pak showed that the effectiveness of Product Replacement would follow from the automorphism group $\mathrm{Aut}(F_n)$ of a free group $F_n$ exhibiting Kazhdan's Property $(T)$.

Property $(T)$. Before getting to the definition, let us first ask a philosophical question: how can we understand a group? At the very beginning, starting with Évariste Galois in the 1830s, groups were collections of automorphisms of finite objects, and they were understood via their actions. Groups became abstract objects some 50 years later in the work of Arthur Cayley; and not only did they become abstract - they were now allowed to be infinite! How would you study an abstract, infinite group? It seems that it took another 50 years until Emmy Noether popularised homomorphisms as a tool for understanding groups (as well as rings and other algebraic structures). The basic idea behind using a homomorphism is to map your group to something you understand well, and then to study what information you have lost on the way (which is of course encoded in the kernel of the homomorphism). Which begs another question: what are the groups which we do understand well?

(Before answering this question, let me go on a tangent. What happens next in the development of group theory? Well, we come full circle, and now we are back to studying actions, i.e., again we try to view groups as automorphisms of objects. Learning the lesson from Noether, we try to come up with objects to act on that we understand well; the price is that these actions are not always faithful and some information may be lost. Formally speaking, we look at homomorphisms $H \to \mathrm{Aut}(Y)$, and the obvious question which arises is: how is this different from simply studying homomorphisms? My personal opinion is that it is psychologically different: our brains, so used to thinking about who did what to whom, are well-trained in thinking about actors (elements of $H$) performing actions (on the set $Y$). I think that using this point of view allows us to tap into additional brainpower, and this is why thinking of actions is so prevalent nowadays, and so effective.)

Back to maths: what sort of groups do we understand well? Automorphisms of objects which we understand even better - vector spaces of finite dimension. This line of thought, understanding groups by mapping them to groups of matrices over fields, is called representation theory. It is a classical and fantastically useful piece of mathematics, particularly helpful in trying to understand finite groups.

When the groups are infinite, acting on finite dimensional vector spaces might not be enough. So we relax the assumptions, and now try to look at actions on infinite dimensional vector spaces. But that is way too general to be useful, and a convenient middle ground is to look at actions on Hilbert spaces by linear isometries (these are unitary representations). Just to remind you - Hilbert spaces may be infinite dimensional, but they have a notion of an angle between two crossing lines, and this makes their geometry more tractable.

To define property $(T)$ we need to be slightly more general and consider actions on Hilbert spaces by isometries which are not necessarily linear, which amounts to saying that they do not need to fix the $0$ of the Hilbert space. Now we are ready: a group has property $(T)$ if and only if every one of its actions by isometries on a Hilbert space fixes a point. Roughly speaking, this means that the group has no actions on Hilbert spaces besides the unitary representations.

Automorphisms of $F_n$. It is time for the last of the dramatis personae. The insight of Lubotzky and Pak was that if you want to explain a phenomenon happening to a random walk among generating sets of $G$ for every finite $G$, you should look at the same random walk among generators of the free group. The interesting point here is that if you take $a_1, \dots, a_n$ to be a generating set of the free group $F_n$ of rank $n$, then every operation performed by the Product Replacement algorithm is equal to evaluating an automorphism of $F_n$ on every element in the tuple. And now we are really studying a random walk in $\mathrm{Aut}(F_n)$! Random walks on groups with property $(T)$ mix very rapidly, which means that already a few operations performed by the Product Replacement algorithm get you very close to a random element.
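To spell this out (a standard observation, phrased here for illustration): if the tuple $(g_1, \dots, g_n)$ is recorded as the homomorphism $F_n \rightarrow G$ sending $a_k \mapsto g_k$, then the replacement move producing $(g_1, \dots, g_i g_j^{\pm 1}, \dots, g_n)$ is precisely precomposition with the Nielsen automorphism \begin{equation*} \rho \colon a_i \mapsto a_i a_j^{\pm 1}, \qquad a_k \mapsto a_k \ \ (k \neq i), \end{equation*} and multiplying on the left corresponds to the analogous automorphism $a_i \mapsto a_j^{\pm 1} a_i$. A run of the algorithm is therefore a random word in these Nielsen automorphisms, i.e. a random walk on $\mathrm{Aut}(F_n)$.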

This was the state of affairs in 2000. It was known that $\mathrm{Aut}(F_1) \cong \mathbb{Z} / 2 \mathbb{Z}$ has property $(T)$ for trivial reasons; $\mathrm{Aut}(F_2) \cong \mathrm{GL}_2(\mathbb{Z})$ and $\mathrm{Aut}(F_3)$ were known not to have property $(T)$. It was also known that $\mathrm{GL}_m(\mathbb{Z})$ has property $(T)$ if and only if $m\geqslant 3$, so the fact that for small values of $n$ the groups $\mathrm{Aut}(F_n)$ (which are in many ways analogous to $\mathrm{GL}_n(\mathbb{Z})$) did not have property $(T)$ was not necessarily seen as a big problem. And there was the experimentally confirmed but theoretically mysterious effectiveness of Product Replacement.

The first signs of a breakthrough appeared in March 2017. Marek Kaluba and Piotr Nowak used a new approach designed by Narutaka Ozawa to prove property $(T)$ for $\mathrm{GL}_n(\mathbb{Z})$ with $n \in \{3,4,5\}$ by solving a very large optimisation problem on a computer. Their method was potentially applicable to $\mathrm{Aut}(F_n)$, and indeed in December 2017 Kaluba-Nowak-Ozawa proved property $(T)$ for $\mathrm{Aut}(F_5)$.
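Roughly speaking, Ozawa's criterion (stated here in a simplified form, and not necessarily in the exact normalisation used in those papers) says that a group $G$ generated by a finite symmetric set $S$ has property $(T)$ if and only if there is a constant $\lambda > 0$ such that \begin{equation*} \Delta^2 - \lambda \Delta = \sum_{i} \xi_i^* \xi_i \quad \text{in the group ring } \mathbb{R}[G], \qquad \text{where } \Delta = |S| - \sum_{s \in S} s \end{equation*} and $*$ is the involution induced by $g \mapsto g^{-1}$. Searching for such a sum-of-squares decomposition is a large but finite semidefinite optimisation problem, which is exactly the sort of task one can hand to a computer.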

I joined the project in summer 2018. In September we devised a strategy which reduced the infinite collection of questions, "Does $\mathrm{Aut}(F_n)$ have property $(T)$?", indexed by natural numbers, to a single question: "Can a special element of the group ring $\mathbb{Z} \mathrm{Aut}(F_5)$ be written as a sum of Hermitian squares?". This is the sort of question that one can ask a computer about: the machine will attempt to find the desired sum of Hermitian squares. And that is what we did. Every 6 minutes the computer would produce a new batch of output.

We set up a Twitter account which would tweet the outcomes, and for a couple of weeks looking at these numbers was my favourite pastime. It was the first thing I checked in the morning, and the last before I went to bed. And then on the 12th of October the computation was finished: together with Kaluba and Nowak, we established property $(T)$ for $\mathrm{Aut}(F_n)$ for every $n>5$. (The solution to the remaining case of $\mathrm{Aut}(F_4)$ was recently announced by Martin Nitsche.)

In December 2018 we put the preprint on the arXiv. It was the end of this project, but "the beginning of a beautiful friendship": I have been collaborating with Marek and Piotr ever since."

* For those who are interested, here is the accompanying code, which comes with detailed instructions on how to run it; one can run the code on a laptop and prove something very close to our main theorem in about 2 hours.

Sunday, 8 November 2020

Oxford Mathematics Student Lectures now on YouTube

Over the next few weeks we shall be making a wide range of our undergraduate lectures available via our YouTube channel to add to those that are already there. The aim is not to provide detailed tuition but to give an insight into the student experience in Oxford. However, we will be putting up one full course as part of the Autumn series.

Topics covered will include First Year Analysis, Second Year Linear Algebra and Probability and Third Year Geometry of Surfaces and Mathematical History. We would add that all lectures are online this term, but some of the tutorials that are given by tutors to pairs of students on the back of the lectures are taking place in person.

So we start with Professor Derek Moulton and a lecture from his First Year Geometry course. As Derek says in his introduction, geometry is ubiquitous in mathematics. Why not take a look?

Thursday, 5 November 2020

MAT in 10 minutes - The Oxford Mathematics Admissions Test

Yesterday over 5000 applicants took the Mathematics Admissions Test, the entrance test used for Undergraduate Mathematics at Oxford, and for other courses at Oxford, Warwick University and Imperial College London.

It's a two-and-a-half-hour exam. Here (below) Dr James Munro gives you all the answers in 10 minutes.

Question paper available here. And yes, there was a typo in Q4. Full statement here.

Tuesday, 3 November 2020

Robin Thompson awarded the Journal of Clinical Medicine Outstanding Research Award 2020

Oxford Mathematician Robin Thompson has been awarded the Journal of Clinical Medicine “Outstanding Research Award 2020” for his work using mathematical models to represent the epidemiological or evolutionary behaviour of infectious disease outbreaks.

Robin's paper “Novel Coronavirus Outbreak in Wuhan, China, 2020: Intense Surveillance Is Vital for Preventing Sustained Transmission in New Locations”, published in February 2020, was recognised as recommended reading by the World Health Organization. The aim of the paper was not only to generate forecasts of specific outbreaks, but also to understand how diseases spread, so that forecasts can be made and control interventions planned with more precision.

In April Robin gave an Oxford Mathematics Public Lecture explaining for a general audience how mathematicians model infectious diseases. Much of the content has now passed into common parlance. The lecture can be found below.
