Wednesday, 20 September 2017 
We have an exciting series of Oxford Mathematics Public Lectures this Autumn. Summary below and full details here. All will be podcast and on Facebook Live. We also have a London Lecture by Andrew Wiles on 28 November (details will follow separately). Please email externalrelations@maths.ox.ac.uk to register for the lectures below.
Closing the Gap: the quest to understand prime numbers - Vicky Neale
18 October, 5.00-6.00pm, Lecture Theatre 1, Mathematical Institute, Oxford

Maths v Disease - Julia Gog
1 November, 5.00-6.00pm, Lecture Theatre 1, Mathematical Institute, Oxford

The Seduction of Curves: The Lines of Beauty That Connect Mathematics, Art and The Nude - Allan McRobie
13 November, 5.00-6.00pm, Lecture Theatre 1, Mathematical Institute, Oxford

Oxford Mathematics Christmas Public Lecture - Alex Bellos, title tbc
6 December, 5.00-6.00pm, Lecture Theatre 1, Mathematical Institute, Oxford

Please email externalrelations@maths.ox.ac.uk to register

Thursday, 14 September 2017 
As part of our series of research articles focusing on the rigour and intricacies of mathematics and its problems, Oxford Mathematician David Hume discusses his work on networks and expanders.
"A network is a collection of vertices (points) and edges (lines connecting two vertices). They are used to encode everything from transport infrastructure to social media interactions, and from the behaviour of subatomic particles to the structure of a group of symmetries. A common theme throughout these applications, and therefore of interest to civil engineers, advertisers, physicists, and mathematicians (amongst others), is that it is important to know how well connected a given network is. For example, is it possible that two major road closures make it impossible to drive from London to Oxford? An efficient road network should ensure that there are multiple ways to get between any two important places, but we cannot simply tarmac everything! As another example, if as an advertiser, you post adverts on a social media platform, how do you ensure that you reach as many people as possible, without paying to post to every single account?
Given a network, we say its cut size is the smallest number of vertices you need to remove, so that the remaining pieces have at most half the original number of vertices in them (in our examples: how many roads need to close before half the population are unable to drive to visit the other half, or how many people need to ignore your advert so that less than half of the users of social media will see it).
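This definition can be made concrete with a small brute-force sketch (the function names are our own, and the approach is only feasible for tiny networks, since it tries every possible set of removed vertices in increasing size):

```python
from itertools import combinations

def components(vertices, edges):
    """Connected components of a graph given as a vertex set and edge list."""
    adj = {v: set() for v in vertices}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def cut_size(vertices, edges):
    """Smallest number of vertices whose removal leaves every remaining
    piece with at most half the ORIGINAL number of vertices."""
    n = len(vertices)
    for k in range(n + 1):
        for removed in combinations(vertices, k):
            rest = set(vertices) - set(removed)
            kept = [(u, w) for u, w in edges if u in rest and w in rest]
            if all(len(c) <= n // 2 for c in components(rest, kept)):
                return k
    return n

# A path on six vertices splits into halves after removing one middle vertex:
path = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
print(cut_size(range(6), path))  # → 1
```

A long path has cut size 1, while a very well-connected network forces many more removals: the complete graph on four vertices, for instance, already has cut size 2.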
Let us say that a family of networks, with increasing numbers of vertices, is called an expander if the cut size of each network is proportional to the number of vertices (1), and each vertex in a network is the end of at most a fixed number of edges. In theory this would be an optimal solution for a transport network, as we can connect as many cities as we need to without needing to work out how to manage the traffic lights at a junction where 5,000 roads all converge. In practice, expanders are as incompatible with the geometry of our world as it is possible for any collection of networks to be.
(2)
Expanders, however, are still very interesting and naturally occur in diverse areas: in error-correcting codes in computer science; in number theory; and in group theory, where my personal interest lies.
It is, in general, very difficult to construct a family of expanders, even though randomly choosing larger and larger networks in which every vertex meets exactly three edges will almost surely produce an expander. The first construction of a family was given by Grigory Margulis - the expanders came from networks encoding the structure of finite groups of symmetries. Other constructions have since been found, most notably a construction of Ramanujan graphs (expanders which, in a particular sense, have the largest possible ratio between their cut size and their number of vertices), and the fantastically named Zig-Zag product (3), which builds expanders inductively, starting from two very simple networks.
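The random construction mentioned above can be sketched with the standard configuration (pairing) model: give each vertex three half-edges and pair all half-edges uniformly at random. This is an illustration only - the function names are our own, and the pairing occasionally produces self-loops or repeated edges, which in practice are discarded or the pairing retried.

```python
import random

def random_cubic_graph(n, seed=None):
    """Random 3-regular multigraph on n vertices (n must be even), built
    with the configuration model: give every vertex three half-edges
    ('stubs') and pair all stubs uniformly at random."""
    if n % 2:
        raise ValueError("a 3-regular graph needs an even number of vertices")
    rng = random.Random(seed)
    stubs = [v for v in range(n) for _ in range(3)]  # three stubs per vertex
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))  # pair consecutive stubs

def is_connected(n, edges):
    """Depth-first search check that the graph is in one piece."""
    adj = {v: set() for v in range(n)}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    seen, stack = set(), [0]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return len(seen) == n

edges = random_cubic_graph(1000, seed=1)
print(len(edges), is_connected(1000, edges))  # 1500 edges; a random cubic graph is connected with high probability
```

Certifying that such a random graph really is an expander is much harder than generating it, which is why explicit constructions like Margulis's were such a breakthrough.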
One question, which seems to have avoided much attention, is the following: how many different expanders are there? To answer this, we first have to deal with the rather sensitive question of what exactly we mean by 'different'. Does adding one edge change the expander? If so, then the above question is not really very interesting. A more interesting example is provided by Manor Mendel and Assaf Naor: they prove that there are two different expanders such that however you try to associate the vertices in one with the vertices in the other, you must either move vertices close together that were very far apart before, or else move vertices far apart which previously were very close. In mathematical terms, they are not coarsely equivalent - we cannot even approximately preserve how close vertices are.
In my work, I show that there is a collection of expanders (we can even insist that they are Ramanujan graphs), which is impossible to enumerate (it is uncountable), such that no pair of them are coarsely equivalent. The technique is to show that for any coarsely equivalent networks, the largest cut size of any network contained in the first with at most n vertices is proportional to the largest cut size of any network contained in the second with at most n vertices. By constructing expanders where these two values are not proportional, we rule out the possibility of such coarse equivalences between them.
The behaviour of cut sizes which is used above to rule out coarse equivalences is of much interest for networks which are not expanders. In my current work I am exploring how cut sizes behave for networks which are 'negatively curved at large scale': this is an area of particular interest in group theory, and plays a key role in the recent proofs of important conjectures in low-dimensional topology: the virtually Haken and virtually fibred conjectures. For such 'negatively curved' groups, cut sizes seem to be related to the dimension of an associated fractal 'at infinity'. With John Mackay and Romain Tessera, we have established this link for an interesting collection of such networks, and are working on developing the technology needed to generalise our results."
(1) This is not the traditional definition, but one of my results proves that a network is an expander in the definition given here if and only if it contains an expander in the traditional sense.
(2) Two networks with highlighted collections of vertices demonstrating the value of the cut size
(3) The header image of this article is the Zig-Zag product of a cycle of length 6 with a cycle of length 4

Monday, 11 September 2017 
Medicines are key to disease treatment but are not without risk. Some patients get untoward side effects, some get insufficient relief. The human genome project promises to revolutionise modern healthcare. However, there are 3 billion places where a human’s DNA can be different. Just where are the genes of interest in sufferers of complex chronic conditions? Which genes are implicated the most in which disease in which patients? Which genes are involved in a beneficial response to a medicine? Which genes might be predictive of drug-induced adverse events? Collaborative industrial research by Oxford Mathematics' Clive Bowman seeks to tackle these areas to enable drug discovery companies to develop appropriate treatments.
The Royal Society Industrial Fellowship research at the Oxford Centre for Industrial and Applied Mathematics (OCIAM) extends stochastic insights from communication theory into producing easy-to-interpret visualisations for biotech use. Interacting determinants of the illnesses or adverse syndromes can be displayed as heatmaps or coded networks that highlight potential targets against which chemists can rationally design drugs. All types of measured data can be used simultaneously and dummy synthetic indicators such as pathways or other ontologies can be added for clarity. Heterogeneity is displayed automatically, allowing understanding of why some people get a severe disease (or drug response) and others a mild syndrome, as well as other variations, for example due to someone’s ethnicity.
Helped by this mathematics, the hope is that the right drug can be designed for the right patient and suffering alleviated efficiently with the minimum risk for the individual. For fuller detail on Clive's work please click here.
The image above shows a drug adverse event example (please click on the image). Clockwise from top left: Drug molecule (by Fvasconcellos); heat map showing patients with severe (red) or mild (blue) syndrome in multidimensional information space (courtesy of Dr O Delrieu); two aetiological subnetworks to syndrome; 3D animation display of results with dummy indicator variables.

Friday, 1 September 2017 
Researchers from Oxford Mathematics and Imperial College London have provided a 'mathematical thought experiment' to inspire caution in biologists measuring heterogeneity in cell populations.
As technologies for gene sequencing and microscopy improve, biologists and biomedical researchers are increasingly able to distinguish heterogeneity in cell populations. And some of these differences in cellular behaviours can have important implications for biological functions, such as stem cells in embryonic development, or invasive malignant cells in the onset of cancer. But where will this trend of looking for heterogeneity lead? With a good enough microscope, every cell may look different. But is this meaningful?
To illustrate their point, Linus Schumacher and Oxford Mathematicians Ruth Baker and Philip Maini focused on an example of heterogeneity in migrating cell populations. They used statistics relating to delays in the correlation between individual cells' movements to examine whether it is possible to infer heterogeneities in cell behaviours. This idea originally stems from analysing the movements of birds, but has since been applied to cells too. By measuring when the movement of two cells (or birds) is most aligned, we learn if cells (or birds) move and turn simultaneously (no delay in correlations), or follow each other (delays in correlations). This is of importance to biologists interested in understanding if a subset of cells is leading metastatic invasion, for example, or the migration of cells in the developing embryo.
Using a minimal mathematical model for cell migration, Schumacher, Baker and Maini show that correlations in movement patterns are not necessarily a good indicator of heterogeneity: even a population of identical cells can appear heterogeneous, due to chance correlations and limited sample sizes. What’s more, when the authors explicitly included heterogeneity in their model to describe experimentally measured data, the model of a homogeneous cell population could describe the data just as well (albeit for different parameter values), heavily limiting what can be concluded from such measurements.
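The model and statistics in the paper are more sophisticated, but the basic pitfall can be sketched with a toy simulation (all names and parameter values here are our own illustration): identical, completely independent persistent random walkers still produce apparent 'leader-follower' delays when, for each pair, we pick the delay at which their headings correlate most strongly.

```python
import math
import random

def simulate_headings(n_cells, n_steps, persistence=0.9, seed=0):
    """Heading angles of identical, fully independent persistent random walkers."""
    rng = random.Random(seed)
    headings = []
    for _ in range(n_cells):
        theta, track = rng.uniform(-math.pi, math.pi), []
        for _ in range(n_steps):
            theta += (1 - persistence) * rng.gauss(0, 1)  # small random turn
            track.append(theta)
        headings.append(track)
    return headings

def delayed_correlation(a, b, delay):
    """Mean alignment cos(a_t - b_{t+delay}) between two heading series."""
    pairs = [(a[t], b[t + delay]) for t in range(len(a) - delay)]
    return sum(math.cos(x - y) for x, y in pairs) / len(pairs)

def best_delay(a, b, max_delay=10):
    """Delay at which the directional correlation is strongest in magnitude."""
    return max(range(max_delay + 1),
               key=lambda d: abs(delayed_correlation(a, b, d)))

h = simulate_headings(5, 200)
print([best_delay(h[0], h[j]) for j in range(1, 5)])  # often nonzero purely by chance
```

If the cells were genuinely following a leader one would expect a consistent, reproducible delay across pairs; for these independent walkers any nonzero 'best delay' is a finite-sample artefact, which is exactly the caution the paper urges.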
Thus, we have learnt that heterogeneity can be naively inferred from cell tracking data, but it may not be meaningful. And the implications reach further than a particular type of data and specific statistical analysis. In an associated commentary, Paul Macklin of Indiana University illustrates a corollary of the main work: cell populations that divide with a fixed rate, or a distribution of division rates, can have the same distribution of cell cycle times (which could be measured experimentally). In this case, heterogeneity (whether it is real or not) is unimportant in understanding the observed biological phenomenon.
Lead author Linus Schumacher got the idea for this study while finishing his DPhil at the Wolfson Centre for Mathematical Biology in Oxford, and was enabled to continue working on it through an EPSRC Doctoral Prize award. The research appears on the cover of the August issue of Cell Systems.

Tuesday, 29 August 2017 
Taxation and death may be inevitable but what about crime? It is ubiquitous and seems to have been around for as long as human beings themselves. A disease we cannot shake. However, therein lies an idea, one that Oxford Mathematician Soumya Banerjee and colleagues have used as the basis for understanding and quantifying crime.
Their starting point is that crime is analogous to a pathogenic infection and the police response to it is similar to an immune response. Moreover, the biological immune system is also engaged in an arms race with pathogens. These analogies enable an immune system inspired theory of crime and violence in human societies, especially in large agglomerations like cities.
An immune system inspired theory of crime can provide a new perspective on the dynamics of violence in societies. The competitive dynamics between police and criminals have similarities to how the immune system is involved in the arms race with invading pathogens. Cities have properties similar to biological organisms - the police and military forces would be the immune system that protects against invading internal and external forces.
Police are activated by crime just as immune system cells are activated by specialised cells called dendritic cells. Non-criminals are turned into criminals in the presence of crime. Hence crime is like a virus. This specifically simulates a spread of disorder. Police also remove criminals, similar to how T-cells kill and remove infected cells.
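These analogies can be sketched as a toy compartment model in the spirit of epidemic (SIR-type) equations. The equations and parameter values below are our own illustration, not those of the paper: non-criminals are 'infected' by contact with criminals, police are recruited in proportion to crime, and police remove criminals.

```python
def simulate_crime(beta=0.3, gamma=0.5, alpha=0.1, delta=0.05,
                   s0=0.99, c0=0.01, p0=0.0, dt=0.01, steps=10_000):
    """Euler integration of a toy 'crime as infection' model:
    s = non-criminal fraction, c = criminal fraction, p = police level."""
    s, c, p = s0, c0, p0
    history = [(s, c, p)]
    for _ in range(steps):
        ds = -beta * s * c                 # non-criminals 'infected' by crime
        dc = beta * s * c - gamma * p * c  # new criminals minus removals by police
        dp = alpha * c - delta * p         # recruitment driven by crime, stand-down
        s, c, p = s + dt * ds, c + dt * dc, p + dt * dp
        history.append((s, c, p))
    return history

hist = simulate_crime()
print(f"peak criminal fraction: {max(c for _, c, _ in hist):.3f}")
```

With no police initially, crime grows at first; the police response then builds in proportion to crime and suppresses it, mirroring the activation-and-clearance dynamics of an immune response.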
The work has implications for public policy, ranging from how much financial resource to invest in crime fighting, to optimal policing strategies, pre-placement of police, and the number of police to be allocated to different cities. The research can also be applied to other forms of violence in human societies (like terrorism) and violence in other primate societies and social insects such as ants. Although still an extremely ambitious goal, in the era of big data we may be able to predict the behaviour of large ensembles of people without being able to predict the actions of individuals.
The researchers hope that this will be the first step towards a quantitative theory of violence and conflict in human societies, one that contributes further to the pressing debate about how to design smarter and more efficient cities that can scale and be sustainable despite population increase - a debate that mathematicians, especially in Oxford, are fully engaged in.
For a fuller explanation of the theory and a more detailed demonstration of the mathematics click here and here for PDF.

Wednesday, 16 August 2017 
Oxford Mathematician Ulrike Tillmann FRS has been elected a member of the Council of the Royal Society. The Council consists of between 20 and 24 Fellows and is chaired by the President.
Founded in the 1660s, the Royal Society’s fundamental purpose is to recognise, promote, and support excellence in science and to encourage the development and use of science for the benefit of humanity. The Royal Society's motto 'Nullius in verba' is taken to mean 'take nobody's word for it'.
Ulrike specialises in algebraic topology and has made important contributions to the study of the moduli space of algebraic curves.

Tuesday, 15 August 2017 
How does the skin develop follicles and eventually sprout hair? Research from a team including Oxford Mathematicians Ruth Baker and Linus Schumacher addresses this question using insights gleaned from organoids, 3D assemblies of cells possessing rudimentary skin structure and function, including the ability to grow hair.
In the study, the team started with dissociated skin cells from a newborn mouse. They then took hundreds of time-lapse movies to analyse the collective cell behaviour. They observed that these cells formed organoids by moving through six distinct phases: 1) dissociated cells; 2) aggregated cells; 3) cysts; 4) coalesced cysts; 5) layered skin; and 6) skin with follicles, which robustly produce hair after being transplanted onto the back of a host mouse. By contrast, dissociated skin cells from an adult mouse only reached phase 2 - aggregation - before stalling in their development and failing to produce hair.
To understand the forces at play, the scientists analysed the molecular events and physical processes that drove successful organoid formation with newborn mouse cells. "We used a combination of bioinformatics and molecular screenings," said co-author Mingxing Lei from the University of Southern California. At various time points, they observed increased activity in genes related to: the protein collagen; the blood sugar-regulating hormone insulin; the formation of cellular sheets; the adhesion, death or differentiation of cells; and many other processes. In addition to determining which genes were active and when, the scientists also determined where in the organoid this activity took place. Next, they blocked the activity of specific genes to confirm their roles in organoid development.
By carefully studying these developmental processes, the scientists obtained a molecular "how to" guide for driving individual skin cells to self-organise into organoids that can produce hair. They then applied this "how to" guide to the stalled organoids derived from adult mouse skin cells. By providing the right molecular and genetic cues in the proper sequence, they were able to stimulate these adult organoids to continue their development and eventually produce hair. In fact, the adult organoids produced 40 percent as much hair as the newborn organoids - a significant improvement.
"Normally, many ageing individuals do not grow hair well, because adult cells gradually lose their regenerative ability," said Cheng-Ming Chuong from the team. "With our new findings, we are able to make adult mouse cells produce hair again. In the future, this work can inspire a strategy for stimulating hair growth in patients with conditions ranging from alopecia to baldness."

Wednesday, 9 August 2017 
An Oxford-led project to improve the lives of people living in cities in developing countries has been awarded £7 million.
An international team working on The PEAK Program, led by Professor Michael Keith, Co-Director of the University of Oxford Future of Cities programme, and involving researchers from all four academic divisions across Oxford, including Oxford Mathematicians Peter Grindrod and Neave Clery, has received the grant from the Global Challenges Research Fund (GCRF), funded through the UK’s Economic and Social Research Council (ESRC).
The funds will be used over five years to foster a generation of urban scholars working in the humanities, science and social science to enable cities to meet the needs of their future inhabitants and help manage their growth. Michael Keith said, “We aim to grow a new generation of interdisciplinary urbanists and a network of smarter cities working together across Africa, China, India, Colombia and the UK.”
In particular, the mathematics of urban living, with a growing wave of data becoming available, and its potential input into policy, is a critical part of any future urban planning. The PEAK grant will support Neave and three other Oxford Mathematics Postdoctoral Researchers (PDRAs) who will spend time at partner sites abroad - in turn, PDRAs from abroad will visit Oxford to share learning.

Wednesday, 2 August 2017 
With the passing of Landon T. Clay on 29 July, Oxford Mathematics has lost a treasured friend whose committed support and generosity were key factors in the recent development of the Mathematical Institute. The support of Landon and his wife Lavinia was the indispensable mainstay of the project to create the magnificent new home for Oxford Mathematics in the Andrew Wiles Building; the building is a symbol of the enduring legacy of their insightful, incisive support for mathematics and science. Landon's membership of the University of Oxford's Chancellor’s Court of Benefactors also recognised the breadth of his support for many parts of the University, always with a sharp emphasis on supporting excellence.
Landon Clay was the Founder of the Clay Mathematics Institute, which has had a profoundly beneficial effect on the progress and appreciation of research into fundamental mathematics. He will perhaps be best remembered for his inspired creation of the Millennium Prizes: these have the crucial feature that they draw the public’s attention to the fundamental importance of the prize problems themselves, in contrast to the focus on the prizewinners as is the case with the other great prizes of mathematics.
The Clay Mathematics Institute, directed from the President’s Office in the Andrew Wiles Building, supports mathematical excellence in many other ways. In particular, the Clay Research Fellowships give the brightest young mathematicians in the world five years of freedom to develop their ideas free of financial concerns and institutional demands. The fruits of this programme can be judged from the fact that three of the four Fields Medallists at the International Congress in 2014 were former Clay Fellows.
The ramifications of Landon Clay’s generous and astutely directed support for mathematics will echo long into the future. A fuller account of his life and the range of his philanthropy can be found on the Clay Mathematics Institute website.
Photograph by Robert Schoen, 2004

Friday, 28 July 2017 
It is an intriguing fact that the 3-dimensional world in which we live is, from a mathematical point of view, rather special. Dimension 3 is very different from dimension 4, and these both have very different theories from those of dimensions 5 and above. The study of space in dimensions 2, 3 and 4 is the field of low-dimensional topology, the research area of Oxford Mathematician Marc Lackenby.
One of the reasons that 3-dimensional space is different from the others is the presence of knots. A knot is just a piece of string that is usually closed up to form a loop (mathematically, it is a smoothly embedded simple closed curve). It is a familiar everyday fact that there are many different knots, the simplest two being the unknot and the trefoil shown below. However, if you put a knotted piece of string into 4-dimensional space, you can always unknot it.
The existence of non-trivial knots is a key feature of 3-dimensional space, and so it is a worthwhile goal to attempt to classify knots. One is immediately led to the following simple questions: given two knot diagrams, how can we decide whether they are the same knot? In fact, how can we even decide whether a knot diagram represents the unknot? These questions are simple to state, but actually are very difficult to answer. What is needed is an algorithm that can definitively resolve such questions in finite time. It is known that similar problems in high dimensions are unsolvable, but the situation in dimension 3 is tractable, just.
It is an old theorem (dating back to the 1920s) that any two diagrams of a knot differ by a sequence of Reidemeister moves, which are local modifications to a diagram, shown below:
This has the following algorithmic consequence: if two diagrams represent the same knot, then it will always be possible to prove this, as follows. Apply all possible Reidemeister moves to one of the diagrams. Then apply all possible Reidemeister moves to each of the resulting collection of diagrams, and so on. If the two knots are the same, this procedure will eventually reach the second diagram and so you will have proved that the two knots are equivalent. But if the knots are different, this process will not terminate. So, to turn this into an effective algorithm to decide whether two knots are the same, one needs to be given, in advance, an upper bound on the number of Reidemeister moves required to relate two diagrams of a knot. The search for such a bound is what Marc Lackenby has been working on recently. He has shown that for any diagram of the unknot with c crossings, there is a sequence of at most $(236\ c)^{11}$ moves that takes it to the diagram with no crossings. The bound $(236\ c)^{11}$ may seem large, but it is actually much smaller than what was known previously, which was an exponential function of c. The existence of such a polynomial bound had been a well-known, long-standing problem. To prove this theorem, Marc had to use a wide variety of different techniques from across low-dimensional topology. His paper was recently published in the Annals of Mathematics.
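To get a feel for why a polynomial bound is so much better, a short computation finds where even a modest exponential overtakes the polynomial. This is purely illustrative: we compare against a plain $2^c$, whereas the previously known exponential bound had far larger constants, so the real gap is far more dramatic.

```python
def poly_bound(c):
    """Lackenby's bound on the number of Reidemeister moves needed to
    simplify a c-crossing diagram of the unknot."""
    return (236 * c) ** 11

# Find the first crossing number at which the toy exponential 2**c
# exceeds the polynomial bound (exact integer arithmetic throughout).
crossover = next(c for c in range(1, 10**6) if 2 ** c > poly_bound(c))
print(crossover)  # → 169
```

So beyond about 169 crossings, even this tame exponential dwarfs the polynomial, and the gap between the two grows without limit; for an algorithm's running time, that is the difference between feasible and hopeless.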
This polynomial bound is not the end of the story. The procedure for deciding whether a knot is the unknot using Reidemeister moves is simple but not particularly efficient. Even with the polynomial bound on the number of moves, the running time of the algorithm is an exponential function of the initial crossing number c. Can one do better than this? No-one knows, but Marc is currently working on this problem, and hopes to find an algorithm that runs in sub-exponential time.
