Thursday, 14 September 2017

A continuum of expanders

As part of our series of research articles focusing on the rigour and intricacies of mathematics and its problems, Oxford Mathematician David Hume discusses his work on networks and expanders.

"A network is a collection of vertices (points) and edges (lines connecting two vertices). They are used to encode everything from transport infrastructure to social media interactions, and from the behaviour of subatomic particles to the structure of a group of symmetries. A common theme throughout these applications, and therefore of interest to civil engineers, advertisers, physicists, and mathematicians (amongst others), is that it is important to know how well connected a given network is. For example, is it possible that two major road closures make it impossible to drive from London to Oxford? An efficient road network should ensure that there are multiple ways to get between any two important places, but we cannot simply tarmac everything! As another example, if as an advertiser, you post adverts on a social media platform, how do you ensure that you reach as many people as possible, without paying to post to every single account?

Given a network, we say its cut size is the smallest number of vertices you need to remove, so that the remaining pieces have at most half the original number of vertices in them (in our examples: how many roads need to close before half the population are unable to drive to visit the other half, or how many people need to ignore your advert so that less than half of the users of social media will see it).
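To make the definition concrete, here is a small brute-force sketch (illustrative only; real networks are far too large for an exhaustive search like this). It tries removing ever-larger sets of vertices until every remaining connected piece has at most half the original number of vertices:

```python
from itertools import combinations

def components(vertices, edges):
    """Connected components of the graph induced on `vertices`."""
    vertices = set(vertices)
    adj = {v: set() for v in vertices}
    for a, b in edges:
        if a in vertices and b in vertices:
            adj[a].add(b)
            adj[b].add(a)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def cut_size(vertices, edges):
    """Smallest number of vertices to delete so that every remaining
    component has at most half the original number of vertices."""
    n = len(vertices)
    for k in range(n + 1):
        for removed in combinations(vertices, k):
            rest = set(vertices) - set(removed)
            if all(len(c) <= n / 2 for c in components(rest, edges)):
                return k

# A cycle of 6 vertices: deleting two opposite vertices leaves two paths
# of 2 vertices each, so the cut size is 2.
cycle6 = [(i, (i + 1) % 6) for i in range(6)]
print(cut_size(range(6), cycle6))  # → 2
```

An expander family is one where this quantity grows in proportion to the number of vertices, while every vertex keeps a bounded number of edges.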

Let us say that a family of networks, with increasing numbers of vertices, is an expander if the cut size of each network is proportional to its number of vertices (1), and each vertex in a network is the end of at most a fixed number of edges. In theory this would be an optimal solution for a transport network, as we can connect as many cities as we need to without having to work out how to manage the traffic lights at a junction where 5,000 roads all converge. In practice, expanders are as incompatible with the geometry of our world as it is possible for any collection of networks to be.


Expanders, however, are still very interesting and occur naturally in diverse areas: in error-correcting codes in computer science; in number theory; and in group theory, where my personal interest lies.

It is, in general, very difficult to construct a family of expanders, even though randomly choosing larger and larger networks in which every vertex meets exactly three edges will almost surely produce one. The first construction of a family was due to Grigory Margulis: his expanders came from networks encoding the structure of finite groups of symmetries. Other constructions have since been found, most notably a construction of Ramanujan graphs (expanders which, in a particular sense, have the largest possible ratio between their cut size and their number of vertices), and the fantastically named Zig-Zag product (3), which builds expanders inductively, starting from two very simple networks.
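The random construction mentioned above is easy to sketch. The pairing below is the standard 'configuration model' (an illustration of the idea, not a construction taken from the article): each vertex is given three half-edges, which are then matched up uniformly at random. The result can contain loops or repeated edges, which in practice are rare and are discarded or redrawn:

```python
import random
from collections import Counter

def random_cubic_graph(n, seed=0):
    """Configuration-model sketch of a random 3-regular network on n
    vertices (n must be even): shuffle 3 half-edges per vertex and pair
    consecutive half-edges to form edges."""
    rng = random.Random(seed)
    stubs = [v for v in range(n) for _ in range(3)]
    rng.shuffle(stubs)
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

edges = random_cubic_graph(10)
degrees = Counter(v for e in edges for v in e)
print(sorted(degrees.values()))  # every vertex meets exactly 3 edge-ends
```

A deep theorem, not visible in this sketch, is that as n grows such graphs are almost surely expanders.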

One question, which seems to have avoided much attention, is the following: how many different expanders are there? To answer this, we first have to deal with the rather sensitive question of what exactly we mean by 'different'. Does adding one edge change the expander? If so, then the question above is not really very interesting. A more interesting example is provided by Manor Mendel and Assaf Naor: they prove that there are two different expanders such that, however you try to associate the vertices of one with the vertices of the other, you must either move vertices close together that were previously far apart, or else move vertices far apart that were previously close. In mathematical terms, they are not coarsely equivalent - we cannot even approximately preserve how close vertices are.

In my work, I show that there is a collection of expanders (we can even insist that they are Ramanujan graphs), which is impossible to enumerate (it is uncountable), such that no pair of them are coarsely equivalent. The technique is to show that for any coarsely equivalent networks, the largest cut size of any network contained in the first with at most n vertices is proportional to the largest cut size of any network contained in the second with at most n vertices. By constructing expanders where these two values are not proportional, we rule out the possibility of such coarse equivalences between them.

The behaviour of cut sizes which is used above to rule out coarse equivalences is of much interest for networks which are not expanders. In my current work I am exploring how cut sizes behave for networks which are 'negatively curved at large scale': this is an area of particular interest in group theory, and plays a key role in the recent proofs of important conjectures in low-dimensional topology: the virtually Haken and virtually fibred conjectures. For such 'negatively curved' groups, cut sizes seem to be related to the dimension of an associated fractal 'at infinity'. With John Mackay and Romain Tessera, we have established this link for an interesting collection of such networks, and are working on developing the technology needed to generalise our results."

(1) This is not the traditional definition, but one of my results proves that a network is an expander in the definition given here if and only if it contains an expander in the traditional sense.

(2) Two networks with highlighted collections of vertices demonstrating the value of the cut size

(3) The header image of this article is the Zig-Zag product of a cycle of length 6 with a cycle of length 4 

Monday, 11 September 2017

Searching the genome haystack - Where is the disease? Where is the drug risk?

Medicines are key to disease treatment but are not without risk. Some patients get untoward side effects, some get insufficient relief. The human genome project promises to revolutionise modern health-care. However, there are 3 billion places where a human’s DNA can be different. Just where are the genes of interest in sufferers of complex chronic conditions? Which genes are implicated the most in which disease in which patients? Which genes are involved in a beneficial response to a medicine? Which genes might be predictive of drug-induced adverse events? Collaborative industrial research by Oxford Mathematics' Clive Bowman seeks to tackle these areas to enable drug discovery companies to develop appropriate treatments.

The Royal Society Industrial Fellowship research at the Oxford Centre for Industrial and Applied Mathematics (OCIAM) extends stochastic insights from communication theory into producing easy-to-interpret visualisations for biotech use. Interacting determinants of the illnesses or adverse syndromes can be displayed as heatmaps or coded networks that highlight potential targets against which chemists can rationally design drugs. All types of measured data can be used simultaneously and dummy synthetic indicators such as pathways or other ontologies can be added for clarity. Heterogeneity is displayed automatically allowing understanding of why some people get a severe disease (or drug response) and others a mild syndrome, as well as other variations, for example due to someone’s ethnicity.

Helped by this mathematics, the hope is that the right drug can be designed for the right patient, and suffering alleviated efficiently with the minimum risk for the individual. For fuller detail on Clive's work please click here.

The image above shows a drug adverse event example (please click on the image). Clockwise from top left: Drug molecule (by Fvasconcellos); heat map showing patients with severe (red) or mild (blue) syndrome in multidimensional information space (courtesy of Dr O Delrieu); two aetiological subnetworks to syndrome; 3D animation display of results with dummy indicator variables.

Friday, 1 September 2017

Heterogeneity in cell populations - a cautionary tale

Researchers from Oxford Mathematics and Imperial College London have provided a 'mathematical thought experiment' to inspire caution in biologists measuring heterogeneity in cell populations.

As technologies for gene sequencing and microscopy improve, biologists and biomedical researchers are increasingly able to distinguish heterogeneity in cell populations. And some of these differences in cellular behaviours can have important implications for biological functions, such as stem cells in embryonic development, or invasive malignant cells in the onset of cancer. But where will this trend of looking for heterogeneity lead? With a good enough microscope, every cell may look different. But is this meaningful?

To illustrate their point, Linus Schumacher and Oxford Mathematicians Ruth Baker and Philip Maini focused on an example of heterogeneity in migrating cell populations. They used statistics relating to delays in the correlation between individual cells' movements to examine whether it is possible to infer heterogeneities in cell behaviours. This idea originally stems from analysing the movements of birds, but has since been applied to cells too. By measuring when the movement of two cells (or birds) is most aligned, we learn if cells (or birds) move and turn simultaneously (no delay in correlations), or follow each other (delays in correlations). This is of importance to biologists interested in understanding if a subset of cells is leading metastatic invasion, for example, or the migration of cells in the developing embryo.
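For the curious, here is a minimal sketch of a delayed-correlation computation (an illustration of the general idea; the statistic used in the paper may differ in detail). One track's velocity directions are shifted relative to the other, and the average alignment is measured at each delay; a peak at a positive delay suggests the second cell repeats the first cell's turns some steps later:

```python
import numpy as np

def delayed_correlations(v1, v2, max_delay):
    """Mean alignment (dot product of unit velocity directions) between
    two tracks, pairing v1[t] with v2[t + d] for each delay d. A peak at
    d > 0 suggests track 2 repeats track 1's turns d steps later."""
    unit = lambda v: v / np.linalg.norm(v, axis=1, keepdims=True)
    u1, u2 = unit(np.asarray(v1, float)), unit(np.asarray(v2, float))
    corrs = {}
    for d in range(-max_delay, max_delay + 1):
        if d >= 0:
            a, b = u1[:len(u1) - d], u2[d:]
        else:
            a, b = u1[-d:], u2[:len(u2) + d]
        corrs[d] = float(np.mean(np.sum(a * b, axis=1)))
    return corrs

# Synthetic 'follower' data: cell 2 repeats cell 1's headings 2 steps later.
rng = np.random.default_rng(1)
headings = np.cumsum(rng.normal(0.0, 0.3, 60))
v1 = np.c_[np.cos(headings), np.sin(headings)]
v2 = np.roll(v1, 2, axis=0)
corrs = delayed_correlations(v1, v2, 4)
print(max(corrs, key=corrs.get))  # → 2
```

The paper's point is precisely that such peaks must be interpreted with care: chance correlations in short, noisy tracks can mimic the signature of leaders and followers.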

Using a minimal mathematical model for cell migration, Schumacher, Baker and Maini show that correlations in movement patterns are not necessarily a good indicator of heterogeneity: even a population of identical cells can appear heterogeneous, due to chance correlations and limited sample sizes. What’s more, when the authors explicitly included heterogeneity in their model to describe experimentally measured data, the model of a homogeneous cell population could describe the data just as well (albeit for different parameter values), heavily limiting what can be concluded from such measurements.

Thus, we have learnt that heterogeneity can be naively inferred from cell tracking data, but it may not be meaningful. And the implications reach further than this particular type of data and statistical analysis. In an associated commentary, Paul Macklin of Indiana University illustrates a corollary of the main work: cell populations that divide with a fixed rate, or with a distribution of division rates, can have the same distribution of cell cycle times (which could be measured experimentally). In this case, heterogeneity (whether real or not) is unimportant in understanding the observed biological phenomenon.

Lead author Linus Schumacher got the idea for this study while finishing his DPhil at the Wolfson Centre for Mathematical Biology in Oxford, and was enabled to continue working on it through an EPSRC Doctoral Prize award. The research appears on the cover of the August issue of Cell Systems.

Tuesday, 29 August 2017

How our immune systems could help us understand crime

Taxation and death may be inevitable but what about crime? It is ubiquitous and seems to have been around for as long as human beings themselves. A disease we cannot shake. However, therein lies an idea, one that Oxford Mathematician Soumya Banerjee and colleagues have used as the basis for understanding and quantifying crime.

Their starting-point is that crime is analogous to a pathogenic infection and the police response to it is similar to an immune response. Moreover, the biological immune system is also engaged in an arms race with pathogens. These analogies enable an immune system inspired theory of crime and violence in human societies, especially in large agglomerations like cities.

An immune system inspired theory of crime can provide a new perspective on the dynamics of violence in societies. The competitive dynamics between police and criminals have similarities to the immune system's arms race with invading pathogens. Cities have properties similar to biological organisms: the police and military forces act as the immune system that protects against invading internal and external forces.

Police are activated by crime, just as immune system cells are activated by specialised cells called dendritic cells. Non-criminals turn into criminals in the presence of crime, so crime spreads like a virus; this models the spread of disorder. Police also remove criminals, similar to how T-cells kill and remove infected cells.
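To see the flavour of the analogy, here is a deliberately crude sketch (an illustration only, not the authors' actual equations): non-criminals are 'infected' through contact with criminals, while police clear the 'infection', just as immune cells clear pathogens:

```python
def crime_dynamics(n0=990.0, c0=10.0, police=50.0,
                   beta=0.3, gamma=0.02, dt=0.1, steps=200):
    """Toy 'crime as infection' dynamics. Non-criminals N turn criminal
    through contact with criminals C (contagion, rate beta), while each
    officer removes criminals at rate gamma (immune clearance)."""
    N, C = n0, c0
    trajectory = [C]
    for _ in range(steps):
        new_criminals = beta * N * C / (N + C) * dt   # contagion term
        removed = gamma * police * C * dt             # policing term
        N -= new_criminals
        C = max(C + new_criminals - removed, 0.0)
        trajectory.append(C)
    return trajectory

# With 50 officers, clearance (gamma * police = 1.0) outpaces contagion
# (roughly beta = 0.3), so the 'infection' dies out.
traj = crime_dynamics()
print(traj[-1] < 1.0)  # → True
```

In such models the interesting policy questions are thresholds: how much policing is needed before crime decays rather than becomes endemic.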

The work has implications for public policy, ranging from how much financial resource to invest in crime fighting, to optimal policing strategies, pre-placement of police, and the number of police to be allocated to different cities. The research can also be applied to other forms of violence in human societies (like terrorism) and violence in other primate societies and social insects such as ants. Although still an extremely ambitious goal, in the era of big data we may be able to predict behaviours of large ensembles of people without being able to predict the actions of individuals.

The researchers hope that this will be the first step towards a quantitative theory of violence and conflict in human societies, one that contributes further to the pressing debate about how to design smarter and more efficient cities that can scale and be sustainable despite population increase - a debate that mathematicians, especially in Oxford, are fully engaged in.

For a fuller explanation of the theory and a more detailed demonstration of the mathematics click here and here for PDF.

Wednesday, 16 August 2017

Oxford Mathematician Ulrike Tillmann elected to Royal Society Council

Oxford Mathematician Ulrike Tillmann FRS has been elected a member of the Council of the Royal Society. The Council consists of between 20 and 24 Fellows and is chaired by the President.

Founded in the 1660s, the Royal Society’s fundamental purpose is to recognise, promote, and support excellence in science and to encourage the development and use of science for the benefit of humanity. The Royal Society's motto 'Nullius in verba' is taken to mean 'take nobody's word for it'. 

Ulrike specialises in algebraic topology and has made important contributions to the study of the moduli space of algebraic curves.

Tuesday, 15 August 2017

Hair today, gone tomorrow. But have scientists found a new way to stimulate hair growth?

How does the skin develop follicles and eventually sprout hair? Research from a team including Oxford Mathematicians Ruth Baker and Linus Schumacher addresses this question using insights gleaned from organoids, 3D assemblies of cells possessing rudimentary skin structure and function, including the ability to grow hair.

In the study, the team started with dissociated skin cells from a newborn mouse. They then took hundreds of timelapse movies to analyse the collective cell behaviour. They observed that these cells formed organoids by moving through six distinct phases: 1) dissociated cells; 2) aggregated cells; 3) cysts; 4) coalesced cysts; 5) layered skin; and 6) skin with follicles, which robustly produce hair after being transplanted onto the back of a host mouse. By contrast, dissociated skin cells from an adult mouse only reached phase 2 - aggregation - before stalling in their development and failing to produce hair.

To understand the forces at play, the scientists analysed the molecular events and physical processes that drove successful organoid formation with newborn mouse cells. "We used a combination of bioinformatics and molecular screenings" said co-author Mingxing Lei from the University of Southern California. At various time points, they observed increased activity in genes related to: the protein collagen; the blood sugar-regulating hormone insulin; the formation of cellular sheets; the adhesion, death or differentiation of cells; and many other processes. In addition to determining which genes were active and when, the scientists also determined where in the organoid this activity took place. Next, they blocked the activity of specific genes to confirm their roles in organoid development.

By carefully studying these developmental processes, the scientists obtained a molecular "how to" guide for driving individual skin cells to self-organise into organoids that can produce hair. They then applied this "how to" guide to the stalled organoids derived from adult mouse skin cells. By providing the right molecular and genetic cues in the proper sequence, they were able to stimulate these adult organoids to continue their development and eventually produce hair. In fact, the adult organoids produced 40 percent as much hair as the newborn organoids - a significant improvement.

"Normally, many ageing individuals do not grow hair well, because adult cells gradually lose their regenerative ability," said Cheng-Ming Chuong from the team. "With our new findings, we are able to make adult mouse cells produce hair again. In the future, this work can inspire a strategy for stimulating hair growth in patients with conditions ranging from alopecia to baldness."

Wednesday, 9 August 2017

Oxford-led project to improve urban living in developing countries awarded £7m

An Oxford-led project to improve the lives of people living in cities in developing countries has been awarded £7 million.

An international team working on The PEAK Program, led by Professor Michael Keith, Co-Director of the University of Oxford Future of Cities programme, and involving researchers from all four academic divisions across Oxford, including Oxford Mathematicians Peter Grindrod and Neave Clery, has received the grant from the Global Challenges Research Fund (GCRF), funded through the UK's Economic and Social Research Council (ESRC).

The funds will be used over five years to foster a generation of urban scholars working in the field of humanities, science and social science to enable cities to meet the needs of their future inhabitants and help manage their growth. Michael Keith said “We aim to grow a new generation of interdisciplinary urbanists and a network of smarter cities working together across Africa, China, India, Colombia and the UK.”

In particular the mathematics of urban living, with a growing wave of data becoming available, and its potential input into policy, is a critical part of any future urban planning. The PEAK grant will support Neave and three other Oxford Mathematics Postdoctoral Researchers (PDRAs) who will spend time at partner sites abroad - in turn PDRAs from abroad will visit Oxford to share learning.  

Wednesday, 2 August 2017

Landon Clay, founder of the Clay Mathematics Institute and generous supporter of Oxford Mathematics

With the passing of Landon T. Clay on 29 July, Oxford Mathematics has lost a treasured friend whose committed support and generosity were key factors in the recent development of the Mathematical Institute. The support of Landon and his wife Lavinia was the indispensable mainstay of the project to create the magnificent new home for Oxford Mathematics in the Andrew Wiles Building; the building is a symbol of the enduring legacy of their insightful, incisive support for mathematics and science. Landon's membership of the University of Oxford's Chancellor's Court of Benefactors also recognised the breadth of his support for many parts of the University, always with a sharp emphasis on supporting excellence.

Landon Clay was the Founder of the Clay Mathematics Institute, which has had a profoundly beneficial effect on the progress and appreciation of research into fundamental mathematics. He will perhaps be best remembered for his inspired creation of the Millennium Prizes: these have the crucial feature that they draw the public’s attention to the fundamental importance of the prize problems themselves, in contrast to the focus on the prize-winners as is the case with the other great prizes of mathematics.

The Clay Mathematics Institute, directed from the President's Office in the Andrew Wiles Building, supports mathematical excellence in many other ways. In particular, the Clay Research Fellowships give the brightest young mathematicians in the world five years of freedom to develop their ideas free of financial concerns and institutional demands. The fruits of this programme can be inferred from the fact that three of the four Fields Medallists at the International Congress in 2014 were former Clay Fellows.

The ramifications of Landon Clay’s generous and astutely directed support for mathematics will echo long into the future. A fuller account of his life and the range of his philanthropy can be found on the Clay Mathematics Institute website.

Photograph by Robert Schoen, 2004

Friday, 28 July 2017

Knots and the nature of 3-dimensional space

It is an intriguing fact that the 3-dimensional world in which we live is, from a mathematical point of view, rather special. Dimension 3 is very different from dimension 4 and these both have very different theories from that of dimensions 5 and above. The study of space in dimensions 2, 3 and 4 is the field of low-dimensional topology, the research area of Oxford Mathematician Marc Lackenby.

One of the reasons that 3-dimensional space is different from the others is the presence of knots. A knot is just a piece of string that is usually closed up to form a loop (mathematically, it is a smoothly embedded simple closed curve). It is a familiar everyday fact that there are many different knots, the simplest two being the unknot and the trefoil shown below. However, if you put a knotted piece of string into 4-dimensional space, you can always unknot it.


The existence of non-trivial knots is a key feature of 3-dimensional space, and so it is a worthwhile goal to attempt to classify knots. One is immediately led to the following simple questions: given two knot diagrams, how can we decide whether they are the same knot? In fact, how can we even decide whether a knot diagram represents the unknot? These questions are simple to state, but actually are very difficult to answer. What is needed is an algorithm that can definitively resolve such questions in finite time. It is known that similar problems in high dimensions are unsolvable, but the situation in dimension 3 is tractable, just.

It is an old theorem (dating back to the 1920s) that any two diagrams of a knot differ by a sequence of Reidemeister moves, which are local modifications to a diagram, shown below:

This has the following algorithmic consequence: if two diagrams represent the same knot, then it will always be possible to prove this, as follows. Apply all possible Reidemeister moves to one of the diagrams. Then apply all possible Reidemeister moves to each of the resulting collection of diagrams, and so on. If the two knots are the same, this procedure will eventually reach the second diagram, and so you will have proved that the two knots are equivalent. But if the knots are different, this process will not terminate. So, to turn this into an effective algorithm for deciding whether two knots are the same, one needs to be given, in advance, an upper bound on the number of Reidemeister moves required to relate two diagrams of a knot.

The search for such a bound is what Marc Lackenby has been working on recently. He has shown that for any diagram of the unknot with c crossings, there is a sequence of at most $(236\ c)^{11}$ moves that takes it to the diagram with no crossings. The bound $(236\ c)^{11}$ may seem large, but it is actually much smaller than what was known previously, which was an exponential function of c. The existence of such a polynomial bound had been a well-known, longstanding open problem. To prove this theorem, Marc had to use a wide variety of different techniques from across low-dimensional topology. His paper was recently published in the Annals of Mathematics.
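The bounded search described above can be sketched generically. The code below uses integers and toy moves as stand-ins for knot diagrams and Reidemeister moves; the point is only the shape of the algorithm, with the `bound` argument playing the role of $(236\ c)^{11}$:

```python
def reachable_within(start, target, moves, bound):
    """Breadth-first search sketch of the bounded procedure: decide
    whether `target` is reachable from `start` in at most `bound`
    applications of `moves`. Crucially, with an a-priori bound, an
    unsuccessful search *proves* the two objects are inequivalent."""
    if start == target:
        return True
    frontier, seen = {start}, {start}
    for _ in range(bound):
        next_frontier = set()
        for state in frontier:
            for nxt in moves(state):
                if nxt == target:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    next_frontier.add(nxt)
        if not next_frontier:
            return False      # the entire reachable set has been explored
        frontier = next_frontier
    return False

# Toy stand-in for diagrams and moves: integers with three local 'moves'.
toy_moves = lambda n: {n + 1, n - 1, 2 * n}
print(reachable_within(1, 10, toy_moves, 4))  # 1→2→4→5→10, so True
print(reachable_within(1, 10, toy_moves, 3))  # not reachable in 3: False
```

Without the bound, the loop would have no principled stopping point, which is exactly why the search for polynomial bounds matters.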

This polynomial bound is not the end of the story. The procedure for deciding whether a knot is the unknot using Reidemeister moves is simple but not particularly efficient. Even with the polynomial bound on the number of moves, the running time of the algorithm is an exponential function of the initial crossing number c. Can one do better than this? No-one knows, but Marc is currently working on this problem, and hopes to find an algorithm that runs in sub-exponential time.

Friday, 28 July 2017

Numerical Analyst Nick Trefethen on the pleasures and significance of his subject

Oxford Mathematician Nick Trefethen was recently awarded the George Pólya Prize for Mathematical Exposition by the Society for Industrial and Applied Mathematics (SIAM) "for the exceptionally well-expressed accumulated insights found in his books, papers, essays, and talks." Here Nick reflects on the award, his approach to mathematics and the ever-expanding role of Numerical Analysis in the world.

Congratulations on your award, how did you react when you found out you had won?

I was thrilled. There are many accolades to dream of achieving in an academic career but I am one of the relatively few mathematicians who love to write. So, to be acknowledged for mathematical exposition is important to me. My mother was a writer and I guess it is in my blood.

What is Numerical Analysis?

Much of science and engineering involves solving problems in mathematics, but these can rarely be solved on paper. They have to be solved with a computer, and to do this you need algorithms. 

Numerical Analysis is the field devoted to developing those algorithms. Its applications are everywhere: weather forecasting and climate modelling, designing airplanes or power plants, creating new materials, studying biological populations - it is simply everywhere.

It is the hands-on exploratory way to do mathematics. I like to think of it as the fastest laboratory discipline. I can conceive an experiment and in the next 10 minutes, I can run it. You get the joy of being a scientist without the months of work setting up the experiment.

How does it work in practice?

Everything I do is exploratory through a computer and focused around solving problems such as differential equations, while still addressing basic issues. In my forthcoming book Exploring ODEs (Ordinary Differential Equations), for example, every concept is illustrated as you go using our automated software system, Chebfun.

How has your research advanced the field?

Most of my own research is not directly tied to applications, more to the development of fundamental algorithms and software.

But, I have been involved in two key physical applications in my career. One was in connection with transition to turbulence of fluid flows, such as flow in a pipe; and recently in explaining how a Faraday cage works, such as the screen on your microwave oven that keeps the microwaves inside the device, while letting the light escape so that you can keep an eye on your food.

You got a lot of attention for your alternative Body Mass Index (BMI) formula, how did you come up with it?

My alternative BMI formula was not based on scientific research. But, then again, the original BMI formula wasn’t based on much research either. I actually wrote a letter to The Economist with my theory. They published it and it spread through the media amazingly.
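For reference, the alternative formula Trefethen proposed (as widely reported at the time; the constants are his suggestion, not a clinical standard) replaces the square of height by the power 2.5, scaled so that the two indices agree at a height of about 1.69 m:

```python
def bmi_traditional(weight_kg, height_m):
    """The classic index: weight divided by height squared."""
    return weight_kg / height_m ** 2

def bmi_trefethen(weight_kg, height_m):
    """Trefethen's suggested alternative: 1.3 * weight / height^2.5.
    The factor 1.3 equals sqrt(1.69), so both formulas agree at a
    height of 1.69 m, reducing the distortion for tall and short people."""
    return 1.3 * weight_kg / height_m ** 2.5

# A tall person scores lower (and a short person higher) on the new index:
print(round(bmi_traditional(90, 1.90), 1), round(bmi_trefethen(90, 1.90), 1))
```

The rationale is that real bodies do not scale as perfect two-dimensional cross-sections, so an exponent between 2 and 3 tracks weight across heights more fairly.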

As a mathematician, unless you’re Professor Andrew Wiles or Stephen Hawking for example, you are fortunate to have the opportunity to be well known within the field and invisible to the general public at the same time. The BMI interest was all very uncomfortable and unexpected.

Why do you think so few mathematicians are strong communicators?

I don’t think this is necessarily the case. One of the reasons that British universities are so strong academically is the Research Excellence Framework, through which contributions are measured. But, on the other hand, the structure has exacerbated the myth that writing books is a waste of time for academic scientists. The irony is that in any real sense, writing books is what gives you longevity and impact.

At the last REF the two things that mattered most to me, that I felt had had the most impact, were my latest book and my software project, and neither were mentioned.

In academia we play a very conservative game and try to only talk about our latest research paper. The things that actually give you impact are not always measured.

What are you working on at the moment?

I just finished writing my latest book on ODEs (due to be published later this year), which I am very excited about.

Have you always had a passion for mathematics?

My father was an engineer and I sometimes think of myself as one too - or perhaps a physicist doing maths. Numerical Analysis is a combination of mathematics and computer science, so your motivations are slightly different. Like so many in my field, I have studied and held faculty positions in both areas.

What is next for you?

I am due to start a sabbatical in Lyon, France later this year. I'll be working on a new project, but if you don’t mind, I won’t go into detail. A lot of people say that they are driven by solving a certain applied problem, but I am really a curiosity-driven mathematician. I am driven by the way the field and the algorithms are moving. I am going to try and take the next step in a particular area. I just need to work on my French.

What do you think can be done to support public engagement with mathematics?

I think the change may come through technology, almost by accident. You will have noticed over the last few decades, that people have naturally become more comfortable with computers, and I think that may expand in other interesting directions.

The public’s love/hate relationship with mathematics has been pervasive throughout my career.  As a Professor, whenever you get to border control you get asked about your title. ‘What are you a Professor of?’ When you reply, the general response is ‘oh I hated maths.’ But, sometimes you'll get ‘I loved maths, it was my best subject’, which is heartening.

What has been your career highlight to date?

Coming to Oxford was a big deal, as was being elected to the Royal Society. It meant a lot to me, especially because I am an American. It represented being accepted by my new country.

Are there any research problems that you wish you had solved first?

I’m actually going to a conference in California, where 60 people will try to prove a particular theorem; Crouzeix’s Conjecture. By the end of the week I will probably be kicking myself that I wasn’t the guy to find the final piece of the puzzle.