News

Wednesday, 6 March 2019

Three Oxford Mathematicians to present their research in the House of Commons

Three Oxford Mathematicians, Kristian Kiradjiev, Liam Brown and Tom Crawford, are to present their research in Parliament at this year's STEM for Britain competition at the House of Commons on 13th March. This prestigious competition provides an opportunity for researchers to communicate their research to parliamentarians.

Kristian's poster covers his research into the mathematical modelling of flue-gas purification, Liam's presents his computational models of cancer immunotherapy, and Tom's describes his work on the spread of pollution in the ocean.

Judged by leading academics, the gold medallist receives £2,000, while the silver and bronze medallists receive £1,250 and £750 respectively.

Thursday, 28 February 2019

Heather Harrington awarded the Adams Prize

Oxford Mathematics' Heather Harrington is the joint winner of the 2019 Adams Prize. The prize is one of the University of Cambridge's oldest and most prestigious prizes. Named after the mathematician John Couch Adams and endowed by members of St John's College, it commemorates Adams's role in the discovery of the planet Neptune. Previous prize-winners include James Clerk Maxwell, Roger Penrose and Stephen Hawking.

This year's Prize has been awarded for achievements in the field of the Mathematics of Networks. Heather's work uses mathematical and statistical techniques, including numerical algebraic geometry, Bayesian statistics, network science and optimisation, to solve interdisciplinary problems. She is the Co-Director of the recently established Centre for Topological Data Analysis.

Tuesday, 26 February 2019

Bendotaxis - when droplets are self-propelled in response to bending

We're all familiar with liquid droplets moving under gravity (especially if you live somewhere as rainy as Oxford). However, emerging applications such as lab-on-a-chip technologies require precise control of extremely small droplets; on these scales, the forces associated with surface tension become dominant over gravity, and it is therefore not practical to rely on the weight of the drops for motion. Many active processes (requiring external energy inputs), such as those involving temperature gradients, electric fields, and mechanical actuation, have been used successfully to move small droplets. Recently, however, there has been increasing interest in passive processes, which do not require external driving. One example of this is durotaxis, in which droplets spontaneously move in response to rigidity gradients (similar to the active motion of biological cells, which generally move to stiffer regions of a deformable substrate). Here, the suffix 'taxis' refers to the self-propulsive nature of the motion. In a recent study, Oxford Mathematicians Alex Bradley, Finn Box, Ian Hewitt and Dominic Vella introduced another such mechanism: bendotaxis, self-propelled droplet motion in response to bending. What is particularly interesting is that the motion occurs in the same direction regardless of whether the drop has an affinity for the channel walls (referred to as 'wetting') or not ('non-wetting'), which is atypical for droplet physics.

A small drop confined to a channel exerts a force on the walls, as a result of surface tension; this force pulls the walls together when the drop wets them, and pushes them apart otherwise. By manipulating the geometry of the channel (leaving one end free, and clamping the other end), the deformation that results from this surface tension force is asymmetric: it creates a tapering in the channel. The drop subsequently moves in response to this tapering, which is towards the free end in both the wetting and non-wetting cases.
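
A rough scaling estimate shows why the walls bend appreciably (these are generic capillary and beam-theory scalings with illustrative symbols, not the detailed model of the paper). A drop with surface tension $\gamma$ and contact angle $\theta$ in a channel of gap $h$ sustains a Laplace pressure difference \[ \Delta p \sim \frac{\gamma\cos\theta}{h}, \] so, acting over a wetted length $L_d$, it loads each wall with a force per unit width $F \sim \gamma\cos\theta\,L_d/h$. Treating a wall as a cantilever of length $L$ and bending stiffness $B$, clamped at one end and free at the other, classical beam theory gives a deflection of order \[ \delta \sim \frac{FL^3}{B}, \] which is largest at the free end; the channel therefore tapers, and the drop moves in response.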

Using a combination of scaling arguments and numerical solutions to a mathematical model of the problem, the team were able to verify that it is indeed the capillary-induced elastic deformation of the channel that drives the experimentally observed motion. This model allowed them to understand the dynamic nature of bendotaxis and predict the motion of drops in these deformable channels. In particular, they identified several interesting features of the motion; counter-intuitively, it is predicted (and observed) that the time taken for a drop to move along the channel decreases as the channel increases in length. However, relatively long channels are susceptible to 'trapping', whereby the force exerted by the drop is sufficient to bring the channel walls into contact. It is hoped that understanding the motion will pave the way for its application on a variety of scales - for example, drug delivery on a laboratory scale, and self-cleaning surfaces on a micro scale.

Thursday, 21 February 2019

Oxford Mathematics Student Tutorial now online

The Oxford Mathematics educational experience is a journey, like any other educational experience. It builds on what you learn at school. It is not unfamiliar, and we don't want it to be invisible. But it has aspects that are different. One of these is the tutorial system. Students have lectures. But they also have tutorials based on those lectures, where they sit, usually in pairs, with a tutor, go through their work and, critically, get to ask questions. It is their tutorial.

Having streamed the First Year Students' Dynamics lecture last week and interviewed the students as they left the lecture, we now present the tutorial as it happened. Even if you are not a mathematician, we hope the lectures and tutorial give you an insight into how things work in Oxford. And maybe even encourage you, or someone you know, to think about giving Oxford a go. Or just giving maths a go.

Wednesday, 20 February 2019

When does one of the central ideas in economic theory work?

The concept of equilibrium is one of the most central ideas in economics. It is one of the core assumptions in the vast majority of economic models, including models used by policymakers on issues ranging from monetary policy to climate change, trade policy and the minimum wage. But is it a good assumption? In a paper just published in Science Advances, Oxford Mathematicians Marco Pangallo, Torsten Heinrich and Doyne Farmer investigate this question in the simple framework of games, and show that when the game gets complicated this assumption is problematic. If these results carry over from games to economics, this raises deep questions about economic models, and about when they are useful for understanding the real world.

To understand what equilibrium is, it helps to think about a simple example. Kids love to play tic-tac-toe (also known as noughts and crosses), but at around eight years old they learn that there is a strategy for the second player that always results in a draw. This strategy is what is called an equilibrium in economics. If all the players in the game are rational they will play an equilibrium strategy. In economics, the word rational means that the player can evaluate every possible move, explore its consequences to the end of the game, and choose the best move. Once kids are old enough to discover the equilibrium of tic-tac-toe they quit playing because the same thing always happens and the game is really boring. One way to view this is that, for the purposes of understanding how children play tic-tac-toe, rationality is a good behavioural model for eight year olds but not for six year olds.
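
This notion of rationality, evaluating every possible move all the way to the end of the game, is exactly what an exhaustive minimax search does. As a minimal illustrative sketch (not code from the paper), the following confirms that tic-tac-toe is a draw under best play by both sides:

```python
from functools import lru_cache

# Exhaustive minimax over tic-tac-toe: the "rational player" of the text,
# who explores every move to its endpoint. A board is a tuple of 9 cells,
# each 'X', 'O' or ' '. value() returns +1 if the player to move can force
# a win, 0 if best play leads to a draw, -1 if the opponent can force a win.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    opponent = 'O' if player == 'X' else 'X'
    if winner(board) == opponent:      # the previous move ended the game
        return -1
    if ' ' not in board:               # full board with no winner: a draw
        return 0
    # try every legal move and keep the best outcome for the current player
    best = -1
    for i, cell in enumerate(board):
        if cell == ' ':
            child = board[:i] + (player,) + board[i + 1:]
            best = max(best, -value(child, opponent))
    return best

print(value((' ',) * 9, 'X'))  # prints 0: the game is a draw under best play
```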

In a more complicated game like chess, rationality is never a good behavioural model. The problem is that chess is a much harder game, hard enough that no one can analyse all the possibilities, and the usefulness of the concept of equilibrium breaks down. In chess no one is smart enough to discover the equilibrium, and so the game never gets boring. This illustrates that whether or not rationality is a sensible model of the behaviour of real people depends on the problem they have to solve. If the problem is simple, it is a good behavioural model, but if the problem is hard, it may break down.

Theories in economics nearly universally assume equilibrium from the outset. But is this always a reasonable thing to do? To get insight into this question, Pangallo and collaborators study when equilibrium is a good assumption in games. They don’t just study games like noughts and crosses or chess, but rather they study all possible games of a certain type (called normal form games). They literally make up games at random and have two simulated players play them to see what happens. The simulated players use strategies that do a good job of describing what real people do in psychology experiments. These strategies are simple rules of thumb, like doing what has worked well in the past or picking the move that is most likely to beat the opponent’s recent moves.
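
One classic rule of thumb in this spirit is fictitious play, where each player simply best-responds to the empirical frequency of the opponent's past moves. The toy sketch below (illustrative code, not necessarily one of the learning rules used in the paper) shows the behaviour the authors describe: play settles down in a coordination game but keeps switching forever in a competitive one.

```python
import numpy as np

def fictitious_play(A, B, steps=500):
    """Fictitious play on a bimatrix game. A, B hold the payoffs of players
    1 and 2; rows index player 1's moves, columns player 2's moves."""
    n, m = A.shape
    counts1, counts2 = np.ones(n), np.ones(m)  # pseudo-counts of past moves
    history = []
    for _ in range(steps):
        # each player best-responds to the opponent's empirical mixed strategy
        move1 = int(np.argmax(A @ (counts2 / counts2.sum())))
        move2 = int(np.argmax((counts1 / counts1.sum()) @ B))
        counts1[move1] += 1
        counts2[move2] += 1
        history.append((move1, move2))
    return history

# Matching pennies (purely competitive): actual play never settles down.
A = np.array([[1., -1.], [-1., 1.]])
print(fictitious_play(A, -A)[-6:])

# A coordination game (aligned incentives): play locks into an equilibrium.
C = np.array([[1., 0.], [0., 1.]])
print(fictitious_play(C, C)[-6:])
```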

The authors find that the prevalence of cycles in the structure of games is a very good indicator of the likelihood that strategies do not converge to equilibrium. This point is illustrated in the figure below. Cycles are indicated with red arrows in the payoff matrices – namely, the tables filled with numbers in the first row. When cycles are present, many learning dynamics are likely to fluctuate perpetually instead of converging to equilibrium. Fluctuating dynamics are coloured red or orange in the panels of the second and third rows.

The theory then suggests that equilibrium is likely to be a wrong behavioural model when the game has a cyclical structure. When are cycles prevalent, and when are they rare?

When the game is simple enough, in the sense that the number of actions available to each player is small, cycles are rare. When the game is more complicated, whether or not cycles are common depends on whether or not the game is competitive.  If the incentives of the players are lined up, cycles are rare, even if the game is complicated.  But when the incentives of the players are not lined up and the game gets complicated, cycles are common. 
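
The flavour of this result can be reproduced in a few lines. The sketch below (an illustrative construction of correlated random payoffs, not the authors' code) draws random two-player games with payoff correlation Γ, follows alternating best responses, and reports how often the dynamics cycles rather than reaching a pure equilibrium; competitive games (negative Γ) should cycle far more often.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_game(n, gamma):
    """n x n payoff matrices for both players, with correlation gamma
    between the two players' payoffs in each cell."""
    cov = [[1.0, gamma], [gamma, 1.0]]
    draws = rng.multivariate_normal([0.0, 0.0], cov, size=(n, n))
    return draws[..., 0], draws[..., 1]

def best_response_cycles(A, B, start=(0, 0)):
    """Follow alternating best responses from `start`; return True if the
    dynamics enters a cycle, False if it reaches a pure Nash equilibrium."""
    seen, state = set(), start
    while state not in seen:
        seen.add(state)
        i = int(np.argmax(A[:, state[1]]))  # player 1 best-responds to player 2
        j = int(np.argmax(B[i, :]))         # player 2 best-responds in turn
        if (i, j) == state:                 # fixed point: a pure Nash equilibrium
            return False
        state = (i, j)
    return True                             # revisited a state: a cycle

for gamma in (-0.9, 0.0, 0.9):
    n, trials = 10, 200
    cycles = sum(best_response_cycles(*random_game(n, gamma))
                 for _ in range(trials))
    print(f"Gamma = {gamma:+.1f}: cycling fraction {cycles / trials:.2f}")
```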

These results match the intuition about noughts and crosses vs. chess: complicated games are harder to learn and it is harder for players to coordinate on an equilibrium when one player’s gain is the other player’s loss. The main novelty of the paper is that the authors develop a formalism to make all this quantitative. This is confirmed in the figure below, which shows the share of cycles (dashed lines) and the non-convergence frequency of six learning algorithms (markers) as the complicatedness and competitiveness of a game vary. (The payoff correlation Γ is negative when the game is competitive.)

Many of the problems encountered by economic actors are too complicated to model easily using a normal form game. Nonetheless, this work suggests a potentially serious problem. Many situations in economics are complicated and competitive. This raises the possibility that many important theories in economics may be wrong: If the key behavioural assumption of equilibrium is wrong, then the predictions of the model are likely wrong too. In this case new approaches are required that explicitly simulate the behaviour of the players and take into account the fact that real people are not good at solving complicated problems.

Tuesday, 19 February 2019

Fano Manifolds Old and New

Oxford Mathematician Thomas Prince talks about his work on the construction of Fano manifolds in dimension four and their connection with Calabi-Yau geometry.

"Classical algebraic geometry studies the vanishing loci of finite collections of polynomial equations, usually under some conditions that ensure this locus has desirable properties. The first objects studied in this subject (in its modern history) were Riemann surfaces, one-dimensional objects over the complex numbers, topologically equivalent to $n$-holed tori. The attempt by the 'Italian school' of Castelnuovo and Enriques, in the early 20th century, to replicate the classification of curves in the context of algebraic surfaces led to a fundamental insight: the classification divides naturally into two distinct problems. First one studies a coarser 'birational' classification of surfaces, before analysing the surfaces within each birational class. In two dimensions the second problem has a simple solution: these surfaces are related by 'blowing up' and 'blowing down', explicit operations first described by Noether. This became the model and prototype for the modern subject of birational geometry, which developed rapidly in the later 20th century, with fundamental contributions made by Hironaka, Mori, Shafarevich, and others.

In the contemporary treatment of the subject, a particularly privileged role is given to two classes of algebraic (or complex analytic) objects: the Calabi-Yau manifolds, which generalise elliptic curves and a particular 'minimal' class of surfaces called K$3$ surfaces, and the Fano manifolds. Fano manifolds play a key role in Mori's minimal model program, itself a sweeping higher-dimensional generalisation of key methods used in the classification of surfaces. In particular this program led to the spectacular Mori-Mukai classification of Fano manifolds in dimension three, building on work of Iskovskikh.

The construction of Fano manifolds in dimension four is thus a central open problem, and the focus of ongoing research. My own interest relates to their connection with Calabi-Yau geometry: a very rough analogy would say that a Fano manifold is to a Calabi-Yau manifold what a manifold with boundary is to a manifold. Recent ideas from string theory - in particular from the field of mirror symmetry - have introduced a number of new tools to the study of Calabi-Yau manifolds, particularly following Kontsevich, Strominger-Yau-Zaslow, and Gross-Siebert. Following Givental and Kontsevich the subject of mirror symmetry has also been extended to incorporate Fano manifolds, and suggests an approach to their construction via toric degeneration. The focus of my own research is to develop these insights to produce systematic constructions of Fano manifolds along quite a different line from that taken in birational geometry. Recent progress includes a new construction of surfaces with certain classes of singularities and the classification of 527 'new' Fano fourfolds - obtained in joint work with Coates and Kasprzyk - as complete intersections in 8-dimensional toric manifolds."
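
For context, the standard definitions (textbook facts, not specific to this work) are brief: for a smooth complex projective variety $X$ with canonical bundle $K_X$, \[ X\ \text{is Fano}\iff -K_X\ \text{is ample},\qquad X\ \text{is Calabi-Yau}\iff K_X\cong\mathcal{O}_X. \] For example, projective space $\mathbb{P}^n$ is Fano, while an elliptic curve and a K$3$ surface are Calabi-Yau manifolds of dimension one and two respectively.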

Tuesday, 19 February 2019

Oxford Mathematics Student Lecture live streamed for the first time

One of our aims in Oxford Mathematics is to show what it is like to be an Oxford Mathematics student. With that in mind we have started to make student course materials available and last Autumn we filmed and made available a first year lecture on Complex Numbers. And last week, as we promised, we went a step further and livestreamed a first year lecture. James Sparks was our lecturer and Dynamics his subject. In addition, we interviewed students as they left the lecture in preparation for filming a tutorial which will also be made available later this week. 

It has taken over 800 years to get here, but we are delighted to be able to share what we do and show that it is both familiar and challenging. The lecture is below together with the interviews. We welcome your thoughts. The tutorial will follow.

Tuesday, 12 February 2019

Love and Maths - first ever live streaming of student lecture this Thursday, 14th Feb, 10am

Lecture theatre 1

It's Valentine's Day this Thursday (14th February in case you've forgotten) and Love AND Maths are in the air. For the first time, at 10am Oxford Mathematics will be LIVE STREAMING a 1st Year undergraduate lecture. In addition we will film (not live) a real tutorial based on that lecture.

The details:
LIVE Oxford Mathematics Student Lecture - James Sparks: 1st Year Undergraduate lecture on 'Dynamics', the mathematics of how things change with time
14th February, 10am-11am UK time

Watch live and ask questions of our mathematicians as you watch

https://www.facebook.com/OxfordMathematics
https://livestream.com/oxuni/undergraduate-lecture

For more information about the 'Dynamics' course: https://courses.maths.ox.ac.uk/node/37555

The lecture will remain available if you can't watch live.

Interviews with students:
We shall also be filming short interviews with the students as they leave the lecture, asking them to explain what happens next. These will be posted on our social media pages.

Watch a Tutorial:
The real tutorial based on the lecture (with a tutor and two students) will be filmed the following week and made available shortly afterwards.
https://www.youtube.com/channel/UCLnGGRG__uGSPLBLzyhg8dQ

For more information and updates:
https://twitter.com/OxUniMaths
https://facebook.com/OxfordMathematics

Friday, 8 February 2019

Tensor clustering of breast cancer data for network construction - Oxford Mathematics Research

Oxford Mathematicians Anna Seigal, Heather Harrington, Mariano Beguerisse Diaz and colleagues talk about their work on trying to find cancer cell lines with similar responses by clustering them with structural constraints.

"Modern data analysis centres on the comparison of multiple different changing factors or variables. For example, we want to understand how different cells respond to different experimental conditions, under a range of doses, and for various different output measurements, across different timepoints. The structure in such data sets guides the design of new drugs in personalised medicine.

A key way to find structure in data is by clustering: partitioning the data into subsets within which the data share some similarity. As multi-dimensional data sets become more prevalent, the question of how to cluster them becomes more important. Standard clustering algorithms can be used, but they do not preserve the multi-dimensional structure of the original data; this flattens the insights that can be made and hampers the interpretability of the results.

In our paper, we introduce a method to cluster multi-dimensional data while respecting constraints on the composition of each cluster, designed to attribute differences between clusters to interpretable differences for the application at hand. In our method, a high similarity is not enough to cluster two data points together. We also require that their similarity is compatible with a shared explanation. We do this by placing algebraic constraints on the shapes of the clusters. This method allows for better control of spurious associations in the data than other approaches, by constraining the associations to only retain those with a consistent basis for similarity.

We apply our method on an extensive experimental dataset detailing the temporal phosphorylation response of signaling molecules in genetically diverse breast cancer cell lines in response to different ligands (or experimental conditions).  In this setting, we aim to find sets of experiments whose responses are similar, and to interpret these similarities in terms of the unknown underlying signaling mechanisms.  In our data set each experiment is given by a cell line and a ligand. One example of a mechanistic interpretation we could make from a similarity is that the cell lines in a cluster share a mutation, and the ligands are those whose effect is altered by the mutation. 

We constrain the clusters to be rectangular, i.e. to match a subset of cell lines with a subset of ligands. The constraints only keep experimental measurements that are compatible with a mechanistic interpretation, which makes it easier to glean biological insights from the clusters obtained. In our paper, we present two variations of the algorithm:

1. The method can be applied directly to a dataset (i.e. as a standalone clustering tool).  Similarities between data points are encoded in a similarity tensor, the higher-dimensional analogue of a similarity matrix, and constraints about which clusters are allowed form the equations and inequalities in the entries of an unknown tensor, which encodes the clustering assignment.

2. The method can be applied in combination with other clustering methods, to impose constraints onto pre-existing clusters.  The distance between partitions is given by the number of experiments whose clustering assignment changes.  Hence this method can be used in conjunction with any other state-of-the-art method and preserve the features of an initial clustering that are compatible with the constraints.

In both implementations, the interpretability constraints can be encoded as algebraic inequalities in the entries of the clustering assignment, which gives an integer linear program. This can be solved to optimality using the branch-and-bound algorithm to find the best clustering assignment.

Any other constraints that give linear inequalities can also be used. There are possibilities to apply the methodology to problems with other constraints, such as restricting the size of clusters, imposing certain combinations of data, or finding communities in networks with quotas."
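
To make variation 2 concrete, here is a toy sketch (an illustrative formulation with made-up sizes and labels, not the authors' implementation): given initial cluster labels for each (cell line, ligand) experiment, it finds the rectangular clustering that reassigns the fewest experiments, encoding the rectangle constraint as linear inequalities and solving the resulting integer program with the open-source PuLP modeller and its CBC branch-and-bound solver.

```python
import pulp

# Toy data: 3 cell lines x 4 ligands, 2 clusters. `initial` holds hypothetical
# labels from some off-the-shelf clustering; note it is not rectangular.
n_cells, n_ligands, n_clusters = 3, 4, 2
initial = [[0, 0, 1, 1],
           [0, 0, 1, 1],
           [1, 0, 1, 0]]

cells, ligands, clusters = range(n_cells), range(n_ligands), range(n_clusters)
prob = pulp.LpProblem("rectangular_clustering", pulp.LpMaximize)

# x[i][k]: cell line i belongs to cluster k; y[j][k]: ligand j belongs to k;
# z[i][j][k]: experiment (i, j) is assigned to cluster k. All binary.
x = pulp.LpVariable.dicts("x", (cells, clusters), cat="Binary")
y = pulp.LpVariable.dicts("y", (ligands, clusters), cat="Binary")
z = pulp.LpVariable.dicts("z", (cells, ligands, clusters), cat="Binary")

# Objective: keep as many of the initial assignments as possible,
# i.e. minimise the number of reassigned experiments.
prob += pulp.lpSum(z[i][j][initial[i][j]] for i in cells for j in ligands)

for i in cells:
    for j in ligands:
        # every experiment belongs to exactly one cluster
        prob += pulp.lpSum(z[i][j][k] for k in clusters) == 1
        for k in clusters:
            # z = x AND y, linearised: clusters are rectangles
            prob += z[i][j][k] <= x[i][k]
            prob += z[i][j][k] <= y[j][k]
            prob += z[i][j][k] >= x[i][k] + y[j][k] - 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))  # CBC uses branch-and-bound/-cut
moved = n_cells * n_ligands - int(pulp.value(prob.objective))
print("experiments reassigned:", moved)
for k in clusters:
    rows = [i for i in cells if x[i][k].value() == 1]
    cols = [j for j in ligands if y[j][k].value() == 1]
    print(f"cluster {k}: cell lines {rows} x ligands {cols}")
```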

Wednesday, 6 February 2019

Sharp rates of energy decay for damped waves - Oxford Mathematics Research

Differential equations arising in physics and elsewhere often describe the evolution in time of quantities which also depend on other (typically spatial) variables. Well-known examples of such evolution equations include the heat equation and the wave equation. A rigorous, functional analytic approach to the study of linear autonomous evolution equations begins by considering the associated abstract Cauchy problem, \begin{equation}\tag{1} \left\{\begin{aligned} \dot{u}(t)&=Au(t),\quad t\ge0,\\ u(0)&=x\in X. \end{aligned}\right. \end{equation} Here $A$ is a linear operator (typically unbounded) acting on a suitably chosen Banach space $X$, which is usually a space of functions or a product of such spaces. For instance, in the case of the heat equation on a domain $\Omega$ we might choose $X=L^2(\Omega)$ and let $A$ be the Laplace operator with suitable boundary conditions, and for the wave equation we would take $X$ to be a product of two function spaces corresponding, respectively, to the displacement and the velocity of the wave. Assuming the abstract Cauchy problem to be well posed, there exists a family $(T(t))_{t\ge0}$ of bounded linear operators (a so-called $C_0$-semigroup) acting on $X$ such that the solution of (1) is given by $u(t)=T(t)x$, $t\ge0$. Of course, we cannot normally hope to solve (1) exactly, so the operators $T(t)$, $t\ge0$, are in general unknown. The main task is to deduce useful information about the semigroup $(T(t))_{t\ge0}$ from what is known about $A$, in particular its spectral properties.
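
For instance, the damped wave equation $\ddot{u}=\Delta u-a(x)\dot{u}$ on a domain $\Omega$, with Dirichlet boundary conditions and damping coefficient $a\ge0$, fits this framework (this is the standard textbook formulation, recalled here for concreteness) upon setting \[ X=H_0^1(\Omega)\times L^2(\Omega),\qquad v(t)=\begin{pmatrix}u(t)\\ \dot{u}(t)\end{pmatrix},\qquad A=\begin{pmatrix}0 & I\\ \Delta & -a\end{pmatrix}, \] so that (1) for $v$ reproduces the original equation, while the natural norm on $X$ recovers the energy: \[ E(t)=\frac{1}{2}\int_\Omega\bigl(|\nabla u(t,x)|^2+|\dot{u}(t,x)|^2\bigr)\,\mathrm{d}x=\frac{1}{2}\|v(t)\|_X^2. \]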

In concrete applications, the norm on the space $X$ often admits a physical interpretation. An important example of this kind is the wave equation, where $X$ is a Hilbert space with the property that the induced norm of the solution $u(t)=T(t)x$ is related in a very natural way to the energy of the solution at time $t\ge0$. Thus we may study energy decay of waves, a fundamental problem in mathematical physics, by investigating the asymptotic behaviour of the norms $\|u(t)\|$ as $t\to\infty$. In the classical (undamped) wave equation the operators $T(t)$, $t\ge0$, are isometries (even unitary operators), so energy is conserved. On the other hand, as soon as there is some sort of damping, for instance due to air resistance or other frictional forces, we expect the energy of any solution to decay over time. But at what rate? As it turns out, we may associate with any damped wave equation an increasing continuous function $M\colon[0,\infty)\to(0,\infty)$, which captures important spectral properties of the operator $A$ and in all cases of interest will satisfy $M(s)\to\infty$ as $s\to\infty$. In practice, obtaining good estimates on the function $M$ may itself be a non-trivial problem (the precise behaviour of $M$ is determined by the nature of the damping), but at least in principle the function $M$ is part of what one knows about the problem at hand. The question becomes: given the function $M$, what can we say about the rate of energy decay of (sufficiently regular) solutions of our damped wave equation?

It is known that the best result one may hope for is an estimate of the form \begin{equation}\tag{2} \|u(t)\|\le \frac{C}{M^{-1}(ct)} \end{equation} for all sufficiently large values of $t>0$, where $C,c$ are positive constants. It is also known that this best possible rate does not hold in all cases, and that sometimes a certain correction factor is needed. On the other hand, a celebrated result from 2010 due to A. Borichev and Y. Tomilov shows that if we consider the damped wave equation (or more generally any abstract Cauchy problem in which $X$ is a Hilbert space) and if $M(s)$ is proportional to $s^\alpha$ for some $\alpha>0$ and all sufficiently large $s>0$, then we do obtain the best possible rate given by (2). This result has been applied extensively throughout the recent literature on energy decay of damped waves and similar systems. A natural question, then, is whether the best possible estimate in (2) holds only for functions $M$ of this special polynomial type or for other functions as well.

In a recent paper (to appear in Advances in Mathematics) Oxford Mathematician David Seifert and his collaborators proved that one in fact obtains the optimal estimate in (2) for a much larger class of functions $M$, known in the literature as functions with positive increase. This class includes all functions of sufficiently rapid and regular growth, and in particular it includes functions $M(s)$ which are eventually proportional to $s^\alpha\log(s)^\beta$, where $\alpha>0$ and $\beta\in\mathbb{R}$. Such functions arise naturally in models of sound waves subject to viscoelastic damping at the boundary. Furthermore, the class of functions with positive increase is in a certain sense the largest possible class for which one could hope to obtain the estimate in (2), as is also shown in the paper. The proofs of these results combine techniques from operator theory and Fourier analysis. One particularly important ingredient is the famous Plancherel theorem, which states that the Fourier transform (suitably scaled) is a unitary operator on the space of square-integrable functions taking values in a Hilbert space. In future work, David and his collaborators hope to extend their results to the setting of more general Banach spaces. In such cases, however, the Plancherel theorem is known not to hold, so new ideas based on the finer geometric properties of Banach spaces are likely to be needed.
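
For reference, one standard way to state the positive increase condition (a paraphrase of the definition used in this literature) is the following: an increasing function $M\colon[0,\infty)\to(0,\infty)$ has positive increase if there exist $\alpha>0$, $c\in(0,1]$ and $s_0>0$ such that \[ \frac{M(\lambda s)}{M(s)}\ge c\,\lambda^{\alpha}\qquad\text{for all }\lambda\ge1,\ s\ge s_0. \] Functions such as $s^\alpha$ with $\alpha>0$ clearly satisfy this, as (eventually) do $s^\alpha\log(s)^\beta$, whereas slowly growing functions such as $\log(s)$ do not.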
