News

Friday, 7 June 2019

The pros and cons of cell cannibalism - mathematics and medicine join forces to understand the causes of inflammatory and infectious diseases

Certain inflammatory and infectious diseases, including atherosclerosis and tuberculosis, are caused by the accumulation inside immune cells of harmful substances, such as lipids and bacteria. A multidisciplinary study published in Proceedings of the Royal Society B, by researchers from the Universities of Oxford and Sydney, has shown how cell cannibalism contributes to this process.

Hugh Ford, an applied mathematics PhD student with Prof. Mary Myerscough at the University of Sydney, conducted the research whilst a visiting student at Oxford Mathematics. With Prof. Myerscough and Prof. Helen Byrne of the Wolfson Centre for Mathematical Biology in Oxford Mathematics, Hugh developed a mathematical model that accurately describes the accumulation of harmful substances in macrophages, white blood cells that act as “waste-disposal” cells for the immune system. When there are too many of these substances for the macrophages to handle, they are unable to remove the excess from the area before they die. This triggers a cycle: macrophages that die while removing harmful substances from the arteries leave those substances in situ, causing more macrophages to be recruited to ingest them. The researchers’ model describes how this leads to an accumulation of macrophages. The newly recruited cells ingest the dead cells, along with the cholesterol they have accumulated, and the cycle accelerates.
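
The model in the paper tracks how ingested material is distributed across the macrophage population, which is beyond the scope of a news item. Purely as an illustration of the feedback loop described above, here is a hypothetical three-variable caricature in Python; the functional form and every parameter value are assumptions for illustration, not quantities from the study.

import numpy as np
from scipy.integrate import solve_ivp

# Caricature of the accumulation cycle (NOT the published model):
#   S(t): harmful substance lying free in the tissue
#   L(t): substance currently held inside live macrophages
#   M(t): number of live macrophages
sigma = 1.0    # influx of substance into the tissue (assumed)
gamma = 0.2    # uptake rate per macrophage per unit substance (assumed)
beta  = 0.1    # macrophage death rate; dying cells drop their load back in situ
mu    = 0.02   # rate at which loaded macrophages remove material for good (assumed small)
r     = 0.5    # recruitment of new macrophages per unit free substance (assumed)

def rhs(t, y):
    S, L, M = y
    uptake = gamma * M * S
    dS = sigma - uptake + beta * L     # dying cells return their load to the tissue
    dL = uptake - (beta + mu) * L      # load grows by uptake, lost to death or removal
    dM = r * S - beta * M              # more free substance recruits more macrophages
    return [dS, dL, dM]

sol = solve_ivp(rhs, (0.0, 400.0), [0.0, 0.0, 1.0])
print("long-time state (S, L, M):", sol.y[:, -1])
# The smaller the successful-removal rate mu, the larger the eventual burden of
# substance and cells; with mu = 0 the accumulation never saturates.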

The researchers tested their model experimentally, in collaboration with Prof. David Greaves at the Dunn School of Pathology in Oxford. Hugh tracked the accumulation of plastic beads in thousands of macrophages in laboratory experiments, generating data to validate the cascading effect of substance accumulation predicted by the mathematical model. Dr Joshua Bull, an Oxford Mathematics postdoctoral researcher, assisted with the data analysis by adapting his existing image analysis software to enable automatic counting, from thousands of high-resolution images, of individual macrophages and the numbers of plastic microbeads contained within them.
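
As a flavour of the kind of image analysis this involves (and emphatically not Dr Bull's actual software), a minimal bead-counting sketch with scikit-image might look like the following; the two-channel layout, the Otsu thresholds and the function name are assumptions.

import numpy as np
from skimage import filters, measure

def beads_per_cell(cell_channel, bead_channel):
    """Count beads lying inside each cell of a two-channel grayscale image.
    Generic sketch only: real data need careful segmentation and quality control."""
    cell_mask = cell_channel > filters.threshold_otsu(cell_channel)
    bead_mask = bead_channel > filters.threshold_otsu(bead_channel)
    cells = measure.label(cell_mask)       # label connected cell regions
    beads = measure.label(bead_mask)       # label individual beads
    counts = {}
    for bead in measure.regionprops(beads):
        row, col = (int(round(v)) for v in bead.centroid)
        cell_id = cells[row, col]          # which cell contains the bead centre?
        if cell_id > 0:
            counts[cell_id] = counts.get(cell_id, 0) + 1
    return counts                          # {cell label: number of beads inside}

Run over thousands of high-resolution images, this kind of routine produces the per-cell bead counts needed to test the accumulation cascade predicted by the model.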

Prof. Greaves from the Dunn School of Pathology said: “Hugh’s mathematical modelling allowed us to do a set of biology experiments that shed new light on the processes that drive diseases. Armed with these new insights we are keen to look for drugs that enhance tissue protection by changing cell behaviour.”

Hugh said the paper contributes to a growing body of evidence that casts cannibalistic cell removal as a double-edged sword. “While on the one hand this process is crucial for tissue stability and the resolution of inflammation, it also perpetuates the subcellular accumulation of harmful substances that can then contribute to the development of diseases such as heart disease and tuberculosis.”

Wednesday, 5 June 2019

Oxford Mathematics Public Lectures: Marcus du Sautoy - The Creativity Code: How AI is learning to write, paint and think. Full lecture now online

Artificial Intelligence (AI) is a great asset. Artificial Intelligence is a threat to our freedom. Much of the debate around AI seems to focus on these two positions along with a third argument, namely AI could never replicate our creativity or capture what makes us human. We will never go to galleries to look at AI paintings or read AI poetry.

Or perhaps we might? In this fascinating and provocative lecture, Marcus du Sautoy both tests our ability to distinguish between human and machine creativity, and suggests that our creativity may even benefit from that of the machines.

The Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Wednesday, 22 May 2019

Graham Farmelo - The Universe Speaks in Numbers. Latest Oxford Mathematics Public Lecture now available

An old-fashioned tale of romance and estrangement, Graham Farmelo's Oxford Mathematics Public Lecture charts the 350-year relationship between Mathematics and Physics and its prospects for the future. Might things be less dramatic in future? Might they just have to be 'going steady' for a while?

Our Oxford Mathematics Public Lectures are aimed at a general audience who are curious about maths and its many facets. They are all live streamed and available afterwards on our YouTube Channel. For a full list of forthcoming lectures please click here. You are all very welcome.

Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Monday, 13 May 2019

The science of jumping popper toys

Snap-through buckling is a type of instability in which an elastic object rapidly jumps from one state to another. Such instabilities are familiar from everyday life: you have probably been soaked by an umbrella flipping upwards in high winds, while snap-through is harnessed to generate fast motions in applications ranging from soft robotics to artificial heart valves. In biology, snap-through has long been exploited to convert energy stored slowly into explosive movements: both the leaf of the Venus flytrap and the beak of the hummingbird snap-through to catch prey unawares.

Despite the ubiquity of snap-through in nature and engineering, how fast snap-through occurs (i.e. its dynamics) is generally not well understood, with many instances reported of delay phenomena in which snap-through occurs extremely slowly. A striking example is a children’s ‘jumping popper’ toy, which resembles a rubber spherical cap that can be turned inside-out. The inside-out shape remains stable while the cap is held at its edges, but leaving the popper on a surface causes it to snap back to its natural shape and leap upwards. As shown in the figure, the snap back is not immediate: a time delay is observed during which the popper moves very slowly before rapidly accelerating. The delay can be several tens of seconds in duration — much slower than the millisecond or so that would be expected for an elastic instability. Playing around further reveals other unusual features: holding the popper toy for longer before placing it down generally causes a slower snap-back, and the amount of delay is highly unpredictable, varying greatly with each attempt.

In a series of videos launching The Mathematical Observer, a new YouTube channel showcasing the research performed in the Oxford Mathematics Observatory, Oxford Mathematician Michael Gomez (in collaboration with Derek Moulton and Dominic Vella) investigates the science behind the jumping popper toy. Episode one discusses why the popper toy snaps, and the important role played by the geometry of a spherical cap. Episode two focuses on how fast the popper toy snaps, and how its unpredictable nature can arise purely from the mathematical structure of the snap-through transition.
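
The second episode's message, that long and highly variable delays can emerge from the structure of the transition itself, can be illustrated with the normal form of a saddle-node bifurcation. The sketch below is a generic illustration, not the elastic model analysed in the videos: it integrates dx/dt = ε + x² and shows the escape time growing like π/√ε as the threshold is approached, one mathematical mechanism for a long 'bottleneck' before a sudden snap.

import numpy as np
from scipy.integrate import solve_ivp

# Saddle-node "ghost": just past the bifurcation the dynamics dx/dt = eps + x**2
# have no fixed point, but trajectories still linger near x = 0 for a time of
# order 1/sqrt(eps) before escaping. Illustration only, not the popper-toy model.
def escape_time(eps, x0=-10.0, x_end=10.0):
    hit = lambda t, x: x[0] - x_end
    hit.terminal = True
    sol = solve_ivp(lambda t, x: [eps + x[0] ** 2], (0.0, 1.0e6), [x0],
                    events=hit, rtol=1e-8, atol=1e-10)
    return sol.t_events[0][0]

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"eps = {eps:.0e}:  escape time = {escape_time(eps):8.1f}"
          f"   (pi/sqrt(eps) = {np.pi / np.sqrt(eps):8.1f})")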

Thursday, 9 May 2019

The third in our series of filmed student lectures - Ben Green on Integration

Back in October, for the first time, we filmed an actual student lecture, Vicky Neale's lecture on 'Complex Numbers.' We wanted to show what studying at Oxford is really like, how it is not so different to school while at the same time taking things to a more rigorous level. Since we made the film available, over 375,000 people have watched some of it. 

Emboldened, we went one stage further in February and live streamed a lecture (and made it available subsequently), James Sparks on 'Dynamics.' But in addition to the lecture, we also filmed the subsequent tutorial which all students receive, usually in pairs, after lectures, and which is the essential ingredient of the Oxford learning experience. Both have been huge successes.

So we come to the third in our series of filmed student lectures. This is the opening lecture in the 1st Year course on 'Analysis III - Integration.' Prof. Ben Green both links the course to the mathematics our students have already learnt at school and develops that knowledge, taking the students to the next stage. Like all good lectures it recaps and points forward (the course materials accompanying the Integration lectures can be found here).

The lectures and tutorial are all part of our going 'Behind the Scenes' at Oxford Mathematics. We shall be filming our Open Days in July and more lectures in the Autumn. Please send any comments to external-relations@maths.ox.ac.uk.

Wednesday, 8 May 2019

Matthew Butler awarded the Lighthill-Thwaites Prize for 2019

Oxford Mathematician Matthew Butler has been awarded the biennial Lighthill-Thwaites Prize for 2019. The prize is awarded by the Institute of Mathematics and its Applications to researchers who have spent no more than five years in full-time study or work since completing their undergraduate degrees.

Matthew's research focuses on fluid dynamics, particularly flows at low Reynolds number involving surface tension and interactions with elastic boundaries. His talk at the British Applied Mathematics Colloquium 2019, where the prize was awarded, was entitled 'Sticking with droplets: Insect-inspired modelling of capillary adhesion' and focused on how having a deformable foot can be beneficial when trying to adhere to a substrate using the surface tension of a fluid droplet. In his PhD Matthew is studying insect adhesion, and in particular how insects can utilise physical laws to improve their ability to stick to surfaces.

Oxford Mathematician Doireann O'Kiely won the prize in 2017 and Laura Kimpton, also from Oxford, won it in 2013. Oxford Mathematician Jessica Williams was also a finalist this year.

Tuesday, 7 May 2019

The influence of porous-medium microstructure on filtration - or how to design the perfect vacuum cleaner

Oxford Mathematician Ian Griffiths talks about his work with colleagues Galina Printsypar and Maria Bruna on modelling the most efficient filters for uses as diverse as blood purification and domestic vacuum cleaners.

"Filtration forms a vital part of our everyday lives, from the vacuum cleaners and air purifiers that we use to keep our homes clean to filters that are used in the pharmaceutical industry to remove viruses from liquids such as blood. If you’ve ever replaced the filter in your vacuum cleaner you will have seen that it is composed of a nonwoven medium – a fluffy material comprising many fibres laid down in a mat. These fibres trap dust particles as contaminated air passes through, protecting the motor from becoming damaged and clogged by the dust. A key question that we ask when designing these filters is, how does the arrangement of the fibres in the filter affect its ability to remove dust? 

One way in which we could answer this question is to manufacture many different types of filters, with different fibre sizes and arrangements, and then test the performance of each filter. However, this is time consuming and expensive. Moreover, while we would be able to determine which filter is best out of those that we manufactured, we would not know if we could have made an even better one. By developing a mathematical model, we can quickly and easily assess the performance of any type of filter, and determine the optimal design for a specific filtration task, such as a vacuum cleaner or a virus filter.

As contaminants stick to the surfaces of the filter fibres, the fibres effectively grow in size, and we must capture this in our mathematical model to predict the filter's performance. For a filter composed of periodically spaced and uniformly sized fibres, eventually all the fibres will become so loaded with contaminants that they touch each other. At this point the filter blocks and we can terminate our simulation. However, for a filter with randomly arranged fibres, we may find ourselves in a scenario in which two fibres start off quite close to one another and so touch after trapping only a small amount of contaminant on their surfaces. At this point the majority of the rest of the filter is still able to trap contaminants, so we do not want to stop our filtration simulation yet. To compare the performance of filters with periodic and random fibre arrangements we therefore introduce an agglomeration algorithm. This provides a way for us to model two touching fibres as a single entity that also traps contaminants, and allows us to continue our simulation.
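
One simple way to implement such an agglomeration step is with a union-find (disjoint-set) structure: whenever two growing fibres touch, merge them into a single cluster that continues to trap contaminant. The sketch below treats fibres as circles whose radii grow as material is deposited; the geometry, growth rule and parameters are assumptions for illustration, not the authors' algorithm.

import numpy as np

def find(parent, i):                       # union-find with path compression
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(parent, i, j):
    ri, rj = find(parent, i), find(parent, j)
    if ri != rj:
        parent[rj] = ri

rng = np.random.default_rng(0)
n = 50
centres = rng.uniform(0.0, 1.0, size=(n, 2))   # random fibre centres in a unit cell
radii = np.full(n, 0.01)                        # initial fibre radii (assumed)
parent = list(range(n))

for step in range(1, 201):
    radii += 0.0005                             # toy deposition rule: uniform growth
    for i in range(n):                          # merge any circles that now overlap
        for j in range(i + 1, n):
            if np.hypot(*(centres[i] - centres[j])) <= radii[i] + radii[j]:
                union(parent, i, j)
    if step % 50 == 0:
        clusters = len({find(parent, i) for i in range(n)})
        print(f"step {step}: {clusters} distinct fibre clusters still trapping dirt")

Merged clusters are then treated as single obstacles in the flow computation, so the simulation can continue until the filter as a whole blocks rather than stopping at the first contact.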

Filter devices operate in one of two regimes (a simple sketch contrasting them follows the two cases below):

Case 1: the flow rate is held constant. This corresponds to vacuum cleaners or air purifiers, which process a specified amount of air per hour, but require more energy to do this as the filter becomes blocked.

Case 2: the pressure difference is held constant. This corresponds to biological and pharmaceutical processes such as virus filtration from blood. A signature of this operating regime is the drop observed in the rate of filtering fluid as the filter clogs up.
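
To make the contrast between the two cases concrete, here is a minimal sketch based on Darcy's law, in which the flow rate Q through a filter of area A, thickness L and permeability k, under a pressure difference Δp and for a fluid of viscosity μ, is Q = kAΔp/(μL). The exponential clogging law and all numerical values are assumptions for illustration only.

import numpy as np

# Darcy's law: Q = k * A * dp / (mu_visc * L). As the filter clogs, k falls.
# Case 1 keeps Q fixed and asks what pressure drop dp is needed;
# Case 2 keeps dp fixed and asks how the flow rate Q decays.
A, L, mu_visc = 1.0e-2, 1.0e-3, 1.8e-5     # area [m^2], thickness [m], air viscosity [Pa s]
k0 = 1.0e-11                                # clean-filter permeability [m^2] (assumed)
Q_fixed, dp_fixed = 1.0e-3, 100.0           # Case 1 flow rate [m^3/s]; Case 2 pressure drop [Pa]

for clogged in [0.0, 0.25, 0.5, 0.75, 0.9]:
    k = k0 * np.exp(-3.0 * clogged)         # toy clogging law (assumption)
    dp_case1 = Q_fixed * mu_visc * L / (k * A)   # pressure needed to hold Q fixed
    Q_case2 = k * A * dp_fixed / (mu_visc * L)   # flow achieved at fixed pressure
    print(f"clogged {clogged:4.0%}:  Case 1 needs dp = {dp_case1:7.1f} Pa,"
          f"  Case 2 delivers Q = {Q_case2:.2e} m^3/s")

The same loss of permeability therefore shows up as a rising energy cost in Case 1 and as a falling processing rate in Case 2, which is the signature mentioned above.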

We divide our filters into three types, comprising: (i) uniformly sized periodically arranged fibres; (ii) uniformly sized randomly arranged fibres; or (iii) randomly sized and randomly arranged fibres. We compare the performance of each of these three filter types through four different metrics:

The permeability, or the ease with which the contaminated fluid can pass through the filter.
The efficiency, or how much contaminant is trapped by the obstacles.
The dirt-holding capacity, or how much contaminant can be held in total before the filter blocks.
The lifetime, or how long the filter lasts.

Our studies show that:

The permeability is higher for a filter composed of randomly arranged fibres than for a periodic filter. This is because the contaminated fluid is able to find large spaces through which it can pass more easily (see figure below).

In Case 1, a filter composed of a random arrangement of uniformly sized fibres maintains the lowest pressure drop (i.e., requires the least energy), with little compromise in removal efficiency and dirt-holding capacity compared with a regular array of fibres.

In Case 2, a filter composed of randomly arranged fibres of different sizes gives the best removal efficiency and dirt-holding capacity. This comes at a cost of a reduced processing rate of purified fluid when compared with a regular array of fibres, but for examples such as virus filtration the most important objective is removal efficiency, and the processing rate is a secondary issue.

Thus, we find that the randomness in the fibres that we see in the filters used in everyday processes can actually offer an advantage to the filtration performance. Since different filtration challenges place different emphasis on the four performance metrics, our mathematical model can quickly and easily predict the optimum filter design for a given requirement."

Left: air flow through a filter composed of a periodic hexagonal array of fibres.

Right: air flow through a filter composed of a random array of fibres. In the random case a 'channel' forms through which the fluid can flow more easily, which increases the permeability.

For more on this work please click here.

Monday, 29 April 2019

Learning from Stochastic Processes

Oxford Mathematician Harald Oberhauser talks about some of his recent research that combines insights from stochastic analysis with machine learning:

"Consider the following scenario: as part of a medical trial a group of $2n$ patients wears a device that records the activity of their hearts - e.g. an electrocardiogram that continuously records the electrical activity of their hearts - for a given time period $[0,T]$. For simplicity, assume measurements are taken in continuous time, hence we get for each patient a continuous path $x=(x(t))_{t \in [0,T]}$, that is an element $x \in C([0,T],\mathbb{R})$, that shows their heart activity for any moment in the time period $[0,T]$. Additionally, the patients are broken up into two subgroups - $n$ patients that take medication and the remaining $n$ patients that form a placebo group that does not take medication. Can one determine a statistically significant difference in the heart activity patterns between these groups based on our measurements?

The above example can be seen as learning about the law of a stochastic process. A stochastic process is simply a random variable $X=(X_t)_{t \in [0,T]}$ that takes values in the space of paths $C([0,T],\mathbb{R})$ and the law $\mu$ of $X$ is a probability measure that assigns a probability to the possible observed paths. The above example amounts to drawing $n$ samples from a path-valued random variable $X$ with unknown law $\mu$ (the medicated group) and $n$ samples from a path-valued random variable $Y$ with unknown law $\nu$ (the placebo group). Our question about a difference between these groups amounts to asking whether $\mu\neq\nu$. Finally, note that this example was very simple, and in typical applications we measure much more than just one quantity over time. Hence, we discuss below the general case of $C([0,T],\mathbb{R}^e)$-valued random variables for any $e \ge 1$ ($e$ can be very large, e.g. $e \approx 10^3$ appears in standard benchmark data sets).

But before we continue our discussion about path-valued random variables, let's recall the simpler problem of learning the law of a real-valued random variable. A fundamental result in probability is that if $X$ is a bounded, real-valued random variable, then the sequence of moments \begin{align} (1)\quad (\mathbb{E}[X], \mathbb{E}[X^2], \mathbb{E}[X^3], \ldots) \end{align} determines the law of $X$, that is, the probability measure $\mu$ on $\mathbb{R}$ given by $\mu(A)= \mathbb{P}(X \in A)$. This allows us to learn about the law of $X$ from independent samples $X_1,\ldots,X_n$ of $X$, since \[\frac{1}{n}\sum_{i=1}^n X_i^m \rightarrow \mathbb{E}[X^m] \text{ as }n\rightarrow \infty\] for every $m\ge 1$ by the law of large numbers. This extends to vector-valued random variables, but the situation for path-valued random variables is more subtle. The good news is that stochastic analysis, and a field called "rough path theory" in particular, provides a replacement for monomials in the form of iterated integrals: the tensor \[ \int dX^{\otimes{m}}:= \int _{0 \le t_1\le \cdots \le t_m \le T} dX_{t_1} \otimes \cdots \otimes dX_{t_m} \in (\mathbb{R}^e)^{\otimes m} \] is the natural "monomial" of order $m$ of the stochastic process $X$. In analogy to (1), we expect that the so-called "expected signature" of $X$, \begin{align}(2)\quad (\mathbb{E}[\int dX], \mathbb{E}[\int dX^{\otimes{2}}], \mathbb{E}[\int dX^{\otimes 3}], \ldots), \end{align} can characterize the law of $X$.

In recent work with Ilya Chevyrev we revisit this question and develop tools that turn such theoretical insights into methods that can answer more applied questions as they arise in statistics and machine learning. As it turns out, the law of the stochastic process $X$ is (essentially) always characterized by (2) after a normalization of tensors that ensures the boundedness of this sequence. Mathematically, this requires us to understand phenomena related to the non-compactness of $C([0,T],\mathbb{R}^e)$. Inspired by results in machine learning, we then introduce a metric for laws of stochastic processes, a so-called "maximum mean discrepancy" \begin{align} d(\mu,\nu)= \sup_{f} |\mathbb{E}[f(X)] - \mathbb{E}[f(Y)]|, \end{align} where $\mu$ denotes the law of $X$, $\nu$ the law of $Y$, and the $\sup$ is taken over a sufficiently large class of real-valued functions $f:C([0,T], \mathbb{R}^e) \rightarrow \mathbb{R}$. To compute $d(\mu,\nu)$ we make use of a classic "kernel trick", which shows that \begin{align} (3) \quad d(\mu,\nu)^2 = \mathbb{E}[k(X,X')] - 2\mathbb{E}[k(X,Y)] + \mathbb{E}[k(Y,Y')], \end{align} where $X'$ and $Y'$ are independent copies of $X\sim\mu$ and $Y\sim \nu$ respectively, and the kernel $k$ is the inner product of the sequences of iterated integrals formed from $X$ and $Y$. Previous work provides algorithms that can evaluate $k$ very efficiently. Combined with (3) and the characterization of the law by (2), this allows us to estimate the metric $d(\mu,\nu)$ from finitely many samples.
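
Given any kernel $k$ on paths, the right-hand side of (3) can be estimated from two finite samples by a standard unbiased estimator. The sketch below spells this out; the Gaussian kernel on sampled path values is only a stand-in for the normalized signature kernel used in the actual work, whose efficient evaluation algorithms are not reproduced here.

import numpy as np

def gaussian_path_kernel(x, y, bandwidth=1.0):
    """Placeholder kernel on paths sampled at common time points.
    The real method uses a normalized signature kernel; this is a stand-in."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * bandwidth ** 2))

def mmd_squared(xs, ys, kernel=gaussian_path_kernel):
    """Unbiased sample estimate of (3): E[k(X,X')] - 2 E[k(X,Y)] + E[k(Y,Y')].
    xs, ys are lists (length >= 2) of equal-length NumPy arrays, one per path."""
    n, m = len(xs), len(ys)
    kxx = sum(kernel(xs[i], xs[j]) for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
    kyy = sum(kernel(ys[i], ys[j]) for i in range(m) for j in range(m) if i != j) / (m * (m - 1))
    kxy = sum(kernel(x, y) for x in xs for y in ys) / (n * m)
    return kxx - 2.0 * kxy + kyy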

An immediate application is a two-sample test: we can test the null hypothesis \[ H_0: \mu=\nu \text{ against the alternative }H_1: \mu\ne \nu \] by estimating $d(\mu,\nu)$ and rejecting $H_0$ if this estimate is bigger than a given threshold. Recall our example of two patient groups (medicated and placebo). The question of whether there is a statistical difference between these groups can be formalized as such a two-sample hypothesis test. To gain more insight it can be useful to work with synthetic data (after all, the answer to a two-sample test is a simple yes or no, depending on whether we reject the null hypothesis). Therefore consider the following toy example: $\mu$ is the law of a simple random walk and $\nu$ is the law of a path-dependent random walk. Figure 1 shows what samples from these two stochastic processes look like; more interestingly, Figure 2 shows histograms of an estimator for $d(\mu,\nu)$ under the null and under the alternative. We see that the supports of the two histograms are nearly disjoint, which indicates that the test will perform well; this can be made rigorous and quantitative.
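
The exact path-dependent walk behind Figures 1 and 2 is not specified here, so the sketch below simulates one plausible stand-in: a walk whose steps are biased towards the direction of its current displacement. Feeding the two sample sets into an estimator such as mmd_squared above, and comparing the observed value with a permutation-based threshold, reproduces the two-sample test described in the text.

import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 30, 100

def simple_random_walk():
    return np.cumsum(rng.choice([-1.0, 1.0], size=n_steps))

def path_dependent_walk(bias=0.6):
    """Toy path-dependent walk (an assumed choice, not necessarily the one in
    the figures): steps tend to point in the direction of the current displacement."""
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        p_up = 0.5 + 0.5 * bias * np.tanh(x[t - 1] / 5.0)   # momentum-style bias
        x[t] = x[t - 1] + (1.0 if rng.random() < p_up else -1.0)
    return x

group_a = [simple_random_walk() for _ in range(n_paths)]      # samples from mu
group_b = [path_dependent_walk() for _ in range(n_paths)]     # samples from nu
# A permutation test: pool the paths, shuffle the group labels many times,
# recompute mmd_squared for each shuffle, and reject H0 if the value computed
# on the original labelling exceeds (say) the 95th percentile of the shuffles.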

These examples of $e=1$-dimensional paths are overly simplistic - we can already see a difference by looking at the sampled paths. However, this situation changes drastically in the higher-dimensional setting of real-world data: some coordinates might evolve quickly, some slowly, others might not be relevant at all, and often the statistically significant differences lie in the complex interactions between some of the coordinates; all subject to noise and variations. Finally, the index $t \in [0,T]$ might not necessarily be time; for example, in a recent paper with Ilya Chevyrev and Vidit Nanda (featured in a previous case study), the index $t$ is the radius of spheres grown around points in space and the path value for every $t>0$ is determined by a so-called barcode from topological data analysis that captures the topological properties of these spheres of radius $t$."


Figure 1: The left plot shows 30 independent samples from a simple random walk; the right plot shows 30 independent samples from a path-dependent random walk.

Figure 2: The left plot shows the histogram for an estimator of $d(\mu,\nu)$ if the null hypothesis is true, $\mu=\nu$ (both are simple random walks); the right plot shows the same if the null is false ($\mu$ is a simple random walk and $\nu$ is a path-dependent random walk, as shown in Figure 1).

Friday, 26 April 2019

Artur Ekert awarded a Micius Quantum Prize 2019

Oxford Mathematician Artur Ekert has been awarded a Micius Quantum Prize 2019 (Theory category) for his invention of entanglement-based quantum key distribution, entanglement swapping, and entanglement purification. The prizes recognise the scientists who have made outstanding contributions in the field of quantum mechanics and the 2019 prizes focus on the field of quantum communication. 

Artur Ekert is one of the leaders in the Quantum Cryptography field. His research extends over most aspects of information processing in quantum-mechanical systems and brings together theoretical and experimental quantum physics, computer science and information theory. Its scope ranges from deep fundamental issues in physics to prospective commercial exploitation by the computing and communications industries.

Oxford physicist and close colleague of Artur's, David Deutsch, was also awarded a prize in the Quantum Computation Theory category.

The Micius prizes are awarded by the Micius Quantum Foundation. The Foundation is named after Micius, a Chinese philosopher who lived in the fifth century BC.

Tuesday, 23 April 2019

Why do liquids form patterns on solid surfaces?

The formation of liquid drop patterns on solid surfaces is a fundamental process for both industry and nature. Now, a team of scientists, including Oxford Mathematician Andreas Münch and colleagues from the Weierstrass Institute in Berlin and the University of Saarbrücken, can explain exactly how it happens.

Controlling and manipulating liquid drop patterns on solid surfaces has a diverse range of applications, including in the coating industry as well as in developing tools for use in cell biology. Such patterns are also essential for the famous ‘lotus leaf’ effect, where the leaf self-cleans. Now, as they explain in an article in Proceedings of the National Academy of Sciences of the United States of America (PNAS), the team has identified that the formation of droplets during the dewetting (retraction) of a thin liquid film from a hydrophobically coated (water-repellent) substrate is essentially controlled by the friction at the interface between the liquid and the solid surface.

The mathematical models and the highly adaptive numerical schemes developed by the team predict very distinct droplet patterns, ranging from the large-scale polygonal structure of ensembles of micro-scale droplets to the small-scale cascades of even smaller satellite droplets (see picture). These results were achieved simply by varying the condition at the liquid-solid boundary, a so-called Navier-slip-type condition.
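
For reference, a standard form of a Navier-slip condition at a flat liquid-solid interface $z=0$ relates the tangential velocity $u$ of the liquid to its shear rate through a slip length $b$; the precise condition used in the paper may differ in detail:

\[ u\big|_{z=0} \;=\; b\,\frac{\partial u}{\partial z}\Big|_{z=0}, \qquad b = \frac{\mu}{\kappa}, \]

where $\mu$ is the liquid viscosity and $\kappa$ the coefficient of interfacial friction. Large friction gives $b \to 0$ and recovers the classical no-slip condition, while a slippery coating (small friction) gives a large slip length and lets the film slide over the substrate, which is exactly the knob the experiments turned.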

In the experiments, the surfaces were treated with different surface coatings that made them particularly slippery, thereby reducing the interfacial friction. As a result, the theoretical predictions, from the large-scale to the small-scale patterns, were confirmed.

It is now possible for a trained observer to "see" the friction exerted by the surface coating and relate the multi-scale pattern to the corresponding interface condition and the presence of slip. This has great potential, not only for spotting similar conditions in nature, but as a facile, non-invasive tool for designing patterns in many micro-fluidic applications without the need for pre-patterning or lithographic treatment of the surface. These would include bespoke synthesis of monodisperse (uniform) materials or biosensor applications.
