Snap-through buckling is a type of instability in which an elastic object rapidly jumps from one state to another. Such instabilities are familiar from everyday life: you have probably been soaked by an umbrella flipping upwards in high winds, while snap-through is harnessed to generate fast motions in applications ranging from soft robotics to artificial heart valves. In biology, snap-through has long been exploited to convert energy stored slowly into explosive movements: both the leaf of the Venus flytrap and the beak of the hummingbird snap through to catch prey unawares.

Despite the ubiquity of snap-through in nature and engineering, how fast snap-through occurs (i.e. its dynamics) is generally not well understood, with many instances reported of delay phenomena in which snap-through occurs extremely slowly. A striking example is a children’s ‘jumping popper’ toy, which resembles a rubber spherical cap that can be turned inside-out. The inside-out shape remains stable while the cap is held at its edges, but leaving the popper on a surface causes it to snap back to its natural shape and leap upwards. As shown in the figure, the snap back is not immediate: a time delay is observed during which the popper moves very slowly before rapidly accelerating. The delay can be several tens of seconds in duration — much slower than the millisecond or so that would be expected for an elastic instability. Playing around further reveals other unusual features: holding the popper toy for longer before placing it down generally causes a slower snap-back, and the amount of delay is highly unpredictable, varying greatly with each attempt.

In a series of videos launching The Mathematical Observer, a new YouTube channel showcasing the research performed in the Oxford Mathematics Observatory, Oxford Mathematician Michael Gomez (in collaboration with Derek Moulton and Dominic Vella) investigates the science behind the jumping popper toy. Episode one discusses why the popper toy snaps, and the important role played by the geometry of a spherical cap. Episode two focuses on how fast the popper toy snaps, and how its unpredictable nature can arise purely from the mathematical structure of the snap-through transition.

Back in October, for the first time, we filmed an actual student lecture, Vicky Neale's lecture on 'Complex Numbers.' We wanted to show what studying at Oxford is really like, how it is not so different to school while at the same time taking things to a more rigorous level. Since we made the film available, over 375,000 people have watched some of it.

Emboldened, we went one stage further in February and live streamed a lecture (and made it available subsequently), James Sparks on 'Dynamics.' But in addition to the lecture, we also filmed the subsequent tutorial which all students receive, usually in pairs, after lectures, and which is the essential ingredient of the Oxford learning experience. Both have been huge successes.

So we come to the third in our series of filmed student lectures. This is the opening lecture in the 1st Year course on 'Analysis III - Integration.' Prof. Ben Green both links the course to the mathematics our students have already learnt at school and develops that knowledge, taking the students to the next stage. Like all good lectures it recaps and points forward (the course materials accompanying the Integration lectures can be found here).

The lectures and tutorial are all part of our going 'Behind the Scenes' at Oxford Mathematics. We shall be filming our Open Days in July and more lectures in the Autumn. Please send any comments to external-relations@maths.ox.ac.uk.

Oxford Mathematician Matthew Butler has been awarded the biennial Lighthill-Thwaites Prize for 2019. The prize is awarded by the Institute of Mathematics and its Applications to researchers who have spent no more than five years in full-time study or work since completing their undergraduate degrees.

Matthew's research focuses on fluid dynamics, particularly flows at low Reynolds number involving surface tension and interactions with elastic boundaries. His talk at the British Applied Mathematics Colloquium 2019, where the prize was awarded, was entitled 'Sticking with droplets: Insect-inspired modelling of capillary adhesion' and focused on how having a deformable foot can be beneficial when trying to adhere to a substrate using the surface tension of a fluid droplet. In his PhD Matthew is studying insect adhesion, and in particular how insects can utilise physical laws to improve their ability to stick to surfaces.

Oxford Mathematician Doireann O'Kiely won the prize in 2017 and Laura Kimpton, also from Oxford, won it in 2013. Oxford Mathematician Jessica Williams was also a finalist this year.

Oxford Mathematician Ian Griffiths talks about his work with colleagues Galina Printsypar and Maria Bruna on modelling the most efficient filters for uses as diverse as blood purification and domestic vacuum cleaners.

"Filtration forms a vital part of our everyday lives, from the vacuum cleaners and air purifiers that we use to keep our homes clean to filters that are used in the pharmaceutical industry to remove viruses from liquids such as blood. If you’ve ever replaced the filter in your vacuum cleaner you will have seen that it is composed of a nonwoven medium – a fluffy material comprising many fibres laid down in a mat. These fibres trap dust particles as contaminated air passes through, protecting the motor from becoming damaged and clogged by the dust. A key question that we ask when designing these filters is, how does the arrangement of the fibres in the filter affect its ability to remove dust?

One way in which we could answer this question is to manufacture many different types of filters, with different fibre sizes and arrangements, and then test the performance of each filter. However, this is time consuming and expensive. Moreover, while we would be able to determine which filter is best out of those that we manufactured, we would not know if we could have made an even better one. By developing a mathematical model, we can quickly and easily assess the performance of any type of filter, and determine the optimal design for a specific filtration task, such as a vacuum cleaner or a virus filter.

As contaminants stick onto the surface of the filter fibres they will grow in size and we must capture this in our mathematical model to predict the filter performance. For a filter composed of periodically spaced and uniformly sized fibres, eventually all the fibres will become so loaded with contaminants that they will touch each other. At this point the filter blocks and we can terminate our simulation. However, for a filter with randomly arranged fibres, we may find ourselves in a scenario in which two fibres start off quite close to one another and so touch after trapping only a small amount of contaminant on their surface. At this point the majority of the rest of the filter is still able to trap contaminants and so we do not want to stop our filtration simulation yet. To compare the performance of filters with periodic and random fibre arrangements we therefore introduce an agglomeration algorithm. This provides a way for us to model two touching fibres as a single entity that also traps contaminants, and allows us to continue our simulation.
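A minimal sketch of how such an agglomeration step might look in code (the circle-based fibre representation, function names and numbers are all illustrative assumptions, not the authors' implementation): each fibre cross-section is modelled as a circle that grows as contaminant deposits on it, and any circles that touch are merged into a single cluster via a union-find pass.

```python
import math

def agglomerate(fibres, growth):
    # fibres: list of (x, y, r) circle cross-sections; grow every radius by
    # `growth` (deposited contaminant), then merge any touching circles.
    grown = [(x, y, r + growth) for x, y, r in fibres]
    parent = list(range(len(grown)))

    def find(i):
        # union-find root lookup with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(grown)):
        for j in range(i + 1, len(grown)):
            (xi, yi, ri), (xj, yj, rj) = grown[i], grown[j]
            if math.hypot(xi - xj, yi - yj) <= ri + rj:
                parent[find(i)] = find(j)  # touching: treat as one entity

    clusters = {}
    for i in range(len(grown)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

With three fibres of radius 1 at x = 0, 2.5 and 10, no growth leaves three separate entities, while growth of 0.3 merges the first two (their radii sum to 2.6, exceeding the 2.5 gap) and the simulation can continue with two entities rather than stopping.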

Filter devices operate in one of two regimes:

Case 1: the flow rate is held constant. This corresponds to vacuum cleaners or air purifiers, which process a specified amount of air per hour, but require more energy to do this as the filter becomes blocked.

Case 2: the pressure difference is held constant. This corresponds to biological and pharmaceutical processes such as virus filtration from blood. A signature of this operating regime is the drop observed in the rate of filtering fluid as the filter clogs up.

We divide our filters into three types, comprising: (i) uniformly sized periodically arranged fibres; (ii) uniformly sized randomly arranged fibres; or (iii) randomly sized and randomly arranged fibres. We compare the performance of each of these three filter types through four different metrics:

The permeability, or ease in which the contaminated fluid can pass through them.
The efficiency, or how much contaminant is trapped by the obstacles.
The dirt-holding capacity, or how much contaminant can be held in total before the filter blocks.
The lifetime, or how long the filter lasts.

Our studies show that:

The permeability is higher for a filter composed of randomly arranged fibres than for a periodic filter. This is due to the ability of the contaminated fluid to find large spaces through which it can pass more easily (see figure below).

In Case 1, a filter composed of a random arrangement of uniformly sized fibres maintains the lowest pressure drop (i.e., requires the least energy) with little compromise in removal efficiency and dirt-holding capacity compared with a regular array of fibres.

In Case 2, a filter composed of randomly arranged fibres of different sizes gives the best removal efficiency and dirt-holding capacity. This comes at a cost of a reduced processing rate of purified fluid when compared with a regular array of fibres, but for examples such as virus filtration the most important objective is removal efficiency, and the processing rate is a secondary issue.

Thus, we find that the randomness in the fibres that we see in the filters in everyday processes can actually offer an advantage to the filtration performance. Since different filtration challenges will place different emphasis on the four performance metrics, our mathematical model can quickly and easily predict the optimum filter design for a given requirement."

Left: air flow through a filter composed of a periodic hexagonal array of fibres.

Right: air flow through a filter composed of a random array of fibres. In the random case a 'channel' forms through which the fluid can flow more easily, which increases the permeability.

Oxford Mathematician Harald Oberhauser talks about some of his recent research that combines insights from stochastic analysis with machine learning:

"Consider the following scenario: as part of a medical trial a group of $2n$ patients wears a device that records the activity of their hearts - e.g. an electrocardiogram that continuously records the electrical activity of their hearts - for a given time period $[0,T]$. For simplicity, assume measurements are taken in continuous time, hence we get for each patient a continuous path $x=(x(t))_{t \in [0,T]}$, that is an element $x \in C([0,T],\mathbb{R})$, that shows their heart activity for any moment in the time period $[0,T]$. Additionally, the patients are broken up into two subgroups - $n$ patients that take medication and the remaining $n$ patients that form a placebo group that does not take medication. Can one determine a statistically significant difference in the heart activity patterns between these groups based on our measurements?

The above example can be seen as learning about the law of a stochastic process. A stochastic process is simply a random variable $X=(X_t)_{t \in [0,T]}$ that takes values in the space of paths $C([0,T],\mathbb{R})$ and the law $\mu$ of $X$ is a probability measure that assigns a probability to the possible observed paths. The above example amounts to drawing $n$ samples from a path-valued random variable $X$ with unknown law $\mu$ (the medicated group) and $n$ samples from a path-valued random variable $Y$ with unknown law $\nu$ (the placebo group). Our question about a difference between these groups amounts to asking whether $\mu\neq\nu$. Finally, note that this example was very simple, and in typical applications we measure much more than just one quantity over time. Hence, we discuss below the general case of $C([0,T],\mathbb{R}^e)$-valued random variables for any $e \ge 1$ ($e$ can be very large, e.g. $e\approx 10^3$ appears in standard benchmark data sets).

But before we continue our discussion about path-valued random variables, let's recall the simpler problem of learning the law of a real-valued random variable. A fundamental result in probability is that if $X$ is a bounded, real-valued random variable, then the sequence of moments \begin{align} (1)\quad (\mathbb{E}[X], \mathbb{E}[X^2], \mathbb{E}[X^3], \ldots) \end{align} determines the law of $X$, that is the probability measure $\mu$ on $\mathbb{R}$ given as $\mu(A)= \mathbb{P}(X \in A)$. This allows us to learn about the law of $X$ from independent samples $X_1,\ldots,X_n$ from $X$, since \[\frac{1}{n}\sum_{i=1}^n X_i^m \rightarrow \mathbb{E}[X^m] \text{ as }n\rightarrow \infty\] for every $m\ge 1$ by the law of large numbers. This extends to vector-valued random variables, but the situation for path-valued random variables is more subtle. The good news is that stochastic analysis and a field called "rough path theory" provide a replacement for monomials in the form of iterated integrals: the tensor \[ \int dX^{\otimes{m}}:= \int _{0 \le t_1\le \cdots \le t_m \le T} dX_{t_1} \otimes \cdots \otimes dX_{t_m} \in (\mathbb{R}^e)^{\otimes m} \] is the natural "monomial" of order $m$ of the stochastic process $X$. In analogy to (1), we expect that the so-called "expected signature" of $X$, \begin{align}(2)\quad (\mathbb{E}[\int dX], \mathbb{E}[\int dX^{\otimes{2}}], \mathbb{E}[\int dX^{\otimes 3}], \ldots) \end{align} can characterize the law of $X$. In recent work with Ilya Chevyrev we revisit this question and develop tools that turn such theoretical insights into methods that can answer more applied questions as they arise in statistics and machine learning. As it turns out, the law of the stochastic process $X$ is (essentially) always characterized by (2) after a normalization of tensors that ensures boundedness of this sequence.
Mathematically, this requires us to understand phenomena related to the non-compactness of $C([0,T],\mathbb{R}^e)$. Inspired by results in machine learning, we then introduce a metric for laws of stochastic processes, a so-called "maximum mean discrepancy" \begin{align} d(\mu,\nu)= \sup_{f} |\mathbb{E}[f(X)] - \mathbb{E}[f(Y)]| \end{align} where $\mu$ denotes the law of $X$, $\nu$ the law of $Y$, and the $\sup$ is taken over a sufficiently large class of real-valued functions $f:C([0,T], \mathbb{R}^e) \rightarrow \mathbb{R}$. To compute $d(\mu,\nu)$ we make use of a classic "kernel trick" that shows that \begin{align} (3) \quad d(\mu,\nu)^2 = \mathbb{E}[k(X,X')] - 2\mathbb{E}[k(X,Y)] + \mathbb{E}[k(Y,Y')] \end{align} where $X'$ resp. $Y'$ are independent copies of $X\sim\mu$ resp. $Y\sim \nu$, and the kernel $k$ is the inner product of the sequences of iterated integrals formed from $X$ and $Y$. Previous work provides algorithms that can very efficiently evaluate $k$. Combined with (3) and the characterization of the law by (2), this allows us to estimate the metric $d(\mu,\nu)$ from finitely many samples.
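A minimal numerical sketch of the kernel expression in (3), under loudly flagged assumptions: paths are discretised to a few sample points, and the signature (iterated-integral) kernel of the actual work is replaced by a simple Gaussian kernel purely for illustration; the quantity computed is the squared maximum mean discrepancy, estimated from finite samples.

```python
import math

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian kernel on two equally sampled paths (tuples of floats).
    # Illustrative stand-in for the signature kernel of the article.
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

def mmd_squared(xs, ys, kernel=rbf_kernel):
    # Unbiased sample estimate of E[k(X,X')] - 2 E[k(X,Y)] + E[k(Y,Y')]
    n, m = len(xs), len(ys)
    kxx = sum(kernel(xs[i], xs[j])
              for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
    kyy = sum(kernel(ys[i], ys[j])
              for i in range(m) for j in range(m) if i != j) / (m * (m - 1))
    kxy = sum(kernel(x, y) for x in xs for y in ys) / (n * m)
    return kxx - 2 * kxy + kyy
```

Two identical samples give an estimate of zero, while samples concentrated far apart drive the estimate towards its maximum, which is the behaviour a two-sample test thresholds on.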

An immediate application is a two-sample test: we can test the null hypothesis \[ H_0: \mu=\nu \text{ against the alternative }H_1: \mu\ne \nu \] by estimating $d(\mu,\nu)$ and rejecting $H_0$ if this estimate is bigger than a given threshold. Recall our example of two patient groups (medicated and placebo). The question whether there's a statistical difference between these groups can be formalized as such a two-sample hypothesis test. To gain more insight it can be useful to work with synthetic data (after all, the answer to a two-sample test will be a simple yes or no depending on whether to reject the null hypothesis). Therefore consider the following toy example: $\mu$ is the law of a simple random walk and $\nu$ is the law of a path-dependent random walk; Figure 1 shows how samples from these two stochastic processes look; more interestingly Figure 2 shows the histogram of an estimator for $d(\mu,\nu)$. We see that the support is nearly completely disjoint which indicates that the test will perform well and this can be made rigorous and quantitative.

These examples of $e=1$-dimensional paths are overly simplistic - we can already see a difference by looking at the sampled paths. However, this situation changes drastically in the higher-dimensional setting of real-world data: some coordinates might evolve quickly, some slowly, others might not be relevant at all, and often the statistically significant differences lie in the complex interactions between some coordinates; all subject to noise and variations. Finally, the index $t \in [0,T]$ might not necessarily be time; for example, in a recent paper with Ilya Chevyrev and Vidit Nanda (featured in a previous case-study), the index $t$ is the radius of spheres grown around points in space, and the path value for every $t>0$ is determined by a so-called barcode from topological data analysis that captures the topological properties of these spheres of radius $t$."

Figure 1: The left plot shows 30 independent samples from a simple random walk; the right plot shows 30 independent samples from a path-dependent random walk.

Figure 2: The left plot shows the histogram for an estimator of $d(\mu,\nu)$ if the null hypothesis is true, $\mu=\nu$ (both are simple random walks); the right plot shows the same if the null is false ($\mu$ is a simple random walk and $\nu$ is a path-dependent random walk as shown in Figure 1).

Oxford Mathematician Artur Ekert has been awarded a Micius Quantum Prize 2019 (Theory category) for his invention of entanglement-based quantum key distribution, entanglement swapping, and entanglement purification. The prizes recognise the scientists who have made outstanding contributions in the field of quantum mechanics and the 2019 prizes focus on the field of quantum communication.

Artur Ekert is one of the leaders in the Quantum Cryptography field. His research extends over most aspects of information processing in quantum-mechanical systems and brings together theoretical and experimental quantum physics, computer science and information theory. Its scope ranges from deep fundamental issues in physics to prospective commercial exploitation by the computing and communications industries.

Oxford Physicist and close colleague of Artur's, David Deutsch was also awarded a prize in the Quantum Computation Theory Category.

The Micius prizes are awarded by the Micius Quantum Foundation. The Foundation is named after Micius, a Chinese philosopher who lived in the fifth century BC.

The formation of liquid drop patterns on solid surfaces is a fundamental process for both industry and nature. Now, a team of scientists including Oxford Mathematician Andreas Münch and colleagues from the Weierstrass Institute in Berlin, and the University of Saarbrücken can explain exactly how it happens.

Controlling and manipulating liquid drop patterns on solid surfaces has a diverse range of applications including in the coating industry as well as in developing tools for use in cell biology. These are also essential for the famous ‘lotus leaf’ effect where the leaf self-cleans. Now, as they explain in an article in Proceedings of the National Academy of Sciences of the United States of America (PNAS), the team has identified that the formation of droplets during a dewetting (retraction) of a thin liquid film from a hydrophobically (water repellent) coated substrate is essentially controlled by the friction at the interface of the liquid and the solid surface.

The mathematical models and the highly adaptive numerical schemes developed by the team predict very distinct droplet patterns that range from the large-scale polygonal structure of ensembles of micro-scale droplets to the small-scale cascades of even smaller satellite droplets (see picture). These results were achieved simply by varying the condition at the liquid-solid boundary, a so-called Navier-slip-type condition.

In the experiments, the surfaces were treated with different surface coatings that made them particularly slippery, thereby reducing the interfacial friction. As a result, the theoretical predictions, from the large-scale to the small-scale patterns, were confirmed.

It is now possible for a trained observer to "see" the friction exerted by the surface coating and relate the multi-scale pattern to the corresponding interface condition and the presence of slip. This has great potential, not only for spotting similar conditions in nature, but as a facile, non-invasive tool for designing patterns in many micro-fluidic applications without the need for pre-patterning or lithographic treatment of the surface. These would include bespoke synthesis of monodisperse (uniform) materials or biosensor applications.

Oxford Mathematician Joni Teräväinen talks about his work concerning prime factorisations of consecutive integers and its applications in number theory.

"Analytic number theorists study the properties of the positive integers $1,2,3,\ldots$, and in particular the multiplicative structure of this set. From an additive point of view, the positive integers are rather simple: every integer is generated by $1$ (meaning that any number can be written as a sum of some number of ones). But what are the multiplicative generators of the integers? These are the prime numbers $2,3,5,7,11,\ldots$ (meaning that every integer is the product of primes in a unique way), and their distribution is a subtle and interesting question that has been at the centre of analytic number theory research for the past 150 years. The celebrated prime number theorem from 1896 states that the number of primes up to a large number $x$, denoted by $\pi(x)$, is approximately $x/\log x$ (more precisely, the ratio of $\pi(x)$ and $x/\log x$ approaches $1$ as $x$ tends to infinity). But how good exactly is this approximation? It turns out that the optimal answer to that question is equivalent to solving the Riemann hypothesis, one of the great unsolved problems in modern mathematics.
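The prime number theorem approximation is easy to probe numerically; a routine sieve of Eratosthenes (not part of the article, just an illustration) computes $\pi(x)$ directly:

```python
import math

def prime_count(x):
    # pi(x): count of primes <= x, via a simple sieve of Eratosthenes
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, math.isqrt(x) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

# pi(100) = 25, while 100/log(100) is about 21.7; the ratio of the two
# slowly approaches 1 as x grows, as the prime number theorem asserts.
```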

The distribution of prime numbers can be viewed as a purely multiplicative question. However, many of the most notorious conjectures in number theory arise when one combines additive structure with multiplicative structure. As an example, we have the unsolved twin prime conjecture: Are there infinitely many primes $p$ such that $p+2$ is also a prime? The list of such $p$ begins $3,5,11,17,29,41,\ldots$, and the largest known twin prime to date has $388 342$ decimal digits. The twin prime problem combines addition (by looking at the shift $p+2$) with an inherently multiplicative structure, the primes. The resolution of the twin prime conjecture seems to be out of reach (despite spectacular progress by Zhang, Maynard, Tao and others on gaps between primes, demonstrating that $p$ and $p+M$ are infinitely often simultaneously prime for some fixed $M\geq 2$), so one is led to look for related problems that are somewhat more manageable.

The Liouville function $\lambda(n)$ is an important number-theoretic function given by the formula $\lambda(n)=(-1)^{\Omega(n)}$, where $\Omega(n)$ is the number of prime divisors of $n$ (counting multiplicities, so for example $12=2\cdot 2\cdot 3$ has $\Omega(12)=3$). The Liouville function is intimately related to the primes and to multiplicative properties of the integers, since one easily sees that $\lambda(p)=-1$ for all primes $p$ and $\lambda(mn)=\lambda(m)\lambda(n)$ for all $m,n\geq 1.$ For example, one can show that the prime number theorem stated above is equivalent to finding cancellation in the partial sums of $\lambda(n)$ (or more precisely, it is equivalent to $\lim_{x\to \infty}\frac{1}{x}\sum_{n\leq x}\lambda(n)=0$).
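The definition of $\lambda(n)$ translates directly into code; a short illustrative Python version (trial division, perfectly adequate for small $n$) is:

```python
def big_omega(n):
    # Omega(n): number of prime factors of n, counted with multiplicity
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:  # whatever remains is a prime factor
        count += 1
    return count

def liouville(n):
    # lambda(n) = (-1)^Omega(n)
    return (-1) ** big_omega(n)
```

For example $\Omega(12)=3$ gives $\lambda(12)=-1$, and the complete multiplicativity $\lambda(mn)=\lambda(m)\lambda(n)$ can be spot-checked on small values.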

If one looks at the values of the Liouville function up to a large threshold (the sequence starts $+1,-1,-1,+1,-1,+1,-1,-1,+1,+1,\ldots$), it appears to behave just like a random sequence of signs, with no apparent structure in it. This is called the Möbius randomness principle in number theory (because it's often formulated for the Möbius function, a function closely related to the Liouville function $\lambda(n)$). One instance of this principle is that the partial sums of $\lambda(n)$ should behave like the sums of random variables taking values $\pm 1$ with equal probability (so there would be square-root cancellation in the partial sums $\sum_{n\leq x}\lambda(n)$). It has been shown that a rigorous proof of this is equivalent to the Riemann hypothesis, and therefore appears to be out of reach of current methods. A different but equally valid way to specify how the Liouville sequence appears random is to assert that the sequence $\lambda(n)$ forgets its history. This means that the value of $\lambda(n+1)$ is asymptotically independent of the value of $\lambda(n)$ (much in the same way as the $n+1$st result of a series of coin tosses is independent of the $n$th result). The same should hold for longer patterns of shifts, so if someone picks a large integer $n$ and lists you the values of $\lambda(n),\lambda(n+1),\ldots, \lambda(n+9)$, that should be of no use if you're trying to guess the value of $\lambda(n+10)$. A precise version of this statement was conjectured by Chowla in the 1960s, and it says the following.

Conjecture (Chowla's conjecture): Let $k\geq 1$. Then for any signs $s_1,\ldots, s_k\in \{-1,+1\}$, the asymptotic proportion of those integers $n\geq 1$ for which $\lambda(n+1)=s_1$, $\lambda(n+2)=s_2,\ldots, \lambda(n+k)=s_k$ is equal to $2^{-k}$.
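The conjectured proportions can be probed empirically; the self-contained sketch below (purely illustrative, and the convergence towards $2^{-k}$ is slow) counts how often the sign pattern $\lambda(n+1)=s_1,\ldots,\lambda(n+k)=s_k$ occurs up to a cutoff $N$:

```python
def liouville(n):
    # lambda(n) = (-1)^(number of prime factors counted with multiplicity)
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return (-1) ** (count + (1 if n > 1 else 0))

def pattern_proportion(signs, N):
    # fraction of n <= N with lambda(n+i) == signs[i-1] for i = 1..k
    k = len(signs)
    lam = [0] + [liouville(n) for n in range(1, N + k + 1)]  # lam[i] = lambda(i)
    hits = sum(
        all(lam[n + i] == s for i, s in enumerate(signs, start=1))
        for n in range(N)
    )
    return hits / N
```

For the pattern $(-1,-1)$ with $k=2$ (the case Tao resolved, in logarithmic density), the empirical proportion up to $N=20000$ already sits close to the conjectured $2^{-2}=25\%$.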

Chowla's conjecture is closely related to the twin prime conjecture mentioned earlier, but at the same time Chowla's conjecture seems more tractable, due to the multiplicative property of $\lambda(n)$. Note that the twin prime conjecture would certainly imply as a special case that $\lambda(n)=\lambda(n+2)=-1$ infinitely often. Conversely, it is known that a sufficiently strong version of Chowla's conjecture could be used to deduce the twin prime conjecture (but no one has proved Chowla's conjecture, let alone this strengthening of it).

In 2015, Tao was the first one to make significant progress on Chowla's conjecture, building on the work of Matomäki and Radziwill on multiplicative functions. He showed that the case $k=2$ of Chowla's conjecture holds (with the technical modification that he considered the logarithmic probability, rather than asymptotic probability). Thus for example $\lambda(n)=\lambda(n+1)=-1$ holds for exactly $25\%$ of all the integers (again with logarithmic probability). In 2017, in joint work with Tao, we showed that Chowla's conjecture holds for $k=3$ as well (and we showed that a different formulation of it, which states that the correlation sums $\sum_{n\leq x}\lambda(n)\lambda(n+1)\cdots \lambda(n+k-1)$ exhibit cancellation, holds for any odd value of $k$). Again, we had to use the logarithmic probability instead of the asymptotic one. In a later work, we partially removed the restriction to logarithmic probability. Now we can for example say that $\lambda(n)=\lambda(n+1)=\lambda(n+2)=-1$ holds for exactly $12.5\%$ of all the integers.

Chowla's conjecture turns out to be far from being an isolated problem, and in the last few years methods that were originally developed to tackle partial cases of Chowla's conjecture, such as the ones above, have proved to be fruitful for several other problems as well, including some problems outside of number theory. For example, Tao used his work on the $k=2$ case of Chowla's conjecture (or rather a generalisation of it) to settle the Erdős discrepancy problem, a famous open problem in combinatorics. This notorious problem is the superficially simple statement that if you take any sequence $s(n)$ of signs $\pm 1$, then the partial sums $\sum_{n=1}^{j}s(dn)$ can be made arbitrarily large in absolute value by choosing suitable values of $j$ and $d$. In a different interesting development, Frantzikinakis and Host used the previously mentioned work on Chowla's conjecture as an ingredient to make progress on Sarnak's conjecture, an important conjecture in ergodic theory that is also related to the randomness of the Liouville sequence. Clearly, much remains to be studied regarding Chowla's conjecture and its applications and generalisations."

For more on Joni's work with Terry Tao on the structure of logarithmically averaged correlations of multiplicative functions please click here; and for more on their research into the structure of correlations of multiplicative functions at almost all scales click here.

Far from taking us down the road of unpredictability and chaos, randomness has the power to help us solve a fascinating range of problems. Join Julia Wolf on a mathematical journey from penalty shoot-outs to internet security and patterns in the primes.

Julia Wolf is University Lecturer in the Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge.

Oxford Mathematician Ebrahim Patel talks about his work using max-plus algebra to improve airport scheduling and to define optimal railway networks.

"Typically arising from scheduling problems, max-plus algebra (MPA) offers an elegant way to understand the behaviour of a complex system depicted as a network of nodes. For example, an arrow points from node a to node b if b is dependent on a. Thus, I used MPA to better schedule Dubai's airport activities as part of the Oxford-Emirates Data Science Lab. The task was to model the dependencies between activities such as cleaning and passenger boarding to ultimately reduce delays and their impact.

Fig 1 shows a network of some such activities that need to take place before the departure of a plane. For example, activity 28 (departure) is dependent on the completion of activities 18 and 27, taking 20 and 10 minutes respectively. Additionally, at least 30 minutes must elapse before this network of activities is repeated, which represents the preparation of a subsequent departure at the same gate. Scheduling is thus made periodic by connecting the last activity, 28, to activity 1, indicated by connecting node A.

Figure 1: Network of activities prior to a plane departure

Suppose that we know $x_{28}(k)$, the $k^\text{th}$ departure time at this gate. Then the earliest time (in minutes) that this set of activities can start again for the next departure is $$ x_1(k+1) = x_{28}(k) + 30 \ .$$ The start times of all other activities are obtained similarly. In particular, for activities like 28, that depend on more than one activity, we have $$ x_{28}(k) = \max\{ x_{18}(k) + 20, x_{27}(k)+10 \} \ .$$ Overall, we obtain a system of equations, which has a natural interpretation in MPA. The eigenvalues of the max-plus system correspond to the turnaround time of the network; here, we obtain turnaround time $= 141$ minutes, which can also be found by employing critical path analysis. However, MPA gives a deterministic, linear algebraic handle on this process, and it can also deal with a more complicated network of activities, where the period of the activity times is larger than one. How is it linear? Via the simple operator transformations: $\max \longmapsto \oplus $ and $+ \longmapsto \otimes $. The operators $\oplus$ and $\otimes$ play the same role in MPA as $+$ and $\times$ do in conventional mathematics (this is easy to check!).
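The equations above can be written out in a few lines of (conventional) Python; the activity times 100 and 115 below are made-up illustrative values, not data from the Dubai project:

```python
NEG_INF = float("-inf")  # the max-plus "zero", used for absent edges

def oplus(a, b):
    # max-plus addition: a ⊕ b = max(a, b)
    return max(a, b)

def otimes(a, b):
    # max-plus multiplication: a ⊗ b = a + b
    return a + b

def mp_matvec(A, x):
    # max-plus matrix-vector product: y_i = max_j (A[i][j] + x[j])
    return [max(otimes(a_ij, x_j) for a_ij, x_j in zip(row, x)) for row in A]

# x_28(k) = max(x_18(k) + 20, x_27(k) + 10), with illustrative start
# times x_18(k) = 100 and x_27(k) = 115 (minutes after some reference):
x28 = mp_matvec([[20, 10]], [100, 115])[0]
# next cycle's earliest start: x_1(k+1) = x_28(k) + 30
x1_next = otimes(x28, 30)
```

Here departure happens at minute 125 (the boarding chain through activity 27 dominates), and the gate can begin the next cycle at minute 155, exactly the max-plus "linear" recursion described above.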

The work with Emirates ended before being implemented. But it was another transport application that inspired this model. In the railway network, we can think of nodes as stations and edges as tracks. By imposing the rule that trains at each station must only depart once trains have arrived from other stations, a max-plus equation is formed since the earliest departure time of a train is the maximum of the times of arriving trains. Whilst this might seem constraining, it is in fact a standard starting point for efficient mathematical modelling. The beauty of MPA is the intimate link between the mathematics and network theory: one only needs a standard understanding of the mathematics, following which problems can often be solved by straightforward optimisation of network structures.

Indeed, through Masters-level projects that I have supervised in this area, most recently for the Oxford MMSC course, a big question has been addressed: 'what is the optimal railway network?' Starting with an abstraction of the railway network, then strategically adding nodes and edges, it turns out that a one-way ring layout is best! Whilst this allows stations to wait only for one train before departures, it obviously looks impractical; for instance, to travel from London Euston to Oxford, it seems one would have to go through Edinburgh! But actually the additional nodes in this layout represent platforms within stations, and the dashed edges represent walks between these platforms. This is what produces the impracticality illusion; we have in fact split London Euston into multiple substations, i.e., platforms. So MPA allows a lot of room for manoeuvre to produce optimal railway networks. This work has also thrown up a multitude of interesting theoretical questions, such as the concepts of eigenvalues and eigenvectors in generalised models; so this is a classic example of a two-way benefit: pure mathematics feeds applied mathematics, which feeds back to pure.

Figure 2: The optimal railway network is a one-way ring?

In my latest work, I am developing an extension of MPA called Maxmin-$\omega$, which has provided novel areas of application. Here, we obtain a solely max-plus system when $\omega=1$ and a solely min-plus system when $\omega=0$ (where nodal activities are started upon receipt of the first instead of the last input, as in MPA). When ${\omega\in(0,1)}$, the system is governed by a combination of max- and min-plus algebra. The parameter $\omega$ therefore behaves as a threshold, and this yields a deterministic way of thinking about threshold models of information proliferation. I particularly want to apply this to periodic information diffusion data, such as retweet activity on Twitter and neural networks, which are constantly bouncing signals amongst neurons. The work will be presented at Sunbelt, the biggest social networks conference, this summer, to be held in Montreal. In keeping with the nature of this modern data-centric age, the possibilities for MPA and its extensions are exciting."