News

Wednesday, 22 May 2019

Graham Farmelo - The Universe Speaks in Numbers. Latest Oxford Mathematics Public Lecture now available

An old-fashioned tale of romance and estrangement, Graham Farmelo's Oxford Mathematics Public Lecture charts the 350-year relationship between Mathematics and Physics and its prospects for the future. Might things be less dramatic in future? Might they just have to be 'going steady' for a while?

Our Oxford Mathematics Public Lectures are aimed at a general audience who are curious about maths and its many facets. They are all live streamed and available afterwards on our YouTube Channel. For a full list of forthcoming lectures please click here. You are all very welcome.

Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Monday, 13 May 2019

The science of jumping popper toys

Snap-through buckling is a type of instability in which an elastic object rapidly jumps from one state to another. Such instabilities are familiar from everyday life: you have probably been soaked by an umbrella flipping upwards in high winds, while snap-through is harnessed to generate fast motions in applications ranging from soft robotics to artificial heart valves. In biology, snap-through has long been exploited to convert energy stored slowly into explosive movements: both the leaf of the Venus flytrap and the beak of the hummingbird snap-through to catch prey unawares.

Despite the ubiquity of snap-through in nature and engineering, how fast snap-through occurs (i.e. its dynamics) is generally not well understood, with many instances reported of delay phenomena in which snap-through occurs extremely slowly. A striking example is a children’s ‘jumping popper’ toy, which resembles a rubber spherical cap that can be turned inside-out. The inside-out shape remains stable while the cap is held at its edges, but leaving the popper on a surface causes it to snap back to its natural shape and leap upwards. As shown in the figure, the snap back is not immediate: a time delay is observed during which the popper moves very slowly before rapidly accelerating. The delay can be several tens of seconds in duration — much slower than the millisecond or so that would be expected for an elastic instability. Playing around further reveals other unusual features: holding the popper toy for longer before placing it down generally causes a slower snap-back, and the amount of delay is highly unpredictable, varying greatly with each attempt.
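A standard mathematical caricature of such a delay is the slow 'bottleneck' a system passes through just beyond a saddle-node (fold) bifurcation, where the restoring force is small but not quite zero. The sketch below illustrates only this generic mechanism with the normal form $\dot{x} = \epsilon + x^2$; it is not the model analysed in the videos, but it shows how the passage time grows like $\pi/\sqrt{\epsilon}$ as the distance $\epsilon$ to the snap-through threshold shrinks.

```python
import math

def bottleneck_time(eps, x0=-10.0, x_end=10.0, dt=1e-4):
    """Integrate dx/dt = eps + x**2 (saddle-node normal form) from x0 to x_end
    with forward Euler and return the elapsed time. For small eps > 0 the
    trajectory crawls through the 'bottleneck' near x = 0 before accelerating."""
    x, t = x0, 0.0
    while x < x_end:
        x += dt * (eps + x * x)
        t += dt
    return t

# The passage time scales like pi / sqrt(eps) as eps -> 0+, so trajectories
# prepared closer to the snap-through threshold take markedly longer to escape.
for eps in (1.0, 0.1, 0.01, 0.001):
    print(f"eps = {eps:6.3f}   delay ~ {bottleneck_time(eps):7.2f}   "
          f"pi/sqrt(eps) = {math.pi / math.sqrt(eps):7.2f}")
```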

In a series of videos launching The Mathematical Observer, a new YouTube channel showcasing the research performed in the Oxford Mathematics Observatory, Oxford Mathematician Michael Gomez (in collaboration with Derek Moulton and Dominic Vella) investigates the science behind the jumping popper toy. Episode one discusses why the popper toy snaps, and the important role played by the geometry of a spherical cap. Episode two focuses on how fast the popper toy snaps, and how its unpredictable nature can arise purely from the mathematical structure of the snap-through transition.

Thursday, 9 May 2019

The third in our series of filmed student lectures - Ben Green on Integration

Back in October, for the first time, we filmed an actual student lecture, Vicky Neale's lecture on 'Complex Numbers.' We wanted to show what studying at Oxford is really like, how it is not so different to school while at the same time taking things to a more rigorous level. Since we made the film available, over 375,000 people have watched some of it. 

Emboldened, we went one stage further in February and live streamed a lecture (and made it available subsequently), James Sparks on 'Dynamics.' But in addition to the lecture, we also filmed the subsequent tutorial which all students receive, usually in pairs, after lectures, and which is the essential ingredient of the Oxford learning experience. Both have been huge successes.

So we come to the third in our series of filmed student lectures. This is the opening lecture in the 1st Year course on 'Analysis III - Integration.' Prof. Ben Green both links the course to the mathematics our students have already learnt at school and develops that knowledge, taking the students to the next stage. Like all good lectures it recaps and points forward (the course materials accompanying the Integration lectures can be found here).

The lectures and tutorial are all part of our going 'Behind the Scenes' at Oxford Mathematics. We shall be filming our Open Days in July and more lectures in the Autumn. Please send any comments to external-relations@maths.ox.ac.uk

Wednesday, 8 May 2019

Matthew Butler awarded the Lighthill-Thwaites Prize for 2019

Oxford Mathematician Matthew Butler has been awarded the biennial Lighthill-Thwaites Prize for 2019. The prize is awarded by the Institute of Mathematics and its Applications to researchers who have spent no more than five years in full-time study or work since completing their undergraduate degrees.

Matthew's research focuses on fluid dynamics, particularly flows at low Reynolds number involving surface tension and interactions with elastic boundaries. His talk at the British Applied Mathematics Colloquium 2019, where the prize was awarded, was entitled 'Sticking with droplets: Insect-inspired modelling of capillary adhesion' and focused on how having a deformable foot can be beneficial when trying to adhere to a substrate using the surface tension of a fluid droplet. In his PhD Matthew is studying insect adhesion, and in particular how insects can utilise physical laws to improve their ability to stick to surfaces.

Oxford Mathematician Doireann O'Kiely won the prize in 2017 and Laura Kimpton, also from Oxford, won it in 2013. Oxford Mathematician Jessica Williams was also a finalist this year.

Tuesday, 7 May 2019

The influence of porous-medium microstructure on filtration - or how to design the perfect vacuum cleaner

Oxford Mathematician Ian Griffiths talks about his work with colleagues Galina Printsypar and Maria Bruna on modelling the most efficient filters for uses as diverse as blood purification and domestic vacuum cleaners.

"Filtration forms a vital part of our everyday lives, from the vacuum cleaners and air purifiers that we use to keep our homes clean to filters that are used in the pharmaceutical industry to remove viruses from liquids such as blood. If you’ve ever replaced the filter in your vacuum cleaner you will have seen that it is composed of a nonwoven medium – a fluffy material comprising many fibres laid down in a mat. These fibres trap dust particles as contaminated air passes through, protecting the motor from becoming damaged and clogged by the dust. A key question that we ask when designing these filters is, how does the arrangement of the fibres in the filter affect its ability to remove dust? 

One way in which we could answer this question is to manufacture many different types of filters, with different fibre sizes and arrangements, and then test the performance of each filter. However, this is time consuming and expensive. Moreover, while we would be able to determine which filter is best out of those that we manufactured, we would not know if we could have made an even better one. By developing a mathematical model, we can quickly and easily assess the performance of any type of filter, and determine the optimal design for a specific filtration task, such as a vacuum cleaner or a virus filter.

As contaminants stick onto the surface of the filter fibres, the fibres grow in size, and we must capture this in our mathematical model to predict the filter performance. For a filter composed of periodically spaced and uniformly sized fibres, eventually all the fibres will become so loaded with contaminants that they will touch each other. At this point the filter blocks and we can terminate our simulation. However, for a filter with randomly arranged fibres, we may find ourselves in a scenario in which two fibres start off quite close to one another and so touch after trapping only a small amount of contaminant on their surface. At this point the majority of the rest of the filter is still able to trap contaminants, and so we do not want to stop our filtration simulation yet. To compare the performance of filters with periodic and random fibre arrangements we therefore introduce an agglomeration algorithm. This provides a way for us to model two touching fibres as a single entity that also traps contaminants, and allows us to continue our simulation.
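The agglomeration bookkeeping can be sketched with standard tools: treat each fibre cross-section as a disc whose radius grows as contaminant deposits, and record touching discs as a single entity with a union-find structure. The code below is a minimal illustration under made-up growth and geometry rules, not the algorithm used in the study itself.

```python
import numpy as np

class UnionFind:
    """Track which fibres have merged into a single contaminant-trapping entity."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]   # path halving
            i = self.parent[i]
        return i
    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

rng = np.random.default_rng(0)
n_fibres = 50
centres = rng.uniform(0.0, 1.0, size=(n_fibres, 2))   # random fibre cross-sections
radii = np.full(n_fibres, 0.01)                        # clean-fibre radius (arbitrary units)
groups = UnionFind(n_fibres)

for step in range(40):
    radii += 0.002                 # toy rule: radii grow as contaminant deposits
    for i in range(n_fibres):      # merge any pair of discs that now overlap
        for j in range(i + 1, n_fibres):
            if np.linalg.norm(centres[i] - centres[j]) < radii[i] + radii[j]:
                groups.union(i, j)
    n_entities = len({groups.find(i) for i in range(n_fibres)})
    if step % 10 == 0:
        print(f"step {step:2d}: {n_entities} distinct fibre entities")
```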

Filter devices operate in one of two regimes:

Case 1: the flow rate is held constant. This corresponds to vacuum cleaners or air purifiers, which process a specified amount of air per hour, but require more energy to do this as the filter becomes blocked.

Case 2: the pressure difference is held constant. This corresponds to biological and pharmaceutical processes such as virus filtration from blood. A signature of this operating regime is the drop observed in the rate of filtering fluid as the filter clogs up.
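For intuition, both regimes can be read off from Darcy's law, $Q = k A \,\Delta P/(\mu L)$, where $k$ is the filter permeability: fixing the flow rate $Q$ forces the pressure drop $\Delta P$ up as clogging reduces $k$, while fixing $\Delta P$ makes $Q$ fall. The toy clogging law and all parameter values in the sketch below are assumptions for illustration only.

```python
import numpy as np

mu = 1.8e-5    # air viscosity (Pa s)
L = 1e-3       # filter thickness (m)
A = 1e-2       # filter face area (m^2)
k0 = 1e-11     # clean-filter permeability (m^2)

def permeability(t, tau=60.0):
    """Toy clogging law: permeability decays as contaminant accumulates over time t (s).
    The exponential form and the time scale are assumptions for illustration only."""
    return k0 * np.exp(-t / tau)

t = np.linspace(0.0, 300.0, 6)

# Case 1: constant flow rate Q -> the pressure drop grows as the filter clogs.
Q_fixed = 1e-3                                        # m^3 / s
dP_case1 = Q_fixed * mu * L / (permeability(t) * A)   # Darcy's law rearranged for dP

# Case 2: constant pressure drop -> the flow rate falls as the filter clogs.
dP_fixed = 100.0                                      # Pa
Q_case2 = permeability(t) * A * dP_fixed / (mu * L)

for ti, dp, q in zip(t, dP_case1, Q_case2):
    print(f"t = {ti:5.0f} s   case 1: dP = {dp:9.1f} Pa   case 2: Q = {q:.2e} m^3/s")
```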

We divide our filters into three types, comprising: (i) uniformly sized periodically arranged fibres; (ii) uniformly sized randomly arranged fibres; or (iii) randomly sized and randomly arranged fibres. We compare the performance of each of these three filter types through four different metrics:

The permeability, or the ease with which the contaminated fluid can pass through the filter.
The efficiency, or how much contaminant is trapped by the obstacles.
The dirt-holding capacity, or how much contaminant can be held in total before the filter blocks.
The lifetime, or how long the filter lasts.

Our studies show that:

The permeability is higher for a filter composed of randomly arranged fibres than for a periodic filter. This is due to the ability of the contaminated fluid to find large spaces through which it can pass more easily (see figure below).

In Case 1, a filter composed of a random arrangement of uniformly sized fibres is shown to maintain the lowest pressure drop (i.e., require the least energy), with little compromise in removal efficiency and dirt-holding capacity compared with a regular array of fibres.

In Case 2, a filter composed of randomly arranged fibres of different sizes gives the best removal efficiency and dirt-holding capacity. This comes at a cost of a reduced processing rate of purified fluid when compared with a regular array of fibres, but for examples such as virus filtration the most important objective is removal efficiency, and the processing rate is a secondary issue.

Thus, we find that the randomness in the fibres that we see in the filters in everyday processes can actually offer an advantage in filtration performance. Since different filtration challenges will place different emphasis on the four performance metrics, our mathematical model can quickly and easily predict the optimum filter design for a given requirement."

Left: air flow through a filter composed of a periodic hexagonal array of fibres.

Right: air flow through a filter composed of a random array of fibres. In the random case a 'channel' forms through which the fluid can flow more easily, which increases the permeability.

For more on this work please click here.

Monday, 29 April 2019

Learning from Stochastic Processes

Oxford Mathematician Harald Oberhauser talks about some of his recent research that combines insights from stochastic analysis with machine learning:

"Consider the following scenario: as part of a medical trial a group of $2n$ patients wears a device that records the activity of their hearts - e.g. an electrocardiogram that continuously records the electrical activity of their hearts - for a given time period $[0,T]$. For simplicity, assume measurements are taken in continuous time, hence we get for each patient a continuous path $x=(x(t))_{t \in [0,T]}$, that is an element $x \in C([0,T],\mathbb{R})$, that shows their heart activity for any moment in the time period $[0,T]$. Additionally, the patients are broken up into two subgroups - $n$ patients that take medication and the remaining $n$ patients that form a placebo group that does not take medication. Can one determine a statistically significant difference in the heart activity patterns between these groups based on our measurements?

The above example can be seen as learning about the law of a stochastic process. A stochastic process is simply a random variable $X=(X_t)_{t \in [0,T]}$ that takes values in the space of paths $C([0,T],\mathbb{R})$ and the law $\mu$ of $X$ is a probability measure that assigns a probability to the possible observed paths. The above example amounts to drawing $n$ samples from a path-valued random variable $X$ with unknown law $\mu$ (the medicated group) and $n$ samples from a path-valued random variable $Y$ with unknown law $\nu$ (the placebo group). Our question about a difference between these groups amounts to asking whether $\mu\neq\nu$. Finally, note that this example was very simple, and in typical applications we measure much more than just one quantity over time. Hence, we discuss below the general case of $C([0,T],\mathbb{R}^e)$-valued random variables for any $e \ge 1$ ($e$ can be very large, e.g. $e\approx 10^3$ appears in standard benchmark data sets).

But before we continue our discussion about path-valued random variables, let's recall the simpler problem of learning the law of a real-valued random variable. A fundamental result in probability is that if $X$ is a bounded, real-valued random variable, then the sequence of moments \begin{align} (1)\quad (\mathbb{E}[X], \mathbb{E}[X^2], \mathbb{E}[X^3], \ldots) \end{align} determines the law of $X$, that is the probability measure $\mu$ on $\mathbb{R}$ given as $\mu(A)= \mathbb{P}(X \in A)$. This allows us to learn about the law of $X$ from independent samples $X_1,\ldots,X_n$ from $X$, since \[\frac{1}{n}\sum_{i=1}^n X_i^m \rightarrow \mathbb{E}[X^m] \text{ as }n\rightarrow \infty\] for every $m\ge 1$ by the law of large numbers. This extends to vector-valued random variables, but the situation for path-valued random variables is more subtle.

The good news is that stochastic analysis and a field called "rough path theory" provide a replacement for monomials in the form of iterated integrals: the tensor \[ \int dX^{\otimes{m}}:= \int _{0 \le t_1\le \cdots \le t_m \le T} dX_{t_1} \otimes \cdots \otimes dX_{t_m} \in (\mathbb{R}^e)^{\otimes m} \] is the natural "monomial" of order $m$ of the stochastic process $X$. In analogy to (1), we expect that the so-called "expected signature" of $X$, \begin{align}(2)\quad (\mathbb{E}[\int dX], \mathbb{E}[\int dX^{\otimes{2}}], \mathbb{E}[\int dX^{\otimes 3}], \ldots) \end{align} can characterize the law of $X$.

In recent work with Ilya Chevyrev we revisit this question and develop tools that turn such theoretical insights into methods that can answer more applied questions as they arise in statistics and machine learning. As it turns out, the law of the stochastic process $X$ is (essentially) always characterized by (2) after a normalization of tensors that ensures boundedness of this sequence. Mathematically, this requires us to understand phenomena related to the non-compactness of $C([0,T],\mathbb{R}^e)$. Inspired by results in machine learning, we then introduce a metric for laws of stochastic processes, a so-called "maximum mean discrepancy" \begin{align} d(\mu,\nu)= \sup_{f} |\mathbb{E}[f(X)] - \mathbb{E}[f(Y)]| \end{align} where $\mu$ denotes the law of $X$, $\nu$ the law of $Y$, and the $\sup$ is taken over a sufficiently large class of real-valued functions $f:C([0,T], \mathbb{R}^e) \rightarrow \mathbb{R}$. To compute $d(\mu,\nu)$ we make use of a classic "kernel trick" which shows that \begin{align} (3) \quad d(\mu,\nu)^2 = \mathbb{E}[k(X,X')] - 2\mathbb{E}[k(X,Y)] + \mathbb{E}[k(Y,Y')] \end{align} where $X'$ and $Y'$ are independent copies of $X\sim\mu$ and $Y\sim \nu$ respectively, and the kernel $k$ is the inner product of the sequences of iterated integrals formed from $X$ and $Y$. Previous work provides algorithms that can very efficiently evaluate $k$. Combined with (3) and the characterization of the law by (2), this allows us to estimate the metric $d(\mu,\nu)$ from finitely many samples.
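In practice, (3) can be estimated by replacing each expectation with an average over sample pairs. The sketch below shows the structure of such a plug-in estimator; the `signature_kernel` function is only a stand-in so that the code runs, and is not the efficient signature-kernel algorithm referred to above.

```python
import numpy as np

def signature_kernel(x, y):
    """Stand-in for the inner product of iterated-integral (signature) features
    of two paths x, y of shape (T, e). A crude proxy on path increments is used
    here purely so that the sketch runs; the real computation uses the efficient
    signature-kernel algorithms referred to in the text."""
    dx, dy = np.diff(x, axis=0), np.diff(y, axis=0)
    return float(np.exp(-0.5 * np.sum((dx.mean(axis=0) - dy.mean(axis=0)) ** 2)))

def mmd_squared(xs, ys):
    """Unbiased plug-in estimate of d(mu, nu)^2 from two lists of sample paths,
    following the kernel-trick identity (3)."""
    n, m = len(xs), len(ys)
    kxx = sum(signature_kernel(xs[i], xs[j])
              for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
    kyy = sum(signature_kernel(ys[i], ys[j])
              for i in range(m) for j in range(m) if i != j) / (m * (m - 1))
    kxy = sum(signature_kernel(x, y) for x in xs for y in ys) / (n * m)
    return kxx - 2.0 * kxy + kyy
```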

An immediate application is a two-sample test: we can test the null hypothesis \[ H_0: \mu=\nu \text{ against the alternative }H_1: \mu\ne \nu \] by estimating $d(\mu,\nu)$ and rejecting $H_0$ if this estimate is bigger than a given threshold. Recall our example of two patient groups (medicated and placebo). The question of whether there is a statistical difference between these groups can be formalized as such a two-sample hypothesis test. To gain more insight it can be useful to work with synthetic data (after all, the answer to a two-sample test is a simple yes or no, depending on whether the null hypothesis is rejected). Therefore consider the following toy example: $\mu$ is the law of a simple random walk and $\nu$ is the law of a path-dependent random walk. Figure 1 shows how samples from these two stochastic processes look; more interestingly, Figure 2 shows histograms of an estimator for $d(\mu,\nu)$ under the null and under the alternative. We see that their supports are nearly disjoint, which indicates that the test will perform well, and this can be made rigorous and quantitative.
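A common way to choose the rejection threshold in such a test is a permutation test: pool the two samples, repeatedly reshuffle the group labels, and reject $H_0$ if the observed statistic exceeds a high quantile of the reshuffled statistics. The sketch below continues from the `mmd_squared` estimator above; the 'path-dependent' walk is an assumed toy example with momentum-like steps, not necessarily the process used in the figures.

```python
import numpy as np
# reuses signature_kernel / mmd_squared from the previous sketch

def simple_random_walk(T, rng):
    return np.cumsum(rng.choice([-1.0, 1.0], size=T))[:, None]    # shape (T, 1)

def path_dependent_walk(T, rng, bias=0.4):
    """Toy walk whose next step is biased towards repeating the previous one."""
    steps = np.empty(T)
    steps[0] = rng.choice([-1.0, 1.0])
    for t in range(1, T):
        p_same = 0.5 + 0.5 * bias
        steps[t] = steps[t - 1] if rng.random() < p_same else -steps[t - 1]
    return np.cumsum(steps)[:, None]

rng = np.random.default_rng(1)
xs = [simple_random_walk(100, rng) for _ in range(30)]
ys = [path_dependent_walk(100, rng) for _ in range(30)]

observed = mmd_squared(xs, ys)

pooled = xs + ys
null_stats = []
for _ in range(100):                     # permutation estimate of the null distribution
    perm = rng.permutation(len(pooled))
    xs_p = [pooled[i] for i in perm[:30]]
    ys_p = [pooled[i] for i in perm[30:]]
    null_stats.append(mmd_squared(xs_p, ys_p))

threshold = np.quantile(null_stats, 0.95)
print(f"observed MMD^2 = {observed:.4f}, 95% permutation threshold = {threshold:.4f}, "
      f"reject H0: {observed > threshold}")
```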

These examples of $e=1$-dimensional paths are overly simplistic - we can already see a difference by looking at the sampled paths. However, this situation changes drastically in the higher-dimensional setting of real-world data: some coordinates might evolve quickly, some slowly, and others might not be relevant at all; often the statistically significant differences lie in the complex interactions between some of the coordinates, all subject to noise and variation. Finally, the index $t \in [0,T]$ might not necessarily be time; for example, in a recent paper with Ilya Chevyrev and Vidit Nanda (featured in a previous case study), the index $t$ is the radius of spheres grown around points in space, and the path value for every $t>0$ is determined by a so-called barcode from topological data analysis that captures the topological properties of these spheres of radius $t$."


Figure 1: The left plot shows 30 independent samples from a simple random walk; the right plot shows 30 independent samples from a path-dependent random walk.

Figure 2: The left plot shows the histogram for an estimator of $d(\mu,\nu)$ if the null hypothesis is true, $\mu=\nu$ (both are simple random walks); the right plot shows the same if the null is false ($\mu$ is a simple random walk and $\nu$ is a path-dependent random walk, as shown in Figure 1).

Friday, 26 April 2019

Artur Ekert awarded a Micius Quantum Prize 2019

Oxford Mathematician Artur Ekert has been awarded a Micius Quantum Prize 2019 (Theory category) for his invention of entanglement-based quantum key distribution, entanglement swapping, and entanglement purification. The prizes recognise the scientists who have made outstanding contributions in the field of quantum mechanics and the 2019 prizes focus on the field of quantum communication. 

Artur Ekert is one of the leaders in the Quantum Cryptography field. His research extends over most aspects of information processing in quantum-mechanical systems and brings together theoretical and experimental quantum physics, computer science and information theory. Its scope ranges from deep fundamental issues in physics to prospective commercial exploitation by the computing and communications industries.

Oxford Physicist and close colleague of Artur's, David Deutsch, was also awarded a prize in the Quantum Computation Theory Category.

The Micius prizes are awarded by the Micius Quantum Foundation. The Foundation is named after Micius, a Chinese philosopher who lived in the fifth century BC.

Tuesday, 23 April 2019

Why do liquids form patterns on solid surfaces?

The formation of liquid drop patterns on solid surfaces is a fundamental process for both industry and nature. Now a team of scientists, including Oxford Mathematician Andreas Münch and colleagues from the Weierstrass Institute in Berlin and the University of Saarbrücken, can explain exactly how it happens.

Controlling and manipulating liquid drop patterns on solid surfaces has a diverse range of applications including in the coating industry as well as in developing tools for use in cell biology. These are also essential for the famous ‘lotus leaf’ effect where the leaf self-cleans. Now, as they explain in an article in Proceedings of the National Academy of Sciences of the United States of America (PNAS), the team has identified that the formation of droplets during a dewetting (retraction) of a thin liquid film from a hydrophobically (water repellent) coated substrate is essentially controlled by the friction at the interface of the liquid and the solid surface.

The mathematical models and the highly adaptive numerical schemes developed by the team predict very distinct droplet patterns, ranging from the large-scale polygonal structure of ensembles of micro-scale droplets to the small-scale cascades of even smaller satellite droplets (see picture). These results were achieved simply by varying the condition at the liquid-solid boundary, a so-called Navier-slip-type condition.

In the experiments, the surfaces were treated with different surface coatings that made them particularly slippery, thereby reducing the interfacial friction. As a result, the theoretical predictions, from the large-scale to the small-scale patterns, were confirmed.

It is now possible for a trained observer to "see" the friction exerted by the surface coating and relate the multi-scale pattern to the corresponding interface condition and the presence of slip. This has great potential, not only for spotting similar conditions in nature, but as a facile, non-invasive tool for designing patterns in many micro-fluidic applications without the need for pre-patterning or lithographic treatment of the surface. These would include bespoke synthesis of monodisperse (uniform) materials or biosensor applications.

Tuesday, 23 April 2019

How do additive and multiplicative structures interact?

Oxford Mathematician Joni Teräväinen talks about his work concerning prime factorisations of consecutive integers and its applications in number theory.

"Analytic number theorists study the properties of the positive integers $1,2,3,\ldots$, and in particular the multiplicative structure of this set. From an additive point of view, the positive integers are rather simple: every integer is generated by $1$ (meaning that any number can be written as a sum of some number of ones). But what are the multiplicative generators of the integers? These are the prime numbers $2,3,5,7,11,\ldots$ (meaning that every integer is the product of primes in a unique way), and their distribution is a subtle and interesting question that has been at the centre of analytic number theory research for the past 150 years. The celebrated prime number theorem from 1897 states that the number of primes up to a large number $x$, denoted by $\pi(x)$, is approximately $x/\log x$ (more precisely, the ratio of $\pi(x)$ and $x/\log x$ approaches $1$ as $x$ tends to infinity). But how good exactly is this approximation? It turns out that the optimal answer to that question is equivalent to solving the Riemann hypothesis, one of the great unsolved problems in modern mathematics.

The distribution of prime numbers can be viewed as a purely multiplicative question. However, many of the most notorious conjectures in number theory arise when one combines additive structure with multiplicative structure. As an example, we have the unsolved twin prime conjecture: Are there infinitely many primes $p$ such that $p+2$ is also a prime? The list of such $p$ begins $3,5,11,17,29,41,\ldots$, and the largest known twin prime to date has $388\,342$ decimal digits. The twin prime problem combines addition (by looking at the shift $p+2$) with an inherently multiplicative structure, the primes. The resolution of the twin prime conjecture seems to be out of reach (despite spectacular progress by Zhang, Maynard, Tao and others on gaps between primes, demonstrating that $p$ and $p+M$ are infinitely often simultaneously prime for some fixed $M\geq 2$), so one is led to look for related problems that are somewhat more manageable.

The Liouville function $\lambda(n)$ is an important number-theoretic function given by the formula $\lambda(n)=(-1)^{\Omega(n)}$, where $\Omega(n)$ is the number of prime divisors of $n$ (counting multiplicities, so for example $12=2\cdot 2\cdot 3$ has $\Omega(12)=3$). The Liouville function is intimately related to the primes and to multiplicative properties of the integers, since one easily sees that $\lambda(p)=-1$ for all primes $p$ and $\lambda(mn)=\lambda(m)\lambda(n)$ for all $m,n\geq 1.$ For example, one can show that the prime number theorem stated above is equivalent to finding cancellation in the partial sums of $\lambda(n)$ (or more precisely, it is equivalent to $\lim_{x\to \infty}\frac{1}{x}\sum_{n\leq x}\lambda(n)=0$).
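As a quick numerical illustration of these definitions (a self-contained sketch; the sieve and the ranges are choices made here purely for illustration, not taken from the research), one can tabulate $\lambda(n)$ with a prime-power sieve and watch the normalised partial sums $\frac{1}{x}\sum_{n\le x}\lambda(n)$ drift towards zero:

```python
def liouville_up_to(N):
    """Tabulate lambda(n) = (-1)**Omega(n) for n = 1..N, where Omega(n)
    counts prime factors with multiplicity, via a prime-power sieve."""
    omega = [0] * (N + 1)
    for p in range(2, N + 1):
        if omega[p] == 0:                    # p has no smaller prime factor, so p is prime
            pk = p
            while pk <= N:                   # each prime power p**k contributes 1 to Omega
                for m in range(pk, N + 1, pk):
                    omega[m] += 1
                pk *= p
    lam = [0] * (N + 1)
    lam[1] = 1
    for n in range(2, N + 1):
        lam[n] = (-1) ** omega[n]
    return lam

lam = liouville_up_to(10**5)
assert lam[12] == -1                         # Omega(12) = 3, as in the example above
assert lam[1:11] == [1, -1, -1, 1, -1, 1, -1, -1, 1, 1]

# Cancellation in the partial sums, equivalent to the prime number theorem:
for x in (10**2, 10**3, 10**4, 10**5):
    print(f"x = {x:6d}   (1/x) * sum lambda(n) = {sum(lam[1:x + 1]) / x:+.4f}")
```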

If one looks at the values of the Liouville function up to a large threshold (the sequence starts $+1,-1,-1,+1,-1,+1,-1,-1,+1,+1,\ldots$), it appears to behave just like a random sequence of signs, with no apparent structure in it. This is called the Möbius randomness principle in number theory (because it's often formulated for the Möbius function, a function closely related to the Liouville function $\lambda(n)$). One instance of this principle is that the partial sums of $\lambda(n)$ should behave like the sums of random variables taking values $\pm 1$ with equal probability (so there would be square-root cancellation in the partial sums $\sum_{n\leq x}\lambda(n)$). It has been shown that a rigorous proof of this is equivalent to the Riemann hypothesis, and therefore appears to be out of reach of current methods. A different but equally valid way to specify how the Liouville sequence appears random is to assert that the sequence $\lambda(n)$ forgets its history. This means that the value of $\lambda(n+1)$ is asymptotically independent of the value of $\lambda(n)$ (much in the same way as the $(n+1)$st result of a series of coin tosses is independent of the $n$th result). The same should hold for longer patterns of shifts, so if someone picks a large integer $n$ and lists you the values of $\lambda(n),\lambda(n+1),\ldots, \lambda(n+9)$, that should be of no use if you're trying to guess the value of $\lambda(n+10)$. A precise version of this statement was conjectured by Chowla in the 1960s, and it says the following.

Conjecture (Chowla's conjecture): Let $k\geq 1$. Then for any signs $s_1,\ldots, s_k\in \{-1,+1\}$, the asymptotic proportion of those integers $n\geq 1$ for which $\lambda(n+1)=s_1$, $\lambda(n+2)=s_2,\ldots, \lambda(n+k)=s_k$ is equal to $2^{-k}$.

Chowla's conjecture is closely related to the twin prime conjecture mentioned earlier, but at the same time Chowla's conjecture seems more tractable, due to the multiplicative property of $\lambda(n)$. Note that the twin prime conjecture would certainly imply as a special case that $\lambda(n)=\lambda(n+2)=-1$ infinitely often. Conversely, it is known that a sufficiently strong version of Chowla's conjecture could be used to deduce the twin prime conjecture (but no one has proved Chowla's conjecture, let alone this strengthening of it).

In 2015, Tao was the first one to make significant progress on Chowla's conjecture, building on the work of Matomäki and Radziwill on multiplicative functions. He showed that the case $k=2$ of Chowla's conjecture holds (with the technical modification that he considered the logarithmic probability, rather than asymptotic probability). Thus for example $\lambda(n)=\lambda(n+1)=-1$ holds for exactly $25\%$ of all the integers (again with logarithmic probability). In 2017, in joint work with Tao, we showed that Chowla's conjecture holds for $k=3$ as well (and we showed that a different formulation of it, which states that the correlation sums $\sum_{n\leq x}\lambda(n)\lambda(n+1)\cdots \lambda(n+k-1)$ exhibit cancellation, holds for any odd value of $k$). Again, we had to use the logarithmic probability instead of the asymptotic one. In a later work, we partially removed the restriction to logarithmic probability. Now we can for example say that $\lambda(n)=\lambda(n+1)=\lambda(n+2)=-1$ holds for exactly $12.5\%$ of all the integers.
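These densities can be checked numerically over a modest range using the sieve from the sketch above (ordinary rather than logarithmic density, and only a finite range, so this is an illustration rather than evidence):

```python
N = 10**6
lam = liouville_up_to(N)     # sieve from the previous sketch

# Proportion of n with lambda(n) = lambda(n+1) = -1 (k = 2; expected 1/4)
pairs = sum(1 for n in range(1, N) if lam[n] == -1 and lam[n + 1] == -1)
print(f"lambda(n) = lambda(n+1) = -1:               {pairs / (N - 1):.4f}")

# Proportion of n with lambda(n) = lambda(n+1) = lambda(n+2) = -1 (k = 3; expected 1/8)
triples = sum(1 for n in range(1, N - 1)
              if lam[n] == -1 and lam[n + 1] == -1 and lam[n + 2] == -1)
print(f"lambda(n) = lambda(n+1) = lambda(n+2) = -1: {triples / (N - 2):.4f}")
```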

Chowla's conjecture turns out to be far from being an isolated problem, and in the last few years methods that were originally developed to tackle partial cases of Chowla's conjecture, such as the ones above, have proved to be fruitful for several other problems as well, including some problems outside of number theory. For example, Tao used his work on the $k=2$ case of Chowla's conjecture (or rather a generalisation of it) to settle the Erdős discrepancy problem, a famous open problem in combinatorics. This notorious problem is the superficially simple statement that if you take any sequence $s(n)$ of signs $\pm 1$, then the partial sums $\sum_{n=1}^{j}s(dn)$ can be made arbitrarily large in absolute value by choosing suitable values of $j$ and $d$. In a different interesting development, Frantzikinakis and Host used the previously mentioned work on Chowla's conjecture as an ingredient to make progress on Sarnak's conjecture, an important conjecture in ergodic theory that is also related to the randomness of the Liouville sequence. Clearly, much remains to be studied regarding Chowla's conjecture and its applications and generalisations."

For more on Joni's work with Terry Tao on the structure of logarithmically averaged correlations of multiplicative functions please click here; and for more on their research in to the structure of correlations of multiplicative functions at almost all scales click here.

Monday, 15 April 2019

Oxford Mathematics Public Lectures: Julia Wolf - The Power of Randomness. 30 April 2019

Far from taking us down the road of unpredictability and chaos, randomness has the power to help us solve a fascinating range of problems. Join Julia Wolf on a mathematical journey from penalty shoot-outs to internet security and patterns in the primes. 

Julia Wolf is University Lecturer in the Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge.

5-6pm 
Mathematical Institute
Oxford

Please email external-relations@maths.ox.ac.uk to register.

Watch live:
https://facebook.com/OxfordMathematics
https://livestream.com/oxuni/wolf

The Oxford Mathematics Public Lectures are generously supported by XTX Markets.
