News

Thursday, 26 July 2018

Oxford Mathematics London Public Lecture: 'To a physicist I am a mathematician; to a mathematician, a physicist' - Roger Penrose in conversation with Hannah Fry


7.00pm, 30 October 2018, Science Museum, London, SW7 2DD

Roger Penrose is the ultimate scientific all-rounder.  He started out in algebraic geometry but within a few years had laid the foundations of the modern theory of black holes with his celebrated paper on gravitational collapse. His exploration of foundational questions in relativistic quantum field theory and quantum gravity, based on his twistor theory, had a huge impact on differential geometry. His work has influenced both scientists and artists, notably Dutch graphic artist M. C. Escher.

Roger Penrose is also one of the great ambassadors for science. In this lecture, and in conversation with mathematician and broadcaster Hannah Fry, he will talk about his work and career.

This lecture is given in partnership with the Science Museum in London, where it will take place. Please email external-relations@maths.ox.ac.uk to register.

You can also watch online:

https://www.facebook.com/OxfordMathematics

https://livestream.com/oxuni/Penrose-Fry

The Oxford Mathematics Public Lectures are generously supported by XTX Markets.

Thursday, 26 July 2018

The journey of the applied mathematician - retiring Sedleian Professor Sir John Ball reflects

John Ball is retiring as Sedleian Professor of Natural Philosophy, Oxford's oldest scientific chair. In this interview with Alain Goriely he charts the journey of the applied mathematician as the subject has developed over the last 50 years.

Describing his struggles with exams and his time at Cambridge, Sussex and Heriot-Watt before coming to Oxford in 1996, John reflects on how his interests have developed and what he prizes in his students, and describes walking round St Petersburg with Grigori Perelman, his work as an ambassador for his subject, and the vital importance of family (and football).

Monday, 23 July 2018

How are trading strategies in electronic markets affected by latency?

Oxford Mathematicians Álvaro Cartea and Leandro Sánchez-Betancourt talk about their work on employing stochastic optimal control techniques to mitigate the effects of latency: the time delay in receiving information from the marketplace and in sending instructions to buy or sell financial instruments on electronic exchanges.

"In order driven exchanges, liquidity takers face a moving target problem as a consequence of their latency – the time taken to send an order to the exchange. If an order is sent aiming at a price and quantity observed in the limit order book (LOB) then by the time their order is processed by the exchange prices could have worsened, so the order may not be filled; or prices could have improved, so the order is filled at a better price.

Traders can mitigate the adverse effects of missing a trade by including a price limit in their orders to increase the probability of filling the order when it is processed by the exchange. This price limit consists of the best price seen by the trader in the LOB plus a degree of discretion that specifies the number of ticks the order can walk the LOB and still be acceptable. In other words, for a buy order, the number of ticks included in the discretion specifies the maximum price the trader is willing to pay to fill the order. Similarly, for a sell order, the number of ticks included in the discretion specifies the minimum price the trader is willing to receive to fill the order. This discretion does not preclude the order from being filled at better prices if the LOB is updated with more favourable prices or quantities.

In our paper we show how to choose the discretion of orders in an optimal way to improve fill ratios over a period (days, weeks, months), while keeping orders exposed to receiving price improvement. Increasing fill ratios is costly. Everything else being equal, the chances of filling an order increase if the order can walk the LOB. Thus, there is a tradeoff between ensuring high fill ratios and the execution costs borne by the trading strategy. In our approach, the dynamic optimisation problem solved by the trader balances this tradeoff by minimising the discretion specified in the marketable orders, while targeting a fill ratio over a trading horizon. The trader's optimal strategy specifies the discretion for each transaction depending on the proportion of orders that have been filled, how far the strategy is from the target fill ratio, the cost of walking the LOB, and the volatility of the exchange rate.

We employ a data set of foreign exchange trades to analyse the performance of the optimal strategy developed here. The data are provided by LMAX Exchange (www.lmax.com). We use anonymised transaction data for two foreign exchange traders to compare the fill ratios they achieved in practice to those attainable with the optimal strategy derived in the paper. The data span December 2016 to March 2017. During this period both traders filled between approximately 80% and 90% of their liquidity-taking orders in the currency pair USD/JPY.

We find that the effect of latency on trade fills is exacerbated during times of heightened volatility in the pair USD/JPY. When volatility is arranged in quartiles, we find that between 36% and 40% of unfilled trades occur in the top quartile of volatility.

We employ the optimal strategy developed in our paper to show the tradeoff between increasing fill ratios through the use of discretion and the costs incurred by the strategy. We show that traders could have increased the percentage of filled trades, during the period 5 December 2016 to 30 March 2017, to 99% for both traders. In this example, the average cost incurred by the traders to fill trades missed by the naïve strategy was between 1.24 and 1.76 ticks. On the other hand, the cost of returning to the market 20 ms and 100 ms later to fill trades missed by the naïve strategy is between 2.01 and 2.75 ticks respectively.

The performance of the optimal strategy is more remarkable during times of heightened volatility of the exchange rate. In the top quartile of volatility, the average cost of filling missed trades using the optimal strategy is approximately 1.87 ticks, while the mark-to-market average cost of filling the missed trades employing market orders that walk the LOB until filled 100 ms later is between 3 and 3.3 ticks.

Finally, we build a function that maps various levels of latency to the corresponding percentage of filled orders. We use this mapping to calculate the shadow price of latency that a particular trader would be willing to pay to reduce the latency of his connection to an exchange. We show that the trader would be better off employing the latency-optimal strategy developed in our paper, instead of investing in hardware and co-location services to reduce latency. The latency-optimal strategy is superior because it not only achieves the same fill ratios as those obtained with better hardware and co-location, but it scoops price improvements that stem from orders arriving with latency at the exchange."
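To make the discretion mechanism described above concrete, here is a minimal sketch (an editorial illustration with an invented tick size and prices, not code from the paper) of how a buy order carrying a few ticks of discretion is matched once it reaches the exchange:

```python
TICK = 0.01  # hypothetical tick size, chosen for illustration only

def buy_order_fills(seen_ask: float, discretion_ticks: int, ask_at_arrival: float) -> bool:
    """A buy order aims at the best ask seen in the LOB. By the time the
    order reaches the exchange the ask may have moved; the order fills if
    the ask at arrival is no worse than the seen price plus the discretion
    (in ticks) attached to the order."""
    limit_price = seen_ask + discretion_ticks * TICK
    return ask_at_arrival <= limit_price

# A trader saw an ask of 100.00 and allowed 2 ticks of discretion.
print(buy_order_fills(100.00, 2, 100.01))  # True: price worsened, but within discretion
print(buy_order_fills(100.00, 2, 100.03))  # False: price moved more than 2 ticks
print(buy_order_fills(100.00, 2, 99.99))   # True: filled at a better price
```

The optimal strategy in the paper chooses this discretion dynamically rather than fixing it as above, balancing the target fill ratio against the cost of walking the LOB.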

The full paper may be downloaded here.

Monday, 23 July 2018

How do nodal lines for eigenfunctions bring together so many facets of mathematics?

Oxford Mathematician Riccardo W. Maffucci is interested in `nodal lines for eigenfunctions', a multidisciplinary topic in pure mathematics with applications to physics. Its study is at the interface of probability, number theory, analysis, and geometry. The applications to physics include the study of ocean waves, earthquakes, sound and other types of waves. Here he talks about his work.

"Of particular interest to me are the lines that remain stationary during membrane vibrations, the so-called `nodal lines'.

 

Figure1: Nodal lines

 

The study of these lines dates back to pioneering experiments by Hooke. The alternative name, `Chladni Plates', derives from Chladni's work (18th-19th century). One wants to understand the fine geometric properties of the nodal lines. In several cases we introduce a randomisation of the model, to examine events occurring with high probability. The number theory aspect of this problem concerns which numbers are representable as a sum of two squares: for instance $10 = 1 + 9$, but $7$ is not a sum of two squares. Understanding these representations is tantamount to studying integer-coordinate points (`lattice points') on circles.

Figure 2: Lattice points on circles (see larger title image for detail)

Ideas from these disciplines have been brought together in the new and exciting research area of `arithmetic random waves'. There are natural generalisations of these two-dimensional concepts to higher dimensions. For instance, in dimension three one is interested in the `nodal surfaces'.

Figure 3: Nodal surfaces

Here the number theory is related to integers expressible as a sum of three squares, and to the lattice points on spheres. For instance, one question concerns the distribution of the lattice points on the surface of the sphere, and in specific regions of it, as in the picture.

Figure 4: Lattice points on spheres

This is a new and exciting field of research with several recent breakthroughs. The group of academics working in this area is growing rapidly. Watch this space."

For more on this subject click here and here.
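The number-theoretic ingredient above can be made concrete with a short brute-force sketch (an editorial illustration, not the author's code) counting lattice points on circles and spheres:

```python
from math import isqrt

def lattice_points(n: int, dim: int = 2) -> int:
    """Count integer points on the circle x^2 + y^2 = n (dim=2)
    or on the sphere x^2 + y^2 + z^2 = n (dim=3), by brute force."""
    r = isqrt(n)
    coords = range(-r, r + 1)
    if dim == 2:
        return sum(1 for x in coords for y in coords if x*x + y*y == n)
    return sum(1 for x in coords for y in coords for z in coords
               if x*x + y*y + z*z == n)

print(lattice_points(10))        # 8: points such as (1, 3), since 10 = 1 + 9
print(lattice_points(7))         # 0: 7 is not a sum of two squares
print(lattice_points(5, dim=3))  # 24: lattice points on the sphere of radius sqrt(5)
```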

Wednesday, 18 July 2018

Oxford Mathematician Ian Griffiths wins Vice Chancellor's Innovation Award for his work on mitigation of arsenic poisoning

Oxford Mathematician Ian Griffiths has won a Vice Chancellor's Innovation Award for his work on mitigation of arsenic poisoning. This work is in collaboration with his postdoctoral research associates Sourav Mondal and Raka Mondal, and collaborators Professor Sirshendu De and Krishnasri Venkata at the Indian Institute of Technology, Kharagpur.

As part of this award a short video was produced explaining the problem and its possible mathematical solution. 

 

Saturday, 14 July 2018

The Mathematics of Smoothies - the Dynamics of Particle Chopping in Blenders and Food Processors

Have you ever forgotten to replace the lid of the blender before beginning to puree your mango and passion-fruit smoothie? If you have, you'll have witnessed the catastrophic explosion of fruit and yoghurt flung haphazardly around the kitchen. This is a consequence of the complicated and turbulent fluid dynamics present within the machine, the exact behaviour of which is unknown. Sharp, angular blades rotate at extremely high speeds to mix and chop the fruit into a puree consisting of particles that are ideally as small and uniform in size as possible. But what characteristics of the blender are responsible for the outcome? While experimental evidence gives intuition into blade and vessel design, along with operational parameters such as speed and blend time, there is a knowledge gap surrounding the precise impact on the particle and fluid dynamics.

Oxford Mathematicians Caoimhe Rooney, Ian Griffiths and Colin Please worked with Chuck Brunner, James Potter and Max Wood-Lee from SharkNinja, the company responsible for Nutri Ninja blenders, to understand the chopping dynamics that take place in a blender, with the aim of shedding light on this complex process.

The team derived an integro-differential-equation system, inspired by Becker-Döring and Smoluchowski theory, which provides a predictive model for the resulting size distribution of particles comprising a smoothie after blending an initial mixture of fruits (such as the contents of the blender shown in the figure) for a given amount of time.
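As a rough illustration of this kind of model, the sketch below evolves a discrete chopping (fragmentation) system in the spirit of Becker-Döring/Smoluchowski theory. The size-proportional chopping rate and the uniform binary-breakage kernel are assumptions made here for illustration; they are not the rates fitted in the paper, and the sketch omits the debris term discussed below.

```python
# A sketch of a discrete fragmentation ("chopping") system. The chopping
# rate and breakage kernel are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

N = 50  # largest particle size tracked, in arbitrary units

def chop_rate(k: int) -> float:
    # Assume bigger pieces are hit by the blade more often.
    return 0.1 * k

def rhs(t, n):
    dn = np.zeros_like(n)
    for k in range(1, N + 1):
        # A size-k piece is lost when chopped (size-1 pieces cannot split),
        # and gained when a larger size-j piece splits uniformly into two
        # fragments of sizes (i, j - i).
        loss = chop_rate(k) * n[k - 1] if k > 1 else 0.0
        gain = sum(2.0 * chop_rate(j) * n[j - 1] / (j - 1)
                   for j in range(k + 1, N + 1))
        dn[k - 1] = gain - loss
    return dn

n0 = np.zeros(N)
n0[-1] = 1.0  # start with pieces of the largest size only
sol = solve_ivp(rhs, (0.0, 30.0), n0, t_eval=[0.0, 5.0, 30.0])

# Total mass is conserved while the size distribution shifts downward.
for i, t in enumerate(sol.t):
    print(t, (np.arange(1, N + 1) * sol.y[:, i]).sum())
```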

The results of the model were found to agree well with experimental trials performed in house (see figure). An unexpected result was the emergence of a second peak in the size distribution of chopped pieces. This is attributed to the fact that each time the blade slices through a piece of fruit, some small debris is also formed. The team modified their model to account for this additional feature, which enabled the second peak to be predicted.    

The taste and texture of a smoothie is heavily dependent on the size and distribution of the pieces from which it is composed. Given an initial selection of fruit pieces, along with the blend time and blend speed, the model is able to predict how the distribution of particle sizes and the most common piece size changes with time during blending. This provides guidance on the optimal blend time to maximize the taste experience.

The work performed by the team forms a foundation for the exploration and optimization of food blenders. In particular, this work paves the way for understanding the complex interplay between fluid dynamics and chopping within a blender. Ultimately, these models will allow us to determine the precise operating regime that will create the most homogeneous smoothies in the most efficient manner. 

---

For more information click here.

The images above feature the Nutri Ninja blender and a comparison between theoretical prediction (red) and experimental data (grey) for the distribution of different particle sizes in a blender after 50 seconds of blending.

Friday, 29 June 2018

Oxford Mathematician Heather Harrington awarded Whitehead Prize

Oxford Mathematician Heather Harrington has been awarded a Whitehead Prize by the London Mathematical Society (LMS) for her outstanding contributions to mathematical biology which have generated new biological insights using novel applications of topological and algebraic techniques. 

In the words of the citation Heather "has made significant advances through the application of ideas originating in pure mathematics to biological problems for which the techniques of traditional applied mathematics are inadequate. This has involved in particular the development of methods in algebraic statistics which allow one to characterize the qualitative behaviour of dynamical systems and networks, adapting approaches from algebraic geometry to test whether a given mathematical model is commensurate with a given set of experimental observations."

Friday, 29 June 2018

What is Representation Theory and how is it used? Oxford Mathematics Research investigates

Oxford Mathematician Karin Erdmann specializes in the areas of algebra known as representation theory (especially modular representation theory) and homological algebra (especially Hochschild cohomology). Here she discusses her latest work.

"Roughly speaking, representation theory investigates how algebraic systems can act on vector spaces. When the vector spaces are finite-dimensional this allows one to explicitly express the elements of the algebraic system by matrices, hence one can exploit linear algebra to study 'abstract' algebraic systems. In this way one can study symmetry, via group actions. One can also study irreversible processes. Algebras and their representations provide a natural frame for this.

An algebra is a ring which is also a vector space such that scalars commute with everything. An important construction is the path algebra: take a directed graph $Q$, which we call a quiver, and a coefficient field $K$. Then the path algebra $KQ$ is the vector space over $K$ with basis all paths in $Q$. This becomes an algebra, where the product of two basis elements is their concatenation if it exists, and zero otherwise.
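As a toy illustration (an editorial sketch with an invented two-vertex quiver, not part of the quoted work), the multiplication rule can be modelled by concatenating composable paths:

```python
# A toy path algebra. The quiver is invented for illustration: vertices
# 1 and 2, with arrows a: 1 -> 2 and b: 2 -> 1. A basis path is a tuple
# of composable arrows; trivial paths at the vertices are omitted.
ARROWS = {"a": (1, 2), "b": (2, 1)}  # arrow name -> (source, target)

def source(path): return ARROWS[path[0]][0]
def target(path): return ARROWS[path[-1]][1]

def multiply(p, q):
    """Product of two basis paths in KQ: their concatenation when the
    end of p matches the start of q, and zero (None) otherwise."""
    return p + q if target(p) == source(q) else None

print(multiply(("a",), ("b",)))  # ('a', 'b'): the path 1 -> 2 -> 1
print(multiply(("a",), ("a",)))  # None: a ends at vertex 2, not at 1
```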

Algebras generalize groups: starting from a group, we naturally obtain an algebra by taking the vector space with basis labelled by the group elements and extending the group multiplication to a ring structure.

When the coefficients are contained in the complex numbers, representations of groups have been studied for a long time, and have many applications. With coefficients in the integers modulo $2$, for example, the algebras and their representations are much harder to understand. For some groups, the representations have 'finite type'; these are well understood. But almost always they have 'infinite type': apart from a few exceptional 'tame' cases, these are usually 'wild', that is, there is no hope of a classification of the representations.

These tame cases occur precisely in modulo 2 arithmetic, when the symmetry is based on dihedral, semidihedral or quaternion 2-groups. Dihedral 2-groups are the symmetries of regular $n$-gons when $n$ is a power of 2. The smallest quaternion group is the famous one discovered by Hamilton.

Viewing these group symmetries in the wider context of algebras made it possible (a while ago) to classify such tame situations. Recently it was discovered that this is part of a much larger universe: one can construct algebras from surface triangulations, among which the algebras from the group setting occur as special cases.

One starts with a surface triangulation, and constructs from this a quiver, that is, a directed graph: replace each edge of the triangulation by a vertex, and for each triangle with edges $a$, $b$, $c$ draw arrows forming a cycle $a \to b \to c \to a$ between the corresponding vertices (with the evident modification for a self-folded triangle, where $a = c \neq b$). At any boundary edge, draw a loop.

For example, consider the triangulation of the torus with two triangles, as shown below. Then there are, up to labelling, two possible orientations of triangles and two possible quivers:

The tetrahedral triangulation of the sphere

gives rise to several quivers, depending on the orientation of each triangle, for example:

 

The crystal in the north wing of the Andrew Wiles Building, home of Oxford Mathematics (pictured above), can be viewed as a triangulation of a surface with boundary. We leave drawing the quiver to the reader.

Starting with the path algebra of such a quiver, we construct algebras by imposing explicit relations, which mimic the triangulation. Although the quiver can be arbitrarily large and complicated, there is an easy description of the algebras. We call these 'weighted surface algebras.' This is joint work with A. Skowronski.

We show that these algebras place group representations in a wider context. The starting point is that (with one exception) the cohomology of a weighted surface algebra is periodic of period four, which means that these algebras generalize group algebras with quaternion symmetry.

The relations which mimic triangles can be degenerated, so that the product of two arrows around a triangle becomes zero in the algebra. This gives rise to many new algebras. When all such relations are degenerated, the resulting algebras are very similar to group algebras with dihedral symmetry. If we degenerate relations around some but not all triangles, we obtain algebras which share properties of group algebras with semidihedral symmetry. Work on these is in progress."

Thursday, 21 June 2018

Tinkering with postulates. How some mathematics is now redundant. Or is it?

At the beginning of the twentieth century, some minor algebraic investigations grabbed the interest of a small group of American mathematicians.  The problems they worked on had little impact at the time, but they may nevertheless have had a subtle effect on the way in which mathematics has been taught over the past century.

The work in question is labelled postulate analysis.  By 1900, several objects of mathematical study had been axiomatised – that is, their important properties had been identified and assembled into self-contained lists of defining conditions (axioms or postulates).  The postulate analysts typically turned their attention to axiomatisations of systems formed by a set of elements subject to a given operation: for example, the integers with respect to addition, or the rational numbers under multiplication, or other more exotic constructions.  Thus, to give an example, suppose that we wanted to find a collection of postulates for the integers with respect to addition.  We would notice, for instance, that addition is associative, i.e., given any three integers a, b, c, it will always be the case that a + (b + c) = (a + b) + c.  We might therefore take the condition of associativity as a defining postulate.  Likewise, the integers are commutative under addition (a + b = b + a, for any integers a, b), so here we have another possible postulate.  The goal is to come up with a list of postulates that completely describes the system in question.

One of the main obsessions of the postulate analysts was to ensure that the sets of postulates that they had to hand were independent.  To continue with the example of the integers under addition, suppose that we have constructed a set of postulates for the integers which includes the associative and commutative conditions noted above, and also includes, among other things, the further condition that a + (b + c) = (b + a) + c.  This latter condition is, however, dependent on the associative and commutative conditions: all we have to do is apply the commutative law to the bracketed part of the right-hand side of the associative law, and we obtain our new condition.  This latter condition is therefore redundant within our collection of postulates, and can safely be dropped.  The postulate analysts experimented with the inclusion of different postulates in order to ensure that their lists were independent.
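Such a dependence can even be checked mechanically. Here is a sketch in Lean (assuming Lean 4 with Mathlib's names for the two laws) deriving the redundant postulate from associativity and commutativity alone:

```lean
import Mathlib.Tactic

-- The "redundant" postulate a + (b + c) = (b + a) + c follows from the
-- other two: regroup with associativity, then swap a and b using
-- commutativity.
example (a b c : ℤ) : a + (b + c) = (b + a) + c := by
  rw [← add_assoc, add_comm a b]
```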

The question that arises immediately is: what is the point?  How does the removal of redundancy affect the overall mathematics?  The answer: it doesn't.  We’ve merely tidied up the mathematics.  The investigations of the postulate analysts had a significant aesthetic element, although they argued also that they were increasing the understanding of the objects with whose postulates they were tinkering.  However, most mathematicians disagreed, and by the middle of the twentieth century, postulate analysis had all but died away.  Many of the prominent postulate analysts are still remembered today, but for other things.  For example, E. V. Huntington of Harvard, the most prolific of the postulate analysts, is now best remembered for his work on voting systems: the method currently used to appoint representatives to the US Congress was devised by him.

Although it doesn't appear to have gone anywhere, there is still value in looking at the work on postulate analysis that appeared briefly at the beginning of the twentieth century.  It reminds us, for example, that mathematics has its fashions, just the same as any other human endeavour.  And it seems also that over the course of the twentieth century some of the basic methods of the postulate analysts found their way into elementary mathematics textbooks: the kind of algebraic manipulations of postulates that they carried out may often be found in preliminary exercises, whose value lies not in their answers but in the methods used to arrive at them.  Such exercises that serve to train students in particular ways of logical thinking are arguably the legacy of the postulate analysts.

A study of the work of the postulate analysts by Oxford Mathematician Christopher Hollings may be found here.

Tuesday, 12 June 2018

Mechanistic models versus machine learning: a fight worth fighting for the biological community?

90% of the world’s data have been generated in the last five years. A small fraction of these data is collected with the aim of validating specific hypotheses. These studies are led by the development of mechanistic models focussed on the causality of input-output relationships. However, the vast majority of the data are aimed at supporting statistical or correlation studies that bypass the need for causality and focus exclusively on prediction.

Along these lines, there has been a vast increase in the use of machine learning models, in particular in the biomedical and clinical sciences, to try to keep pace with the rate of data generation. Recent successes raise the question of whether mechanistic models are still relevant in this area. Why should we try to understand the mechanisms of disease progression when we can use machine learning tools to directly predict disease outcome?

Oxford Mathematician Ruth Baker and Antoine Jerusalem from Oxford's Department of Engineering argue that the research community should embrace the complementary strengths of mechanistic modelling and machine learning approaches to provide, for example, the missing link between patient outcome prediction and the mechanistic understanding of disease progression. The full details of their discussion can be found in Biology Letters.
