News

Saturday, 14 July 2018

The Mathematics of Smoothies - the Dynamics of Particle Chopping in Blenders and Food Processors

Have you ever forgotten to replace the lid of the blender before beginning to puree your mango and passion-fruit smoothie? If you have, you'll have witnessed the catastrophic explosion of fruit and yoghurt flung haphazardly around the kitchen. This is a consequence of the complicated, turbulent fluid dynamics within the machine, the exact behaviour of which is unknown. Sharp, angular blades rotate at extremely high speeds to mix and chop the fruit into a puree consisting of particles that are ideally as small and uniform in size as possible. But which characteristics of the blender are responsible for the outcome? While experimental evidence gives intuition into blade and vessel design, along with operational parameters such as speed and blend time, there is a knowledge gap surrounding the precise impact of these factors on the particle and fluid dynamics.

Oxford Mathematicians Caoimhe Rooney, Ian Griffiths and Colin Please worked with Chuck Brunner, James Potter and Max Wood-Lee from SharkNinja, the company responsible for Nutri Ninja blenders, to understand the chopping dynamics that take place in a blender, with the aim of shedding light on this complex process.

The team derived an integro-differential-equation system, inspired by Becker-Döring and Smoluchowski theory, which provides a predictive model for the resulting size distribution of particles comprising a smoothie after blending an initial mixture of fruits (such as the contents of the blender shown in the figure) for a given amount of time.
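
A schematic example of the kind of equation involved (a standard continuous fragmentation equation, not the team's exact model) is

$$\frac{\partial n(x,t)}{\partial t} = \int_x^{\infty} a(y)\, b(x|y)\, n(y,t)\, \mathrm{d}y - a(x)\, n(x,t),$$

where $n(x,t)$ is the number density of particles of size $x$ at time $t$, $a(x)$ is the rate at which particles of size $x$ are broken by the blade, and $b(x|y)$ describes the distribution of fragment sizes $x$ produced when a particle of size $y$ is chopped. The integral counts particles of size $x$ created by the break-up of larger pieces, while the final term counts those lost when they are themselves chopped; blend speed and blade geometry enter through the rates $a$ and $b$.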

The results of the model were found to agree well with experimental trials performed in-house (see figure). An unexpected result was the emergence of a second peak in the size distribution of chopped pieces. This is attributed to the fact that each time the blade slices through a piece of fruit, some small debris is also formed. The team modified their model to account for this additional feature, which enabled the second peak to be predicted.

The taste and texture of a smoothie is heavily dependent on the size and distribution of the pieces from which it is composed. Given an initial selection of fruit pieces, along with the blend time and blend speed, the model is able to predict how the distribution of particle sizes and the most common piece size changes with time during blending. This provides guidance on the optimal blend time to maximize the taste experience.

The work performed by the team forms a foundation for the exploration and optimization of food blenders. In particular, this work paves the way for understanding the complex interplay between fluid dynamics and chopping within a blender. Ultimately, these models will allow us to determine the precise operating regime that will create the most homogeneous smoothies in the most efficient manner. 

---


The images above feature the Nutri Ninja blender and a comparison between theoretical prediction (red) and experimental data (grey) for the distribution of different particle sizes in a blender after 50 seconds of blending.

Friday, 29 June 2018

Oxford Mathematician Heather Harrington awarded Whitehead Prize

Oxford Mathematician Heather Harrington has been awarded a Whitehead Prize by the London Mathematical Society (LMS) for her outstanding contributions to mathematical biology which have generated new biological insights using novel applications of topological and algebraic techniques. 

In the words of the citation Heather "has made significant advances through the application of ideas originating in pure mathematics to biological problems for which the techniques of traditional applied mathematics are inadequate. This has involved in particular the development of methods in algebraic statistics which allow one to characterize the qualitative behaviour of dynamical systems and networks, adapting approaches from algebraic geometry to test whether a given mathematical model is commensurate with a given set of experimental observations."

Friday, 29 June 2018

What is Representation Theory and how is it used? Oxford Mathematics Research investigates

Oxford Mathematician Karin Erdmann specializes in the areas of algebra known as representation theory (especially modular representation theory) and homological algebra (especially Hochschild cohomology). Here she discusses her latest work.

"Roughly speaking, representation theory investigates how algebraic systems can act on vector spaces. When the vector spaces are finite-dimensional this allows one to explicitly express the elements of the algebraic system by matrices, hence one can exploit linear algebra to study 'abstract' algebraic systems. In this way one can study symmetry, via group actions. One can also study irreversible processes. Algebras and their representations provide a natural frame for this.

An algebra is a ring which is also a vector space such that scalars commute with everything. An important construction is the path algebra: take a directed graph $Q$, which we call a quiver, and a coefficient field $K$. Then the path algebra $KQ$ is the vector space over $K$ with basis all paths in $Q$. This becomes an algebra, where the product of two basis elements is their concatenation if this exists, and is zero otherwise.
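
[As a minimal illustration, not taken from the article: for the quiver $Q\colon 1 \xrightarrow{\alpha} 2 \xrightarrow{\beta} 3$, the path algebra $KQ$ has basis $e_1, e_2, e_3, \alpha, \beta, \beta\alpha$, where $e_i$ denotes the trivial path at vertex $i$. Writing composition from right to left, $\beta \cdot \alpha = \beta\alpha$ because the paths concatenate, whereas $\alpha \cdot \beta = 0$ because they do not; the trivial paths act as local identities, for example $e_2 \cdot \alpha = \alpha = \alpha \cdot e_1$.]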

Algebras generalize groups: if we start with a group, we naturally get an algebra by taking the vector space with basis labelled by the group elements and extending the group multiplication to a ring structure.

When the coefficients are contained in the complex numbers, representations of groups have been studied for a long time, and have many applications. With coefficients in the integers modulo $2$, for example, the algebras and their representations are much harder to understand. For some groups, the representations have 'finite type'; these are well understood, but almost always the representations have 'infinite type'. Apart from a few exceptional 'tame' cases, these are usually 'wild', that is, there is no hope of a classification of the representations.

These tame cases occur precisely for modulo 2 arithmetic, when the symmetry is based on dihedral, semidihedral or quaternion 2-groups. Dihedral 2-groups are the symmetries of regular $n$-gons when $n$ is a power of 2. The smallest quaternion group is the famous one discovered by Hamilton.

Viewing these symmetries from groups in the wider context of algebras was used (a while ago) to classify such tame situations. Recently it was discovered that this is part of a much larger universe. Namely one can construct algebras from surface triangulations, in which the ones from the group setting occur as special cases.

One starts with a surface triangulation and constructs from this a quiver, that is, a directed graph: replace each edge of the triangulation by a vertex, and for each triangle with edges $a$, $b$, $c$ draw a cycle of arrows $a\to b\to c\to a$ between the corresponding vertices (as illustrated in the figure), where in the last case $a=c\neq b$. At any boundary edge, draw a loop.

For example, consider the triangulation of the torus with two triangles, as shown below. Then there are, up to labelling, two possible orientations of the triangles and two possible quivers:

The tetrahedral triangulation of the sphere

gives rise to several quivers, depending on the orientation of each triangle, for example:

 

The crystal in the north wing of the Andrew Wiles Building, home of Oxford Mathematics (pictured above), can be viewed as a triangulation of a surface with boundary. We leave drawing the quiver to the reader.

Starting with the path algebra of such a quiver, we construct algebras by imposing explicit relations, which mimic the triangulation. Although the quiver can be arbitrarily large and complicated, there is an easy description of the algebras. We call these 'weighted surface algebras'. This is joint work with A. Skowronski.

We show that these algebras place group representations in a wider context. The starting point is that (with one exception) the cohomology of a weighted surface algebra is periodic of period four, which means that these algebras generalize group algebras with quaternion symmetry.

The relations which mimic triangles can be degenerated, so that the product of two arrows around a triangle becomes zero in the algebra. This gives rise to many new algebras. When all such relations are degenerated, the resulting algebras are very similar to group algebras with dihedral symmetry. If we degenerate relations around some but not all triangles, we obtain algebras which share properties of group algebras with semidihedral symmetry. Work on these is in progress."

Thursday, 21 June 2018

Tinkering with postulates. How some mathematics is now redundant. Or is it?

At the beginning of the twentieth century, some minor algebraic investigations grabbed the interest of a small group of American mathematicians.  The problems they worked on had little impact at the time, but they may nevertheless have had a subtle effect on the way in which mathematics has been taught over the past century.

The work in question is labelled postulate analysis.  By 1900, several objects of mathematical study had been axiomatised – that is, their important properties had been identified and assembled into self-contained lists of defining conditions (axioms or postulates).  The postulate analysts typically turned their attention to axiomatisations of systems formed by a set of elements subject to a given operation: for example, the integers with respect to addition, or the rational numbers under multiplication, or other more exotic constructions.  Thus, to give an example, suppose that we wanted to find a collection of postulates for the integers with respect to addition.  We would notice, for instance, that addition is associative, i.e., given any three integers a, b, c, it will always be the case that a + (b + c) = (a + b) + c.  We might therefore take the condition of associativity as a defining postulate.  Likewise, the integers are commutative under addition (a + b = b + a, for any integers a, b), so here we have another possible postulate.  The goal is to come up with a list of postulates that completely describes the system in question.

One of the main obsessions of the postulate analysts was to ensure that the sets of postulates that they had to hand were independent.  To continue with the example of the integers under addition, suppose that we have constructed a set of postulates for the integers which includes the associative and commutative conditions noted above, and also includes, among other things, the further condition that a + (b + c) = (b + a) + c.  This latter condition is, however, dependent on the associative and commutative conditions: all we have to do is apply the commutative law to the bracketed part of the right-hand side of the associative law, and we obtain our new condition.  This latter condition is therefore redundant within our collection of postulates, and can safely be dropped.  The postulate analysts experimented with the inclusion of different postulates in order to ensure that their lists were independent.
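
Written out, the dependence amounts to a two-step derivation:

$$a + (b + c) \overset{\text{assoc.}}{=} (a + b) + c \overset{\text{comm.}}{=} (b + a) + c,$$

so the extra condition follows from the other two and adds nothing new.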

The question that arises immediately is: what is the point? How does the removal of redundancy affect the overall mathematics? The answer: it doesn't. We've merely tidied up the mathematics. The investigations of the postulate analysts had a significant aesthetic element, although they argued also that they were increasing the understanding of the objects with whose postulates they were tinkering. However, most mathematicians disagreed, and by the middle of the twentieth century, postulate analysis had all but died away. Many of the prominent postulate analysts are still remembered today, but for other things. For example, E. V. Huntington of Harvard, the most prolific of the postulate analysts, is now best remembered for his work on voting systems: the method currently used to apportion seats in the US House of Representatives was devised by him.

Although it doesn't appear to have gone anywhere, there is still value in looking at the work on postulate analysis that appeared briefly at the beginning of the twentieth century.  It reminds us, for example, that mathematics has its fashions, just the same as any other human endeavour.  And it seems also that over the course of the twentieth century some of the basic methods of the postulate analysts found their way into elementary mathematics textbooks: the kind of algebraic manipulations of postulates that they carried out may often be found in preliminary exercises, whose value lies not in their answers but in the methods used to arrive at them.  Such exercises that serve to train students in particular ways of logical thinking are arguably the legacy of the postulate analysts.

A study of the work of the postulate analysts by Oxford Mathematician Christopher Hollings may be found here.

Tuesday, 12 June 2018

Mechanistic models versus machine learning: a fight worth fighting for the biological community?

90% of the world’s data have been generated in the last five years. A small fraction of these data is collected with the aim of validating specific hypotheses. These studies are led by the development of mechanistic models focussed on the causality of input-output relationships. However, the vast majority of the data are aimed at supporting statistical or correlation studies that bypass the need for causality and focus exclusively on prediction.

Along these lines, there has been a vast increase in the use of machine learning models, in particular in the biomedical and clinical sciences, to try to keep pace with the rate of data generation. Recent successes now raise the question of whether mechanistic models are still relevant in this area. Why should we try to understand the mechanisms of disease progression when we can use machine learning tools to predict disease outcome directly?

Oxford Mathematician Ruth Baker and Antoine Jerusalem from Oxford's Department of Engineering argue that the research community should embrace the complementary strengths of mechanistic modelling and machine learning approaches to provide, for example, the missing link between patient outcome prediction and the mechanistic understanding of disease progression. The full details of their discussion can be found in Biology Letters.

Tuesday, 5 June 2018

Lifting the curse of ill-posedness by hybridization

At the beginning of the 20th century, Jacques Hadamard gave the definition of well-posed problems, with a view to classifying “correct” mathematical models of physical phenomena. Three criteria should be fulfilled: a solution exists, the solution is unique, and the solution depends continuously on the parameters.

This point of view has led to the foundation of the functional analysis approach to the resolution of partial differential equations, which was developed during the last century and is commonly accepted nowadays. The continuous dependence on the parameters means that a small change will not affect the outcome noticeably. Its immediate consequence is that inferring cause from effect, that is, parameters from measurement, is an ill-posed problem – the opposite of a well-posed problem.

Elliptic boundary value problems, such as the conduction equation, are well-posed problems. The voltage potential depends continuously on the imposed boundary condition, and even analytically (the smoothest possible dependence) on the conductivity parameter of the material inside the object. Electrical Impedance Tomography is the corresponding ill-posed problem. A current is imposed on the surface of the object to be imaged, and the resulting voltage potential on the surface of the object is measured. The target parameter to be reconstructed is the internal impedance. It is an appealing imaging method, as it is non-invasive, non-destructive, and the equipment required is inexpensive. But this problem is severely ill-posed: a small error on the measurements can result in a very large error on the reconstructed impedance.
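
As a toy numerical illustration of this kind of error amplification (a generic ill-conditioned inverse problem, not a model of EIT itself), suppose the "measurements" are obtained from the unknown parameters by a smoothing operator; then even a minuscule measurement error can ruin a naive reconstruction:

```python
import numpy as np

# Forward map: a smoothing (averaging) operator, loosely analogous to the way
# internal material properties are blurred into boundary measurements.
n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix

x_true = np.ones(n)     # the "true" parameters we would like to recover
d_exact = A @ x_true    # ideal, noise-free measurements

# Perturb the measurements by a relative error of roughly 1e-8.
rng = np.random.default_rng(0)
d_noisy = d_exact + 1e-8 * rng.standard_normal(n)

x_rec = np.linalg.solve(A, d_noisy)  # naive reconstruction from noisy data

print("condition number of A:  ", np.linalg.cond(A))
print("relative data error:    ", np.linalg.norm(d_noisy - d_exact) / np.linalg.norm(d_exact))
print("relative solution error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
# The reconstruction error is many orders of magnitude larger than the data error;
# extra internal information (as in hybrid methods) or regularisation is needed to tame it.
```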

To address this difficulty, the so-called hybrid imaging methods have been developed. In Acousto-Electric Tomography, one creates internal perturbations of the impedance of the medium to be imaged by an acoustic pulse. Because these perturbations are controlled, they provide additional data that reduce very significantly the ill-posedness of the problem. In a recent book, Yves Capdeboscq and his former Oxford student Giovanni Alberti explore the mathematics of these hybrid problems and the corresponding reconstruction methods. They show that this problem has deep connections to the Rado-Kneser-Choquet Theorem for harmonic mappings in the complex plane, and how structures such as Antoine's necklace create additional difficulties in three dimensions. Quasi-static approximations prove unhelpful in such a case. But time is on our side.

Thursday, 31 May 2018

Sir Andrew Wiles appointed as the first Regius Professor of Mathematics at Oxford

Oxford mathematician Sir Andrew Wiles, renowned for his proof of Fermat’s Last Theorem, has been appointed by Her Majesty the Queen to be Oxford’s first Regius Professor of Mathematics.

The Regius Professorship – a rare, sovereign-granted title – was bestowed on Oxford’s Mathematical Institute as part of the Queen’s 90th birthday celebrations. It is the first Regius Professorship awarded to Oxford since 1842.

Sir Andrew is one of the world’s most celebrated mathematicians. In 2016 he was awarded the highest honour in mathematics, the Abel Prize, for his stunning proof of Fermat’s Last Theorem, a conundrum that had stumped mankind for 350 years. In recognition of this transformative work, he was also awarded the Copley Medal, the Royal Society’s oldest and most prestigious award.

Professor Louise Richardson, Vice-Chancellor of Oxford University, said: ‘I know my colleagues join me in offering our warmest congratulations to Sir Andrew on being named Oxford’s newest Regius Professor. It is a fitting recognition of his outstanding contribution to the field of mathematics.’

Professor Martin Bridson, Head of Oxford’s Mathematical Institute, said: ‘The award of the Regius Professorship to Oxford recognised both our pre-eminence in fundamental research and the enormous benefits that flow to society from mathematics.

‘It is entirely fitting that the first holder of this Professorship should be Sir Andrew Wiles. Nobody exemplifies the relentless pursuit of mathematical understanding in the service of mankind better than him. His dedication to solving problems that have defied mankind for centuries, and the stunning beauty of his solutions to these problems, provide a beacon to inspire and sustain everyone who wrestles with the fundamental challenges of mathematics and the world around us. We are immensely proud to have Andrew as a colleague at the Mathematical Institute in Oxford.’

Sir Andrew, who will remain the Royal Society Research Professor of Mathematics at Oxford and a Fellow of Merton College, dedicated much of his early career to solving Fermat’s Last Theorem. First formulated by the French mathematician Pierre de Fermat in 1637, the theorem states:

There are no whole number solutions to the equation $x^n + y^n = z^n$ when $n$ is greater than 2, unless $xyz=0$.

Fermat himself claimed to have found a proof for the theorem but said that the margin of the text he was making notes on was not wide enough to contain it. Sir Andrew first became fascinated with the problem as a boy, and after years of intense private study at Princeton University, he announced he had found a proof in 1993, combining three complex mathematical fields – modular forms, elliptic curves and Galois representations.

The Norwegian Academy of Science and Letters, which presents the Abel Prize, said in its citation that ‘few results have as rich a mathematical history and as dramatic a proof as Fermat’s Last Theorem’. The proof has subsequently opened up new fields of inquiry and approaches to mathematics, and Sir Andrew himself continues to pursue his fascination with the subject. In his current research he is developing new ideas in the context of the Langlands Program, a set of far-reaching and influential conjectures connecting number theory to algebraic geometry and the theory of automorphic forms.

The new Regius Professorship in mathematics was one of a dozen announced by the government to celebrate the increasingly important role of academic research in driving growth and improving productivity during Queen Elizabeth II’s reign. The creation of Regius Professorships falls under the Royal Prerogative, and each appointment is approved by the monarch on ministerial advice.

Sir Andrew’s father, Maurice Wiles, was Regius Professor of Divinity at Oxford from 1970 to 1991.

You can watch Sir Andrew's Oxford Mathematics London Public Lecture and interview with Hannah Fry here.

Tuesday, 29 May 2018

Wytham Woods Photography Exhibition in the Andrew Wiles Building - Celebrating 75 Years of Science

If you are ever in the centre of Oxford and are getting tired of the endless beautiful buildings, then make your way to Wytham Woods. Covering 1000 acres of ancient and beautiful woodland 3 miles NW of Oxford, Wytham is exceptionally rich in flora and fauna, with over 500 species of plants, a wealth of woodland habitats, and 800 species of butterflies and moths. And it is so wonderfully peaceful.

But if you don't make it down to the woods today, you'll notice that Wytham Woods has come to Dunsinane (aka the Andrew Wiles Building, home to Oxford Mathematics). Wytham are celebrating 75 years of scientific research with a photographic exhibition on the Mezzanine level. You are very welcome.

Tuesday, 22 May 2018

Alchemy on a Saturday night & Sunday morning - Oxford and UCL mathematicians go mad after midnight in a search for Newton's baldness cure

It is a little known (and entirely untrue) fact that Isaac Newton's alchemical investigations led him to a formula for a potion to cure baldness. Ten mathematicians from Oxford and UCL spent Saturday night (and Sunday morning) running around central London solving puzzles and gathering clues and ingredients to recreate this potion, before a pedalo race across the Serpentine to present a vial of the wonder cure to the President of the Royal Society.

This wasn't just for fun (although it was certainly enormously enjoyable): the event was raising funds for the charity Raise Your Hands. The other teams, comprising the City's finest, were vying for a trophy, and the Oxford and UCL team of academics (aka 'Crackers') was there to set the benchmark, which they achieved, coming in as honorary winners within minutes of the trophy winners from Cantab Capital.

The puzzle hunt was inspired by a similar charity fundraising event in New York. Teams started at Banking Hall at 20.00 hours on Saturday night, and the first teams reached the finish line at the Serpentine Sackler Gallery in Hyde Park just after 08.30 on Sunday morning. Each team had to collect three sets of alchemical information: from the criminal leader of the Chain Gang on the 7th floor of an NCP car park, from a celebrity businessman on level 32 of the Gherkin, and from a chemistry professor standing next to a statue of a goat in Spitalfields. 

All of which led the team to a street corner a few minutes from Tower Bridge where they had to assemble a marble run to decode the alchemical information into a list of ingredients. Three team members went off for a boat trip from Tower Pier, which led to a sequence of puzzles on the south side of the river. A quick call to criminal mastermind Ray then led the team to Newspaper Charlie on Tower Bridge, who was persuaded to hand over a newspaper containing clues to several more locations where the team found ingredients as diverse as apple pip oil and slumber dust. 

To get the butterfly tears, team members had to locate a group of mime artists on the South Bank near Waterloo Bridge and learn to mime, then race to Waterloo Vaults to add Newton's family crest to the walls of graffiti. Meanwhile other members of the team were getting apples in Borough Market, and solving a murder in a boxing club using a code written in blood on towels. No ordinary Saturday night.

From their various locations round London, the team gathered at the Institution of Engineering and Technology, to help police solve the mystery of the theft of Newton's flask. Having solved many further puzzles and pieced together the clues, the team identified the culprit and found the flask in her locker, along with the fourth and final piece of a puzzle cube, which revealed that a lab and chemist would be found at the Institute of Contemporary Arts.

Cue a dash along to Carlton House Terrace, where the chemist combined the ingredients to produce a spectacular reaction and a long snaking coil of baldness-curing foam.  The team thought they'd finished, but instead had to take a vial of the potion along to the President of the Royal Society, who was fishing on the Serpentine.  Other teams were reaching the lake at a similar time, and there was a nail-biting pedalo race to the President to collect a certificate.  Then a final dash on a Santander bike to the Sackler Gallery, where the team found one last challenge: conduct an orchestra in a rendition of Bizet's Habanera, complete with violin solo.

Time for breakfast, while the other teams found their way to the finish line.  As though that wasn't exhausting enough, even after leaving to head home, the group on the train back to Oxford were working on filling in the details of one of the puzzles. The ultimate 'completer-finishers'.

This was an amazing experience, seeing London from a whole new perspective, with a diverse collection of ingenious puzzles, immersive theatre, and stamina and fitness elements - and all in a good cause. The Raise Your Hands fundraising page is still open. 

The Crackers team was captained by Oxford Prof Jon Chapman, and also featured Head of IT Waldemar Schlackow (Oxford), faculty members Ian Hewitt (Oxford), Vicky Neale (Oxford) and Karen Page (UCL), and graduate students James Aaronson (Oxford), Ed Goldsmith (UCL), Momchil Konstantinov (UCL), Johnny Nicholson (UCL) and Spike Smith (Oxford). 

Some of the team members are disappointed to confirm that Newton's potion does not in fact cure baldness.

Friday, 18 May 2018

Using mathematical modelling to identify future diagnoses of Alzheimer's disease

Oxford Mathematician Paul Moore talks about his application of mathematical tools to identify who will be affected by Alzheimer's.

"Alzheimer's disease is a brain disorder which progressively affects cognition and results in an impairment in the ability to perform daily activities.  It is the most common form of dementia in older people affecting about 6% of the population aged over 65 and it increases in incidence with age. The initial stage of Alzheimer's disease is characterised by memory loss, and this is the usual presenting symptom. 

Psychiatrists would like to predict which individuals will develop the condition, both for selecting participants for clinical trials and for finding which variables are used in prediction, because this gives insights into the disease process. These variables might be individual characteristics such as age and genetic status, or the results of brain scans and cognitive tests. The graph shows some time plots of scaled brain volumes from successive MRI scans of an individual who has Alzheimer's disease. The whole brain volume is shown as the blue markers, the hippocampus is marked in red and the entorhinal cortex in yellow. The diamonds at the foot of the graph represent diagnosis points, where the red diamonds are a diagnosis of Alzheimer's disease. The trend in time seems to be downwards, but this feature might also be found in many healthy people as they age. So our research question is: can we distinguish the changes of relative brain volumes in people who are healthy from those who will subsequently be diagnosed with Alzheimer's disease?

One possibility is to put the data points directly into a deep learning method like a neural network. This approach might give accurate predictions, but it would not be easy to see which variables are important and how they change with respect to each other. The method we use is to think of the way the variables change against each other over time as a path in Euclidean space and to characterise that path as a vector which uniquely identifies it. This path signature was introduced by K. T. Chen in 1958 and has recently proved to be highly successful in machine learning applications. It generates interpretable features and it can distinguish the time ordering of events: whether variable $a$ or variable $b$ changes value first. Our results show that the hippocampus is shrinking abnormally fast in people who are subsequently diagnosed with Alzheimer’s disease - a finding that is already known from clinical research. We are now expanding the number of brain regions that we investigate to improve the accuracy of our models and to learn more about the underlying process of this deadly disease."
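
As a small illustration of the construction (a hand-rolled sketch under simplifying assumptions, not the code or data used in this study), the level-one and level-two terms of the signature of a piecewise-linear path can be computed directly, and they already capture which of two variables changes first:

```python
import numpy as np

def signature_level2(points):
    """Level-1 and level-2 signature terms of the piecewise-linear path
    through `points` (array of shape (n_points, n_channels)).

    Level 1 is the total increment of each channel; level 2 holds the iterated
    integrals S[i, j], whose antisymmetric part records which channel moved first.
    """
    pts = np.asarray(points, dtype=float)
    increments = np.diff(pts, axis=0)      # one increment per linear segment
    level1 = increments.sum(axis=0)

    d = pts.shape[1]
    level2 = np.zeros((d, d))
    running = np.zeros(d)                  # displacement accumulated before the current segment
    for delta in increments:
        level2 += np.outer(running, delta) + 0.5 * np.outer(delta, delta)
        running += delta
    return level1, level2

# Two toy two-channel "trajectories" with the same start and end points:
# in path_a channel 0 changes before channel 1; in path_b the order is reversed.
path_a = [[0, 0], [1, 0], [1, 1]]
path_b = [[0, 0], [0, 1], [1, 1]]

for name, path in [("a", path_a), ("b", path_b)]:
    lvl1, lvl2 = signature_level2(path)
    print(name, "level 1:", lvl1, " S[0,1]:", lvl2[0, 1], " S[1,0]:", lvl2[1, 0])
# Both paths share the same level-1 signature, but their level-2 terms differ,
# so the signature distinguishes the order in which the channels changed.
```

Packages such as iisignature and esig compute signatures to arbitrary depth; the hand-rolled version above is only intended to make the construction concrete.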
