Forthcoming events in this series


Fri, 15 Dec 2017

10:00 - 11:00
L3

Interpreting non-invasive measurement of markers of diseases including diabetes and Alzheimer’s

Dan Daly
(Lein Applied Diagnostics)
Abstract

Lein Applied Diagnostics has a novel optical measurement technique that is used to measure various parameters in the body for medical applications.

Two particular areas of interest are non-invasive glucose measurement for diabetes care and the diagnosis of diabetes. Both measurements are based on the eye and involve collecting complex data sets and modelling their links to the desired parameter.

Taking non-invasive glucose measurement as an example, we have two data sets: the measurements from the eye and the gold-standard blood glucose readings. The goal is to build a model that calculates the glucose level from the eye data alone (plus a calibration parameter for the individual). The eye data consist of measurements of apparent corneal thickness, anterior chamber depth and optical axis orientation, all of which are altered by the change in refractive index caused by a change in glucose level. They therefore correlate with glucose as required, but they are also subject to noise, since these parameters also change with factors such as alignment to the meter. The aim is a model that extracts the information we need while using the additional parameter data to discount the noise factors and thereby improve accuracy.
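A minimal sketch of the kind of starting-point model this suggests (not Lein's actual method): regress glucose on simulated eye-derived features while including a nuisance covariate for alignment, so its effect is discounted. All names and values are illustrative assumptions.

```python
import numpy as np

# Hypothetical example: estimate glucose from eye-derived features while
# regressing out a nuisance covariate (alignment to the meter).
rng = np.random.default_rng(0)
n = 200
glucose = rng.uniform(4, 12, n)            # mmol/L, simulated ground truth
alignment = rng.normal(0, 1, n)            # nuisance factor

# Simulated eye measurements: respond to glucose but also to alignment.
corneal_thickness = 0.02 * glucose + 0.05 * alignment + rng.normal(0, 0.01, n)
chamber_depth     = 0.03 * glucose - 0.04 * alignment + rng.normal(0, 0.01, n)

# Design matrix includes the nuisance covariate so its effect is discounted,
# plus an intercept standing in for a per-individual calibration parameter.
X = np.column_stack([corneal_thickness, chamber_depth, alignment, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, glucose, rcond=None)

pred = X @ coef
print("RMSE (mmol/L):", np.sqrt(np.mean((pred - glucose) ** 2)))
```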

Fri, 17 Nov 2017

10:00 - 11:00
L3

Call Routing Optimisation

Jonathan Welton
(Vodafone)
Abstract

The costs to Vodafone of calls terminating on other networks – especially fixed networks – are largely determined by the termination charges levied by other telecoms operators.  We interconnect to several other telecoms operators, who charge differently; within one interconnect operator, costs vary depending on which of their switching centres we deliver calls to, and what the terminating phone number is.  So, while these termination costs depend partly on factors that we cannot control (such as the number called, the call duration and the time of day), they are also influenced by some factors that we can control.  In particular, we can route calls within our network before handing them over from our network to the other telecoms operator; where this “handover” occurs has an impact on termination cost.  
Vodafone would like to develop a repeatable capability to assess call-delivery cost efficiency, identify where network routing changes could improve matters, and produce traffic growth forecasts.
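As a toy illustration of the underlying optimisation (not Vodafone's actual data or process), the sketch below picks, for each destination prefix, the cheapest handover point and costs a traffic forecast against it. Operators, centres, rates and volumes are all invented; a realistic model would add capacity constraints and internal routing costs.

```python
# Hypothetical sketch: choose, for each destination prefix, the handover
# (interconnect operator, switching centre) that minimises termination cost.
termination_cost = {
    # (operator, switching_centre, destination_prefix): pence per minute
    ("OpA", "London", "01865"): 0.45,
    ("OpA", "Leeds",  "01865"): 0.52,
    ("OpB", "London", "01865"): 0.48,
    ("OpA", "London", "0113"):  0.50,
    ("OpB", "Leeds",  "0113"):  0.41,
}

forecast_minutes = {"01865": 1.2e6, "0113": 0.8e6}  # illustrative traffic forecast

def cheapest_handover(prefix):
    options = {(op, sc): c for (op, sc, p), c in termination_cost.items() if p == prefix}
    return min(options.items(), key=lambda kv: kv[1])

total = 0.0
for prefix, minutes in forecast_minutes.items():
    (op, sc), rate = cheapest_handover(prefix)
    total += rate * minutes
    print(f"{prefix}: hand over to {op} at {sc} ({rate}p/min)")
print(f"Forecast termination cost: £{total/100:,.0f}")
```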

Fri, 03 Nov 2017

10:00 - 11:00
L3

Service optimisation and decision making in railway traffic management

Graham Scott
(Resonate)
Abstract

Railway traffic management combines monitoring the progress of trains, forecasting their likely future progression, and evaluating the impact of intervention options in near real time, in order to make traffic adjustments that minimise the combined delay of trains measured against the planned timetable.

In a time of increasing demand for rail travel, the desire to maximise the usage of the available infrastructure capacity competes with the need for contingency space to allow traffic management when disruption occurs. Optimisation algorithms and decision support tools therefore need to be increasingly sophisticated and traffic management has become a crucial function in meeting the growing expectations of rail travellers for punctuality and quality of service.

Resonate is a technology company specialising in rail and connected transport solutions. We have embarked on a drive to maximise capacity and performance through mathematical, statistical, data-driven and machine-learning methods that underpin decision support and automated traffic management solutions.
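A toy sketch of the "evaluate intervention options" step described above (illustrative only, not Resonate's algorithms): two trains share a single-track section, and the two possible orderings are compared by combined delay against the timetable. All figures are invented.

```python
# Toy sketch of evaluating intervention options at a junction: two trains
# share a single-track section, and we compare the two possible orders by the
# combined delay against the planned timetable. Figures are illustrative.
trains = [
    # (name, arrival_at_junction_min, scheduled_exit_min, run_time_through_section_min)
    ("1A01", 10.0, 16.0, 5.0),
    ("2C07", 11.0, 15.0, 3.0),
]

def combined_delay(order):
    section_free_at = 0.0
    total = 0.0
    for name, arrival, scheduled_exit, run_time in order:
        start = max(arrival, section_free_at)
        exit_time = start + run_time
        section_free_at = exit_time
        total += max(0.0, exit_time - scheduled_exit)
    return total

options = {"keep booked order": trains, "swap order": trains[::-1]}
best = min(options, key=lambda k: combined_delay(options[k]))
for k, order in options.items():
    print(k, "-> combined delay:", combined_delay(order), "min")
print("Recommended adjustment:", best)
```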

Fri, 27 Oct 2017

10:00 - 11:00
L3

Challenges in the optimisation of warehouse efficiency

Padraig Regan
(StayLinked)
Abstract

In certain business environments, it is essential to the success of the business that workers stick closely to their plans and are not distracted, diverted or stopped. A warehouse is a great example of this for businesses where customers order goods online and the merchants commit to delivery dates. In a warehouse, somewhere, a team of workers is scheduled to pick the items which will make up those orders and get them shipped on time. If the workers do not deliver to plan, then orders will not be shipped on time, reputations will be damaged, customers will be lost and companies will go out of business.

StayLinked builds software which measures what these warehouse workers do and the factors which cause them to be distracted, diverted or stopped. We measure whenever they start or end a task or process (e.g. start an order, pick an item in an order, complete an order). Some of the influencing factors we measure include the way the worker interacts with the device (using keyboard, scanner or gesture), navigates through the application (screens 1-3-4-2 instead of 1-2-3-4), the performance of the battery (a dead battery stops work), the performance of the network (connected to an access point or not, high or low latency), the device types being used, device form factor, physical location (warehouse 1, warehouse 2), profile of worker, etc.

We are seeking to build a configurable real-time mathematical model which will allow us to take all these factors into account and confidently demonstrate a measure of their impact (positive or negative) on the business process and therefore on the worker’s productivity. We also want to alert operational staff as soon as we can identify that important events have happened.  These alerts can then be quickly acted upon and problems resolved at the earliest possible opportunity.
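One very simple sketch of this idea, under invented assumptions: fit a linear model of task duration on a few of the factors named above to estimate their impact, then alert on tasks far above prediction. The feature names mirror the abstract but the data are simulated, and this is not StayLinked's model.

```python
import numpy as np

# Hypothetical sketch: estimate the impact of measured factors on task time
# with a linear model, then alert when a new observation is anomalous.
rng = np.random.default_rng(1)
n = 500
low_battery   = rng.integers(0, 2, n)
high_latency  = rng.integers(0, 2, n)
used_scanner  = rng.integers(0, 2, n)       # vs keyboard entry

# Simulated task durations (seconds): scanner helps, battery/latency hurt.
duration = 30 + 12*low_battery + 8*high_latency - 6*used_scanner + rng.normal(0, 4, n)

X = np.column_stack([low_battery, high_latency, used_scanner, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, duration, rcond=None)
print("Estimated impact per factor (s):", dict(zip(
    ["low_battery", "high_latency", "used_scanner", "baseline"], coef.round(1))))

# Simple real-time alert: flag tasks far above the model's prediction.
residuals = duration - X @ coef
threshold = 3 * residuals.std()
print("Tasks to alert on:", int((residuals > threshold).sum()))
```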

In this project, we would like to collaborate with the maths faculty to understand the appropriate mathematical techniques and tools to use to build this functionality.  This product is being used right now by our customers so it would also be a great opportunity for a student to quickly see the results of their work in action in a real-world environment.

Fri, 09 Jun 2017

10:00 - 11:00
L4

Some mathematical problems in data science of interest to NPL

Stephane Chretien
(National Physical Laboratory)
Abstract

The National Physical Laboratory is the UK's national measurement institute. Researchers in the Data Science Division analyse various types of data using mathematical, statistical and machine learning based methods. The goal of the workshop is to describe a set of exciting mathematical problems that are of interest to NPL and, more generally, to the Data Science community. In particular, I will describe the problem of clustering using minimum spanning trees (MST-Clustering), Non-Negative Matrix Factorisation (NMF), adaptive Compressed Sensing (CS) for tomography, and sparse polynomial chaos expansion (PCE) for parametrised PDEs.
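For the first of these problems, here is a minimal sketch of one standard variant of MST-based clustering (not necessarily the formulation NPL has in mind): build the minimum spanning tree of the pairwise-distance graph, cut the longest edges, and read off the connected components. The data are simulated.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import squareform, pdist

# MST-clustering sketch: cut the k-1 heaviest MST edges to get k clusters.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 0.3, (20, 2)),
                    rng.normal(3, 0.3, (20, 2)),
                    rng.normal((0, 3), 0.3, (20, 2))])

k = 3
dist = squareform(pdist(points))
mst = minimum_spanning_tree(dist).toarray()

# Remove the k-1 heaviest MST edges to split the tree into k clusters.
edges = np.argwhere(mst > 0)
weights = mst[mst > 0]
for idx in np.argsort(weights)[-(k - 1):]:
    i, j = edges[idx]
    mst[i, j] = 0

n_comp, labels = connected_components(mst, directed=False)
print("clusters found:", n_comp, "sizes:", np.bincount(labels))
```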

Fri, 19 May 2017

10:00 - 11:00
L4

Neutron reflection from mineral surfaces: Through thick and thin

Stuart Clarke
(BP Institute at Cambridge University)
Abstract

Conventional neutron reflection is a very powerful tool to characterise surfactants, polymers and other materials at the solid/liquid and air/liquid interfaces. Usually the analysis considers molecular layers with coherent addition of reflected waves that give the resultant reflected intensity. In this short workshop talk I will illustrate recent developments in this approach to address a wide variety of challenges of academic and commercial interest. Specifically I will introduce the challenges of using substrates that are thick on the coherence lengthscale of the radiation and the issues this brings to the structural analysis. I also invite the audience to consider whether there may be some mathematical analysis that might lead us to exploit this incoherence to optimise our analysis, in particular by facilitating the removal of the 'background substrate contribution' to help us focus on the adsorbed layers of most interest.

Fri, 05 May 2017

10:00 - 11:00
L4

The Mathematics of Liquid Crystals for Interdisciplinary Applications

Apala Majumdar
(University of Bath)
Abstract

Liquid crystals are classical examples of mesophases: materials that are intermediate in character between conventional solids and liquids. There are different classes of liquid crystals, and we focus on the simplest and most widely used, nematic liquid crystals. Nematic liquid crystals are, simply put, anisotropic liquids with distinguished directions, and are the working material of choice for the multi-billion-dollar liquid crystal display industry. In this workshop, we briefly review the mathematical theories for nematic liquid crystals, the modelling framework and some recent work on modelling experiments on confined liquid crystalline systems conducted by the Aarts Group (Chemistry, Oxford) and experiments on nematic microfluidics by Anupam Sengupta (ETH Zurich). This is joint work with Alexander Lewis, Peter Howell, Dirk Aarts, Ian Griffiths, Maria Crespo Moya and Angel Ramos.
We conclude with a brief overview of new experiments on smectic liquid crystals in the Aarts laboratory and questions related to the recycling of liquid crystal displays originating from informal discussions with Votechnik (a company dealing with automated recycling technologies, http://votechnik.com/).

Fri, 03 Mar 2017

10:00 - 11:00
L4

Predictions for Roads

Steve Hilditch
(Thales)
Abstract

Road travel is taking longer each year in the UK. This has been true for the last four years. Travel times have increased by 4% in the last two years. Applying the principal finding of the Eddington Report 2006, this change over the last two years will cost the UK economy an additional £2bn per year going forward, even without further deterioration. Longer travel times are matched by greater unreliability of travel times.

Knowing demand and road capacity, can we predict travel times?

We will look briefly at previous partial solutions and the abundance of motorway data in the UK. Can we make a breakthrough to achieve real-time predictions?
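One classical partial solution of the kind referred to above is a volume-delay function linking demand, capacity and travel time. The sketch below uses the BPR (Bureau of Public Roads) form with its conventional default parameters; the numbers are illustrative, and real-time motorway prediction would need far richer models.

```python
# Hedged sketch: BPR volume-delay function for link travel time.
def bpr_travel_time(free_flow_minutes, volume, capacity, alpha=0.15, beta=4.0):
    """Travel time grows with the volume/capacity ratio."""
    return free_flow_minutes * (1.0 + alpha * (volume / capacity) ** beta)

# Illustrative numbers only: a 10-minute free-flow link at 90% of capacity.
print(round(bpr_travel_time(10.0, volume=5400, capacity=6000), 2), "minutes")
```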

Fri, 09 Dec 2016

10:00 - 11:00
L2

Towards a drive-through wheel alignment system

Alex Codd
(WheelRight)
Abstract

As part of a suite of products that provide a drive-through vehicle tyre inspection system, an assessment of wheel alignment would be useful to drivers in maintaining their vehicles and reducing tyre wear. The current method of assessing wheel alignment involves fitting equipment to the tyre and making the assessment within a garage environment.

The challenge is to develop a technique that can be used in the roadway with no equipment fitted to the vehicle.  The WheelRight equipment is already capturing images of tyres from both  front and side views.  Pressure sensors in the roadway also allow a tyre pressure footprint to be created.  Using the existing data to interpret the alignment of the wheels on each axle is a preferred way forward.

Fri, 02 Dec 2016

10:00 - 11:00
L4

Modelling Aspects of Hotel Recommendation Systems

Christian Sommeregger & Wen Wong
(hotels.com (Expedia))
Abstract

Hotels.com is one of the world's leading accommodation booking websites, featuring an inventory of around 300,000 hotels and hundreds of millions of users. A crucial part of our business is to act as an agent between these two sides of the market, thus reducing search costs and information asymmetries to enable our visitors to find the right hotel in the most efficient way.

From this point of view selling hotels is one large recommendation challenge: given a set of items and a set of observed choices/ratings, identify a user's preference profile. Over recent years this particular problem has been intensively studied by a strongly interdisciplinary field based on ideas from choice theory, linear algebra, statistics, computer science and machine learning. This pluralism is reflected in the broad array of techniques used in today's industry applications, e.g. collaborative filtering, matrix factorization, graph-based algorithms, decision trees or generalized linear models.
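As a small illustration of one of the techniques listed above (matrix factorization, shown here with a toy alternating-least-squares loop; this is not hotels.com's system), the sketch below factorises a tiny user-item rating matrix and ranks unseen items for one user. The ratings are invented.

```python
import numpy as np

# Toy matrix factorisation by alternating least squares.
rng = np.random.default_rng(0)
R = np.array([[5, 4, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)      # 0 = not observed
mask = R > 0
k, lam = 2, 0.1
U = rng.normal(size=(R.shape[0], k))
V = rng.normal(size=(R.shape[1], k))

for _ in range(50):
    for i in range(R.shape[0]):                 # update user factors
        Vi = V[mask[i]]
        U[i] = np.linalg.solve(Vi.T @ Vi + lam*np.eye(k), Vi.T @ R[i, mask[i]])
    for j in range(R.shape[1]):                 # update item factors
        Uj = U[mask[:, j]]
        V[j] = np.linalg.solve(Uj.T @ Uj + lam*np.eye(k), Uj.T @ R[mask[:, j], j])

scores = U @ V.T
user = 1
unseen = np.where(~mask[user])[0]
print("recommend hotel", unseen[np.argmax(scores[user, unseen])], "to user", user)
```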

The aim of this workshop is twofold.

Firstly we want to give some insight into the statistical modelling techniques and assumptions employed at hotels.com, the practical challenges one has to face when designing a flexible and scalable recommender system and potential gaps between current research and real-world applications.

Secondly we are going to consider some more advanced questions around (1) learning to rank from partial/incomplete feedback, (2) dealing with selection-bias correction, and (3) how econometrics and behavioral theory (e.g. Luce, Kahneman/Tversky) can be used to complement existing techniques.


Fri, 25 Nov 2016

10:00 - 11:00
L4

Planning and interpreting measurements of the decay of chemicals in soil

Paul Sweeney
(Syngenta)
Abstract

Environmental risk assessments for chemicals in the EU rely heavily upon modelled estimates of potential concentrations in soil and water. A key parameter used by these models is the degradation of the chemical in soil, which is derived from a kinetic fitting of laboratory data using standard fitting routines. Several different types of kinetic can be represented, such as Simple First Order (SFO), Double First Order in Parallel (DFOP), and First Order Multi-Compartment (FOMC). The choice of a particular kinetic and the selection of a representative degradation rate can have a huge influence on the outcome of the risk assessment. This selection is made from laboratory data that are subject to experimental error. It is known that the combination of small errors in time and concentration can in certain cases have an impact upon the goodness of fit and the kinetic predicted by fitting software. Syngenta currently spends in the region of £4m per annum on laboratory studies to support registration of chemicals in the EU, and the outcome of the kinetic assessment can adversely affect the potential registrability of chemicals having sales of several million pounds. We would therefore like to understand the sensitivities involved with kinetic fitting of laboratory studies. The aim is to provide guidelines for the conduct and fitting of laboratory data so that the correct kinetic and degradation rate of chemicals in environmental risk assessments are used.
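A minimal sketch of the fitting step described above for the SFO case, C(t) = C0·exp(-kt), with simulated (not Syngenta) data. Repeating such a fit under small perturbations of time and concentration is one simple way to probe the sensitivities the abstract describes.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit Simple First Order (SFO) kinetics to noisy soil-degradation data
# and report the DT50. Data values are simulated for illustration only.
def sfo(t, c0, k):
    return c0 * np.exp(-k * t)

t_days = np.array([0, 3, 7, 14, 28, 56, 100], dtype=float)
true_c0, true_k = 100.0, 0.05
rng = np.random.default_rng(0)
measured = sfo(t_days, true_c0, true_k) + rng.normal(0, 2, t_days.size)

(c0_hat, k_hat), cov = curve_fit(sfo, t_days, measured, p0=(100.0, 0.1))
dt50 = np.log(2) / k_hat
print(f"fitted k = {k_hat:.3f} /day, DT50 = {dt50:.1f} days")
print("std errors:", np.sqrt(np.diag(cov)).round(3))
```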

Fri, 11 Nov 2016

10:00 - 11:00
L4

The "surfactantless" middle phase

Harry McEvoy
(dstl)
Abstract

Dstl are interested in removing liquid contaminants from capillary features (cracks in surfaces, screw threads etc.). We speculated that liquid decontaminants with low surface tension would have beneficial properties. The colloid literature, and in particular the oil recovery literature, discusses the properties of multiphase systems in terms of “Winsor types”, typically consisting of “brine” (water + electrolyte), “oil” (non-polar, water-insoluble solvent) and surfactant. Winsor I systems are oil-in-water microemulsions and Winsor II systems are water-in-oil microemulsions. Under certain circumstances, the mixture will separate into three phases. The middle (Winsor III) phase is surfactant-rich, and is reported to exhibit ultra-low surface tension. The glycol ethers (“Cellosolve” type solvents) consist of short (3-4) linked ether groups attached to short (3-4 carbon) alkyl chains. Although these materials would not normally be considered to be surfactants, their polar head, non-polar tail properties allow them to form a “surfactantless” Winsor III middle phase. We have found that small changes in temperature, electrolyte concentration or addition of contaminant can cause these novel colloids to phase separate. In our decontamination experiments, we have observed that contaminant-induced phase separation takes the form of droplets of the separating phase. These droplets are highly mobile, exhibiting behaviour that is visually similar to Brownian motion, which induces somewhat turbulent liquid currents in the vicinity of the contaminant. We tentatively attribute this behaviour to the Marangoni effect. We present our work as an interesting physics/physical chemistry phenomenon that should be suitable for mathematical analysis.

Fri, 04 Nov 2016

10:00 - 11:00
L4

Advanced Medical Imaging Reconstruction Using Distributed X-ray Sources

Gil Travish
(Adaptix Imaging)
Abstract

Currently all medical x-ray imaging is performed using point-like sources which produce cone or fan beams. In planar radiology the source is fixed relative to the patient and detector array, and therefore only 2D images can be produced. In CT imaging, the source and detector are rotated about the patient and, through reconstruction (such as Radon methods), a 3D image can be formed. In tomosynthesis, a limited range of angles is captured, which greatly reduces the complexity and cost of the device and the dose exposure to the patient while largely preserving the clinical utility of the 3D images. Conventional tomosynthesis relies on mechanically moving a source about a fixed trajectory (e.g. an arc) and capturing multiple images along that path. Adaptix is developing a fixed source with an electronically addressable array that allows for a motion-free tomosynthesis system. The Adaptix approach has many advantages, including reduced cost, portability, angular information acquired in 2D, and the ability to shape the radiation field (by selectively activating only certain emitters).
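For orientation, here is a sketch of shift-and-add tomosynthesis, one of the simplest reconstruction approaches (not Adaptix's algorithm): projections from emitters at different lateral offsets are shifted in proportion to the plane height and averaged, bringing one plane into focus while blurring others. The geometry, units (1 pixel per mm) and random data are illustrative assumptions.

```python
import numpy as np

# Illustrative shift-and-add tomosynthesis reconstruction.
def shift_and_add(projections, emitter_offsets_mm, plane_height, source_height):
    recon = np.zeros_like(projections[0], dtype=float)
    for proj, offset in zip(projections, emitter_offsets_mm):
        # Parallax shift (in pixels, assuming 1 px per mm) of a feature at
        # this plane height for this emitter position.
        shift = int(round(offset * plane_height / (source_height - plane_height)))
        recon += np.roll(proj, shift, axis=1)
    return recon / len(projections)

# Tiny synthetic example: 5 projections of a 32x32 field, emitters 20 mm apart.
rng = np.random.default_rng(0)
projections = [rng.poisson(10, (32, 32)).astype(float) for _ in range(5)]
offsets_mm = [-40, -20, 0, 20, 40]
slice_at_50mm = shift_and_add(projections, offsets_mm, plane_height=50, source_height=500)
print(slice_at_50mm.shape)
```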


The proposed work would examine the effects of patient motion and apply suitable corrections to the image reconstruction (or raw data). Many approaches have been considered in the literature for motion correction, and only some of these may be of use in tomosynthesis. The study will consider which approaches are optimal, and apply them to the present geometry.


A related but perhaps distinct area of investigation is the use of “structured light” techniques to encode the x-rays and extract additional information from the imaging. Most conventional structured light approaches are not suitable for transmissive operation nor for the limited control available in x-rays. Selection of appropriate techniques and algorithms, however, could prove very powerful and yield new ways of performing medical imaging.


Adaptix is a start-up based at the Begbroke Centre for Innovation and Enterprise. Adaptix is transforming planar X-ray – the diagnostic imaging modality most widely used in healthcare worldwide. We are adding low-dose 3D capability – digital tomosynthesis – to planar X-ray while making it more affordable and truly portable, so radiology can more easily travel to the patient. This transformation will enhance patients' access to the world's most important imaging technologies and is likely to increase the diagnostic accuracy for many high-incidence conditions such as cardiovascular and pulmonary diseases, lung cancer and osteoporosis.

Fri, 28 Oct 2016

10:00 - 11:00
L4

Feasibility projection for vibrational and damping constraints of turbines

Ulrich Ehehalt
(Siemens P & G)
Abstract

The challenge is to develop an automated process that transforms an initial desired design of turbine rotor and blades into a close approximation having eigenfrequencies that avoid the operating frequency (and its first harmonic) of the turbine.
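A toy sketch of the feasibility check implied above (not Siemens' workflow): compute the eigenfrequencies of a lumped mass-spring stand-in for the rotor and flag any that fall within an exclusion band around the operating frequency and its first harmonic. All parameters are illustrative assumptions.

```python
import numpy as np

# Toy lumped mass-spring chain standing in for a rotor model.
n = 6                                   # lumped masses along the rotor
m, k = 10.0, 5.0e6                      # kg, N/m (illustrative values)
M = np.eye(n) * m
K = np.zeros((n, n))
for i in range(n):                      # simple chain of springs
    K[i, i] = 2 * k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k

eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
freqs_hz = np.sort(np.sqrt(np.abs(eigvals))) / (2 * np.pi)

operating_hz = 50.0
margin = 0.05                           # keep 5% clear of 1x and 2x
for f in freqs_hz:
    for target in (operating_hz, 2 * operating_hz):
        if abs(f - target) / target < margin:
            print(f"eigenfrequency {f:.1f} Hz violates the band around {target} Hz")
print("eigenfrequencies (Hz):", freqs_hz.round(1))
```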

Fri, 17 Jun 2016

10:00 - 11:00
L5

Reconstructing effective signalling networks in T cells

Omer Dushek
(Sir William Dunn School of Pathology)
Abstract

T cells are important white blood cells that continually circulate in the body in search of the molecular signatures ('antigens') of infection and cancer. We (and many other labs) are trying to construct models of the T cell signalling network that can be used to predict how ligand binding (at the surface of the cell) controls gene expression (in the nucleus). To do this, we stimulate T cells with various ligands (input) and measure products of gene expression (output), and then try to determine which model must be invoked to explain the data. The challenges that we face are 1) finding unique models and 2) scaling the method to many different inputs and outputs.
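A minimal sketch of the model-discrimination idea (the models and data here are invented, not the lab's actual network): fit two candidate ODE models, with and without negative feedback, to dose-response data and compare their residuals.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Illustrative dose-response data (simulated).
doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
observed = np.array([0.08, 0.20, 0.45, 0.60, 0.65])

def simulate(params, dose, feedback):
    k_on, k_out, k_fb = params
    def rhs(t, y):
        active, output = y
        inhibition = 1.0 / (1.0 + k_fb * output) if feedback else 1.0
        return [k_on * dose * inhibition - active,
                k_out * active - 0.1 * output]
    sol = solve_ivp(rhs, (0, 50), [0.0, 0.0], t_eval=[50])
    return sol.y[1, -1]

def residuals(params, feedback):
    return [simulate(params, d, feedback) - o for d, o in zip(doses, observed)]

for feedback in (False, True):
    fit = least_squares(residuals, x0=[1.0, 0.5, 1.0], args=(feedback,),
                        bounds=(0, np.inf))
    print("feedback" if feedback else "no feedback",
          "-> sum of squares:", round(2 * fit.cost, 4))
```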

Fri, 10 Jun 2016

10:00 - 11:00
L4

Occurrence detection, correlation and classification among large numbers of time series

Alexander Denev
(Markit)
Abstract

Markit is a leading global provider of financial information services. We provide products that enhance transparency, reduce risk and improve operational efficiency.

We wish to find ways to automatically detect and label ‘extreme’ occurrences in a time series such as structural breaks, nonlinearities, and spikes (i.e. outliers). We hope to detect these occurrences in the levels, returns and volatility of a time series or any other transformation of it (e.g. moving average).

We also want to look for the same types of occurrences in the multivariate case in a set of time series, through measures such as correlations, eigenvalues of the covariance matrix, etc. The number of time series involved is of the order of 3x10^6.

We wish to explain the appearance of an ‘extreme’ occurrence or a cluster of occurrences endogenously, as an event conditional on the values of the time series in the set, both contemporaneously and/or as conditional on their time lags.

Furthermore, we would like to classify the events that caused the occurrence in some major categories, if found e.g. shock to oil supply, general risk aversion, migrations etc. both algorithmically and by allowing human corrective judgement (which could become the basis for supervised learning).
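A minimal illustration of one such 'extreme occurrence' detector (spikes only; structural breaks, volatility regimes and the multivariate measures above would need further machinery, and this is not Markit's method): flag points in a return series whose rolling z-score exceeds a threshold. The data are simulated.

```python
import numpy as np

# Rolling z-score spike detection on a simulated return series.
rng = np.random.default_rng(0)
returns = rng.normal(0, 1.0, 500)
returns[200] += 8.0                      # inject an outlier
returns[400] -= 7.0

window, threshold = 50, 4.0
spikes = []
for t in range(window, len(returns)):
    hist = returns[t - window:t]
    z = (returns[t] - hist.mean()) / hist.std()
    if abs(z) > threshold:
        spikes.append((t, round(z, 1)))
print("detected spikes (index, z-score):", spikes)
```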

Fri, 03 Jun 2016

10:00 - 11:00
L4

Unanticipated interaction loops involving autonomous systems

James Sutherland
(Thales Security and Consulting)
Abstract

We are entering a world where unmanned vehicles will be common. They have the potential to dramatically decrease the cost of services whilst simultaneously increasing the safety record of whole industries.

Autonomous technologies will, by their very nature, shift decision making responsibility from individual humans to technology systems. The 2010 Flash Crash showed how such systems can create rare (but not inconceivably rare) and highly destructive positive feedback loops which can severely disrupt a sector.

In the case of Unmanned Air Systems (UAS), how might similar effects obstruct the development of the commercial UAS industry? Is it conceivable that, like the high-frequency trading industry at the heart of the Flash Crash, the algorithms we provide to UAS to enable autonomy could decrease the risk of small incidents whilst increasing the risk of severe accidents? And if so, what is the relationship between the probability and consequence of incidents?

Fri, 27 May 2016
10:00
L4

Mathematical models of genome replication

Conrad Nieduszynski
(Sir William Dunn School of Pathology)
Abstract

We aim to determine how cells faithfully complete genome replication. Accurate and complete genome replication is essential for all life. A single DNA replication error in a single cell division can give rise to a genomic disorder. However, almost all experimental data are ensemble data, collected from millions of cells. We used a combination of high-resolution, genome-wide DNA replication data, mathematical modelling and single cell experiments to demonstrate that ensemble data mask the significant heterogeneity present within a cell population; see [1-4]. Therefore, the pattern of replication origin usage and the dynamics of genome replication in individual cells remain largely unknown. We are now developing cutting-edge single molecule methods and allied mathematical models to determine the dynamics of genome replication at the DNA sequence level in normal and perturbed human cells.

[1] de Moura et al., 2010, Nucleic Acids Research, 38: 5623-5633

[2] Retkute et al., 2011, PRL, 107:068103

[3] Retkute et al., 2012, PRE, 86:031916

[4] Hawkins et al., 2013, Cell Reports, 5:1132-41

Fri, 06 May 2016

10:00 - 11:00
L4

Probabilistic Time Series Forecasting: Challenges and Opportunities

Siddharth Arora
(Mathematical Institute)
Abstract

Over the years, nonlinear and nonparametric models have attracted a great deal of attention. This is mainly because most time series arising from the real world exhibit nonlinear behaviour, whereas nonparametric models, in principle, do not make strong prior assumptions about the true functional form of the underlying data generating process.


In this workshop, we will focus on the use of nonlinear and nonparametric modelling approaches for time series forecasting, and discuss the need and implications of accurate forecasts for informed policy and decision-making. Crucially, we will discuss some of the major challenges (and potential solutions) in probabilistic time series forecasting, with emphasis on: (1) Modelling in the presence of regime shifts, (2) Effect of model over-fitting on out-of-sample forecast accuracy, and, (3) Importance of using naïve benchmarks and different performance scores for model comparison. We will discuss the applications of different modelling approaches for: Macroeconomics (US GNP), Energy (electricity consumption recorded via smart meters), and Healthcare (remote detection of disease symptoms).
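A small illustration of points (2) and (3) above, on simulated data: an over-fitted model can look impressive in-sample yet lose badly out-of-sample to a naive seasonal benchmark. This is a didactic toy, not one of the talk's case studies.

```python
import numpy as np

# Simulated monthly series with trend and seasonality.
rng = np.random.default_rng(0)
t = np.arange(120)
y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)
train, test = y[:96], y[96:]

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Naive seasonal benchmark: repeat the value from 12 steps earlier.
naive_fc = y[96 - 12:120 - 12]

# Over-fitted model: high-degree polynomial in time fitted to the training set.
coeffs = np.polyfit(t[:96], train, deg=15)
poly_fc = np.polyval(coeffs, t[96:])

print("naive RMSE:", round(rmse(naive_fc, test), 2))
print("degree-15 polynomial RMSE:", round(rmse(poly_fc, test), 2))
```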

Fri, 04 Mar 2016

10:00 - 11:00
L4

Fault prediction from time series data

Mike Newman
(Thales)
Abstract

On the railway network, for example, there is a large base of installed equipment with a useful life of many years.  This equipment has condition monitoring that can flag a fault when a measured parameter goes outside the permitted range.  If we can use existing measurements to predict when this would occur, preventative maintenance could be targeted more effectively and faults reduced.  As an example, we will consider the current supplied to a points motor as a function of time in each operational cycle.
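One very simple sketch of this idea, under invented assumptions: summarise each operating cycle of the points motor by its peak current, then extrapolate the trend of that feature to estimate when it will cross the permitted limit. Real condition-monitoring data would be far noisier and richer.

```python
import numpy as np

# Feature trend extrapolation for predictive maintenance (illustrative only).
rng = np.random.default_rng(0)
cycles = np.arange(200)
# Peak current per operating cycle, drifting upwards as the mechanism degrades.
peak_current = 4.0 + 0.004 * cycles + rng.normal(0, 0.1, cycles.size)
limit = 5.2  # amps, illustrative fault threshold

slope, intercept = np.polyfit(cycles, peak_current, deg=1)
cycles_to_limit = (limit - intercept) / slope
print(f"trend: {slope:.4f} A/cycle; predicted limit crossing at cycle "
      f"{cycles_to_limit:.0f} (currently at cycle {cycles[-1]})")
```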

Fri, 26 Feb 2016

10:00 - 11:00
L4

Ionic liquids - a challenge to our understanding of the liquid state

Susan Perkin
(Department of Chemistry)
Abstract
Ionic liquids are salts, composed solely of positive and negative ions, which are liquid under ambient conditions. Despite an increasing range of successful applications, there remain fundamental challenges in understanding the intermolecular forces and propagation of fields in ionic liquids. 
I am an experimental scientist, and in my laboratory we study thin films of liquids. The aim is to discover their molecular and surface interactions and fluid properties in confinement. In this talk I will describe the experiments and show some results which have led to better understanding of ionic liquids. I will then show some measurements which currently have no understanding attached! 
Fri, 29 Jan 2016

10:00 - 11:00
L4

Causal Calculus and Actionable Associations in Market-Basket Data

Marco Brambilla
(dunnhumby)
Abstract

“Market-Basket (MB) and Household (HH) data provide a fertile substrate for the inference of association between marketing activity (e.g.: prices, promotions, advertisement, etc.) and customer behaviour (e.g.: customers driven to a store, specific product purchases, joint product purchases, etc.). The main aspect of MB and HH data which makes them suitable for this type of inference is the large number of variables of interest they contain at a granularity that is fit for purpose (e.g.: which items are bought together, at what frequency are items bought by a specific household, etc.).

A large number of methods are available to researchers and practitioners to infer meaningful networks of associations between variables of interest (e.g.: Bayesian networks, association rules, etc.). Inferred associations arise from applying statistical inference to the data. In order to use statistical association (correlation) to support an inference of causal association (“which is driving which”), an explicit theory of causality is needed.

Such a theory of causality can be used to design experiments and analyse the resultant data; in such a context certain statistical associations can be interpreted as evidence of causal associations.

On observational data (as opposed to experimental), the link between statistical and causal associations is less straightforward and it requires a theory of causality which is formal enough to support an appropriate calculus (e.g.: do-calculus) of counterfactuals and networks of causation.

My talk will be focused on providing retail analytic problems which may motivate an interest in exploring causal calculi’s potential benefits and challenges.”
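As a small illustration of the gap between statistical and causal association on observational data discussed above (a textbook backdoor adjustment, not dunnhumby's methodology): a confounder drives both promotion exposure and purchase, and stratifying on it recovers the causal effect. All numbers are simulated.

```python
import numpy as np

# Backdoor adjustment on simulated market-basket-style data.
rng = np.random.default_rng(0)
n = 100_000
affluent = rng.binomial(1, 0.4, n)
promo = rng.binomial(1, 0.2 + 0.5 * affluent)          # affluent see more promos
purchase = rng.binomial(1, 0.1 + 0.05 * promo + 0.3 * affluent)

naive = purchase[promo == 1].mean() - purchase[promo == 0].mean()

# Backdoor adjustment: average the within-stratum contrasts over P(affluent).
adjusted = 0.0
for a in (0, 1):
    stratum = affluent == a
    effect = (purchase[stratum & (promo == 1)].mean()
              - purchase[stratum & (promo == 0)].mean())
    adjusted += effect * stratum.mean()

print(f"naive association:   {naive:.3f}")
print(f"backdoor-adjusted:   {adjusted:.3f}  (true effect 0.05)")
```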

Fri, 04 Dec 2015

10:00 - 11:00
L4

Analysis of images in multidimensional single molecule microscopy

Michael Hirsch
(STFC Rutherford Appleton Laboratory)
Abstract

Multidimensional single molecule microscopy (MSMM) generates image time series of biomolecules in a cellular environment that have been tagged with fluorescent labels. Initial analysis steps of such images consist of image registration of multiple channels, feature detection and single particle tracking. Further analysis may involve the estimation of diffusion rates, the measurement of separations between molecules that are not optically resolved, and more. The analysis is done under the condition of poor signal-to-noise ratios, high density of features and other adverse conditions. Pushing the boundary of what is measurable, we face, among others, the following challenges: firstly, the correct assessment of the uncertainties and the significance of the results; secondly, the fast and reliable identification of those features and tracks that fulfil the assumptions of the models used. Simpler models require more rigid preconditions and therefore limit the usable data, while more complex models are theoretically and especially computationally challenging.
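For one of the downstream steps named above, here is a minimal sketch of estimating a diffusion coefficient from a single-particle track via the mean squared displacement, MSD(tau) = 4*D*tau for 2D Brownian motion. The track is simulated; real tracks would need the uncertainty and model-validity checks the abstract emphasises.

```python
import numpy as np

# Diffusion estimation from a simulated 2D single-particle track.
rng = np.random.default_rng(0)
dt, d_true, n_steps = 0.02, 0.5, 1000          # s, um^2/s, steps
steps = rng.normal(0, np.sqrt(2 * d_true * dt), (n_steps, 2))
track = np.cumsum(steps, axis=0)               # simulated 2D trajectory

max_lag = 20
lags = np.arange(1, max_lag + 1)
msd = np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                for lag in lags])

# Fit MSD = 4*D*tau through the origin.
tau = lags * dt
d_est = np.sum(msd * tau) / (4 * np.sum(tau ** 2))
print(f"estimated D = {d_est:.3f} um^2/s (true {d_true})")
```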

Fri, 20 Nov 2015

10:00 - 11:00
L4

More accurate optical measurements

Graeme Clark
(Lein)
Abstract

Lein’s confocal systems make accurate and precise measurements in many different applications. In applications where the object under test introduces variability and/or optical aberrations to the optical signal, the accuracy and precision may deteriorate. This technical challenge looks for mathematical solutions to improve the accuracy and precision of measurements made in such circumstances.

The presentation will outline the confocal principle, show “perfect” signals, give details of how we analyse such signals, then move on to less perfect signals and the effects on measurement accuracy and precision.
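As a minimal stand-in for the signal analysis described above (the peak shape, noise and distortion here are simulated, not real Lein data): locate the centre of a confocal response peak in a noisy signal by a nonlinear peak fit, and report the uncertainty of the estimated position.

```python
import numpy as np
from scipy.optimize import curve_fit

# Locate the centre of a noisy confocal-style peak.
def peak(z, amp, centre, width, offset):
    return amp * np.exp(-((z - centre) / width) ** 2) + offset

z = np.linspace(0, 100, 500)                    # scan position, arbitrary units
rng = np.random.default_rng(0)
signal = peak(z, 1.0, 42.3, 5.0, 0.05) + rng.normal(0, 0.03, z.size)

p0 = [signal.max(), z[np.argmax(signal)], 5.0, 0.0]
params, cov = curve_fit(peak, z, signal, p0=p0)
print(f"estimated peak centre: {params[1]:.2f} (truth 42.30), "
      f"std error {np.sqrt(cov[1, 1]):.3f}")
```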

Fri, 13 Nov 2015

10:00 - 11:00
L4

Exploitation of the parareal algorithm in divertor physics simulations

Debasmita Samaddar
(Culham Center for Fusion Energy (CCFE))
Abstract

Parallelizing the time domain in numerical simulations is non-intuitive, but has been proven possible using various algorithms such as parareal, PFASST and RIDC. Temporal parallelization adds an entire new dimension in which to parallelize and significantly enhances the use of supercomputing resources. Exploiting this technique is a big step towards exascale computation.

Starting with relatively simple problems, the parareal algorithm (Lions et al., A "parareal" in time discretization of PDEs, 2001) has been successfully applied to various complex simulations in the last few years (Samaddar et al., Parallelization in time of numerical simulations of fully-developed plasma turbulence using the parareal algorithm, 2010). The algorithm involves a predictor-corrector technique.
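For readers unfamiliar with the predictor-corrector structure, here is a minimal parareal sketch for the scalar ODE dy/dt = -y (illustrative only; SOLPS applies the same idea to far more complex plasma-edge physics). The coarse solver is one Euler step per time slice, the fine solver many Euler steps; in practice the fine solves over each slice run in parallel.

```python
import numpy as np

# Minimal parareal iteration for dy/dt = lam*y.
lam, T, n_slices, y0 = -1.0, 5.0, 10, 1.0
dt = T / n_slices

def coarse(y, dt):
    return y + dt * lam * y

def fine(y, dt, substeps=100):
    h = dt / substeps
    for _ in range(substeps):
        y = y + h * lam * y
    return y

# Initial prediction with the coarse solver alone.
U = np.zeros(n_slices + 1)
U[0] = y0
for n in range(n_slices):
    U[n + 1] = coarse(U[n], dt)

# Parareal iterations: correct the coarse prediction with fine-solver results
# (the fine solves over each slice are independent and parallelisable).
for k in range(5):
    F = np.array([fine(U[n], dt) for n in range(n_slices)])
    U_new = np.zeros_like(U)
    U_new[0] = y0
    for n in range(n_slices):
        U_new[n + 1] = coarse(U_new[n], dt) + F[n] - coarse(U[n], dt)
    U = U_new

print("parareal end value:", U[-1], "exact:", np.exp(lam * T))
```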

Numerical studies of the edge of magnetically confined, fusion plasma are an extremely challenging task. The complexity of the physics in this regime is particularly increased due to the presence of neutrals as well as the interaction of the plasma with the wall. These simulations are extremely computationally intensive but are key to rapidly achieving thermonuclear breakeven on ITER-like machines.

The SOLPS code package (Schneider et al., Plasma Edge Physics with B2‐Eirene, 2006) is widely used in the fusion community and has been used to design the ITER divertor. A reduction of the wallclock time for this code has been a long-standing goal, and recent studies have shown that a computational speed-up greater than 10 is possible for SOLPS (Samaddar et al., Greater than 10x acceleration of fusion plasma edge simulations using the Parareal algorithm, 2014), which is highly significant for a code with this level of complexity.

In this project, the aim is to explore a variety of cases of relevance to ITER and thus involving more complex physics to study the feasibility of the algorithm. Since the success of the parareal algorithm heavily relies on choosing the optimum coarse solver as a predictor, the project will involve studying various options for this purpose. The tasks will also include performing scaling studies to optimize the use of computing resources yielding maximum possible computational gain.

Fri, 06 Nov 2015

10:00 - 11:00
L4

(1) Fluid and particle dynamics in blenders and food processors; (2) Filter surface optimisation for maximising peak air power of vacuum cleaners; (3) Fluid system models for drip coffee makers

Chuck Brunner
(Sharkninja)
Abstract

Blenders and food processors have been around for years. However, the detailed fluid and particle dynamics within the multi-phase flow of the processing chamber, as well as the influence of variables such as the vessel geometry, blade geometry, speeds and surface properties, are not well understood. SharkNinja would like Oxford University's help in developing a model that can be used to gain insight into the fluid dynamics within the food processing chamber, with the goal of developing a system that produces better food processing performance as well as predicting loading on food processing elements to enable data-driven product design.

Many vacuum cleaners sold claim “no loss of suction” which is defined as having only a very small reduction in peak air power output over the life of the unit under normal operating conditions.  This is commonly achieved by having a high efficiency cyclonic separator combined with a filter which the user washes at regular intervals (typically every 3 months).  It has been observed that some vacuum cleaners show an increase in peak air watts output after a small amount of dust is deposited on the filter.  This effect is beneficial since it prolongs the time between filter washing.  SharkNinja are currently working on validating their theory as to why this occurs.  SharkNinja would like Oxford University’s help in developing a model that can be used to better understand this effect and provide insight towards optimizing future designs.

Although a very simple system from a construction standpoint, creating a drip coffee maker that can produce a range of coffee sizes, from a single cup to a multi-cup carafe, presents unique problems. Challenges within this system result from varying pressure heads on the inlet side, accurate measurement of relatively low flow rates, fluid motive force generated by boilers, and head above the boiler on the outlet side. Getting all of these parameters right to deliver the proper strength, temperature and volume of coffee requires an in-depth understanding of the fluid dynamics involved in the system. An ideal outcome from this work would be an adaptive model that enables a fluid system model to be created from building blocks. This system model would include component models for tubing, boilers, flow meters, filters, pumps, check valves, and the like.
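A hedged sketch of the "building blocks" idea: represent each component by a simple pressure-drop law and solve for the steady flow through a series path. The component laws and numbers are illustrative placeholders, not validated SharkNinja models; a real system model would need two-phase effects, transients and calibrated losses.

```python
from scipy.optimize import brentq

# Composable component models: each returns a pressure drop (Pa) for a flow q (m^3/s).
def tube(resistance):
    return lambda q: resistance * q            # laminar: dp proportional to q

def flow_meter(resistance):
    return lambda q: resistance * q ** 2       # quadratic loss

def check_valve(cracking_pressure, resistance):
    return lambda q: cracking_pressure + resistance * q

components = [tube(2.0e9), flow_meter(5.0e12), check_valve(500.0, 1.0e9)]
boiler_head_pa = 8.0e3                         # driving pressure from the boiler

def residual(q):
    # Net pressure balance: driving head minus the sum of component drops.
    return boiler_head_pa - sum(drop(q) for drop in components)

q = brentq(residual, 1e-9, 1e-2)               # m^3/s
print(f"steady flow: {q*6e7:.1f} mL/min")
```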

Fri, 19 Jun 2015
11:30
L5

iceCAM project with G's-Fresh

Alasdair Craighead
(G's-Fresh)
Abstract

G's Growers supply salad and vegetable crops throughout the UK and Europe, primarily as a direct supplier to supermarkets. We are currently working on a project to improve the availability of Iceberg Lettuce throughout the year, as this has historically been a very volatile crop. It is also by far the highest-volume crop that we produce, with typical sales in the summer season of about 3m heads per week.

In order to continue to grow our business we must maintain continuous supply to the supermarkets. Our current method for achieving this is to grow more crop than we will actually harvest. We then aim to sell the extra crop on the wholesale markets rather than ploughing it back in, and we reduce the volume offered to those markets when availability is tight.

We currently use a relatively simple computer Heat Unit model to help predict availability; however, we know that this is not the full picture. In order to improve our position we have started the IceCAM project (Iceberg Crop Adaptive Model), which has three aims.

  1. Forecast crop availability spikes and troughs and use this to have better planting programmes from the start of the season.
  2. Identify the growth stages of Iceberg to measure more accurately whether crop is ahead or behind expectation when it is physically examined in the field.
  3. The final utopian aim would be to match the market so that in times of general shortage, when prices are high, we have sufficient crop to meet all of our supermarket customer requirements and still have spare to sell onto the markets to benefit from the higher prices. Equally, when there is a general surplus, we would only look to have sufficient to supply the primary customer base.

We believe that statistical mathematics can help us to solve these problems!!
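For context, a heat unit (growing degree day) model of the kind mentioned above can be sketched as below: accumulate daily degree days above a base temperature and predict harvest as the day the crop's heat-unit requirement is reached. The base temperature, requirement and weather series here are illustrative assumptions, not G's calibration.

```python
import numpy as np

# Growing-degree-day accumulation for a simulated growing season.
rng = np.random.default_rng(0)
daily_mean_temp = 12 + 6 * np.sin(np.linspace(0, np.pi, 120)) + rng.normal(0, 2, 120)

base_temp = 4.0          # degC, assumed base temperature for growth
requirement = 900.0      # accumulated degree days needed from planting to harvest

degree_days = np.maximum(daily_mean_temp - base_temp, 0.0)
cumulative = np.cumsum(degree_days)
harvest_day = int(np.argmax(cumulative >= requirement))
print("predicted days from planting to harvest:", harvest_day)
```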

Fri, 19 Jun 2015

10:00 - 11:00
L5

Toward a Higher-Order Accurate Computational Flume Facility for Understanding Wave-Current-Structure Interaction

Chris Kees
(USAERDC)
Abstract

Accurate simulation of coastal and hydraulic structures is challenging due to a range of complex processes such as turbulent air-water flow and breaking waves. Many engineering studies are based on scale models in laboratory flumes, which are often expensive and insufficient for fully exploring these complex processes. To extend the physical laboratory facility, the US Army Engineer Research and Development Center has developed a computational flume capability for this class of problems. I will discuss the turbulent air-water flow model equations, which govern the computational flume, and the order-independent, unstructured finite element discretization on which our implementation is based. Results from our air-water verification and validation test set, which is being developed along with the computational flume, demonstrate the ability of the computational flume to predict the target phenomena, but the test results and our experience developing the computational flume suggest that significant improvements in accuracy, efficiency, and robustness may be obtained by incorporating recent improvements in numerical methods.

Key Words:

Multiphase flow, Navier-Stokes, level set methods, finite element methods, water waves

Fri, 12 Jun 2015

10:00 - 11:00
L5

A recommendation system for journey planning

Darren Price
(Thales)
Abstract

A recommendation system for multi-modal journey planning could be useful to travellers in making their journeys more efficient and pleasant, and to transport operators in encouraging travellers to make more effective use of infrastructure capacity.

Journeys will have multiple quantifiable attributes (e.g. time, cost, likelihood of getting a seat) and other attributes that we might infer indirectly (e.g. a pleasant view).  Individual travellers will have different preferences that will affect the most appropriate recommendations.  The recommendation system might build profiles for travellers, quantifying their preferences.  These could be inferred indirectly, based on the information they provide, choices they make and feedback they give.  These profiles might then be used to compare and rank different travel options.
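One minimal sketch of the profile-based ranking idea described above (attributes, weights and options are invented placeholders, not Thales' design): score each journey option by a weighted sum of its quantified attributes, with weights taken from the traveller's inferred preference profile.

```python
# Profile-weighted ranking of journey options (illustrative only).
options = [
    {"name": "train direct", "time_min": 55, "cost_gbp": 24.0, "seat_prob": 0.9},
    {"name": "coach",        "time_min": 95, "cost_gbp": 9.0,  "seat_prob": 0.99},
    {"name": "train + bus",  "time_min": 70, "cost_gbp": 15.0, "seat_prob": 0.6},
]

# Inferred preference profile: negative weights penalise time and cost.
profile = {"time_min": -0.03, "cost_gbp": -0.05, "seat_prob": 1.5}

def score(option):
    return sum(weight * option[attr] for attr, weight in profile.items())

for option in sorted(options, key=score, reverse=True):
    print(f"{option['name']:12s} score {score(option):+.2f}")
```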

Fri, 29 May 2015

10:00 - 11:00
L5

Continuum mechanics, uncertainty management, and the derivation of numerical modelling schemes in the area of hydrocarbon resources generation, expulsion and migration over the history of a basin

Steve Daum
(PDS Production Enterprise)
Abstract

Classically, basin modelling is undertaken with very little a priori knowledge. Alongside the challenge of improving the general fidelity and utility of the modelling systems is the challenge of constraining these systems with unknowns and uncertainties in such a way that models (and derived simulation results) can be readily regenerated/re-evaluated in the light of new empirical data obtained during the course of exploration, development and production activities.

Fri, 20 Mar 2015

10:00 - 11:00
L6

Saint-Gobain

Paul Leplay
Abstract

For this workshop, we have identified two subjects of interest to us in the field of particle technology. The first, wet granulation, is a size-enlargement process that converts small-diameter solid particles (typically powders) into larger-diameter agglomerates of a specific size. The second, the mechanical centrifugal air classifier, is employed when the particle size that needs to be separated is too fine to screen.
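As a first-order illustration of the classifier problem (a standard Stokes-law cut-size estimate, not Saint-Gobain's model; all numbers are assumptions): balance the centrifugal force on a particle at the classifier wheel against the inward drag from the radial air flow.

```python
import math

# Stokes-law cut-size estimate for a centrifugal air classifier.
mu = 1.8e-5        # Pa.s, air viscosity
rho_p = 2500.0     # kg/m^3, particle density
r = 0.10           # m, classification radius
v_t = 30.0         # m/s, tangential (wheel) speed at radius r
v_r = 1.5          # m/s, inward radial air velocity

# Balance: 3*pi*mu*d*v_r = (pi/6)*d^3*rho_p*v_t^2/r
# =>  d50 = sqrt(18*mu*v_r*r / (rho_p*v_t^2))
d50 = math.sqrt(18 * mu * v_r * r / (rho_p * v_t ** 2))
print(f"estimated cut size: {d50*1e6:.1f} micrometres")
```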