Forthcoming events in this series


Fri, 28 Oct 2016

10:00 - 11:00
L4

Feasibility projection for vibrational and damping constraints of turbines

Ulrich Ehehalt
(Siemens P & G)
Abstract

The challenge is to develop an automated process that transforms an initial desired design of turbine rotor and blades into a close approximation whose eigenfrequencies avoid the operating frequency (and its first harmonic) of the turbine.
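
As a toy illustration of the constraint being checked, the sketch below computes the natural frequencies of a hypothetical lumped-mass rotor model and tests their separation from the operating frequency and its first harmonic; the matrices, speed and margin are invented, not Siemens data.

    # Toy eigenfrequency-clearance check for a lumped-mass rotor model.
    # All matrices, speeds and margins are illustrative assumptions.
    import numpy as np
    from scipy.linalg import eigh

    M = np.diag([120.0, 150.0, 120.0])            # kg (hypothetical 3-DOF model)
    K = np.array([[ 4.0e7, -2.0e7,  0.0  ],
                  [-2.0e7,  4.0e7, -2.0e7],
                  [ 0.0,   -2.0e7,  4.0e7]])      # N/m

    # Generalised eigenproblem K x = w^2 M x gives natural frequencies in Hz.
    w2, _ = eigh(K, M)
    f_nat = np.sqrt(w2) / (2 * np.pi)

    f_op, margin = 50.0, 0.10                     # operating freq (Hz), 10% clearance
    for f_x in (f_op, 2 * f_op):                  # fundamental and first harmonic
        near = f_nat[np.abs(f_nat - f_x) < margin * f_x]
        print(f"near {f_x:.0f} Hz:", near if near.size else "clear")

An automated process would wrap a check of this kind in an optimisation loop, perturbing the rotor and blade geometry until every mode clears both frequencies while staying close to the initial design.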

Fri, 17 Jun 2016

10:00 - 11:00
L5

Reconstructing effective signalling networks in T cells

Omer Dushek
(Sir William Dunn School of Pathology)
Abstract

T cells are important white blood cells that continually circulate in the body in search of the molecular signatures ('antigens') of infection and cancer. We (and many other labs) are trying to construct models of the T cell signalling network that can be used to predict how ligand binding (at the surface of the cell) controls gene expression (in the nucleus). To do this, we stimulate T cells with various ligands (input), measure products of gene expression (output), and then try to determine which model must be invoked to explain the data. The challenges that we face are 1) finding unique models and 2) scaling the method to many different inputs and outputs.
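
A minimal sketch of the model-discrimination step, assuming hypothetical dose-response data and two invented candidate models, is to fit each model and compare information criteria:

    # Sketch: discriminate between two candidate input-output models by AIC.
    # The dose-response data and the model forms are hypothetical placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    dose = np.logspace(-2, 2, 9)                  # ligand concentration (a.u.)
    resp = np.array([0.02, 0.05, 0.10, 0.30, 0.70, 0.90, 0.95, 0.97, 0.98])

    def hill(x, k, n):                            # simple saturating response
        return x**n / (k**n + x**n)

    def hill_inhib(x, k, n, a):                   # adds high-dose attenuation
        return (x**n / (k**n + x**n)) / (1.0 + a * x)

    def aic(model, p0):
        popt, _ = curve_fit(model, dose, resp, p0=p0, maxfev=10000)
        rss = np.sum((resp - model(dose, *popt))**2)
        return len(dose) * np.log(rss / len(dose)) + 2 * len(popt)

    print("Hill            AIC:", round(aic(hill, [1.0, 1.0]), 1))
    print("Hill+inhibition AIC:", round(aic(hill_inhib, [1.0, 1.0, 0.01]), 1))

The identifiability problem is visible even here: when two models fit equally well, the data cannot single out a unique mechanism, and additional stimulation conditions are needed.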

Fri, 10 Jun 2016

10:00 - 11:00
L4

Occurrence detection, correlation and classification among large numbers of time series

Alexander Denev
(Markit)
Abstract

Markit is a leading global provider of financial information services. We provide products that enhance transparency, reduce risk and improve operational efficiency.

We wish to find ways to automatically detect and label ‘extreme’ occurrences in a time series such as structural breaks, nonlinearities, and spikes (i.e. outliers). We hope to detect these occurrences in the levels, returns and volatility of a time series or any other transformation of it (e.g. moving average).
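
A minimal univariate sketch, assuming a rolling median/MAD detector with an invented window and threshold:

    # Sketch: flag spikes in a single series with a rolling median/MAD score.
    # The window length and threshold are illustrative choices.
    import numpy as np

    def spike_flags(x, window=50, thresh=6.0):
        """Indices where x deviates from its local median by more than
        thresh robust standard deviations (1.4826 * MAD)."""
        flags = []
        for t in range(window, len(x)):
            w = x[t - window:t]
            med = np.median(w)
            mad = np.median(np.abs(w - med)) + 1e-12
            if abs(x[t] - med) / (1.4826 * mad) > thresh:
                flags.append(t)
        return flags

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)
    x[400] += 10.0                         # planted spike
    print(spike_flags(x))                  # typically -> [400]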

We also want to look for the same types of occurrences in the multivariate case, in a set of time series, through measures such as correlations, the eigenvalues of the covariance matrix, etc. The number of time series involved is of the order of 3x10^6.
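
One hedged sketch of such a measure is the leading eigenvalue of a rolling correlation matrix over a (necessarily small) subset of series; at 3x10^6 series the full covariance cannot be formed directly:

    # Sketch: the leading eigenvalue of a rolling correlation matrix as a
    # co-movement indicator.  Data and break point are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    n_obs, n_series, window = 600, 20, 120
    r = rng.normal(size=(n_obs, n_series))
    r[300:] += 0.8 * rng.normal(size=(n_obs - 300, 1))   # common factor appears

    for t in (200, 500):                                 # before / after break
        c = np.corrcoef(r[t - window:t], rowvar=False)
        lam = np.linalg.eigvalsh(c)[-1]                  # largest eigenvalue
        print(f"t={t}: leading eigenvalue {lam:.1f} of {n_series}")

A jump in the leading eigenvalue signals that a single factor has started to drive much of the set, which is one working definition of a multivariate 'extreme' occurrence.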

We wish to explain the appearance of an ‘extreme’ occurrence or a cluster of occurrences endogenously, as an event conditional on the values of the other time series in the set, both contemporaneously and conditional on their time lags.

Furthermore, we would like to classify the events that caused the occurrence into a few major categories, where one can be found (e.g. a shock to oil supply, general risk aversion, migrations, etc.), both algorithmically and by allowing human corrective judgement (which could become the basis for supervised learning).

Fri, 03 Jun 2016

10:00 - 11:00
L4

Unanticipated interaction loops involving autonomous systems

James Sutherland
(Thales Security and Consulting)
Abstract

We are entering a world where unmanned vehicles will be common. They have the potential to dramatically decrease the cost of services whilst simultaneously improving the safety record of whole industries.

Autonomous technologies will, by their very nature, shift decision making responsibility from individual humans to technology systems. The 2010 Flash Crash showed how such systems can create rare (but not inconceivably rare) and highly destructive positive feedback loops which can severely disrupt a sector.

In the case of Unmanned Air Systems (UAS), how might similar effects obstruct the development of the commercial UAS industry? Is it conceivable that, like the high-frequency trading industry at the heart of the Flash Crash, the algorithms we provide to UAS to enable autonomy could decrease the risk of small incidents whilst increasing the risk of severe accidents? And if so, what is the relationship between the probability and the consequence of incidents?

Fri, 27 May 2016
10:00
L4

Mathematical models of genome replication

Conrad Nieduszynski
(Sir William Dunn School of Pathology)
Abstract

We aim to determine how cells faithfully complete genome replication. Accurate and complete genome replication is essential for all life. A single DNA replication error in a single cell division can give rise to a genomic disorder. However, almost all experimental data are ensemble data, collected from millions of cells. We have used a combination of high-resolution, genome-wide DNA replication data, mathematical modelling and single-cell experiments to demonstrate that ensemble data mask the significant heterogeneity present within a cell population; see [1-4]. Therefore, the pattern of replication origin usage and the dynamics of genome replication in individual cells remain largely unknown. We are now developing cutting-edge single-molecule methods and allied mathematical models to determine the dynamics of genome replication at the DNA sequence level in normal and perturbed human cells.

[1] de Moura et al., 2010, Nucleic Acids Research, 38:5623-5633

[2] Retkute et al., 2011, PRL, 107:068103

[3] Retkute et al., 2012, PRE, 86:031916

[4] Hawkins et al., 2013, Cell Reports, 5:1132-1141
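
In the spirit of the stochastic origin-firing models in [1-4], a toy simulation shows how ensemble averaging hides single-cell heterogeneity; all parameters are invented:

    # Toy single-cell replication-timing model: origins fire at random
    # times and forks move bidirectionally at constant speed.
    import numpy as np

    rng = np.random.default_rng(2)
    L, v = 1000, 2.0                        # genome (kb), fork speed (kb/min)
    origins = np.array([100, 400, 700])     # licensed origin positions (kb)
    x = np.arange(L)

    def replication_time(firing):
        """Each site is replicated by the first fork to reach it."""
        return np.min(np.abs(x[:, None] - origins[None, :]) / v
                      + firing[None, :], axis=1)

    cells = [replication_time(rng.exponential(20.0, size=origins.size))
             for _ in range(1000)]
    print("one cell :", np.round(cells[0][::250], 1))                # jagged
    print("ensemble :", np.round(np.mean(cells, axis=0)[::250], 1))  # smooth

Each simulated cell has a distinct, jagged timing profile, while the ensemble mean is smooth — precisely the masking effect described above.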

Fri, 06 May 2016

10:00 - 11:00
L4

Probabilistic Time Series Forecasting: Challenges and Opportunities

Siddharth Arora
(Mathematical Institute)
Abstract

Over the years, nonlinear and nonparametric models have attracted a great deal of attention. This is mainly because most time series arising in the real world exhibit nonlinear behaviour, while nonparametric models, in principle, do not make strong prior assumptions about the true functional form of the underlying data-generating process.


In this workshop, we will focus on the use of nonlinear and nonparametric modelling approaches for time series forecasting, and discuss the need for, and implications of, accurate forecasts in informed policy and decision-making. Crucially, we will discuss some of the major challenges (and potential solutions) in probabilistic time series forecasting, with emphasis on: (1) modelling in the presence of regime shifts; (2) the effect of model over-fitting on out-of-sample forecast accuracy; and (3) the importance of using naïve benchmarks and a range of performance scores for model comparison. We will discuss the application of different modelling approaches to macroeconomics (US GNP), energy (electricity consumption recorded via smart meters) and healthcare (remote detection of disease symptoms).
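
As a hedged illustration of point (3), the sketch below scores quantile forecasts of a synthetic random walk against a naïve last-value benchmark using the pinball loss:

    # Sketch: score quantile forecasts with the pinball (quantile) loss.
    # Series and both forecasters are synthetic placeholders.
    import numpy as np

    def pinball(y, q_hat, tau):
        d = y - q_hat
        return np.mean(np.maximum(tau * d, (tau - 1) * d))

    rng = np.random.default_rng(3)
    y = np.cumsum(rng.normal(size=500))

    for tau in (0.1, 0.5, 0.9):
        naive = y[:-1]                                   # last value at all levels
        model = y[:-1] + np.quantile(np.diff(y), tau)    # adds a spread term
        print(f"tau={tau}: naive {pinball(y[1:], naive, tau):.3f}  "
              f"model {pinball(y[1:], model, tau):.3f}")

A model that cannot beat the naïve benchmark under a proper scoring rule has not demonstrated forecasting skill, however good its in-sample fit.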

Fri, 04 Mar 2016

10:00 - 11:00
L4

Fault prediction from time series data

Mike Newman
(Thales)
Abstract

On the railway network, for example, there is a large base of installed equipment with a useful life of many years.  This equipment has condition monitoring that can flag a fault when a measured parameter goes outside the permitted range.  If we can use existing measurements to predict when this would occur, preventative maintenance could be targeted more effectively and faults reduced.  As an example, we will consider the current supplied to a points motor as a function of time in each operational cycle.
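
A hedged sketch of this idea — reduce each operational cycle to a few features and raise a flag when a feature drifts — with an invented waveform and threshold:

    # Sketch: per-cycle feature extraction and a simple drift alarm for a
    # points-motor current trace.  Waveform and threshold are illustrative.
    import numpy as np

    rng = np.random.default_rng(4)

    def cycle(wear):
        """Synthetic trace: inrush peak, then a load that grows with wear."""
        t = np.linspace(0, 4, 400)
        return 8 * np.exp(-3 * t) + 2 + wear + 0.1 * rng.normal(size=t.size)

    settled = []
    for k in range(200):                     # 200 operational cycles
        i = cycle(wear=0.004 * k)            # slow degradation trend
        settled.append(i[200:].mean())       # settled running current

    settled = np.array(settled)
    baseline = settled[:50].mean()
    alarm = int(np.argmax(settled > 1.15 * baseline))   # first 15% exceedance
    print(f"preventative-maintenance flag raised at cycle {alarm}")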

Fri, 26 Feb 2016

10:00 - 11:00
L4

Ionic liquids - a challenge to our understanding of the liquid state

Susan Perkin
(Department of Chemistry)
Abstract

Ionic liquids are salts, composed solely of positive and negative ions, which are liquid under ambient conditions. Despite an increasing range of successful applications, there remain fundamental challenges in understanding the intermolecular forces and the propagation of fields in ionic liquids.

I am an experimental scientist, and in my laboratory we study thin films of liquids. The aim is to discover their molecular and surface interactions and their fluid properties in confinement. In this talk I will describe the experiments and show some results which have led to a better understanding of ionic liquids. I will then show some measurements which currently have no understanding attached!

Fri, 29 Jan 2016

10:00 - 11:00
L4

Causal Calculus and Actionable Associations in Market-Basket Data

Marco Brambilla
(dunnhumby)
Abstract

Market-Basket (MB) and Household (HH) data provide a fertile substrate for the inference of associations between marketing activity (e.g. prices, promotions, advertisement, etc.) and customer behaviour (e.g. customers driven to a store, specific product purchases, joint product purchases, etc.). The main aspect of MB and HH data which makes them suitable for this type of inference is the large number of variables of interest they contain, at a granularity that is fit for purpose (e.g. which items are bought together, at what frequency items are bought by a specific household, etc.).

A large number of methods are available to researchers and practitioners to infer meaningful networks of associations between variables of interest (e.g.: Bayesian networks, association rules, etc.). Inferred associations arise from applying statistical inference to the data. In order to use statistical association (correlation) to support an inference of causal association (“which is driving which”), an explicit theory of causality is needed.

Such a theory of causality can be used to design experiments and analyse the resultant data; in such a context certain statistical associations can be interpreted as evidence of causal associations.

On observational data (as opposed to experimental), the link between statistical and causal associations is less straightforward and it requires a theory of causality which is formal enough to support an appropriate calculus (e.g.: do-calculus) of counterfactuals and networks of causation.
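
A minimal worked example of such a calculus, assuming a single observed confounder and entirely synthetic numbers, is the backdoor adjustment:

    # Sketch: backdoor adjustment on synthetic market-basket data.
    # P(buy | do(promo)) = sum_z P(buy | promo, z) P(z), with z a
    # hypothetical observed confounder (household affluence).
    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000
    z = rng.random(n) < 0.3                          # affluent household
    promo = rng.random(n) < np.where(z, 0.7, 0.2)    # affluent see more promos
    buy = rng.random(n) < 0.10 + 0.15 * promo + 0.20 * z

    naive = buy[promo].mean()                        # confounded association
    adjusted = sum(buy[promo & (z == s)].mean() * (z == s).mean()
                   for s in (True, False))           # backdoor formula
    print(f"P(buy | promo)     = {naive:.3f}")
    print(f"P(buy | do(promo)) = {adjusted:.3f}   (true effect: +0.15)")

The naïve conditional probability overstates the promotion's effect because affluent households both see more promotions and buy more; the adjusted estimate recovers the interventional quantity — provided the assumed causal graph is right, which is exactly the judgement the calculus makes explicit.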

My talk will focus on retail analytics problems which may motivate an interest in exploring the potential benefits and challenges of causal calculi.

Fri, 04 Dec 2015

10:00 - 11:00
L4

Analysis of images in multidimensional single molecule microscopy

Michael Hirsch
(STFC Rutherford Appleton Laboratory)
Abstract

Multidimensional single molecule microscopy (MSMM) generates image time series of biomolecules in a cellular environment that have been tagged with fluorescent labels. Initial analysis steps for such images consist of image registration of multiple channels, feature detection and single particle tracking. Further analysis may involve the estimation of diffusion rates, the measurement of separations between molecules that are not optically resolved, and more. The analysis is done under conditions of poor signal-to-noise ratio, high feature density and other adverse circumstances. In pushing the boundary of what is measurable, we face, among others, the following challenges: firstly, the correct assessment of the uncertainties and the significance of the results; secondly, the fast and reliable identification of those features and tracks that fulfil the assumptions of the models used. Simpler models require more rigid preconditions, thereby limiting the usable data, while more complex models are theoretically and, especially, computationally challenging.
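
As a hedged sketch of just the track-linking step (not the actual pipeline), detections in consecutive frames can be linked by solving a gated linear assignment problem:

    # Sketch: link detections between two frames via linear assignment
    # with a gating radius.  Positions and parameters are synthetic.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(6)
    frame0 = rng.uniform(0, 50, size=(8, 2))               # detections (um)
    frame1 = frame0 + rng.normal(scale=0.3, size=(8, 2))   # after diffusion

    max_jump = 2.0                                          # gating radius, um
    cost = cdist(frame0, frame1)
    cost[cost > max_jump] = 1e6                             # forbid long links
    rows, cols = linear_sum_assignment(cost)
    links = [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e6]
    print("links:", links)

At high feature density the gating regions overlap and the assignment becomes ambiguous, which is where the uncertainty-assessment challenge above begins.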

Fri, 20 Nov 2015

10:00 - 11:00
L4

More accurate optical measurements

Graeme Clark
(Lein)
Abstract

Lein’s confocal systems make accurate and precise measurements in many different applications. In applications where the object under test introduces variability and/or optical aberrations to the optical signal, the accuracy and precision may deteriorate. This technical challenge looks for mathematical solutions to improve the accuracy and precision of measurements made in such circumstances.

The presentation will outline the confocal principle, show “perfect” signals, give details of how we analyse such signals, then move on to less perfect signals and the effects on measurement accuracy and precision.
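
One standard route to sub-sample accuracy on a "perfect" signal is to fit a model peak to the sampled axial response; the Gaussian form and noise level below are illustrative assumptions:

    # Sketch: locate a confocal peak to sub-sample precision by fitting
    # a Gaussian to the sampled axial response.
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(z, a, z0, s, b):
        return a * np.exp(-0.5 * ((z - z0) / s) ** 2) + b

    rng = np.random.default_rng(7)
    z = np.arange(0, 20, 0.5)                      # axial samples, um
    y = gauss(z, 1.0, 9.37, 1.5, 0.05) + rng.normal(scale=0.02, size=z.size)

    p0 = [y.max(), z[np.argmax(y)], 1.0, 0.0]      # crude initial guess
    popt, pcov = curve_fit(gauss, z, y, p0=p0)
    print(f"peak at {popt[1]:.3f} um  (+/- {np.sqrt(pcov[1, 1]):.3f})")

Aberrated or variable test objects break the symmetric-peak assumption and bias the fitted location, which is the mathematical challenge posed here.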

Fri, 13 Nov 2015

10:00 - 11:00
L4

Exploitation of the parareal algorithm in divertor physics simulations

Debasmita Samaddar
(Culham Centre for Fusion Energy (CCFE))
Abstract

Parallelizing the time domain in numerical simulations is non-intuitive, but has been proven possible using algorithms such as parareal, PFASST and RIDC. Temporal parallelization adds an entirely new dimension along which to parallelize and significantly enhances the use of supercomputing resources. Exploiting this technique is a big step towards exascale computation.

Starting with relatively simple problems, the parareal algorithm (Lions et al., A "parareal" in time discretization of PDEs, 2001) has been successfully applied to various complex simulations in the last few years (Samaddar et al., Parallelization in time of numerical simulations of fully-developed plasma turbulence using the parareal algorithm, 2010). The algorithm involves a predictor-corrector technique, in which a cheap coarse solver propagates the solution serially across time slices and an accurate fine solver, applied to all slices in parallel, iteratively corrects it.
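
A toy version of that predictor-corrector structure, on dy/dt = -y rather than a plasma code, assuming a single-Euler-step coarse solver G and a many-step fine solver F:

    # Sketch of the parareal iteration on dy/dt = -y.  G is one cheap
    # Euler step per slice; F takes many small steps (run in parallel
    # across slices in a real implementation).
    import numpy as np

    lam, T, N = -1.0, 5.0, 10              # problem, horizon, time slices
    dt = T / N

    def G(y):                               # coarse predictor
        return y + dt * lam * y

    def F(y, m=100):                        # fine solver
        for _ in range(m):
            y = y + (dt / m) * lam * y
        return y

    U = np.zeros(N + 1); U[0] = 1.0
    for n in range(N):                      # initial serial coarse sweep
        U[n + 1] = G(U[n])

    for k in range(5):                      # parareal iterations
        Fv = [F(U[n]) for n in range(N)]    # parallelizable stage
        Gv = [G(U[n]) for n in range(N)]
        for n in range(N):                  # serial correction sweep
            U[n + 1] = G(U[n]) + Fv[n] - Gv[n]
        print(f"iter {k}: end-point error {abs(U[-1] - np.exp(lam * T)):.2e}")

Speed-up comes from running the expensive F stage concurrently across all time slices, with only the cheap G sweep remaining serial.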

Numerical studies of the edge of magnetically confined fusion plasma are an extremely challenging task. The complexity of the physics in this regime is particularly increased by the presence of neutrals as well as by the interaction of the plasma with the wall. These simulations are extremely computationally intensive, but are key to rapidly achieving thermonuclear breakeven on ITER-like machines.

The SOLPS code package (Schneider et al., Plasma Edge Physics with B2-Eirene, 2006) is widely used in the fusion community and has been used to design the ITER divertor. A reduction of the wallclock time for this code has been a long-standing goal, and recent studies have shown that a computational speed-up greater than 10 is possible for SOLPS (Samaddar et al., Greater than 10x acceleration of fusion plasma edge simulations using the parareal algorithm, 2014), which is highly significant for a code of this complexity.

In this project, the aim is to explore a variety of cases of relevance to ITER and thus involving more complex physics to study the feasibility of the algorithm. Since the success of the parareal algorithm heavily relies on choosing the optimum coarse solver as a predictor, the project will involve studying various options for this purpose. The tasks will also include performing scaling studies to optimize the use of computing resources yielding maximum possible computational gain.

Fri, 06 Nov 2015

10:00 - 11:00
L4

(1) Fluid and particle dynamics in blenders and food processors; (2) Filter surface optimisation for maximising peak air power of vacuum cleaners; (3) Fluid system models for drip coffee makers

Chuck Brunner
(SharkNinja)
Abstract

Blenders and food processors have been around for years. However, the detailed fluid and particle dynamics within the multi-phase flow of the processing chamber, as well as the influence of variables such as vessel geometry, blade geometry, speeds, surface properties, etc., are not well understood. SharkNinja would like Oxford University’s help in developing a model that can be used to gain insight into the fluid dynamics within the food processing chamber, with the goals of developing a system that produces better food processing performance and of predicting loading on food processing elements to enable data-driven product design.

Many vacuum cleaners sold claim “no loss of suction” which is defined as having only a very small reduction in peak air power output over the life of the unit under normal operating conditions.  This is commonly achieved by having a high efficiency cyclonic separator combined with a filter which the user washes at regular intervals (typically every 3 months).  It has been observed that some vacuum cleaners show an increase in peak air watts output after a small amount of dust is deposited on the filter.  This effect is beneficial since it prolongs the time between filter washing.  SharkNinja are currently working on validating their theory as to why this occurs.  SharkNinja would like Oxford University’s help in developing a model that can be used to better understand this effect and provide insight towards optimizing future designs.

Although a very simple system from a construction standpoint, creating a drip coffee maker that can produce a range of coffee sizes, from a single cup to a multi-cup carafe, presents unique problems. Challenges within this system result from varying pressure heads on the inlet side, accurate measurement of relatively low flow rates, the fluid motive force generated by boilers, and the head above the boiler on the outlet side. Getting all of these parameters right to deliver the proper strength, temperature and volume of coffee requires an in-depth understanding of the fluid dynamics involved in the system. An ideal outcome from this work would be an adaptive model that enables a fluid system model to be created from building blocks. This system model would include component models for tubing, boilers, flow meters, filters, pumps, check valves, and the like.
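
A hedged sketch of that building-block idea: each component exposes a pressure-drop law, and the series circuit is solved for the flow rate that balances the boiler head (all coefficients invented):

    # Sketch of a component-based fluid system model: solve a series
    # circuit for the flow rate that balances the driving head.
    from scipy.optimize import brentq

    class Tube:
        def __init__(self, k): self.k = k
        def dp(self, q):       return self.k * q * abs(q)   # quadratic loss

    class Valve:
        def __init__(self, k): self.k = k
        def dp(self, q):       return self.k * q            # linear loss

    circuit = [Tube(3.0e14), Valve(1.0e8), Tube(1.5e14)]    # series path
    head = 12_000.0                                         # boiler head, Pa

    def residual(q):
        return head - sum(c.dp(q) for c in circuit)

    q = brentq(residual, 1e-12, 1e-3)                       # flow, m^3/s
    print(f"operating flow: {q * 6e7:.0f} ml/min")

New component types (pumps, check valves, flow meters) slot in by implementing the same dp(q) interface, which is what makes the model composable.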

Fri, 19 Jun 2015
11:30
L5

iceCAM project with G's-Fresh

Alasdair Craighead
(G's-Fresh)
Abstract

G’s Growers supply salad and vegetable crops throughout the UK and Europe, primarily as a direct supplier to supermarkets. We are currently working on a project to improve the availability of Iceberg lettuce throughout the year, as this has historically been a very volatile crop. It is also by far the highest-volume crop that we produce, with typical weekly sales in the summer season of about 3 million heads.

In order to continue to grow our business we must maintain continuous supply to the supermarkets. Our current method for achieving this is to grow more crop than we will actually harvest. We then aim to sell the surplus on the wholesale markets rather than ploughing it back in, and we reduce availability to those markets when supply is tight.

We currently use a relatively simple computer heat-unit model to help predict availability; however, we know that this is not the full picture. To help improve our position we have started the IceCAM project (Iceberg Crop Adaptive Model), which has three aims:

  1. Forecast crop availability spikes and troughs and use this to have better planting programmes from the start of the season.
  2. Identify the growth stages of Iceberg to measure more accurately whether a crop is ahead of or behind expectation when it is physically examined in the field.
  3. The final, utopian aim would be to match the market, so that in times of general shortage, when prices are high, we have sufficient crop to meet all of our supermarket customers' requirements and still have spare to sell onto the markets to benefit from the higher prices. Equally, when there is a general surplus, we would only look to have sufficient crop to supply the primary customer base.

We believe that statistical mathematics can help us to solve these problems!
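
As a hedged sketch of the heat-unit idea mentioned above — not G's actual model — daily growing degree days can be accumulated against a maturity target:

    # Sketch of a heat-unit (growing degree day) crop model.  The base
    # temperature, target and weather series are illustrative.
    import numpy as np

    rng = np.random.default_rng(8)
    t_base, target = 4.0, 600.0            # deg C, degree days to maturity

    days = np.arange(120)
    t_mean = 12 + 6 * np.sin(days * np.pi / 180) \
             + rng.normal(scale=2.0, size=days.size)   # synthetic daily means

    gdd = np.cumsum(np.maximum(t_mean - t_base, 0.0))
    print(f"predicted harvest: day {int(np.argmax(gdd >= target))}")

Running such a model over forecast weather ensembles, rather than a single trace, is one route to the availability spikes and troughs of aim 1.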

Fri, 19 Jun 2015

10:00 - 11:00
L5

Toward a Higher-Order Accurate Computational Flume Facility for Understanding Wave-Current-Structure Interaction

Chris Kees
(USAERDC)
Abstract

Accurate simulation of coastal and hydraulic structures is challenging due to a range of complex processes such as turbulent air-water flow and breaking waves. Many engineering studies are based on scale models in laboratory flumes, which are often expensive and insufficient for fully exploring these complex processes. To extend the physical laboratory facility, the US Army Engineer Research and Development Center has developed a computational flume capability for this class of problems. I will discuss the turbulent air-water flow model equations, which govern the computational flume, and the order-independent, unstructured finite element discretization on which our implementation is based. Results from our air-water verification and validation test set, which is being developed along with the computational flume, demonstrate the ability of the computational flume to predict the target phenomena, but the test results and our experience developing the computational flume suggest that significant improvements in accuracy, efficiency, and robustness may be obtained by incorporating recent improvements in numerical methods.
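
For orientation, the sharp-interface incompressible two-phase model with level set transport — a sketch of the class of equations referred to, not necessarily the exact ERDC formulation — reads

    \rho(\phi)\left(\partial_t \mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
        = -\nabla p + \nabla\cdot\left[\mu(\phi)\,(\nabla\mathbf{u} + \nabla\mathbf{u}^{T})\right] + \rho(\phi)\,\mathbf{g},
    \qquad \nabla\cdot\mathbf{u} = 0,
    \qquad \partial_t\phi + \mathbf{u}\cdot\nabla\phi = 0,

where \phi is a signed distance to the air-water interface and the density \rho and viscosity \mu jump across \phi = 0 between the air and water values.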

Key Words:

Multiphase flow, Navier-Stokes, level set methods, finite element methods, water waves

Fri, 12 Jun 2015

10:00 - 11:00
L5

A recommendation system for journey planning

Darren Price
(Thales)
Abstract

A recommendation system for multi-modal journey planning could be useful to travellers in making their journeys more efficient and pleasant, and to transport operators in encouraging travellers to make more effective use of infrastructure capacity.

Journeys will have multiple quantifiable attributes (e.g. time, cost, likelihood of getting a seat) and other attributes that we might infer indirectly (e.g. a pleasant view).  Individual travellers will have different preferences that will affect the most appropriate recommendations.  The recommendation system might build profiles for travellers, quantifying their preferences.  These could be inferred indirectly, based on the information they provide, choices they make and feedback they give.  These profiles might then be used to compare and rank different travel options.
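
A hedged sketch of the ranking step, with invented journey attributes and an invented learned profile:

    # Sketch: rank journey options by a per-traveller preference profile.
    # Attributes, weights and options are illustrative placeholders.
    import numpy as np

    # Journey attributes: [duration_min, cost_gbp, P(seat)]
    options = {
        "fast train":  np.array([45.0, 18.0, 0.3]),
        "slow train":  np.array([70.0, 11.0, 0.9]),
        "bus + train": np.array([95.0,  7.0, 0.7]),
    }
    profile = np.array([-0.04, -0.10, 2.0])   # learned preference weights

    for name in sorted(options, key=lambda k: -(profile @ options[k])):
        print(f"{name:12s} score {profile @ options[name]: .2f}")

Feedback from chosen and rejected journeys would update the profile vector over time, e.g. by online gradient steps on a pairwise ranking loss.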

Fri, 29 May 2015

10:00 - 11:00
L5

Continuum mechanics, uncertainty management, and the derivation of numerical modelling schemes in the area of hydrocarbon resources generation, expulsion and migration over the history of a basin

Steve Daum
(PDS Production Enterprise)
Abstract

Classically, basin modelling is undertaken with very little a priori knowledge. Alongside the challenge of improving the general fidelity and utility of the modelling systems is the challenge of constraining these systems, with their unknowns and uncertainties, in such a way that models (and derived simulation results) can be readily regenerated and re-evaluated in the light of new empirical data obtained during the course of exploration, development and production activities.

Fri, 20 Mar 2015

10:00 - 11:00
L6

Saint-Gobain

Paul Leplay
Abstract

For this workshop, we have identified two subjects of interest to us in the field of particle technology. The first, wet granulation, is a size-enlargement process that converts small-diameter solid particles (typically powders) into larger-diameter agglomerates of a specific size. The second, the mechanical centrifugal air classifier, is employed when the particle size to be separated is too fine to screen.
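
For the classifier, a hedged back-of-envelope for the cut size equates centrifugal force with Stokes drag from the inward airflow; every operating number below is illustrative:

    # Back-of-envelope classifier cut size from a Stokes force balance:
    # (pi/6) d^3 rho_p omega^2 r = 3 pi mu d v_r  =>  d50.
    import math

    rho_p = 2500.0                      # particle density, kg/m^3
    mu    = 1.8e-5                      # air viscosity, Pa.s
    omega = 2 * math.pi * 3000 / 60     # rotor speed, rad/s (3000 rpm)
    r     = 0.10                        # classification radius, m
    v_r   = 1.5                         # inward radial air velocity, m/s

    d50 = math.sqrt(18 * mu * v_r / (rho_p * omega**2 * r))
    print(f"cut size d50 ~ {d50 * 1e6:.1f} micrometres")

Particles above d50 are flung outward and rejected; those below it follow the air inward — a few micrometres here, well below any practical screen.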