Past Industrial and Interdisciplinary Workshops

29 January 2016
10:00
Marco Brambilla
Abstract

“Market-Basket (MB) and Household (HH) data provide a fertile substrate for the inference of association between marketing activity (e.g. prices, promotions, advertising) and customer behaviour (e.g. customers driven to a store, specific product purchases, joint product purchases). The main aspect of MB and HH data which makes them suitable for this type of inference is the large number of variables of interest they contain at a granularity that is fit for purpose (e.g. which items are bought together, how frequently a specific household buys particular items).

A large number of methods are available to researchers and practitioners to infer meaningful networks of associations between variables of interest (e.g.: Bayesian networks, association rules, etc.). Inferred associations arise from applying statistical inference to the data. In order to use statistical association (correlation) to support an inference of causal association (“which is driving which”), an explicit theory of causality is needed.

Such a theory of causality can be used to design experiments and analyse the resultant data; in such a context certain statistical associations can be interpreted as evidence of causal associations.

On observational data (as opposed to experimental data), the link between statistical and causal associations is less straightforward and requires a theory of causality formal enough to support an appropriate calculus of counterfactuals and networks of causation (e.g. the do-calculus).

My talk will focus on retail analytics problems that may motivate an interest in exploring the potential benefits and challenges of causal calculi.”
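
As a toy illustration of the distinction drawn above between statistical and causal association, the following sketch contrasts the naive conditional effect of a promotion on purchases with a back-door-adjusted estimate of P(purchase | do(promotion)). The scenario, variable names and numbers are hypothetical, not drawn from the talk.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical confounder: seasonal demand drives both promotions and purchases.
    season = rng.binomial(1, 0.5, n)
    promo = rng.binomial(1, 0.2 + 0.6 * season)                  # promotions scheduled in high season
    purchase = rng.binomial(1, 0.1 + 0.1 * promo + 0.5 * season)

    df = pd.DataFrame({"season": season, "promo": promo, "purchase": purchase})

    # Naive statistical association: P(purchase | promo=1) - P(purchase | promo=0).
    naive = (df.loc[df.promo == 1, "purchase"].mean()
             - df.loc[df.promo == 0, "purchase"].mean())

    # Back-door adjustment for P(purchase | do(promo)): average the within-stratum
    # contrast over the marginal distribution of the confounder.
    adjusted = sum(
        w * (df[(df.season == s) & (df.promo == 1)]["purchase"].mean()
             - df[(df.season == s) & (df.promo == 0)]["purchase"].mean())
        for s, w in df["season"].value_counts(normalize=True).items()
    )

    print(f"naive (correlational) lift: {naive:.3f}")
    print(f"back-door adjusted lift:    {adjusted:.3f}")   # close to the true causal lift of 0.10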

  • Industrial and Interdisciplinary Workshops
4 December 2015
10:00
Abstract

Multidimensional single molecule microscopy (MSMM) generates image time series of biomolecules in a cellular environment that have been tagged with fluorescent labels. Initial analysis steps for such images consist of image registration of multiple channels, feature detection and single-particle tracking. Further analysis may involve the estimation of diffusion rates, the measurement of separations between molecules that are not optically resolved, and more. The analysis must be done under poor signal-to-noise ratios, high feature densities and other adverse conditions. In pushing the boundary of what is measurable, we face, among others, the following challenges: firstly, the correct assessment of the uncertainties and the significance of the results; secondly, the fast and reliable identification of those features and tracks that fulfil the assumptions of the models used. Simpler models require more rigid preconditions and therefore limit the usable data, while more complex models are theoretically and, especially, computationally challenging.
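
As one concrete example of the estimation step mentioned above, the following sketch recovers a diffusion coefficient from a single simulated 2-D track via the time-averaged mean squared displacement, using MSD(tau) = 4*D*tau for free Brownian motion. The track, parameters and noise-free setting are illustrative; real MSMM data would first require the registration, detection and tracking steps described in the abstract.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate a free 2-D Brownian track: D in um^2/s, frame interval dt in s (illustrative values).
    D_true, dt, n_steps = 0.5, 0.05, 2000
    steps = rng.normal(0.0, np.sqrt(2.0 * D_true * dt), size=(n_steps, 2))
    track = np.cumsum(steps, axis=0)              # (x, y) position per frame

    def msd(track, max_lag):
        """Time-averaged mean squared displacement for lags 1..max_lag."""
        return np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                         for lag in range(1, max_lag + 1)])

    lags = np.arange(1, 21) * dt                  # lag times in seconds
    m = msd(track, 20)

    # For free 2-D diffusion MSD(tau) = 4*D*tau; least-squares line through the origin.
    D_hat = np.sum(m * lags) / (4.0 * np.sum(lags ** 2))
    print(f"estimated D = {D_hat:.3f} um^2/s (true value {D_true})")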

  • Industrial and Interdisciplinary Workshops
20 November 2015
10:00
Graeme Clark
Abstract

Lein’s confocal systems make accurate and precise measurements in many different applications. In applications where the object under test introduces variability and/or optical aberrations to the optical signal, the accuracy and precision may deteriorate. This technical challenge looks for mathematical solutions to improve the accuracy and precision of measurements made in such circumstances.

The presentation will outline the confocal principle, show “perfect” signals, give details of how we analyse such signals, then move on to less perfect signals and the effects on measurement accuracy and precision.
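
By way of illustration only (this is not Lein’s actual analysis), a common way to extract a measurement from a near-ideal confocal axial response is to fit a peak model and report the fitted centre; noise and aberrations then appear directly as increased uncertainty in the fitted position. The profile shape and all numbers below are synthetic assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    def peak_model(z, amp, z0, sigma, offset):
        """Gaussian approximation to an ideal confocal axial response (assumed form)."""
        return amp * np.exp(-0.5 * ((z - z0) / sigma) ** 2) + offset

    rng = np.random.default_rng(2)
    z = np.linspace(-10.0, 10.0, 201)                # axial scan positions (um)
    truth = peak_model(z, 1.0, 1.37, 2.0, 0.05)      # synthetic "perfect" signal
    signal = truth + rng.normal(0.0, 0.02, z.size)   # add detector noise

    # Fit the peak model; the fitted centre z0 is the reported height measurement,
    # and its covariance gives a first estimate of the measurement uncertainty.
    p0 = [signal.max(), z[np.argmax(signal)], 1.0, signal.min()]
    popt, pcov = curve_fit(peak_model, z, signal, p0=p0)
    z0_hat, z0_err = popt[1], np.sqrt(pcov[1, 1])
    print(f"peak position: {z0_hat:.3f} +/- {z0_err:.3f} um")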

  • Industrial and Interdisciplinary Workshops
13 November 2015
10:00
Abstract

Parallelizing the time domain in numerical simulations is non-intuitive, but has been proven possible using algorithms such as parareal, PFASST and RIDC. Temporal parallelization adds an entirely new dimension to parallelize over and significantly enhances the use of supercomputing resources. Exploiting this technique is a big step towards exascale computation.

Starting with relatively simple problems, the parareal algorithm (Lions et al., “A ‘parareal’ in time discretization of PDE’s”, 2001) has been successfully applied to various complex simulations in the last few years (Samaddar et al., “Parallelization in time of numerical simulations of fully-developed plasma turbulence using the parareal algorithm”, 2010). The algorithm involves a predictor-corrector technique.
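
For orientation, the predictor-corrector structure of parareal can be sketched on a toy ODE as follows: a cheap coarse propagator G predicts, and an accurate fine propagator F, run in parallel across time slices, corrects. The problem and solver choices below are illustrative only, not those used in the plasma codes discussed here.

    import numpy as np

    # Toy problem: y' = lam * y on [0, T]; exact solution y0 * exp(lam * t).
    lam, y0, T, N = -1.0, 1.0, 4.0, 8             # N coarse time slices
    t = np.linspace(0.0, T, N + 1)

    def coarse(y, t0, t1):
        """Predictor G: a single cheap explicit Euler step over the slice."""
        return y + (t1 - t0) * lam * y

    def fine(y, t0, t1, m=200):
        """Accurate solver F: m explicit Euler sub-steps (run in parallel across slices)."""
        h = (t1 - t0) / m
        for _ in range(m):
            y = y + h * lam * y
        return y

    # Initial serial prediction with the coarse solver alone.
    U = np.empty(N + 1)
    U[0] = y0
    for n in range(N):
        U[n + 1] = coarse(U[n], t[n], t[n + 1])

    # Parareal iterations: U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k).
    for k in range(5):
        F_old = [fine(U[n], t[n], t[n + 1]) for n in range(N)]    # the parallel stage
        G_old = [coarse(U[n], t[n], t[n + 1]) for n in range(N)]
        U_new = np.empty(N + 1)
        U_new[0] = y0
        for n in range(N):                                        # cheap serial correction sweep
            U_new[n + 1] = coarse(U_new[n], t[n], t[n + 1]) + F_old[n] - G_old[n]
        U = U_new

    print("parareal:", U[-1], " exact:", y0 * np.exp(lam * T))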

Numerical studies of the edge of magnetically confined fusion plasma are an extremely challenging task. The physics in this regime is made particularly complex by the presence of neutrals as well as by the interaction of the plasma with the wall. These simulations are extremely computationally intensive, but are key to rapidly achieving thermonuclear breakeven on ITER-like machines.

The SOLPS code package (Schneider et al., “Plasma Edge Physics with B2-Eirene”, 2006) is widely used in the fusion community and has been used to design the ITER divertor. Reducing the wall-clock time of this code has been a long-standing goal, and recent studies have shown that a computational speed-up greater than 10 is possible for SOLPS (Samaddar et al., “Greater than 10x acceleration of fusion plasma edge simulations using the parareal algorithm”, 2014), which is highly significant for a code of this complexity.

In this project, the aim is to explore a variety of cases relevant to ITER, and thus involving more complex physics, in order to study the feasibility of the algorithm. Since the success of the parareal algorithm relies heavily on choosing the optimum coarse solver as the predictor, the project will involve studying various options for this purpose. The tasks will also include performing scaling studies to optimize the use of computing resources and yield the maximum possible computational gain.

  • Industrial and Interdisciplinary Workshops
6 November 2015
10:00
Chuck Brunner
Abstract

Blenders and food processors have been around for years.  However, the fluid and particle dynamics within the multi-phase flow of the processing chamber, as well as the influence of variables such as vessel geometry, blade geometry, speed and surface properties, are not well understood.  SharkNinja would like Oxford University’s help in developing a model that can be used to gain insight into the fluid dynamics within the food processing chamber, with the goals of achieving better food processing performance and of predicting the loading on food processing elements to enable data-driven product design.

Many vacuum cleaners sold claim “no loss of suction”, which is defined as having only a very small reduction in peak air power output over the life of the unit under normal operating conditions.  This is commonly achieved by combining a high-efficiency cyclonic separator with a filter which the user washes at regular intervals (typically every 3 months).  It has been observed that some vacuum cleaners show an increase in peak air watts output after a small amount of dust is deposited on the filter.  This effect is beneficial since it prolongs the time between filter washes.  SharkNinja are currently working on validating their theory as to why this occurs.  SharkNinja would like Oxford University’s help in developing a model that can be used to better understand this effect and provide insight towards optimizing future designs.

Although a very simple system from a construction standpoint, creating a drip coffee maker that can produce a range of coffee sizes, from a single cup to a multi-cup carafe, presents unique problems.  Challenges within this system result from varying pressure heads on the inlet side, accurate measurement of relatively low flow rates, the fluid motive force generated by boilers, and the head above the boiler on the outlet side.  Getting all of these parameters right to deliver the proper strength, temperature and volume of coffee requires an in-depth understanding of the fluid dynamics involved in the system.  An ideal outcome from this work would be an adaptive model that enables a fluid system model to be created from building blocks; this system model would include component models for tubing, boilers, flow meters, filters, pumps, check valves, and the like.
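
As a very rough sketch of the “building blocks” idea described above (not SharkNinja’s model), one could represent each component by a pressure-drop law and solve for the operating flow rate at which the driving head balances the summed losses. All component laws, classes and numbers below are illustrative placeholders.

    from dataclasses import dataclass
    from typing import Callable, List

    # Each building block maps a volumetric flow rate q (L/min) to a pressure drop (kPa).

    @dataclass
    class Tube:
        k: float                                   # lumped loss coefficient (placeholder)
        def dp(self, q: float) -> float:
            return self.k * q ** 2                 # turbulent-like quadratic loss

    @dataclass
    class FlowMeter:
        k: float
        def dp(self, q: float) -> float:
            return self.k * q                      # linear loss

    @dataclass
    class CheckValve:
        crack: float                               # cracking pressure (kPa)
        k: float
        def dp(self, q: float) -> float:
            return self.crack + self.k * q ** 2 if q > 0 else 0.0

    def solve_flow(head: Callable[[float], float], parts: List, q_hi: float = 10.0) -> float:
        """Bisect for the flow q at which the driving head equals the summed losses."""
        def balance(q: float) -> float:
            return head(q) - sum(p.dp(q) for p in parts)
        q_lo = 0.0                                 # assumes balance(0) > 0 > balance(q_hi)
        for _ in range(60):
            q_mid = 0.5 * (q_lo + q_hi)
            if balance(q_mid) > 0.0:
                q_lo = q_mid
            else:
                q_hi = q_mid
        return 0.5 * (q_lo + q_hi)

    def boiler_head(q: float) -> float:
        """Boiler-driven head that falls off with flow (illustrative numbers)."""
        return 30.0 - 2.0 * q

    # Hypothetical circuit: tube -> flow meter -> check valve.
    circuit = [Tube(k=1.5), FlowMeter(k=0.8), CheckValve(crack=3.0, k=0.5)]
    print(f"operating point: q = {solve_flow(boiler_head, circuit):.2f} L/min")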

  • Industrial and Interdisciplinary Workshops
19 June 2015
11:30
Alasdair Craighead
Abstract

G’s Growers supply salad and vegetable crops throughout the UK and Europe, primarily as a direct supplier to supermarkets. We are currently working on a project to improve the availability of Iceberg Lettuce throughout the year, as this has historically been a very volatile crop. It is also by far the highest-volume crop that we produce, with typical sales in the summer season being about 3 million heads per week.

In order to continue to grow our business we must maintain continuous supply to the supermarkets. Our current method for achieving this is to grow more crop than we will actually harvest. We then aim to sell the extra crop on the wholesale markets rather than ploughing it back in, and we reduce what we offer to these markets when availability is tight.

We currently use a relatively simple computer Heat Unit model to help predict availability; however, we know that this is not the full picture. In order to improve our position we have started the IceCAM project (Iceberg Crop Adaptive Model), which has three aims:

  1. Forecast spikes and troughs in crop availability, and use this to produce better planting programmes from the start of the season.
  2. Identify the growth stages of Iceberg to measure more accurately whether crop is ahead or behind expectation when it is physically examined in the field.
  3. The final, utopian aim would be to match the market, so that in times of general shortage, when prices are high, we have sufficient crop to meet all of our supermarket customers’ requirements and still have spare to sell onto the markets to benefit from the higher prices. Equally, when there is a general surplus, we would look to have only enough crop to supply the primary customer base.

We believe that statistical mathematics can help us to solve these problems!!
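
For reference, a heat unit (growing degree-day) model of the kind mentioned above typically accumulates daily average temperature above a crop-specific base until a maturity threshold is reached. The sketch below is illustrative only; the base temperature, threshold and forecast are placeholders, not G’s actual parameters or data.

    import numpy as np

    # Illustrative heat unit (growing degree-day) accumulation for one planting.
    # Base temperature and maturity threshold are assumed placeholder values.
    T_BASE = 4.0             # degC, assumed base temperature for lettuce growth
    GDD_TO_MATURITY = 650.0  # assumed degree-days from planting to harvest

    def daily_gdd(t_min, t_max, t_base=T_BASE):
        """Simple-average growing degree days for one day, floored at zero."""
        return max(0.0, 0.5 * (t_min + t_max) - t_base)

    def predicted_harvest_day(t_min_series, t_max_series, target=GDD_TO_MATURITY):
        """First day index at which accumulated heat units reach the maturity target."""
        total = 0.0
        for day, (lo, hi) in enumerate(zip(t_min_series, t_max_series)):
            total += daily_gdd(lo, hi)
            if total >= target:
                return day
        return None          # crop not mature within the forecast horizon

    # Example: a synthetic 90-day min/max temperature forecast with a mild warming trend.
    rng = np.random.default_rng(3)
    t_min = 6.0 + 0.05 * np.arange(90) + rng.normal(0.0, 1.5, 90)
    t_max = t_min + 8.0 + rng.normal(0.0, 1.5, 90)
    print("predicted harvest on day", predicted_harvest_day(t_min, t_max))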

  • Industrial and Interdisciplinary Workshops
19 June 2015
10:00
Chris Kees
Abstract

Accurate simulation of coastal and hydraulic structures is challenging due to a range of complex processes such as turbulent air-water flow and breaking waves. Many engineering studies are based on scale models in laboratory flumes, which are often expensive and insufficient for fully exploring these complex processes. To extend the physical laboratory facility, the US Army Engineer Research and Development Center has developed a computational flume capability for this class of problems. I will discuss the turbulent air-water flow model equations, which govern the computational flume, and the order-independent, unstructured finite element discretization on which our implementation is based. Results from our air-water verification and validation test set, which is being developed along with the computational flume, demonstrate the ability of the computational flume to predict the target phenomena, but the test results and our experience developing the computational flume suggest that significant improvements in accuracy, efficiency, and robustness may be obtained by incorporating recent improvements in numerical methods.

Key Words:

Multiphase flow, Navier-Stokes, level set methods, finite element methods, water waves
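
The abstract does not state the governing equations explicitly; for orientation, a standard level-set formulation of incompressible two-phase (air-water) flow, consistent with the keywords above, reads as follows. This is an illustrative textbook form, not necessarily the exact system solved by the computational flume.

    \begin{align*}
      \rho(\phi)\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
        &= -\nabla p
           + \nabla\cdot\left[\mu(\phi)\left(\nabla\mathbf{u} + \nabla\mathbf{u}^{T}\right)\right]
           + \rho(\phi)\,\mathbf{g}
           + \sigma\,\kappa(\phi)\,\delta(\phi)\,\nabla\phi, \\
      \nabla\cdot\mathbf{u} &= 0, \\
      \frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi &= 0,
    \end{align*}

where the zero level set of the function phi marks the air-water interface, the density rho(phi) and viscosity mu(phi) take the air or water values on either side of it, the final momentum term models surface tension, and a turbulence closure would augment mu(phi) in the turbulent regime.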

  • Industrial and Interdisciplinary Workshops
