Industrial and Interdisciplinary Workshops (past)

Fri, 26/04
10:00
Charles Offer (Thales UK), DH 3rd floor SR

Please note the change of venue!

Suppose there is a system in which certain objects move through a network, and the objects are detected only when they pass through a sparse set of points in the network. For example, the objects could be vehicles moving along a road network, observed by a radar or other sensor as they pass through (or originate or terminate at) certain key points, but not observed continuously or tracked as they travel from one point to another. Alternatively, they could be data packets in a computer network. The detections record only the time at which an object passes by, and contain no identity information that would trivially allow the movement of an individual object from one point to another to be deduced. It is desired to determine the statistics of the movement of the objects through the network: that is, given that an object passes through point A at a certain time, to determine the probability density that the same object will pass through a point B at a certain later time.

The system might perhaps be represented by a graph, with a node at each point where detections are made. The detections at each node can be represented by a signal as a function of time, where the signal is a superposition of delta functions (one per detection). The statistics of the movement of objects between nodes must be deduced from the correlations between the signals at each node. The problem is complicated by the possibility that a given object might move between two nodes along several alternative routes (perhaps via other nodes or perhaps not), or might travel along the same route but with several alternative speeds.

What prior knowledge about the network, or constraints on the signals, are needed to make this problem solvable? Is it necessary to know the connections between the nodes, or the pdfs for the transition time between nodes, a priori, or can these be deduced? What conditions are needed on the information content of the signals? (If detections are very sparse on the time scale for passage through the network, then the transition probabilities can be built up by considering each cascade of detections independently; if detections are dense, then it will presumably be necessary to assume that objects do not move through the network independently, but instead tend to form convoys that are apparent as a pattern of detections persisting, on average, for some distance.) What limits are there on the noise in the signal or the amount of unwanted signal, i.e. false detections, objects which randomly fail to be detected at a particular node, or objects which are detected at one node but do not pass through any other node? Is any special action needed to enforce causality, i.e. positive time delays for transitions between nodes?
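In the sparse-detection regime mentioned above, the simplest estimator correlates the two detection trains directly. The sketch below (all data synthetic, all numbers invented for illustration) histograms every positive pairwise delay between detections at two nodes: genuine transits pile up near the true transit time, while unrelated pairs form a roughly flat background.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example (all numbers assumed): objects pass node A at random
# times, then reach node B after a transit delay drawn from Normal(5, 0.5).
t_A = np.cumsum(rng.exponential(scale=50.0, size=200))
t_B = np.sort(t_A + rng.normal(loc=5.0, scale=0.5, size=t_A.size))

# Sparse-detection estimator: histogram every positive pairwise delay
# t_B - t_A.  True transits pile up near the transit time; unrelated
# pairs contribute a roughly flat background.
delays = (t_B[None, :] - t_A[:, None]).ravel()
delays = delays[(delays > 0) & (delays < 20.0)]
hist, edges = np.histogram(delays, bins=80, density=True)
peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(f"estimated modal transit time: {peak:.2f}")
```

When detections are dense relative to the transit time, this direct histogram saturates with accidental pairs, which is exactly the regime where the convoy assumption discussed above becomes necessary.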

Mon, 11/03
10:00
Tim Blass (Carnegie Mellon University & OxPDE), Gibson 1st Floor SR
Please note the unusual day of the week for this workshop (a Monday) and also the unusual location.
Fri, 08/03
09:45
Nick Hall-Taylor (TBC), DH 1st floor SR
In vertical annular two-phase flow, large-amplitude waves ("disturbance waves") are the most significant means by which liquid is transported by the action of the gas phase. The presentation describes certain experimental results with the intention of defining a conceptual model suitable for possible mathematical interpretation. These large waves have been studied for over 50 years, but there has been little corresponding advance in the mathematical understanding of the phenomenon. The aim of the workshop is to discuss what analysis might be possible and how it might contribute to the understanding of the phenomena involved.
Fri, 01/03
10:00
Paul Duinveld (Philips), DH 1st floor SR
An overview will be given of several fluid-mechanical problems encountered in developing household appliances. We discuss examples including baby bottles, water treatment, irons and fruit juicers, and focus on oral health care, where a new air floss product will be discussed.
Fri, 22/02
10:00
Klim McPherson (Obstetrics & Gynaecology, Oxford), DH 1st floor SR

We wish to discuss the role of modelling in health care. Risk-factor prevalences vary and change with time, and it is difficult to anticipate the resulting change in disease incidence without accurately modelling the epidemiology. When the prevalences of obesity, tobacco use and salt intake, for example, are studied in detail, clear patterns emerge that can be extrapolated into the future. These can give rise to estimated probability distributions of these risk factors across age, sex, ethnicity, social class and other groups into the future. Microsimulation of individuals from defined populations (e.g. England 2012) can then estimate disease incidence, prevalence, death, costs and quality of life. Thus future health and other needs can be estimated, and interventions on these risk factors can be simulated for their population effect. Health policy can be better determined by a realistic characterisation of public health. The Foresight microsimulation modelling of the National Heart Forum (UK Health Forum) will be described. We will emphasise some of the mathematical and statistical issues associated with doing so.
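As a toy illustration of the microsimulation idea (this is not the Foresight model; every rate, prevalence and relative risk below is invented), one can sample a population of individuals carrying a risk-factor state and apply a relative risk to a baseline annual hazard:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal microsimulation sketch (all numbers invented): each individual
# carries a risk-factor state; annual disease risk is a baseline hazard
# scaled by a relative risk for that state.
n = 100_000
baseline_risk = 0.01                 # assumed annual incidence, normal weight
relative_risk_obese = 2.5            # assumed relative risk
p_obese = 0.25                       # assumed prevalence

state = rng.choice(["normal", "obese"], size=n, p=[1 - p_obese, p_obese])
risk = np.where(state == "obese", baseline_risk * relative_risk_obese,
                baseline_risk)
cases = rng.random(n) < risk         # one simulated year of incidence
print(f"simulated annual incidence: {cases.mean():.4f}")
```

An intervention is then simulated simply by changing the prevalence (here, `p_obese`) and rerunning, giving the population-level effect on incidence; a realistic model would of course track individuals over time with age- and sex-specific hazards.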

Fri, 15/02
10:00
Victoria Nockles (Department of Earth Sciences, University of Oxford), DH 1st floor SR

InSAR (Interferometric Synthetic Aperture Radar) is an important space-geodetic technique (i.e. a technique that uses satellite data to obtain measurements of the Earth) of great interest to geophysicists monitoring slip along fault lines and other changes to the shape of the Earth. InSAR works by using the difference between radar phase returns acquired at two different times to measure displacements of the Earth’s surface. Unfortunately, atmospheric noise and other problems mean that it can be difficult to use InSAR data to obtain clear measurements of displacement.

Persistent Scatterer (PS) InSAR is a later adaptation of InSAR that uses statistical techniques to identify pixels within an InSAR image that are dominated by a single back-scatterer, producing high-amplitude and stable phase returns (Ferretti et al. 2001, Hooper et al. 2004). PS InSAR has the advantage that it (hopefully) chooses the ‘better’ data points, but the disadvantage that it throws away a lot of the data that might have been available in the original InSAR signal.
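One standard PS selection criterion, the amplitude dispersion index of Ferretti et al. (2001), can be sketched as follows. The image stack here is entirely synthetic, and the 0.25 cut is the commonly quoted value rather than a recommendation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stack of N co-registered amplitude images (values synthetic): most
# pixels fluctuate strongly; a few "persistent scatterer" pixels have a
# stable, high amplitude across the stack.
n_images, n_pixels = 30, 1000
stack = rng.rayleigh(scale=1.0, size=(n_images, n_pixels))   # noisy pixels
ps_idx = rng.choice(n_pixels, size=20, replace=False)
stack[:, ps_idx] = 10.0 + rng.normal(0.0, 0.5, size=(n_images, 20))

# Amplitude dispersion index D_A = sigma_A / mu_A (Ferretti et al. 2001);
# low D_A is a proxy for stable phase, so threshold it to pick PS candidates.
d_a = stack.std(axis=0) / stack.mean(axis=0)
candidates = np.flatnonzero(d_a < 0.25)      # 0.25 is the commonly quoted cut
print(f"{candidates.size} PS candidates found")
```

This makes the trade-off in the abstract concrete: raising the threshold keeps more of the original data but admits noisier pixels, which is exactly the threshold-choice question posed for the workshop.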

InSAR and PS InSAR have typically been used in isolation to obtain slip-rates across faults, to understand the roles that faults play in regional tectonics, and to test models of continental deformation. But could they perhaps be combined? Or could PS InSAR be refined so that it doesn’t throw away as much of the original data? Or, perhaps, could the criteria used to determine what data are signal and what are noise be improved?

The key aim of this workshop is to describe and discuss the techniques and challenges associated with InSAR and PS InSAR (particularly the problem of atmospheric noise), and to look at possible methods for improvement, either by combining InSAR and PS InSAR or by improving how the thresholds are chosen.

Fri, 08/02
09:45
Darren Kavanagh (Department of Engineering Science, University of Oxford), DH 1st floor SR
Fri, 18/01
09:45
OCIAM Meeting
DH common room at 09:45 and from 10:00 in DH12
Fri, 23/11/2012
10:00
Andreas Duering (Archaeology, Oxford), DH 1st floor SR

The University of Oxford’s modelling4all software is a wonderful tool to simulate early medieval populations and their cemeteries in order to evaluate the influence of palaeodemographic variables, such as mortality, fertility, catastrophic events and disease on settlement dispersal. In my DPhil project I will study archaeological sites in Anglo-Saxon England and the German south-west in a comparative approach. The two regions have interesting similarities in their early medieval settlement pattern and include some of the first sites where both cemeteries and settlements were completely excavated.

An important discovery in bioarchaeology is that an excavated cemetery is not a straightforward representation of the living population. Preservation issues and the limitations of age and sex estimation methods using skeletal material must be considered. Moreover, the statistical procedures used to calculate the palaeodemographic characteristics of archaeological populations are procrustean. Agent-based models can help archaeologists to virtually bridge the chasm between the excavated dead populations and their living counterparts, in which we are really interested.

This approach leads far away from the archaeologist’s usual methods and ways of thinking, and the major challenge is therefore to balance innovative ideas with practicability and tangibility.

Some of the problems for the workshop are:

1.) Finding the best-fitting virtual living populations for the excavated cemeteries

2.) Sensitivity analyses of palaeodemographic variables

3.) General methodologies to evaluate the outcome of agent-based models

4.) Presenting data in a way that is statistically correct, up to date and clear for archaeologists like me

5.) Exploring how to include analytical procedures in the model, so as to present the archaeological community with a user-friendly and not overwhelming toolkit
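Problem 1 above can be framed as simulation-based inference. The sketch below is only an illustration of that framing (every number is invented, and a single exponential scale parameter stands in for a real mortality model): simulate a cemetery for each candidate parameter and keep the candidate whose simulated age-at-death histogram is closest to the excavated one, a minimal approximate-Bayesian-computation-style loop.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_cemetery(scale, n=2000):
    """Draw ages at death from an exponential stand-in for a mortality model."""
    return rng.exponential(scale=scale, size=n)

observed = simulate_cemetery(35.0)            # pretend this is the excavation
bins = np.linspace(0.0, 100.0, 21)
obs_hist = np.histogram(observed, bins=bins, density=True)[0]

def distance(scale):
    # L1 distance between simulated and "excavated" age-at-death histograms.
    sim_hist = np.histogram(simulate_cemetery(scale), bins=bins, density=True)[0]
    return np.abs(sim_hist - obs_hist).sum()

candidates = np.arange(20.0, 51.0, 1.0)
best = min(candidates, key=distance)
print(f"best-fitting mortality scale: {best}")
```

The same loop, rerun many times, also addresses problem 2: perturbing one palaeodemographic variable at a time and watching the distance respond is a crude but transparent sensitivity analysis.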


Fri, 16/11/2012
10:00
Owen Thomas (Thales Optronics), DH 1st floor SR

The task is to estimate approach time (time-to-go (TTG)) of non-ballistic threats (e.g. missiles) using passive infrared imagery captured from a sensor on the target platform (e.g. a helicopter). The threat information available in a frame of data is angular position and signal amplitude.

A Kalman filter approach is presented that is applied to example amplitude data to estimate TTG. Angular information alone is not sufficient to allow analysis of missile guidance dynamics to provide a TTG estimate: detection of the launch is required, as is additional information in the form of a terrain database to determine initial range. Parameters that relate to missile dynamics might include the proportional navigation constant and motor thrust. Differences between actual angular position observations and modelled values can be used to form an estimator for the parameter set, and thence for the TTG.
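To make the amplitude idea concrete, here is an illustrative sketch only (the model and every number are assumed; this is not the speaker's filter): if IR amplitude falls off as 1/range² and the closing speed is roughly constant, then y = 1/sqrt(amplitude) decays linearly in time, and a two-state (level, slope) Kalman filter on y yields TTG = -level/slope without any external range source.

```python
import numpy as np

dt, r0, v, k = 0.1, 3000.0, 300.0, 1.0e6     # assumed scenario constants
t = np.arange(0.0, 5.0, dt)
rng = np.random.default_rng(4)
amp = k / (r0 - v * t) ** 2 * (1 + 0.05 * rng.standard_normal(t.size))
y = 1.0 / np.sqrt(amp)                        # linear in time if v is constant

F = np.array([[1.0, dt], [0.0, 1.0]])         # constant-slope dynamics
H = np.array([[1.0, 0.0]])
Q = 1e-8 * np.eye(2)                          # process noise (assumed)
R = np.array([[1e-3]])                        # measurement noise (assumed)
x = np.array([y[0], 0.0])
P = np.eye(2)

for z in y:
    x, P = F @ x, F @ P @ F.T + Q             # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)       # update with new measurement
    P = (np.eye(2) - K @ H) @ P

ttg = -x[0] / x[1]                            # time until range reaches zero
print(f"estimated time-to-go: {ttg:.1f} s")
```

The 1/range² assumption is of course a crude stand-in for real IR signatures; the point of the sketch is only that amplitude supplies the range-like observable that angular data alone lacks.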

The question posed here is, "how can signal amplitude information be employed to establish observability in a state-estimation-based model of the angular data to improve TTG estimate performance without any other source of range information?"

Fri, 09/11/2012
09:45
Tyler Reddy (Department of Biochemistry), DH 1st floor SR

PLEASE NOTE EARLY START TIME TO AVOID CLASH WITH OCCAM GROUP MEETING

The human influenza A virus causes three to five million cases of severe illness and about 250 000 to 500 000 deaths each year. The 1918 Spanish Flu may have killed more than 40 million people. Yet the underlying cause of the seasonality of the human influenza virus, its preferential transmission in winter in temperate climates, remains controversial. One of the major forms of the human influenza virus is a sphere made up of lipids selectively derived from the host cell along with specialized viral proteins. I have employed molecular dynamics simulations to study the biophysical properties of a single transmissible unit: an approximately spherical influenza A virion in water (i.e., mimicking the water droplets present in normal transmission of the virus). The surface area per lipid cannot be calculated simply as the ratio of the surface area of the sphere to the number of lipids present, since there are many different species of lipid for which different surface-area values should be calculated. The 'mosaic' of lipid surface areas may be regarded quantitatively as a Voronoi diagram, but construction of a true spherical Voronoi tessellation is more challenging than the well-established methods for planar Voronoi diagrams. I describe my attempt to implement an approach to the spherical Voronoi problem (based on: Hyeon-Suk Na, Chung-Nim Lee, Otfried Cheong. Computational Geometry 23 (2002) 183–194) and the challenges that remain in the implementation of this algorithm.
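The per-lipid area computation can be sketched with synthetic points (not real lipid coordinates). Note that SciPy's `SphericalVoronoi` (available since SciPy 0.18, with region areas since 1.5) uses convex-hull duality rather than the Na/Lee/Cheong sweep cited above, but it solves the same tessellation problem:

```python
import numpy as np
from scipy.spatial import SphericalVoronoi

rng = np.random.default_rng(5)

# Scatter "lipid headgroup" positions on a sphere (synthetic stand-ins for
# the virion's lipid coordinates), build the spherical Voronoi diagram, and
# read off one surface-area patch per lipid.
n_lipids, radius = 200, 50.0
pts = rng.standard_normal((n_lipids, 3))
pts *= radius / np.linalg.norm(pts, axis=1, keepdims=True)  # project to sphere

sv = SphericalVoronoi(pts, radius=radius)
sv.sort_vertices_of_regions()            # order each region's vertices
areas = sv.calculate_areas()             # one spherical patch per input point

# Sanity check: the patches tile the sphere exactly.
print(f"total area / (4*pi*r^2) = {areas.sum() / (4 * np.pi * radius**2):.6f}")
```

The different lipid species then simply correspond to grouping `areas` by species label, giving the per-species surface-area distributions the abstract asks for.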

Fri, 02/11/2012
10:00
various (Industry), DH 1st floor SR

This is the session for our industrial sponsors to propose project ideas. Academic staff are requested to attend to help shape the problem statements and to suggest suitable internal supervisors for the projects. 

Fri, 19/10/2012
10:00
Visitor (Maths, Oxford), DH 1st floor SR

Links between:

• storm tracks, sediment movement and an icy environment

• fluvial flash flooding and coastal erosion in the UK

Did you know that the recent Japanese, Chilean and Samoan tsunamis all led to strong currents from resonance at the opposite end of the ocean?

Journey around the world, from the north Atlantic to the south Pacific, on a quest to explore and explain the maths of nature.

Fri, 12/10/2012
09:45
Coffee in DH common room at 09:30, DH 3rd floor SR
Fri, 01/06/2012
10:00
Andy Stove (Thales UK), DH 1st floor SR

The issue of resource management arises with any sensor that can sense only part of its total field of view at any one time, or that has a number of operating modes, or both.

A very simple example is a camera with a telephoto lens. The photographer has to decide what to photograph, and whether to zoom in to get high resolution on part of the scene or zoom out to see more of it. Very similar issues apply, of course, to electro-optical sensors (visible-light or infra-red 'TV' cameras) and to radars.

The subject has, perhaps, been most extensively studied in relation to multi-mode/multi-function radars, where approaches such as neural networks, genetic algorithms and auction mechanisms have been proposed, as well as more deterministic management schemes, but the methods which have actually been implemented have been much more primitive.

The use of multiple, disparate sensors on multiple mobile, especially airborne, platforms adds further degrees of freedom to the problem, an extension that is of growing interest.

The presentation will briefly review the problem for both the single-sensor and the multi-platform cases, and some of the approaches which have been proposed, and will highlight the remaining current problems.

Fri, 25/05/2012
11:00
David Howey (Department of Engineering Science, University of Oxford), DH 1st floor SR

Please note the unusual start-time.

In order to run accurate electrochemical models of batteries (and other devices) it is necessary to know a priori the values of many geometric, electrical and electrochemical parameters (10-100 parameters), e.g. diffusion coefficients, electrode thicknesses etc. However, a basic difficulty is that the only external measurements that can be made on cells without deconstructing and destroying them are surface temperature plus electrical measurements (voltage, current, impedance) at the terminals. An interesting research challenge therefore is the accurate, robust estimation of physically realistic model parameters based only on external measurements of complete cells. System identification techniques (from control engineering), including ‘electrochemical impedance spectroscopy’ (EIS), i.e. small-signal frequency response measurement, may be applied here. However, it is not clear exactly why and how impedance correlates with state of charge (SOC), state of health (SOH) and temperature for each battery chemistry, due to the complex interaction between impedance, degradation and temperature.
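The identification step can be illustrated with a deliberately simplified stand-in (the circuit and every value below are assumed, a basic Randles-type cell rather than a full electrochemical model): generate a synthetic impedance spectrum, then recover the parameters by nonlinear least squares on the real and imaginary parts.

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(params, w):
    # Simplified Randles cell: series resistance plus a parallel
    # charge-transfer resistance / double-layer capacitance branch.
    r_s, r_ct, c_dl = params
    return r_s + r_ct / (1 + 1j * w * r_ct * c_dl)

w = 2 * np.pi * np.logspace(-2, 4, 60)           # 10 mHz .. 10 kHz (assumed)
true = np.array([0.05, 0.20, 1.5])                # R_s [ohm], R_ct [ohm], C_dl [F]
rng = np.random.default_rng(6)
z_meas = z_model(true, w) + 1e-4 * (rng.standard_normal(w.size)
                                    + 1j * rng.standard_normal(w.size))

def residual(params):
    z = z_model(params, w) - z_meas
    return np.concatenate([z.real, z.imag])       # stack Re/Im for the solver

fit = least_squares(residual, x0=[0.1, 0.1, 1.0], bounds=(0, np.inf))
print("estimated [R_s, R_ct, C_dl]:", np.round(fit.x, 4))
```

The hard question in the abstract is precisely where this sketch breaks down: with 10-100 coupled physical parameters instead of three circuit elements, the fit becomes ill-conditioned and many parameter sets reproduce the same terminal impedance.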

I will give a brief overview of some of the recent work in this area and try to explain some of the challenges in the hope that this will lead to a fruitful discussion about whether this problem can be solved or not and how best to tackle it.

Fri, 11/05/2012
09:30
Chair: Jon Chapman, DH 3rd floor SR
Fri, 04/05/2012
10:00
Gary Barnes (ARKeX), DH 1st floor SR

ARKeX is a geophysical exploration company that conducts airborne gravity gradiometer surveys for the oil industry. By measuring the variations in the gravity field it is possible to infer valuable information about the sub-surface geology and help find prospective areas.

A new type of gravity gradiometer instrument is being developed to have higher resolution than the current technology. The basic operating principles are fairly simple: essentially, measuring the relative displacement of two proof masses in response to a change in the gravity field. The challenge is to be able to see typical signals from geological features in the presence of large amounts of motional noise due to the aircraft. Fortunately, by making a gradient measurement, a lot of this noise is cancelled by the instrument itself. However, due to engineering tolerances, the instrument is not perfect and residual interference remains in the measurement.

Accelerometers and gyroscopes record the motional disturbances and can be used to mathematically model how the noise appears in the instrument and remove it during a software processing stage. To achieve this, we have employed methods taken from the field of system identification to produce models having typically 12 inputs and a single output. Generally, the models contain linear transfer functions that are optimised during a training stage where controlled accelerations are applied to the instrument in the absence of any anomalous gravity signal. After training, the models can be used to predict and remove the noise from data sets that contain signals of interest.
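A minimal stand-in for that training stage is sketched below (three inputs rather than twelve, FIR rather than general linear transfer functions, and every coefficient invented): regress the gradiometer output on lagged accelerometer channels recorded with no gravity anomaly present, then use the fitted filters to predict and subtract the motion noise.

```python
import numpy as np

rng = np.random.default_rng(7)

n, n_in, n_taps = 5000, 3, 8
u = rng.standard_normal((n, n_in))               # accelerometer channels
true_fir = rng.standard_normal((n_in, n_taps))   # unknown instrument response

def noise_from(u_sig):
    # Motion noise reaching the output: each input filtered by its FIR response.
    out = np.zeros(len(u_sig))
    for i in range(n_in):
        out += np.convolve(u_sig[:, i], true_fir[i])[: len(u_sig)]
    return out

# Training data: pure motion noise, no anomalous gravity signal present.
y_train = noise_from(u) + 0.01 * rng.standard_normal(n)

# Build the lagged regressor matrix and solve for all FIR taps at once.
X_full = np.column_stack([np.roll(u[:, i], lag)
                          for i in range(n_in) for lag in range(n_taps)])
X, y = X_full[n_taps:], y_train[n_taps:]         # drop roll wrap-around rows
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

removed = 1 - np.var(y - X @ coef) / np.var(y)
print(f"fraction of motion-noise power removed: {removed:.4f}")
```

After training, the same `X @ coef` prediction is subtracted from survey data that does contain gravity signal; the non-linear and non-stationary extensions mentioned below would replace the fixed FIR taps with time-varying or non-linear maps of the same inputs.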

High levels of accuracy are required in the noise correction schemes to achieve the levels of data quality required for airborne exploration. We are therefore investigating ways to improve on our existing methods, or find alternative techniques. In particular, we believe non-linear and non-stationary models show benefits for this situation.

Fri, 27/04/2012
10:00
DH 3rd floor SR