Forthcoming events in this series


Fri, 14 Jun 2013

09:45 - 11:00

TBA

Abstract

Note early start to avoid a clash with the OCCAM group meeting.

Fri, 07 Jun 2013

10:00 - 11:00
DH 1st floor SR

Microelectromechanical Systems, Inverse Eigenvalue Analysis and Nonlinear Lattices

Bhaskar Choubey
(Department of Engineering Science, University of Oxford)
Abstract

Collective behaviours of coupled linear and nonlinear resonators have been of interest to engineers as well as mathematicians for a long time. In this presentation, using the example of coupled resonant nano-sensors (which leads to a linear pencil with a Jacobi matrix), I will show how the previously feared and often avoided coupling between nano-devices, along with their weak nonlinear behaviour, can be used with inverse eigenvalue analysis to design multiple-input single-output nano-sensors. We are using these matrices in designing micro/nano-electromechanical systems, particularly resonant sensors capable of measuring very small masses for use as environmental as well as biomedical monitors. With improvements in fabrication technology, we can design and build several such sensors on one substrate. However, this leads to challenges in interfacing them and introduces undesired parasitic coupling. More importantly, increased nonlinearity is observed as these sensors reduce in size. This also presents an opportunity, however, to experimentally study chains or matrices of coupled linear and/or nonlinear structures, both to develop new sensing modalities and to experimentally verify theoretically or numerically predicted results. The challenge for us now is to identify sensing modalities with chains of linear or nonlinear resonators coupled either linearly or nonlinearly. We are currently exploring chains of Duffing resonators, van der Pol oscillators and FPU-type lattices.
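
As a rough illustration of the linear setting (a minimal sketch of my own, not the speaker's code, with all values assumed), a chain of weakly coupled resonators has a symmetric tridiagonal (Jacobi) stiffness matrix whose eigenvalues give the squared natural frequencies; inverse eigenvalue analysis runs this computation in reverse, from a target spectrum back to a realising Jacobi matrix:

```python
# Forward problem for a chain of N nominally identical resonators with
# nearest-neighbour coupling (all stiffness values assumed for illustration).
import numpy as np

N = 5        # number of coupled resonators
k = 1.0      # anchor stiffness (normalised)
kc = 0.05    # weak coupling stiffness (normalised)

# Jacobi (symmetric tridiagonal) stiffness matrix of the chain; with unit
# masses the pencil is (K - w^2 I), so eigenvalues of K are the squared
# natural frequencies.
K = (np.diag((k + 2 * kc) * np.ones(N))
     + np.diag(-kc * np.ones(N - 1), 1)
     + np.diag(-kc * np.ones(N - 1), -1))

evals = np.linalg.eigvalsh(K)
print("squared natural frequencies:", evals)
# The design problem is the reverse: choose a target spectrum and
# reconstruct a Jacobi matrix realising it (e.g. via the Lanczos process).
```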

Fri, 31 May 2013

10:00 - 11:15
DH 1st floor SR

Understanding Composite Hydrophones' Sensitivity at Low Frequency

Mike Clifton
(Thales UK (Underwater Systems))
Abstract

In order to reduce cost, the MOD is attempting to reduce the number of array types fitted to its assets. There is also a requirement for the arrays to increase their frequency coverage. A wide bandwidth capability is thus needed from a single array. The need for high sensitivity and comparatively high frequencies of operation has led to the view that 1-3 composites are suitable hydrophones for this purpose. These hydrophones are used widely in ultrasonics, but are not generally used down to the frequencies of the new arrays.

Experimental work using a single hydrophone (small in terms of wavelengths) has shown that the sensitivity drops significantly as the frequency approaches the bottom of the required band, and then recovers as the frequency reduces further. Complex computer modelling appears to suggest the loss in sensitivity is due to a "lateral mode" in which the hydrophone "breathes" in and out. In order to engineer a solution, the mechanics causing this problem and the associated material parameters need to be identified (e.g. is changing the 1-3 filler material the best option?). To achieve this understanding, a mathematical model of the 1-3 composite hydrophone (ceramic pegs and filler) is required that can explain why the hydrophone changes from simple compression and expansion in the direction of travel of the wave front to a lateral "breathing" mode.
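
As a purely speculative caricature of that mechanism (my own two-degree-of-freedom toy with assumed parameters, not a Thales model), coupling the driven thickness motion to a second "lateral" mode already produces an anti-resonance where the driven response, and hence the sensitivity, dips and then recovers at lower frequency:

```python
# Two coupled masses: mass 1 is driven (thickness motion), mass 2 stands in
# for a lateral "breathing" mode. All parameters are assumed, for shape only.
import numpy as np

m1, m2 = 1.0, 0.5            # modal masses (normalised)
k1, k2, kc = 1.0, 0.3, 0.2   # modal and coupling stiffnesses (normalised)

omega = np.linspace(0.05, 2.0, 400)
response = []
for w in omega:
    # Dynamic stiffness matrix; unit drive applied to mass 1
    D = np.array([[k1 + kc - m1 * w**2, -kc],
                  [-kc, k2 + kc - m2 * w**2]])
    x = np.linalg.solve(D, np.array([1.0, 0.0]))
    response.append(abs(x[0]))

# |x1| dips to (nearly) zero between the two coupled resonances: drive
# energy is diverted into the lateral mode, mimicking the sensitivity loss.
print("response minimum near omega =", omega[int(np.argmin(response))])
```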

More details available from @email

Fri, 24 May 2013

10:00 - 11:15
DH 1st floor SR

Flash Sintering

Richard Todd
(Dept. of Materials)
Abstract

“Flash sintering” is a process reported by R Raj and co-workers in which very rapid densification of a ceramic powder compact is achieved by the passage of an electrical current through the specimen. Full density can be achieved in a few seconds (sintering normally takes several hours) and at furnace temperatures several hundred kelvin below those required with conventional sintering. The name of the process comes from a runaway power spike observed at the point of sintering. Although Raj acknowledges that Joule heating plays a role in the process, he and his co-authors claim that it is of minor importance and that entirely new physical effects must also be involved. However, the existence and possible relevance of these other effects of the electric field/current remain controversial. The aim of this workshop is to introduce the subject and to stimulate discussion of how mathematics could shed light on some of the factors that are difficult to measure and understand experimentally.
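
For orientation, here is a lumped-parameter sketch of the Joule-heating runaway alone (all numbers assumed; whether such heating suffices to explain flash sintering is precisely the controversy above): with an Arrhenius conductivity, constant-voltage dissipation V²/R(T) can outrun heat loss to the furnace, producing a power spike.

```python
# Minimal thermal-runaway model: Arrhenius resistance, lumped heat balance.
# All parameter values are assumed for illustration only.
import numpy as np

V = 200.0          # applied voltage [V]
R0 = 1e4           # specimen resistance at furnace temperature [ohm]
Ea = 1.0           # activation energy [eV]
kB = 8.617e-5      # Boltzmann constant [eV/K]
h = 5e-3           # lumped heat-loss coefficient [W/K]
C = 5.0            # heat capacity [J/K]
T_f = 1200.0       # furnace temperature [K]

def resistance(T):
    # Resistance falls steeply as the specimen heats up
    return R0 * np.exp((Ea / kB) * (1.0 / T - 1.0 / T_f))

T, dt = T_f, 0.02
for step in range(20000):
    power_in = V**2 / resistance(T)   # Joule heating at constant voltage
    power_out = h * (T - T_f)         # losses back to the furnace
    T += dt * (power_in - power_out) / C
    if T > 3000.0:
        print(f"runaway (flash) after ~{step * dt:.0f} s simulated")
        break
```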

Fri, 26 Apr 2013

10:00 - 11:15
DH 3rd floor SR

Analysis of travel patterns from departure and arrival times

Charles Offer
(Thales UK)
Abstract

Please note the change of venue!

Suppose there is a system in which certain objects move through a network. The objects are detected only when they pass through a sparse set of points in the network. For example, the objects could be vehicles moving along a road network, observed by a radar or other sensor as they pass through (or originate or terminate at) certain key points, but not observed continuously and tracked as they travel from one point to another. Alternatively, they could be data packets in a computer network. The detections record only the time at which an object passes by, and contain no identity information that would trivially allow the movement of an individual object from one point to another to be deduced. It is desired to determine the statistics of the movement of the objects through the network: if an object passes through point A at a certain time, what is the probability density for the same object passing through a point B at a later time?

The system might perhaps be represented by a graph, with a node at each point where detections are made. The detections at each node can be represented by a signal as a function of time, where the signal is a superposition of delta functions (one per detection). The statistics of the movement of objects between nodes must be deduced from the correlations between the signals at each node. The problem is complicated by the possibility that a given object might move between two nodes along several alternative routes (perhaps via other nodes or perhaps not), or might travel along the same route but with several alternative speeds.

What prior knowledge about the network, or constraints on the signals, is needed to make this problem solvable? Is it necessary to know the connections between the nodes, or the pdfs for the transition time between nodes, a priori, or can these be deduced? What conditions are needed on the information content of the signals? (If detections are very sparse on the time scale for passage through the network, the transition probabilities can be built up by considering each cascade of detections independently; if detections are dense, it will presumably be necessary to assume that objects do not move through the network independently, but instead tend to form convoys that are apparent as a pattern of detections persisting, on average, for some distance.) What limits are there on the noise in the signal or the amount of unwanted signal, i.e. false detections, objects which randomly fail to be detected at a particular node, or objects which are detected at one node but do not pass through any other? Is any special action needed to enforce causality, i.e. positive time delays for transitions between nodes?
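
One concrete version of the correlation idea (my own construction, not Thales'): treat the detections at two nodes as spike trains and histogram all pairwise positive delays; true transits pile up as a peak over a flat background of accidental pairings.

```python
# Estimate the A-to-B transition-time pdf from unlabelled detection times.
# The ground-truth delay distribution (mean 60 s, sd 5 s) is assumed, to
# generate test data.
import numpy as np

rng = np.random.default_rng(0)

t_A = np.cumsum(rng.exponential(30.0, size=400))           # Poisson passages at A
t_B = np.sort(t_A + rng.normal(60.0, 5.0, size=t_A.size))  # same objects at B

# All pairwise delays, keeping only causal ones within a search window
delays = (t_B[None, :] - t_A[:, None]).ravel()
delays = delays[(delays > 0) & (delays < 200)]

hist, edges = np.histogram(delays, bins=100)
print(f"estimated transit time ~ {edges[np.argmax(hist)]:.0f} s")  # ~60 s
```

Dense detections, route ambiguity and false alarms all raise the accidental background relative to the peak, which is one way of phrasing the solvability questions above.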

Mon, 11 Mar 2013

10:00 - 12:00
Gibson 1st Floor SR

Dislocations

Tim Blass
(Carnegie Mellon University & OxPDE)
Abstract

Please note the unusual day of the week for this workshop (a Monday) and also the unusual location.

Fri, 08 Mar 2013

09:45 - 11:00
DH 1st floor SR

Experimental results in two-phase flow

Nick Hall-Taylor
(TBC)
Abstract

In vertical annular two-phase flow, large-amplitude waves ("disturbance waves") are the most significant means by which the liquid is transported by the action of the gas phase. The presentation covers a selection of experimental results, with the intention of defining a conceptual model suitable for mathematical interpretation.

These large waves have been studied for over 50 years but there has been little corresponding advance in the mathematical understanding of the phenomenon.

The aim of the workshop is to discuss what analysis might be possible and how this might contribute to the understanding of the phenomena involved.

Fri, 01 Mar 2013

10:00 - 11:15
DH 1st floor SR

The fluid mechanics of household appliances: a fascinating world!

Paul Duinveld
(Philips)
Abstract

An overview will be given of several fluid mechanical problems arising in the development of household appliances. We discuss examples including baby bottles, water treatment, irons and fruit juicers, and then focus on oral health care, where a new air floss product will be discussed.

Fri, 22 Feb 2013

10:00 - 11:37
DH 1st floor SR

Modelling chronic diseases and their consequences into the future reliably and usefully

Klim McPherson
(Obstetrics & Gynaecology, Oxford)
Abstract

We wish to discuss the role of modelling in health care. Because risk factor prevalences vary and change with time, it is difficult to anticipate the resulting change in disease incidence without accurately modelling the epidemiology. When the prevalences of obesity, tobacco use and salt intake, for example, are studied in detail, clear patterns emerge that can be extrapolated into the future. These give rise to estimated probability distributions of the risk factors across age, sex, ethnicity, social class and other groupings into the future. Microsimulation of individuals from defined populations (e.g. England 2012) can then estimate disease incidence, prevalence, death, costs and quality of life. Thus future health and other needs can be estimated, and interventions on these risk factors can be simulated for their population effect. Health policy can be better determined by a realistic characterisation of public health. The Foresight microsimulation modelling of the National Heart Forum (UK Health Forum) will be described. We will emphasise some of the mathematical and statistical issues associated with so doing.
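
A toy microsimulation in the spirit described, far simpler than the Foresight model (population size, risk factor distribution, drift and dose-response are all assumed):

```python
# Individuals carry a risk factor (BMI) whose distribution drifts over time;
# annual disease incidence depends on it, and new cases are counted per year.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                              # simulated individuals

bmi = rng.normal(27.0, 4.0, n)           # assumed baseline distribution

def annual_incidence(bmi):
    # Assumed log-linear dose-response above a threshold of 22
    return 0.002 * np.exp(0.08 * np.maximum(bmi - 22.0, 0.0))

diseased = np.zeros(n, dtype=bool)
for year in range(2012, 2032):
    bmi += rng.normal(0.1, 0.05, n)      # assumed secular drift in the risk factor
    new = (~diseased) & (rng.random(n) < annual_incidence(bmi))
    diseased |= new
    print(year, int(new.sum()), "new cases")
```

An intervention is then simulated by altering the drift or the baseline distribution and re-running.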

Fri, 15 Feb 2013

10:00 - 11:15
DH 1st floor SR

Investigating continental deformation using InSAR

Victoria Nockles
(Department of Earth Sciences, University of Oxford)
Abstract

InSAR (Interferometric Synthetic Aperture Radar) is an important space geodetic technique (i.e. a technique that uses satellite data to obtain measurements of the Earth) of great interest to geophysicists monitoring slip along fault lines and other changes to the shape of the Earth. InSAR works by using the difference in radar phase returns acquired at two different times to measure displacements of the Earth’s surface. Unfortunately, atmospheric noise and other problems mean that it can be difficult to use InSAR data to obtain clear measurements of displacement.
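
For orientation, the basic repeat-pass measurement relation (a standard result, not specific to this talk; wavelength and sign convention vary by mission): unwrapped phase change maps to line-of-sight displacement through the radar wavelength.

```python
# Phase-to-displacement conversion for repeat-pass InSAR.
import numpy as np

wavelength = 0.056            # C-band wavelength [m] (e.g. Envisat; assumed here)
delta_phase = np.pi / 2       # unwrapped interferometric phase change [rad]

# Round trip: half a wavelength of ground motion gives a full 2*pi cycle
los_displacement = -wavelength / (4 * np.pi) * delta_phase
print(f"line-of-sight displacement: {los_displacement * 1e3:.1f} mm")  # ~ -7 mm
```

Atmospheric delay enters the same phase term, which is why it masquerades as displacement.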

Persistent Scatterer (PS) InSAR is a later adaptation of InSAR that uses statistical techniques to identify pixels within an InSAR image that are dominated by a single backscatterer, producing high-amplitude and stable phase returns (Ferretti et al. 2001, Hooper et al. 2004). PS InSAR has the advantage that it (hopefully) chooses the ‘better’ data points, but the disadvantage that it throws away a lot of the data that might have been available in the original InSAR signal.

InSAR and PS InSAR have typically been used in isolation to obtain slip-rates across faults, to understand the roles that faults play in regional tectonics, and to test models of continental deformation. But could they perhaps be combined? Or could PS InSAR be refined so that it doesn’t throw away as much of the original data? Or, perhaps, could the criteria used to determine what data are signal and what are noise be improved?

The key aim of this workshop is to describe and discuss the techniques and challenges associated with InSAR and PS InSAR (particularly the problem of atmospheric noise), and to look at possible methods for improvement, whether by combining InSAR and PS InSAR or by improving how the signal/noise thresholds are chosen.

Fri, 18 Jan 2013

09:45 - 11:00

DH12 Alan Tayler Room

OCIAM Meeting
Abstract

DH common room at 09:45 and from 10:00 in DH12

Fri, 23 Nov 2012

10:00 - 11:30
DH 1st floor SR

Virtual Anglo-Saxons. Agent-based modelling in archaeology and palaeodemography

Andreas Duering
(Archaeology, Oxford)
Abstract

The University of Oxford’s modelling4all software is a wonderful tool for simulating early medieval populations and their cemeteries in order to evaluate the influence of palaeodemographic variables, such as mortality, fertility, catastrophic events and disease, on settlement dispersal. In my DPhil project I will study archaeological sites in Anglo-Saxon England and the German south-west in a comparative approach. The two regions have interesting similarities in their early medieval settlement patterns and include some of the first sites where both cemeteries and settlements were completely excavated.

An important discovery in bioarchaeology is that an excavated cemetery is not a straightforward representation of the living population. Preservation issues and the limitations of age and sex estimation methods using skeletal material must be considered. Moreover, the statistical procedures used to calculate the palaeodemographic characteristics of archaeological populations are procrustean. Agent-based models can help archaeologists to virtually bridge the chasm between the excavated dead populations and the living counterparts in which we are really interested.
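
A minimal sketch of that chasm (my own toy, not the modelling4all model; all rates assumed): simulate a living population forward, bury the dead, then degrade the burial record to mimic preservation and excavation.

```python
# Toy birth-death microsimulation with an imperfect cemetery record.
import random

random.seed(42)

# Assumed age-specific annual death probabilities (illustrative, not data)
hazard = {0: 0.06, 5: 0.01, 15: 0.015, 45: 0.04, 60: 0.12}

def death_prob(age):
    return hazard[max(band for band in hazard if band <= age)]

living, cemetery = [], []
for year in range(200):                            # two simulated centuries
    living = [a + 1 for a in living] + [0] * 12    # 12 births per year (assumed)
    survivors = []
    for age in living:
        (cemetery if random.random() < death_prob(age) else survivors).append(age)
    living = survivors

# Excavation recovers only a fraction of burials (assumed 60% preservation)
recovered = [age for age in cemetery if random.random() < 0.6]
print(len(cemetery), "buried;", len(recovered), "recovered")
```

Fitting then means searching the space of assumed hazards and fertility for simulations whose recovered age-at-death distribution matches the excavated one.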

This approach leads far away from the archaeologist’s usual methods and ways of thinking, and the major challenge is therefore to balance innovative ideas with practicability and tangibility.

Some of the problems for the workshop are:

1.) Finding the best fitting virtual living populations for the excavated cemeteries

2.) Sensitivity analyses of palaeodemographic variables

3.) General methodologies to evaluate the outcome of agent-based models

4.) Presenting data in a way that is statistically correct, up to date and clear for archaeologists like me

5.) Exploring how to include analytical procedures in the model, to present the archaeological community with a user-friendly and not overwhelming toolkit


Fri, 16 Nov 2012

10:00 - 13:00
DH 1st floor SR

Time-To-Go Estimation

Owen Thomas
(Thales Optronics)
Abstract

The task is to estimate the approach time (time-to-go, TTG) of non-ballistic threats (e.g. missiles) using passive infrared imagery captured from a sensor on the target platform (e.g. a helicopter). The threat information available in a frame of data is angular position and signal amplitude.

A Kalman filter approach is presented that is applied to example amplitude data to estimate TTG. Angular information alone is not sufficient to allow analysis of missile guidance dynamics to provide a TTG estimate: detection of the launch is required, as is additional information in the form of a terrain database to determine initial range. Parameters that relate to missile dynamics might include the proportional navigation constant and motor thrust. Differences between actual angular position observations and modelled values can be used to form an estimator for the parameter set, and thence for the TTG.

The question posed here is, "how can signal amplitude information be employed to establish observability in a state-estimation-based model of the angular data to improve TTG estimate performance without any other source of range information?"
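
One hedged idea along those lines (my own construction, not the Thales filter): for a point source the received irradiance scales as A ∝ 1/R², so s = 1/√A is proportional to range, and a constant-velocity Kalman filter on s yields TTG = -s/ṡ, with the unknown proportionality constant cancelling in the ratio.

```python
# Constant-velocity Kalman filter on s = 1/sqrt(amplitude); all noise levels,
# frame rate and engagement geometry are assumed for illustration.
import numpy as np

rng = np.random.default_rng(3)
dt = 0.1                                  # frame interval [s]
R0, v = 3000.0, -200.0                    # true initial range [m], closing speed [m/s]

F = np.array([[1.0, dt], [0.0, 1.0]])     # state: [s, s_dot]
Q = np.diag([1e-4, 1e-4])                 # process noise (assumed)
H = np.array([[1.0, 0.0]])
Rm = np.array([[400.0]])                  # measurement noise on s (assumed)

x = np.array([0.0, 0.0])
P = np.diag([1e6, 1e4])                   # diffuse initial uncertainty

for k in range(100):
    R_true = R0 + v * k * dt
    amp = (1.0 / R_true**2) * (1 + 0.02 * rng.standard_normal())  # noisy A ~ 1/R^2
    s = 1.0 / np.sqrt(amp)                # proportional to range, scale unknown

    x = F @ x                             # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rm)   # update
    x = x + (K @ (np.array([s]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"TTG estimate: {-x[0] / x[1]:.1f} s")   # truth here: ~5.1 s
```

The 1/R² assumption is itself questionable for an extended plume with atmospheric attenuation, which is where the observability question above bites.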

Fri, 09 Nov 2012

09:45 - 11:00
DH 1st floor SR

Tracking lipid surface area in the human influenza A virus

Tyler Reddy
(Department of Biochemistry)
Abstract

PLEASE NOTE EARLY START TIME TO AVOID CLASH WITH OCCAM GROUP MEETING

The human influenza A virus causes three to five million cases of severe illness and about 250 000 to 500 000 deaths each year. The 1918 Spanish Flu may have killed more than 40 million people. Yet the underlying cause of the seasonality of the human influenza virus, its preferential transmission in winter in temperate climates, remains controversial. One of the major forms of the human influenza virus is a sphere made up of lipids selectively derived from the host cell along with specialized viral proteins. I have employed molecular dynamics simulations to study the biophysical properties of a single transmissible unit: an approximately spherical influenza A virion in water (i.e., mimicking the water droplets present in normal transmission of the virus). The surface area per lipid cannot be calculated as a simple ratio of the surface area of the sphere to the number of lipids present, as there are many different species of lipid for which different surface area values should be calculated. The 'mosaic' of lipid surface areas may be regarded quantitatively as a Voronoi diagram, but construction of a true spherical Voronoi tessellation is more challenging than the well-established methods for planar Voronoi diagrams. I describe my attempt to implement an approach to the spherical Voronoi problem (based on Hyeon-Suk Na, Chung-Nim Lee and Otfried Cheong, Computational Geometry 23 (2002) 183–194) and the challenges that remain in the implementation of this algorithm.
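
For readers wanting to experiment, spherical Voronoi support later became available in SciPy (scipy.spatial.SphericalVoronoi, from SciPy 0.18; per-region areas from 1.5), so a version of the computation described can be sketched as:

```python
# Surface area per point on a sphere via a spherical Voronoi tessellation.
import numpy as np
from scipy.spatial import SphericalVoronoi

rng = np.random.default_rng(7)

# Stand-in "lipid" positions: random points on a unit sphere
pts = rng.standard_normal((500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

sv = SphericalVoronoi(pts, radius=1.0)
sv.sort_vertices_of_regions()
areas = sv.calculate_areas()              # one Voronoi cell area per point

print(f"total area: {areas.sum():.4f} (sphere: {4 * np.pi:.4f})")
print(f"mean area per point: {areas.mean():.5f}")
```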

Fri, 02 Nov 2012

10:00 - 12:33
DH 1st floor SR

MSc project proposals

various
(Industry)
Abstract

This is the session for our industrial sponsors to propose project ideas. Academic staff are requested to attend to help shape the problem statements and to suggest suitable internal supervisors for the projects. 

Fri, 19 Oct 2012

10:00 - 11:31
DH 1st floor SR

From Patterns to Modelling - Mathmagics in Land, Sea and Sky: What We Know, Don't Know and What We Think

Visitor
(Maths, Oxford)
Abstract

Links between:

• storm tracks, sediment movement and an icy environment

• fluvial flash flooding to coastal erosion in the UK

Did you know that the recent Japanese, Chilean and Samoan tsunamis all led to strong currents from resonance at the opposite end of the ocean?

Journey around the world, from the north Atlantic to the south Pacific, on a quest to explore and explain the maths of nature.

Fri, 01 Jun 2012

10:00 - 12:30
DH 1st floor SR

Sensor Resource Management

Andy Stove
(Thales UK)
Abstract

The issue of resource management arises with any sensor which can sense only a part of its total field of view at any one time, or which has a number of operating modes, or both.

A very simple example is a camera with a telephoto lens.  The photographer has to decide what he is going to photograph, and whether to zoom in to get high resolution on a part of the scene, or zoom out to see more of the scene.  Very similar issues apply, of course, to electro-optical sensors (visible light or infra-red 'TV' cameras) and to radars.

The subject has, perhaps, been most extensively studied in relation to multi-mode/multi-function radars, where approaches such as neural networks, genetic algorithms and auction mechanisms have been proposed, as well as more deterministic management schemes, but the methods which have actually been implemented have been much more primitive.
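
As a cartoon of one of those approaches, an auction mechanism (my own toy formulation, not an implemented scheme): each task bids for the next dwell according to how stale its information has become, and the highest bidder wins.

```python
# Greedy auction for sensor dwells: bids grow with time since last service.
# The task set and growth rates are assumed for illustration.
tasks = {"track_1": 4.0, "track_2": 1.5, "surveillance": 2.0}  # staleness growth
staleness = {name: 1.0 for name in tasks}

schedule = []
for dwell in range(10):
    winner = max(staleness, key=staleness.get)   # highest bid wins the dwell
    schedule.append(winner)
    staleness[winner] = 0.0                      # serviced task resets
    for name, rate in tasks.items():
        if name != winner:
            staleness[name] += rate              # unserviced tasks grow staler

print(schedule)   # fast-growing tasks capture dwells more often
```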

The use of multiple, disparate sensors on multiple mobile, especially airborne, platforms adds further degrees of freedom to the problem, an extension which is of growing interest.

The presentation will briefly review the problem for both the single-sensor and the multi-platform cases, and some of the approaches which have been proposed, and will highlight the remaining current problems.

Fri, 25 May 2012

11:00 - 12:30
DH 1st floor SR

Parameter estimation for electrochemical cells

David Howey
(Department of Engineering Science, University of Oxford)
Abstract

Please note the unusual start-time.

In order to run accurate electrochemical models of batteries (and other devices) it is necessary to know a priori the values of many geometric, electrical and electrochemical parameters (10-100 parameters), e.g. diffusion coefficients, electrode thicknesses etc. However, a basic difficulty is that the only external measurements that can be made on cells without deconstructing and destroying them are surface temperature plus electrical measurements (voltage, current, impedance) at the terminals. An interesting research challenge is therefore the accurate, robust estimation of physically realistic model parameters based only on external measurements of complete cells. System identification techniques (from control engineering), including ‘electrochemical impedance spectroscopy’ (EIS), i.e. small-signal frequency response measurement, may be applied here. However, it is not clear exactly why and how impedance correlates to SOC/SOH (state of charge/state of health) and temperature for each battery chemistry, owing to the complex interaction between impedance, degradation and temperature.
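
As a sketch of the estimation step in its simplest form (a generic equivalent-circuit fit with assumed values, not the electrochemical model itself): fit a Randles-type circuit, a series resistance plus a parallel resistance and capacitance, to a complex impedance spectrum by least squares.

```python
# Fit a simple equivalent circuit to complex EIS data (synthetic here).
import numpy as np
from scipy.optimize import least_squares

def z_model(params, omega):
    r0, rct, cdl = params   # series R, charge-transfer R, double-layer C
    return r0 + rct / (1 + 1j * omega * rct * cdl)

omega = np.logspace(-1, 4, 60)            # angular frequencies [rad/s]
true = (0.05, 0.02, 1.5)                  # assumed "true" values [ohm, ohm, F]
rng = np.random.default_rng(5)
z_meas = z_model(true, omega) + 1e-4 * (rng.standard_normal(60)
                                        + 1j * rng.standard_normal(60))

def residuals(params):
    diff = z_model(params, omega) - z_meas
    return np.concatenate([diff.real, diff.imag])   # real residual vector

fit = least_squares(residuals, x0=(0.1, 0.1, 1.0), bounds=(0, np.inf))
print("estimated (R0, Rct, Cdl):", fit.x)
```

The hard part flagged above is not this fit but identifiability: many parameter sets of a full electrochemical model can produce nearly indistinguishable terminal spectra.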

I will give a brief overview of some of the recent work in this area and try to explain some of the challenges in the hope that this will lead to a fruitful discussion about whether this problem can be solved or not and how best to tackle it.

Fri, 11 May 2012

09:30 - 11:00
DH 3rd floor SR

OCIAM meeting

chair: Jon Chapman

Fri, 04 May 2012

10:00 - 11:30
DH 1st floor SR

Noise reduction for airborne gravity gradiometer instrumentation

Gary Barnes
(Arkex)
Abstract

ARKeX is a geophysical exploration company that conducts airborne gravity gradiometer surveys for the oil industry. By measuring the variations in the gravity field it is possible to infer valuable information about the sub-surface geology and help find prospective areas.

A new type of gravity gradiometer instrument is being developed to have higher resolution than the current technology. The basic operating principles are fairly simple: essentially, the instrument measures the relative displacement of two proof masses in response to a change in the gravity field. The challenge is to be able to see typical signals from geological features in the presence of large amounts of motional noise due to the aircraft. Fortunately, by making a gradient measurement, a lot of this noise is cancelled by the instrument itself. However, due to engineering tolerances, the instrument is not perfect and residual interference remains in the measurement.

Accelerometers and gyroscopes record the motional disturbances and can be used to mathematically model how the noise appears in the instrument and remove it during a software processing stage. To achieve this, we have employed methods taken from the field of system identification to produce models having typically 12 inputs and a single output. Generally, the models contain linear transfer functions that are optimised during a training stage where controlled accelerations are applied to the instrument in the absence of any anomalous gravity signal. After training, the models can be used to predict and remove the noise from data sets that contain signals of interest.
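
In skeleton form, that training stage is a multi-input least-squares system identification (a generic sketch with assumed sizes, not ARKeX's production scheme):

```python
# Fit FIR responses from several motion channels to the gradiometer output,
# then subtract the predicted motional noise. Sizes are assumed (12 inputs
# in practice; 3 here for brevity).
import numpy as np

rng = np.random.default_rng(11)
n, n_inputs, n_taps = 5000, 3, 16

U = rng.standard_normal((n, n_inputs))            # accelerometer/gyro inputs
true_h = 0.1 * rng.standard_normal((n_inputs, n_taps))

# Synthetic training output: inputs convolved with unknown FIRs, plus noise
y = sum(np.convolve(U[:, i], true_h[i], mode="full")[:n] for i in range(n_inputs))
y += 0.01 * rng.standard_normal(n)

# Regression matrix of lagged inputs; one-shot linear least squares
cols = [np.concatenate([np.zeros(lag), U[:n - lag, i]])
        for i in range(n_inputs) for lag in range(n_taps)]
X = np.stack(cols, axis=1)
h_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

residual = y - X @ h_hat
print(f"fraction of output power removed: {1 - residual.var() / y.var():.4f}")
```

Non-linear or non-stationary behaviour breaks the fixed-FIR assumption here, which is exactly the improvement direction described next.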

High levels of accuracy are required in the noise correction schemes to achieve the levels of data quality required for airborne exploration. We are therefore investigating ways to improve on our existing methods, or find alternative techniques. In particular, we believe non-linear and non-stationary models show benefits for this situation.

Fri, 27 Apr 2012

10:00 - 11:22
DH 3rd floor SR