Forthcoming events in this series


Fri, 06 Mar 2015

10:00 - 11:00
L4

Thales - Optimisation of complex processing systems

Mike Newman
Abstract

The behaviour of complex processing systems is often controlled by large numbers of parameters.  For example, one Thales radar processor has over 2000 adjustable parameters.  Evaluating the performance for each set of parameters is typically time-consuming, involving either simulation or processing of large recorded data sets (or both).  In processing recorded data, the optimum parameters for one data set are unlikely to be optimal for another.

We would be interested in discussing mathematical techniques that could make the process of optimisation more efficient and effective, and what we might learn from a more mathematical approach.
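For illustration only, the evaluation-budget problem can be made concrete with a toy black-box optimiser. The objective below is a stand-in for an expensive simulation or data-replay run (all names and values here are hypothetical, not Thales's setup); the point is that the number of evaluations, not the sophistication of the search, is usually the binding constraint.

```python
import random

def evaluate(params):
    # Stand-in for an expensive simulation run; here just a quadratic
    # with its optimum at the all-zeros vector (purely illustrative).
    return sum(p * p for p in params)

def local_random_search(n_params, n_evals, step=0.5, seed=0):
    """Perturb one randomly chosen parameter at a time, keeping improvements.

    With thousands of parameters and costly evaluations, even this greedy
    scheme shows the issue: n_evals is the scarce resource.
    """
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(n_params)]
    best_cost = evaluate(best)
    for _ in range(n_evals):
        cand = list(best)
        i = rng.randrange(n_params)
        cand[i] += rng.uniform(-step, step)
        cost = evaluate(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

best, best_cost = local_random_search(n_params=20, n_evals=500)
```

More efficient approaches (surrogate models, Bayesian optimisation, sensitivity screening to prune the parameter set) are exactly the kind of mathematical techniques the workshop invites.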

Fri, 13 Feb 2015

10:00 - 11:00
L5

VerdErg - VETT, a new low-head hydropower generator: minimising the losses

Abstract

VerdErg Renewable Energy Ltd is developing a new hydropower unit for cost-effective energy generation at very low heads of pressure. The device is called the VETT after the underlying technology – Venturi Enhanced Turbine Technology. Flow into the VETT is split into two. The larger flow at low head transfers its energy to the smaller flow at a greater head. The smaller flow powers a conventional turbo-generator, which can be a smaller, faster unit at an order of magnitude lower cost. Further, there are significant environmental benefits to fish and birds compared to the conventional hydropower solution. After several physical model test programmes* in the UK, France and The Netherlands, along with CFD studies, the efficiency now stands at 50%. We wish to increase that by understanding the major loss mechanisms and how they might be avoided or minimised.

The presentation will explain the VETT’s working principles and key relationships, together with some possible ideas for improvement. The comments of attendees on problem areas, potential solutions and how an enhanced understanding of key phenomena may be applied will be most welcome.
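The head-amplifying split can be summarised by an idealised, loss-free energy balance (the notation below is illustrative, not VerdErg's own):

```latex
% Larger flow Q_1 at low head H_1 drives smaller flow Q_2 at amplified head H_2:
\rho g Q_1 H_1 = \rho g Q_2 H_2
\quad\Longrightarrow\quad
\frac{H_2}{H_1} = \frac{Q_1}{Q_2},
% so the turbine on the Q_2 stream sees a much greater head when Q_1/Q_2 is
% large; real losses reduce this, which is precisely the ~50% efficiency
% figure the talk seeks to improve.
```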

*(One was observed by Prof John Ockendon who identified a fairly extreme flow condition in a region previously thought to be benign.)

Fri, 12 Dec 2014

10:00 - 11:00
L3

Workshop with Thales - Reduction of Radar Range Sidelobes Using Variants of the CLEAN Algorithm

Abstract

Most sensing systems exhibit so-called ‘sidelobe’ responses, which can be interpreted as the inevitable effect, in one domain, of truncating the signal in the Fourier-complement domain.  Perhaps the best-known example is in antenna theory, where sidelobes are an inevitable consequence of the fact that the antenna aperture must be finite.  The effect also appears in many other places, for example in time-frequency conversions and in the range domain of a pulse-compressed radar, which radiates a signal only over a finite frequency band.  In the range domain these sidelobes extend over twice the length of the transmitted pulse.  For a conventional radar with relatively short pulses, the effect of these unwanted returns is thus confined to a relatively short part of the range swathe.

 

Some of the most modern radar techniques, however, use continuous, noise-like transmissions.  ‘Primary’ noise-modulated radars are in their infancy but so-called ‘Passive’ radars using broadcast transmissions as their power source receive similar signals.  The sidelobes of even a small target at very short range can be larger than the main return from a target at much greater range.  This limits the dynamic range of the radar.

 

Since, however, the sidelobe pattern is predictable if the illuminating signal is known sufficiently accurately, the expected sidelobes due to a large target can be estimated and removed to tidy up the image.  This approach was first described formally in:

Högbom, J. A., ‘Aperture Synthesis with a Non-Regular Distribution of Interferometer Baselines’, Astron. Astrophys. Suppl. 15, pp. 417–426, 1974.

It is generally known as the ‘CLEAN’ algorithm.

 

The seminar will outline the problem and the basic form of the algorithm, and will ask what is possible with non-iterative versions of the algorithm, how to process the data coherently, and how to understand any stability issues associated with the algorithm.
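As a concrete sketch of the iterative form, a minimal one-dimensional Högbom-style CLEAN might look as follows (the point-spread function and targets are invented for illustration):

```python
import numpy as np

def clean_1d(dirty, psf, gain=0.1, n_iter=200, threshold=1e-3):
    """Basic iterative (Hogbom-style) CLEAN on a 1-D range profile.

    dirty : measured profile containing sidelobes
    psf   : point-spread (sidelobe) response, centred at len(psf)//2
    gain  : loop gain, the fraction of each peak removed per iteration
    """
    residual = dirty.astype(float).copy()
    components = np.zeros_like(residual)
    centre = len(psf) // 2
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(residual)))
        peak = residual[k]
        if abs(peak) < threshold:
            break
        components[k] += gain * peak
        # Subtract the scaled, shifted sidelobe response.
        for j, p in enumerate(psf):
            idx = k + j - centre
            if 0 <= idx < len(residual):
                residual[idx] -= gain * peak * p
    return components, residual

# Toy example: two point targets blurred by a sinc-like response.
x = np.arange(-32, 33)
psf = np.sinc(x / 4.0)
truth = np.zeros(200)
truth[60], truth[140] = 1.0, 0.3
dirty = np.convolve(truth, psf, mode="same")
components, residual = clean_1d(dirty, psf, gain=0.2, n_iter=500)
```

The non-iterative and coherent variants the seminar asks about would replace the greedy peak-pick loop with a direct deconvolution step, which is where the stability questions arise.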

Fri, 21 Nov 2014

10:00 - 11:00
L5

Workshop with Sharp - Two Modelling Problems: (i) Freezing Particle-Containing Liquids and (ii) Lithium/Sodium Batteries

Abstract

(i) We consider the modelling of freezing of fluids which contain particulates and fibres (imagine orange juice “with bits”) flowing in channels. The objective is to design optimum geometry/temperatures to accelerate freezing.

(ii) We present the challenge of setting up a model for lithium- or sodium-ion stationary energy storage cells and battery packs to calculate the gravimetric and volumetric energy density of the cells, and their cost, depending upon the materials, electrode content, porosity, packing, electrolyte and current collectors. An existing model for automotive cells is called BatPaC.

Fri, 04 Jul 2014

10:00 - 11:00
N3.12

Coffee Roasting

John Melrose (Mondelez)
Fri, 20 Jun 2014
10:00
L5

TBA

Giles Pavey (dunnhumby)
Fri, 13 Jun 2014

11:00 - 12:00
L5

Four Topics

Several Members from DuPont
Abstract

The four topics are:

1. Thermal interface materials

2. Low temperature joining technology

3. Nano Ag materials

4. Status of PV technology

Fri, 06 Jun 2014

10:00 - 11:00
L5

Finding Radar Transmissions from their Pulse Patterns

Andy Stove (Thales)
Abstract

An important military task in a high-technology environment is to understand the set of radars present in it, since the radars will be, to a greater or lesser extent, indicative of the ships, aircraft and other military units which are present.

The transmissions of the different radars typically overlap in most of the dimensions which characterise them, such as frequency and bearing, and their pulses are interleaved in time. If, however, we are able to separate the individual pulse trains which are present, then not only does this allow us to know how many different radars are present, but the characteristics of each pulse train are indicative of the type of the radar.

The problem of recognising the pulse trains is not trivial, because many radars 'jitter' their transmissions and pulses may be missing or two pulses may occur together, causing the characteristics of the pulse to be 'garbled.' The jittering may be used as a way to reject mutual interference between the radars, to resolve ambiguities in measurements of range or velocity or to make it harder to jam the radar.

The problems caused by pulses overlapping are likely to become more severe in the future because the pulses of the individual radars are becoming longer.

Although solutions currently exist which can cope, to at least some extent, with most of these issues, the purpose of bringing this topic to the seminar is to allow a fresh look at the problem from first principles.
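One classical starting point (not necessarily among the existing solutions alluded to above) is the difference-of-arrival-times histogram, in which each train's pulse repetition interval (PRI) appears as a peak that survives jitter and missing pulses; a minimal sketch:

```python
import numpy as np

def delta_t_histogram(toas, max_pri, bin_width):
    """Histogram of pairwise time-of-arrival differences up to max_pri.

    For interleaved pulse trains, peaks appear at each train's PRI (and
    its multiples); jitter smears the peaks and missing pulses lower
    them, but neither shifts their location.
    """
    toas = np.sort(np.asarray(toas))
    diffs = []
    for i, t in enumerate(toas):
        for u in toas[i + 1:]:
            d = u - t
            if d > max_pri:
                break
            diffs.append(d)
    bins = np.arange(0.0, max_pri + bin_width, bin_width)
    counts, edges = np.histogram(diffs, bins=bins)
    return counts, edges

# Two interleaved trains with PRIs 1.0 and 1.7 (arbitrary units).
train_a = np.arange(0.0, 100.0, 1.0)
train_b = np.arange(0.3, 100.0, 1.7)
counts, edges = delta_t_histogram(np.concatenate([train_a, train_b]),
                                  max_pri=1.9, bin_width=0.05)
pri_estimate = edges[np.argmax(counts)]
```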

Fri, 16 May 2014

10:00 - 11:00
L5

Power dissipation in engineering superconductors, and implications for wire design

Ian Wilkinson (Siemens Magnet Technology)
Abstract

NbTi-based superconducting wires have widespread use in engineering applications of superconductivity such as MRI and accelerator magnets. Tolerance to the effects of interactions with changing (external) magnetic fields is an important consideration in wire design, in order to make the most efficient use of the superconducting material. This project aims to develop robust analytical models of the power dissipation in real conductor geometries across a broad frequency range of external field changes, with a view to developing wire designs that minimise these effects.

Fri, 09 May 2014

10:00 - 11:00
L5

Homogenising the wave equation: do we even understand the 1-D problem?

Chris Farmer and John Ockendon
(Oxford)
Abstract

Seismic exploration in the oil industry is one example where wave equations are used as models. When the wave speed is spatially varying, one is naturally concerned with questions of homogenisation or upscaling, where one would like to calculate an effective or average wave speed. As a canonical problem, this short workshop will introduce the one-dimensional acoustic wave equation with a rapidly varying wave speed, perhaps even a periodic variation. Three questions will be asked: (i) how does one calculate a sensible average wave speed? (ii) does the wave equation suffice, or is there a change of form after averaging? and (iii) if one can induce any particular excitation at one end of a finite one-dimensional medium, and make any observations one likes at that end, what, if anything, can be inferred about the spatial variability of the wave speed?
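For strictly periodic microstructure, the classical leading-order answer to question (i) is well known; in the notation below, with density ρ, bulk modulus K, and angle brackets denoting averages over the periodic cell:

```latex
% 1-D acoustic medium with rapidly varying coefficients:
\rho\!\left(\tfrac{x}{\varepsilon}\right) u_{tt}
  = \left( K\!\left(\tfrac{x}{\varepsilon}\right) u_x \right)_x .
% Two-scale homogenisation gives, at leading order,
\langle \rho \rangle\, \bar{u}_{tt}
  = \langle K^{-1} \rangle^{-1}\, \bar{u}_{xx},
\qquad
\bar{c}^{\,2} = \frac{1}{\langle \rho \rangle \,\langle K^{-1} \rangle},
% i.e. the harmonic mean of the modulus appears, not the mean wave speed.
% For question (ii), dispersive corrections of order epsilon^2 appear at
% higher orders, so the leading-order wave equation does not tell the
% whole story.
```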

Fri, 14 Mar 2014

10:00 - 11:00
L5

Two-phase Flow Problems in the Chemical Engineering Industry - a report of work done following OCIAM workshop on 8/3/13

Nick Hall Taylor, Ian Hewitt and John Ockendon
Abstract

This topic was the subject of an OCIAM workshop on 8th March 2013 given by Nick Hall Taylor. The presentation will start with a review of the physical problem and experimental evidence. A mathematical model leading to a hydrodynamic free boundary problem has been derived, and some mathematical and computational results will be described. Finally, we will assess the results so far and list a number of interesting open problems.

----------------------------------------------------------------------------------------------------------------------------------------------------

After the workshop and during coffee at 11:30, we will also give a preview of the upcoming problems at the Malaysian Study Group (Mar. 17-21). Problem descriptions can be found here: www.maths.ox.ac.uk/~trinh/2014_studygroup_problems.pdf.

Fri, 07 Mar 2014

10:00 - 11:00
L5

Mathematics and energy policy: markets or central control of power?

John Rhys (The Oxford Institute for Energy Studies)
Abstract

This talk is intended to explain the link between some relatively straightforward mathematical concepts, in terms of linear programming and optimisation over a convex set of feasible solutions, and questions for the organisation of the power sector and hence for energy policy.

Both markets and centralised control systems should in theory optimise the use of the current stock of generation assets and ensure electricity is generated at least cost, by ranking plant in ascending order of short run marginal cost (SRMC), sometimes known as merit order operation. Wholesale markets, in principle at least, replicate exactly what would happen in a perfect but centrally calculated optimal dispatch of plant. This happens because the SRMC of each individual plant is “discovered” through the market and results in a price equal to “system marginal cost” (SMC), which is just high enough to incentivise the most costly plant required to meet the actual load.

More generally, defining the conditions for this to work - “decentralised prices replicate perfect central planning” - is of great interest to economists. Quite apart from any ideological implications, it also helps to define possible sources of market failure. There is an extensive literature on this, but we can explain why it has appeared to work so well, and so obviously, for merit order operation, and then consider whether the conditions underpinning its success will continue to apply in the future.

The big simplifying assumptions, regarded as an adequate approximation to reality, behind most current power markets are the following:

• Each optimisation period can be considered independent of all past and future periods.

• The only relevant costs are well defined short term operating costs, essentially fuel.

• (Fossil) plant is (infinitely) flexible, and costs vary continuously and linearly with output.

• Non-fossil plant has hitherto been intra-marginal, and hence has had little impact.

The merit order is essentially very simple linear programming, with the dual value of the main constraint equating to the “correct” market price. Unfortunately the simplifying assumptions cease to apply as we move towards types of plant (and consumer demand) with much more complex constraints and cost structures. These include major inflexibilities, stochastic elements, and storage, and many non-linearities. Possible consequences include:

• Single period optimisation, as a concept underlying the market or central control, will need to be abandoned. Multi period optimisation will be required.

• Algorithms much more complicated than simple merit order will be needed, embracing non-linearities and complex constraints.

• Mathematically there is no longer a “dual” price, and the conditions for decentralisation are broken. There is no obvious means of calculating what the price “ought” to be, or even knowing that a meaningful price exists.

The remaining questions are clear. The theory suggests that current market structures may be broken, but how do we assess or show when and how much this might matter?
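The single-period merit-order LP described above can be sketched in a few lines (the plant data are invented for illustration):

```python
def merit_order_dispatch(plants, demand):
    """Least-cost dispatch by merit order.

    plants : list of (srmc, capacity) pairs
    demand : total load to be met
    Returns (dispatch, smc).  This solves the LP
    min sum(srmc_i * x_i) s.t. sum(x_i) = demand, 0 <= x_i <= cap_i,
    whose equality-constraint dual value is the system marginal cost
    (SMC) discussed above: the SRMC of the most costly dispatched plant.
    """
    order = sorted(range(len(plants)), key=lambda i: plants[i][0])
    dispatch = [0.0] * len(plants)
    remaining, smc = demand, None
    for i in order:
        srmc, cap = plants[i]
        if remaining <= 0:
            break
        take = min(cap, remaining)
        dispatch[i] = take
        remaining -= take
        smc = srmc  # marginal plant so far
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    return dispatch, smc

# Illustrative SRMCs (per MWh) and capacities (MW), in ascending merit order.
plants = [(10, 1000), (30, 800), (50, 600), (80, 400)]
dispatch, smc = merit_order_dispatch(plants, demand=2000)
```

The talk's point is that this sorted-greedy solution, and the well-defined dual price it yields, both disappear once inter-period constraints, storage and non-linearities enter.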

Fri, 07 Feb 2014
10:00
L5

Droplet snap-off and coalescence in colloidal (lyotropic) liquid crystals

Lia Verhoeff (Chemistry, Oxford)
Abstract

Droplet snap-off and coalescence are very rich hydrodynamic phenomena that are even richer in liquid crystals where both the bulk phase and the interface have anisotropic properties. We studied both phenomena in suspensions of colloidal platelets with isotropic-nematic phase coexistence.

We observed two different scenarios for droplet snap-off depending on the relative values of the elastic constant and anchoring strength, in both cases markedly different from Newtonian pinching.[1] Furthermore, we studied coalescence of nematic droplets with the bulk nematic phase. For small droplets this qualitatively resembles coalescence in isotropic fluids, while larger droplets act as if they are immiscible with their own bulk phase. We also observed an interesting deformation of the director field inside the droplets as they sediment towards the bulk phase, probably as a result of flow inside the droplet. Finally, we found that mutual droplet coalescence is accompanied by large droplet deformations that closely resemble coalescence of isotropic droplets.[2]

[1] A.A. Verhoeff and H.N.W. Lekkerkerker, N. J. Phys. 14, 023010 (2012)

[2] M. Manga and H.A. Stone, J. Fluid Mech. 256, 647 (1993)


Fri, 24 Jan 2014

10:00 - 11:00
L5

4-dimensional trajectories: path planning for unmanned vehicles

Tim Aitken
(Quintec (Thales))
Abstract
The problem is based on real-time computation for 4D (3D + time) trajectory planning for unmanned vehicles (UVs). The ability to quickly predict a 4D trajectory/path, enabling safe, flexible and efficient use of UVs in a collaborative space, is a key objective for autonomous mission and task management.

The problem/topic proposal will consist of 3 challenges: 
1. A single UV 4D path planning problem.
2. Multi UV 4D path planning sharing the same space and time.
3. Assignment of simultaneous tasks for multiple UVs based on the 4D path finding solution.
Fri, 15 Nov 2013

10:00 - 11:00
L5

Finding the Direction of Supersonic Travel from Shock Wave Measurements

Philip Pidsley, Thales Underwater Systems
Abstract

A projectile travelling supersonically in air creates a shock wave in the shape of a cone, with the projectile at the tip of the Mach cone. When the projectile travels over an array of microphones the shock wave is detected with different times of arrival at each microphone. Given measurements of the times of arrival, we are trying to calculate the azimuth direction of travel of the projectile. We have found a solution when the speed of the projectile is known. However the solution is ambiguous, and can take one of two possible values. Therefore we are seeking a new mathematical approach to resolve the ambiguity and thus find the azimuth direction of travel.
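The two-valued ambiguity can be seen in the simplest possible model, in which the conical front is approximated locally by a plane wave crossing a pair of microphones (a deliberate simplification of the full Mach-cone geometry):

```python
import math

def candidate_azimuths(delta_t, mic_separation, c=343.0):
    """Azimuth candidates for a plane wavefront crossing two microphones.

    delta_t        : arrival-time difference between the microphones (s)
    mic_separation : distance between them (m)
    c              : propagation speed in air (m/s)
    The delay fixes only cos(theta), the angle to the array axis, so
    +theta and -theta both fit the data: the two-valued ambiguity
    described in the abstract.
    """
    cos_theta = c * delta_t / mic_separation
    if abs(cos_theta) > 1.0:
        raise ValueError("inconsistent delay for this geometry")
    theta = math.degrees(math.acos(cos_theta))
    return theta, -theta

# A front arriving 1 ms apart at two microphones 0.5 m apart:
a1, a2 = candidate_azimuths(delta_t=1.0e-3, mic_separation=0.5)
```

Resolving which of the two candidates is correct, without extra sensors, is exactly the question being posed.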

Fri, 08 Nov 2013

10:00 - 11:00
L5

The kinetics of ice formation

Philip Roberts (Sharp)
Abstract

Sharp Labs of Europe is interested in understanding the kinetics of ice on the inside of a rectangular channel through which water is flowing. The channel can be considered to be a long hole milled into a metal block. The block is maintained at a fixed temperature (<0°C). Nucleation is provided by ultrasonication. We are interested in:
- The position along the channel that ice begins to form / block the channel. 
- The ice profile (thickness) along the length of the channel as it grows. 
- The effect of channel size and profile (straight, fan shaped etc) on the ice profile.
- Effect of flow speed on ice formation.
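The first two questions lead naturally to a Stefan problem. In the simplest stagnant-layer limit (an idealisation that ignores the channel flow entirely), the ice-front position s(t) satisfies:

```latex
% One-phase Stefan condition at the ice--water interface x = s(t):
% latent heat released balances conduction into the cold wall.
\rho_i L \,\frac{ds}{dt}
  = k_i \left.\frac{\partial T}{\partial x}\right|_{x=s(t)},
\qquad
s(t) = 2\lambda\sqrt{\alpha_i t}
% (the classical Neumann similarity solution, with lambda fixed by a
% transcendental equation in the wall temperature).  The flowing channel
% adds an advective heat flux and a pressure-drop coupling, which is what
% selects where along the channel blockage first occurs.
```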
Fri, 01 Nov 2013

10:00 - 11:00
L5

TBA

Svenn Anton Halvorsen, Teknova
(Teknova)
Fri, 14 Jun 2013

09:45 - 11:00

TBA

Abstract

Note early start to avoid a clash with the OCCAM group meeting.

Fri, 07 Jun 2013

10:00 - 11:00
DH 1st floor SR

Microelectromechanical Systems, Inverse Eigenvalue Analysis and Nonlinear Lattices

Bhaskar Choubey
(Department of Engineering Science, University of Oxford)
Abstract

Collective behaviours of coupled linear or nonlinear resonators have been of interest to engineers as well as mathematicians for a long time. In this presentation, using the example of coupled resonant nano-sensors (which leads to a linear pencil with a Jacobi matrix), I will show how the previously feared and often avoided coupling between nano-devices, along with their weak nonlinear behaviour, can be used with inverse eigenvalue analysis to design multiple-input-single-output nano-sensors.

We are using these matrices in designing micro/nano electromechanical systems, particularly resonant sensors capable of measuring very small masses for use as environmental as well as biomedical monitors. With improvements in fabrication technology, we can design and build several such sensors on one substrate. However, this leads to challenges in interfacing them, and introduces undesired parasitic coupling. More importantly, increased nonlinearity is being observed as these sensors reduce in size.

This also presents an opportunity, however, to experimentally study chains or matrices of coupled linear and/or nonlinear structures, to develop new sensing modalities, and to experimentally verify theoretically or numerically predicted results. The challenge for us now is to identify sensing modalities with chains of linear or nonlinear resonators coupled either linearly or nonlinearly. We are currently exploring chains of Duffing resonators, van der Pol oscillators and FPU-type lattices.
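As a toy version of the matrix pencil mentioned above, a chain of resonators coupled to nearest neighbours gives a tridiagonal generalised eigenproblem. The sketch below (all stiffness and mass values invented) shows how a small added mass shifts every mode, which is the data that inverse eigenvalue analysis inverts:

```python
import numpy as np

def chain_frequencies(masses, k=1.0, kc=0.1):
    """Natural frequencies of a chain of resonators coupled to neighbours.

    Each resonator has stiffness k to ground and stiffness kc to its
    neighbours, giving the tridiagonal (Jacobi-type) pencil (K, M) with
    K v = w^2 M v.  All values are illustrative.
    """
    n = len(masses)
    K = np.diag([k + (kc if i > 0 else 0) + (kc if i < n - 1 else 0)
                 for i in range(n)])
    K += np.diag([-kc] * (n - 1), 1) + np.diag([-kc] * (n - 1), -1)
    M = np.diag(masses)
    # Symmetrise via M^(-1/2) K M^(-1/2) (M is diagonal here).
    M_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(M)))
    w2 = np.linalg.eigvalsh(M_inv_sqrt @ K @ M_inv_sqrt)
    return np.sqrt(w2)

# A small added mass on one resonator shifts the whole spectrum slightly;
# the pattern of shifts encodes where the mass landed.
f0 = chain_frequencies([1.0, 1.0, 1.0])
f1 = chain_frequencies([1.0, 1.01, 1.0])
shifts = f0 - f1
```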

Fri, 31 May 2013

10:00 - 11:15
DH 1st floor SR

Understanding Composite Hydrophones' Sensitivity at Low Frequency

Mike Clifton
(Thales UK (Underwater Systems))
Abstract

In order to reduce cost, the MOD are attempting to reduce the number of array types fitted to their assets. There is also a requirement for the arrays to increase their frequency coverage. A wide bandwidth capability is thus needed from a single array. The need for high sensitivity and comparatively high frequencies of operation has led to the view that 1-3 composites are suitable hydrophones for this purpose. These hydrophones are used widely in ultrasonics, but are not generally used down to the frequency of the new arrays.

Experimental work using a single hydrophone (small in terms of wavelengths) has shown that the sensitivity drops significantly as the frequency approaches the bottom of the required band, and then recovers as the frequency reduces further. Complex computer modelling appears to suggest the loss in sensitivity is due to a "lateral mode" where the hydrophone "breathes" in and out. In order to engineer a solution, the mechanics of the cause of this problem and the associated parameters of the materials need to be identified (e.g. is changing the 1-3 filler material the best option?). In order to achieve this understanding, a mathematical model of the 1-3 composite hydrophone (ceramic pegs and filler) is required that can be used to explain why the hydrophone changes from the simple compression and expansion in the direction of travel of the wave front to a lateral "breathing" mode.

More details available from @email

Fri, 24 May 2013

10:00 - 11:15
DH 1st floor SR

Flash Sintering

Richard Todd
(Dept. of Materials)
Abstract

“Flash sintering” is a process reported by R Raj and co-workers in which very rapid densification of a ceramic powder compact is achieved by the passage of an electrical current through the specimen. Full density can be achieved in a few seconds (sintering normally takes several hours) and at furnace temperatures several hundred Kelvin below the temperatures required with conventional sintering. The name of the process comes from a runaway power spike that is observed at the point of sintering. Although it is acknowledged by Raj that Joule heating plays a role in the process, he and his co-authors claim that this is of minor importance and that entirely new physical effects must also be involved. However, the existence and possible relevance of these other effects of the electric field/current remains controversial. The aim of this workshop is to introduce the subject and to stimulate discussion of how mathematics could shed light on some of the factors that are difficult to measure and understand experimentally.
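One simple way to probe the Joule-heating question mathematically is a lumped energy balance with thermally activated conductivity. The toy model below (every parameter value is invented purely for illustration) exhibits a field-dependent runaway of the kind observed:

```python
import math

def simulate_runaway(t_furnace, e_field, dt=0.01, t_end=60.0):
    """Lumped thermal balance for a specimen under a constant field.

    dT/dt = (sigma(T) * E^2 - h * (T - T_furnace)) / (rho * c),
    with Arrhenius conductivity sigma(T) = s0 * exp(-Ea / (R * T)).
    All parameter values are illustrative, chosen only so that the model
    either settles to a steady state or runs away, depending on the field.
    """
    s0, Ea, R = 1.0e4, 1.0e5, 8.314   # prefactor, activation energy, gas const.
    h, rho_c = 50.0, 2.0e3            # loss coefficient, volumetric heat capacity
    T = t_furnace
    for _ in range(int(t_end / dt)):
        sigma = s0 * math.exp(-Ea / (R * T))
        dTdt = (sigma * e_field ** 2 - h * (T - t_furnace)) / rho_c
        T += dt * dTdt
        if T > 3000.0:                # temperature has diverged: runaway
            return T, True
    return T, False

T_low, runaway_low = simulate_runaway(t_furnace=1200.0, e_field=50.0)
T_high, runaway_high = simulate_runaway(t_furnace=1200.0, e_field=500.0)
```

A model of this kind lets one ask whether Joule heating alone can reproduce the observed power spike, or whether additional physics is genuinely required — which is the controversy the workshop addresses.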

Fri, 26 Apr 2013

10:00 - 11:15
DH 3rd floor SR

Analysis of travel patterns from departure and arrival times

Charles Offer
(Thales UK)
Abstract

Please note the change of venue!

Suppose there is a system where certain objects move through a network. The objects are detected only when they pass through a sparse set of points in the network. For example, the objects could be vehicles moving along a road network, and observed by a radar or other sensor as they pass through (or originate or terminate at) certain key points in the network, but which cannot be observed continuously and tracked as they travel from one point to another. Alternatively they could be data packets in a computer network. The detections only record the time at which an object passes by, and contain no information about identity that would trivially allow the movement of an individual object from one point to another to be deduced. It is desired to determine the statistics of the movement of the objects through the network. I.e. if an object passes through point A at a certain time it is desired to determine the probability density that the same object will pass through a point B at a certain later time.

The system might perhaps be represented by a graph, with a node at each point where detections are made. The detections at each node can be represented by a signal as a function of time, where the signal is a superposition of delta functions (one per detection). The statistics of the movement of objects between nodes must be deduced from the correlations between the signals at each node. The problem is complicated by the possibility that a given object might move between two nodes along several alternative routes (perhaps via other nodes or perhaps not), or might travel along the same route but with several alternative speeds.

What prior knowledge about the network, or constraints on the signals, are needed to make this problem solvable? Is it necessary to know the connections between the nodes or the pdfs for the transition time between nodes a priori, or can this be deduced? What conditions are needed on the information content of the signals? (I.e. if detections are very sparse on the time scale for passage through the network then the transition probabilities can be built up by considering each cascade of detections independently, while if detections are dense then it will presumably be necessary to assume that objects do not move through the network independently, but instead tend to form convoys that are apparent as a pattern of detections that persist for some distance on average). What limits are there on the noise in the signal or amount of unwanted signal, i.e. false detections, or objects which randomly fail to be detected at a particular node, or objects which are detected at one node but which do not pass through any other nodes? Is any special action needed to enforce causality, i.e. positive time delays for transitions between nodes?
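As a minimal sketch of the correlation idea, the lag histogram between two detection streams recovers the transit-time distribution as a peak above a background of accidental pairings (the traffic model below is invented; in the sparse-detection regime it works, and the questions above concern when it fails):

```python
import numpy as np

def transit_time_histogram(times_a, times_b, max_lag, bin_width):
    """Histogram of lags t_B - t_A over all detection pairs.

    If objects move independently from node A to node B with a common
    transit-time distribution, its shape emerges as a peak above a
    roughly flat background of accidental (unrelated) pairings.
    """
    lags = [tb - ta for ta in times_a for tb in times_b
            if 0.0 < tb - ta <= max_lag]
    bins = np.arange(0.0, max_lag + bin_width, bin_width)
    counts, edges = np.histogram(lags, bins=bins)
    return counts, edges

rng = np.random.default_rng(1)
departures = np.sort(rng.uniform(0.0, 1000.0, size=300))
# Each object reappears at B after a transit time ~ N(12, 1) seconds.
arrivals = departures + rng.normal(12.0, 1.0, size=300)
counts, edges = transit_time_histogram(departures, arrivals,
                                       max_lag=30.0, bin_width=1.0)
peak_lag = edges[np.argmax(counts)]
```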

Mon, 11 Mar 2013

10:00 - 12:00
Gibson 1st Floor SR

Dislocations

Tim Blass
(Carnegie Mellon University & OxPDE)
Abstract

Please note the unusual day of the week for this workshop (a Monday) and also the unusual location.

Fri, 08 Mar 2013

09:45 - 11:00
DH 1st floor SR

Experimental results in two-phase flow

Nick Hall-Taylor
(TBC)
Abstract

In vertical annular two-phase flow, large amplitude waves ("disturbance waves") are the most significant means by which the liquid is transported by the action of the gas phase. The presentation is of certain experimental results with the intention of defining a conceptual model suitable for possible mathematical interpretation.

These large waves have been studied for over 50 years but there has been little corresponding advance in the mathematical understanding of the phenomenon.

The aim of the workshop is to discuss what analysis might be possible and how this might contribute to the understanding of the phenomena involved.

Fri, 01 Mar 2013

10:00 - 11:15
DH 1st floor SR

The fluid mechanics of household appliances; a fascinating world!

Paul Duinveld
(Philips)
Abstract

An overview will be given of several examples of fluid mechanical problems encountered in developing household appliances. We discuss examples such as baby bottles, water treatment, irons and fruit juicers, and focus on oral health care, where a new air floss product will be discussed.

Fri, 22 Feb 2013

10:00 - 11:37
DH 1st floor SR

Modelling chronic diseases and their consequences into the future reliably and usefully

Klim McPherson
(Obstetrics & Gynaecology, Oxford)
Abstract

We wish to discuss the role of modelling in health care. While risk factor prevalences vary and change with time, it is difficult to anticipate the resulting change in disease incidence without accurately modelling the epidemiology. When the prevalence of obesity, tobacco use and salt intake, for example, is studied in detail, clear patterns emerge that can be extrapolated into the future. These can give rise to estimated probability distributions of these risk factors across age, sex, ethnicity, social class groups etc. into the future. Micro-simulation of individuals from defined populations (e.g. England 2012) can then estimate disease incidence, prevalence, death, costs and quality of life. Thus future health and other needs can be estimated, and interventions on these risk factors can be simulated for their population effect. Health policy can be better determined by a realistic characterisation of public health. The Foresight microsimulation modelling of the National Heart Forum (UK Health Forum) will be described. We will emphasise some of the mathematical and statistical issues associated with so doing.

Fri, 15 Feb 2013

10:00 - 11:15
DH 1st floor SR

Investigating continental deformation using InSAR

Victoria Nockles
(Department of Earth Sciences, University of Oxford)
Abstract

InSAR (Interferometric Synthetic Aperture Radar) is an important space geodetic technique (i.e. a technique that uses satellite data to obtain measurements of the Earth) of great interest to geophysicists monitoring slip along fault lines and other changes to shape of the Earth. InSAR works by using the difference in radar phase returns acquired at two different times to measure displacements of the Earth’s surface. Unfortunately, atmospheric noise and other problems mean that it can be difficult to use the InSAR data to obtain clear measurements of displacement.

Persistent Scatterer (PS) InSAR is a later adaptation of InSAR that uses statistical techniques to identify pixels within an InSAR image that are dominated by a single back scatterer, producing high-amplitude and stable phase returns (Ferretti et al. 2001, Hooper et al. 2004). PS InSAR has the advantage that it (hopefully) chooses the ‘better’ data points, but it has the disadvantage that it throws away a lot of the data that might have been available in the original InSAR signal.

InSAR and PS InSAR have typically been used in isolation to obtain slip-rates across faults, to understand the roles that faults play in regional tectonics, and to test models of continental deformation. But could they perhaps be combined? Or could PS InSAR be refined so that it doesn’t throw away as much of the original data? Or, perhaps, could the criteria used to determine what data are signal and what are noise be improved?

The key aim of this workshop is to describe and discuss the techniques and challenges associated with InSAR and PS InSAR (particularly the problem of atmospheric noise), and to look at possible methods for improvement, by combining InSAR and PS InSAR or by methods for making the choice of thresholds.
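For reference, the basic phase-to-displacement conversion underlying both techniques is a one-liner (the 5.6 cm C-band wavelength is typical of ERS/Envisat-era data; the sign convention varies between processors):

```python
import math

def los_displacement(delta_phi_rad, wavelength_m=0.056):
    """Line-of-sight displacement from unwrapped interferometric phase.

    The factor 4*pi (rather than 2*pi) reflects the two-way travel of the
    radar signal.  Atmospheric delay adds phase that is indistinguishable,
    pixel by pixel, from deformation -- which is the noise problem the
    workshop targets.
    """
    return -wavelength_m * delta_phi_rad / (4.0 * math.pi)

# One full fringe (2*pi of phase) corresponds to half a wavelength of motion:
d = los_displacement(2.0 * math.pi)
```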

Fri, 18 Jan 2013

09:45 - 11:00

DH12 Alan Tayler Room

OCIAM Meeting
Abstract

DH common room at 09:45 and from 10:00 in DH12

Fri, 23 Nov 2012

10:00 - 11:30
DH 1st floor SR

Virtual Anglo-Saxons. Agent-based modelling in archaeology and palaeodemography

Andreas Duering
(Archaeology, Oxford)
Abstract

The University of Oxford’s modelling4all software is a wonderful tool to simulate early medieval populations and their cemeteries in order to evaluate the influence of palaeodemographic variables, such as mortality, fertility, catastrophic events and disease on settlement dispersal. In my DPhil project I will study archaeological sites in Anglo-Saxon England and the German south-west in a comparative approach. The two regions have interesting similarities in their early medieval settlement pattern and include some of the first sites where both cemeteries and settlements were completely excavated.

An important discovery in bioarchaeology is that an excavated cemetery is not a straightforward representation of the living population. Preservation issues and the limitations of age and sex estimation methods using skeletal material must be considered. Moreover, the statistical procedures used to calculate the palaeodemographic characteristics of archaeological populations are procrustean. Agent-based models can help archaeologists to virtually bridge the chasm between the excavated dead populations and the living counterparts in which we are really interested.

This approach leads very far away from the archaeologist’s methods and ways of thinking and the major challenge therefore is to balance innovative ideas with practicability and tangibility.

Some of the problems for the workshop are:

1.) Finding the best fitting virtual living populations for the excavated cemeteries

2.) Sensitivity analyses of palaeodemographic variables

3.) General methodologies to evaluate the outcome of agent based models

4.) Present data in a way that is both statistically correct and up to date & clear for archaeologists like me

5.) Explore how to include analytical procedures in the model to present the archaeological community with a user-friendly and not necessarily overwhelming toolkit

 

Fri, 16 Nov 2012

10:00 - 13:00
DH 1st floor SR

Time-To-Go Estimation

Owen Thomas
(Thales Optronics)
Abstract

The task is to estimate approach time (time-to-go (TTG)) of non-ballistic threats (e.g. missiles) using passive infrared imagery captured from a sensor on the target platform (e.g. a helicopter). The threat information available in a frame of data is angular position and signal amplitude.

A Kalman filter approach is presented that is applied to example amplitude data to estimate TTG. Angular information alone is not sufficient to allow analysis of missile guidance dynamics to provide a TTG estimate. Detection of the launch is required, as is additional information in the form of a terrain database to determine initial range. Parameters that relate to missile dynamics might include the proportional navigation constant and motor thrust. Differences between actual angular position observations and modelled values can be used to form an estimator for the parameter set and thence for the TTG.

The question posed here is, "how can signal amplitude information be employed to establish observability in a state-estimation-based model of the angular data to improve TTG estimate performance without any other source of range information?"
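As a toy illustration of how amplitude alone can carry time-to-go information (this is not the Kalman filter approach above): if the received amplitude falls off as the inverse square of range and the closing speed is constant, then a(t) is proportional to 1/t_go^2, and so t_go = 2a/(da/dt). Both assumptions are simplifications made here for illustration.

```python
import numpy as np

def ttg_from_amplitude(t, a):
    """Estimate time-to-go as 2*a/adot.

    Assumes amplitude ~ 1/range**2 and a constant closing speed, so that
    a(t) is proportional to 1/t_go**2 and hence t_go = 2*a/(da/dt).
    """
    adot = np.gradient(a, t)
    return 2.0 * a / adot

# Synthetic encounter: impact at T = 10 s, constant closing speed
T = 10.0
t = np.linspace(0.0, 8.0, 200)
amplitude = 1.0 / (T - t) ** 2         # inverse-square law in time-to-go
est = ttg_from_amplitude(t, amplitude)
true_ttg = T - t
print(np.max(np.abs(est - true_ttg)))  # small numerical-differentiation error
```

In practice the amplitude is noisy and the closing speed is not constant, which is exactly where a state-estimation formulation combining amplitude with the angular observations would earn its keep.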

Fri, 09 Nov 2012

09:45 - 11:00
DH 1st floor SR

Tracking lipid surface area in the human influenza A virus

Tyler Reddy
(Department of Biochemistry)
Abstract

PLEASE NOTE EARLY START TIME TO AVOID CLASH WITH OCCAM GROUP MEETING

The human influenza A virus causes three to five million cases of severe illness and about 250,000 to 500,000 deaths each year; the 1918 Spanish Flu may have killed more than 40 million people. Yet the underlying cause of the seasonality of the human influenza virus, its preferential transmission in winter in temperate climates, remains controversial. One of the major forms of the human influenza virus is a sphere made up of lipids selectively derived from the host cell along with specialized viral proteins. I have employed molecular dynamics simulations to study the biophysical properties of a single transmissible unit: an approximately spherical influenza A virion in water (mimicking the water droplets present in normal transmission of the virus). The surface area per lipid cannot be calculated simply as the ratio of the surface area of the sphere to the number of lipids present, since there are many different species of lipid, each of which should be assigned its own surface area. The 'mosaic' of lipid surface areas may be regarded quantitatively as a Voronoi diagram, but constructing a true spherical Voronoi tessellation is more challenging than applying the well-established methods for planar Voronoi diagrams. I describe my attempt to implement an approach to the spherical Voronoi problem (based on Hyeon-Suk Na, Chung-Nim Lee and Otfried Cheong, Computational Geometry 23 (2002) 183–194) and the challenges that remain in implementing this algorithm.
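For readers who want to experiment, recent versions of SciPy ship a spherical Voronoi implementation, scipy.spatial.SphericalVoronoi, which tessellates points on a sphere and can return the spherical-polygon area of each region, i.e. exactly the per-lipid surface areas discussed above. A minimal sketch, with random points standing in for lipid head-group positions:

```python
import numpy as np
from scipy.spatial import SphericalVoronoi

rng = np.random.default_rng(42)

# Random "lipid head group" positions projected onto a unit sphere
points = rng.standard_normal((200, 3))
points /= np.linalg.norm(points, axis=1, keepdims=True)

sv = SphericalVoronoi(points, radius=1.0, center=np.zeros(3))
sv.sort_vertices_of_regions()          # order region vertices before area calculation
areas = sv.calculate_areas()           # one spherical-polygon area per input point

# The per-point areas tile the sphere, so they sum to 4*pi*r**2
print(abs(areas.sum() - 4.0 * np.pi))
```

The sum-to-4πr² check is a useful sanity test for any spherical Voronoi implementation: unlike the planar case, the regions must tile the whole surface with no unbounded cells.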

Fri, 02 Nov 2012

10:00 - 12:33
DH 1st floor SR

MSc project proposals

various
(Industry)
Abstract

This is the session for our industrial sponsors to propose project ideas. Academic staff are requested to attend to help shape the problem statements and to suggest suitable internal supervisors for the projects. 

Fri, 19 Oct 2012

10:00 - 11:31
DH 1st floor SR

From Patterns to Modelling - Mathmagics in Land, Sea and Sky: What We Know, Don't Know and What We Think

Visitor
(Maths, Oxford)
Abstract

Links between:

• storm tracks, sediment movement and an icy environment

• fluvial flash flooding to coastal erosion in the UK

Did you know that the recent Japanese, Chilean and Samoan tsunamis all led to strong currents, through resonance, at the opposite end of the ocean?

Journey around the world, from the north Atlantic to the south Pacific, on a quest to explore and explain the maths of nature.

Fri, 01 Jun 2012

10:00 - 12:30
DH 1st floor SR

Sensor Resource Management

Andy Stove
(Thales UK)
Abstract

The issue of resource management arises with any sensor that is capable either of sensing only part of its total field of view at any one time, or of operating in a number of modes, or both.

A very simple example is a camera with a telephoto lens.  The photographer has to decide what he is going to photograph, and whether to zoom in to get high resolution on a part of the scene, or zoom out to see more of the scene.  Very similar issues apply, of course, to electro-optical sensors (visible light or infra-red 'TV' cameras) and to radars.

The subject has, perhaps, been most extensively studied in relation to multi-mode/multi-function radars, where approaches such as neural networks, genetic algorithms and auction mechanisms have been proposed alongside more deterministic management schemes, but the methods which have actually been implemented have been much more primitive.

The use of multiple, disparate sensors on multiple mobile, especially airborne, platforms adds further degrees of freedom to the problem, an extension which is of growing interest.

The presentation will briefly review the problem for both the single-sensor and the multi-platform cases, together with some of the approaches which have been proposed, and will highlight the remaining open problems.
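One deterministic baseline, sketched below, is a greedy scheduler that allocates sensor time by value per unit dwell time; the auction and genetic-algorithm approaches mentioned above can be viewed as refinements of this kind of allocation. All task names and numbers here are hypothetical.

```python
def greedy_schedule(tasks, budget):
    """Allocate sensor dwell time greedily by value per unit time.

    tasks: list of (name, value, dwell_time) tuples (all hypothetical).
    Returns the names of tasks chosen within the time budget.
    """
    chosen, used = [], 0.0
    for name, value, dwell in sorted(tasks, key=lambda t: t[1] / t[2],
                                     reverse=True):
        if used + dwell <= budget:
            chosen.append(name)
            used += dwell
    return chosen

tasks = [
    ("track target A", 10.0, 2.0),
    ("wide-area search", 6.0, 5.0),
    ("confirm detection B", 8.0, 1.0),
    ("calibration", 1.0, 3.0),
]
print(greedy_schedule(tasks, budget=8.0))
# -> ['confirm detection B', 'track target A', 'wide-area search']
```

The weakness of this baseline is also what makes the problem interesting: task values are not independent (confirming a detection changes the value of tracking it), which is where the more sophisticated mechanisms come in.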

Fri, 25 May 2012

11:00 - 12:30
DH 1st floor SR

Parameter estimation for electrochemical cells

David Howey
(Department of Engineering Science, University of Oxford)
Abstract

Please note the unusual start-time.

In order to run accurate electrochemical models of batteries (and other devices) it is necessary to know a priori the values of many geometric, electrical and electrochemical parameters (10-100 parameters), e.g. diffusion coefficients, electrode thicknesses etc. However, a basic difficulty is that the only external measurements that can be made on cells without deconstructing and destroying them are surface temperature plus electrical measurements (voltage, current, impedance) at the terminals. An interesting research challenge therefore is the accurate, robust estimation of physically realistic model parameters based only on external measurements of complete cells. System identification techniques (from control engineering), including 'electrochemical impedance spectroscopy' (EIS), i.e. small-signal frequency-response measurement, may be applied here. However, it is not clear exactly why and how impedance correlates with state of charge (SOC), state of health (SOH) and temperature for each battery chemistry, owing to the complex interaction between impedance, degradation and temperature.
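The system-identification idea can be sketched in miniature by fitting a simple Randles-style equivalent circuit, Z(w) = R_s + R_ct/(1 + jw R_ct C_dl), to an impedance spectrum by least squares. Real cell models involve far more parameters; the circuit structure and all numerical values below are illustrative assumptions, not measurements.

```python
import numpy as np
from scipy.optimize import least_squares

def randles_impedance(params, omega):
    """Series resistance plus a parallel R_ct / C_dl branch (a toy cell model)."""
    r_s, r_ct, c_dl = params
    return r_s + r_ct / (1.0 + 1j * omega * r_ct * c_dl)

def residuals(params, omega, z_measured):
    z = randles_impedance(params, omega)
    return np.concatenate([(z - z_measured).real, (z - z_measured).imag])

# Synthetic "measured" spectrum generated from known (hypothetical) parameters
true = np.array([0.05, 0.20, 2.0])      # R_s [ohm], R_ct [ohm], C_dl [F]
omega = np.logspace(-2, 3, 60)          # angular frequencies [rad/s]
z_meas = randles_impedance(true, omega)

fit = least_squares(residuals, x0=[0.1, 0.1, 1.0], args=(omega, z_meas),
                    bounds=([0, 0, 0], [1, 1, 100]))
print(fit.x)   # recovers roughly [0.05, 0.20, 2.0] from the noiseless spectrum
```

The hard part hinted at in the abstract is identifiability: with 10-100 physical parameters and only terminal measurements, many parameter sets can fit the same spectrum, so noiseless recovery like this toy example is very far from the real problem.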

I will give a brief overview of some of the recent work in this area and try to explain some of the challenges in the hope that this will lead to a fruitful discussion about whether this problem can be solved or not and how best to tackle it.

Fri, 11 May 2012

09:30 - 11:00
DH 3rd floor SR

OCIAM meeting

chair: Jon Chapman
Fri, 04 May 2012

10:00 - 11:30
DH 1st floor SR

Noise reduction for airborne gravity gradiometer instrumentation

Gary Barnes
(Arkex)
Abstract

ARKeX is a geophysical exploration company that conducts airborne gravity gradiometer surveys for the oil industry. By measuring the variations in the gravity field it is possible to infer valuable information about the sub-surface geology and help find prospective areas.

A new type of gravity gradiometer instrument is being developed to have higher resolution than the current technology. The basic operating principles are fairly simple - essentially measuring the relative displacement of two proof masses in response to a change in the gravity field. The challenge is to be able to see typical signals from geological features in the presence of large amounts of motional noise due to the aircraft. Fortunately, by making a gradient measurement, a lot of this noise is cancelled by the instrument itself. However, due to engineering tolerances, the instrument is not perfect and residual interference remains in the measurement.

Accelerometers and gyroscopes record the motional disturbances and can be used to mathematically model how the noise appears in the instrument and remove it during a software processing stage. To achieve this, we have employed methods taken from the field of system identification to produce models having typically 12 inputs and a single output. Generally, the models contain linear transfer functions that are optimised during a training stage where controlled accelerations are applied to the instrument in the absence of any anomalous gravity signal. After training, the models can be used to predict and remove the noise from data sets that contain signals of interest.
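The train-then-predict scheme described above can be sketched with a linear multi-input, single-output model fitted by least squares. Everything below is a simplifying assumption made for illustration (three motion channels instead of twelve, a short FIR structure, toy coupling coefficients), not ARKeX's actual model.

```python
import numpy as np

def build_regressor(inputs, n_taps):
    """Stack lagged copies of each input channel into a design matrix."""
    cols = [np.roll(ch, lag) for ch in inputs for lag in range(n_taps)]
    X = np.array(cols).T
    return X[n_taps:]            # drop rows contaminated by roll wrap-around

def train_noise_model(inputs, output, n_taps=8):
    X = build_regressor(inputs, n_taps)
    w, *_ = np.linalg.lstsq(X, output[n_taps:], rcond=None)
    return w

def predict_noise(inputs, w, n_taps=8):
    return build_regressor(inputs, n_taps) @ w

rng = np.random.default_rng(1)
h = [0.5, -0.2, 0.1]             # hypothetical per-channel couplings

# Training run: controlled accelerations, no gravity signal present
train_in = rng.standard_normal((3, 2000))
train_out = sum(c * ch for c, ch in zip(h, train_in))
w = train_noise_model(train_in, train_out)

# Survey run: the same couplings plus a weak "geological" signal
survey_in = rng.standard_normal((3, 2000))
signal = 0.01 * np.sin(np.linspace(0, 20, 2000))
survey_out = signal + sum(c * ch for c, ch in zip(h, survey_in))

cleaned = survey_out[8:] - predict_noise(survey_in, w)
print(np.std(survey_out), np.std(cleaned))   # motional noise largely removed
```

Here the couplings are linear and stationary, so least squares recovers them essentially exactly; the non-linear and non-stationary behaviour mentioned below is precisely what breaks this simple picture.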

High levels of accuracy are required in the noise correction schemes to achieve the levels of data quality required for airborne exploration. We are therefore investigating ways to improve on our existing methods, or find alternative techniques. In particular, we believe non-linear and non-stationary models show benefits for this situation.

Fri, 27 Apr 2012

10:00 - 11:22
DH 3rd floor SR
Fri, 20 Apr 2012

10:00 - 11:30
DH 3rd floor SR

CANCELLED

Harry Walton
(Sharp Labs)
Abstract

Sorry, this has been cancelled at short notice!