BP workshop
Abstract
Topic to be confirmed. (This is the postponed workshop from Michaelmas term!)
'Pattern-of-life' is a current buzzword in sensor systems. One aspect of this is the automatic estimation of traffic flow patterns, perhaps where existing road maps are not available. For example, a sensor might measure the position of a number of vehicles in 2D, with a finite time interval between each observation of the scene. It is desired to estimate the time-averaged spatial density, current density, sources and sinks, etc. Are there practical methods to do this without tracking individual vehicles, given that there may also be false 'clutter' detections, the density of vehicles may be high, and each vehicle may not be detected in every timestep? And what if the traffic flow has periodicity, e.g. variations on the timescale of a day?
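As a rough, hedged illustration of one track-free approach (not necessarily what the sponsor has in mind), the sketch below simply pools the raw, possibly cluttered detections from all frames and forms a kernel density estimate of the time-averaged spatial density; all names and parameters are illustrative, and estimating the current density or sources and sinks would need additional machinery (e.g. frame-pair statistics).

```python
# Minimal sketch: time-averaged spatial density from raw 2D detections,
# without tracking individual vehicles. detections[t] is an (n_t, 2) array that
# may contain clutter and missed detections; all names are illustrative.
import numpy as np
from scipy.stats import gaussian_kde

def time_averaged_density(detections, grid_x, grid_y):
    """Pool detections from every timestep and fit a kernel density estimate."""
    pooled = np.vstack([d for d in detections if len(d) > 0])        # (N, 2)
    kde = gaussian_kde(pooled.T)                                     # KDE over (x, y)
    xx, yy = np.meshgrid(grid_x, grid_y)
    return kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)

# Synthetic example: 100 frames of noisy detections scattered along a 'road'.
rng = np.random.default_rng(0)
frames = [np.column_stack([rng.uniform(0, 10, 20),
                           1.0 + 0.1 * rng.standard_normal(20)]) for _ in range(100)]
rho = time_averaged_density(frames, np.linspace(0, 10, 50), np.linspace(0, 2, 20))
```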
Problem #1: (marker-less scaling) Poikos Ltd. has created algorithms for matching photographs of humans to three-dimensional body scans. Due to variability in camera lenses and body sizes, the resulting three-dimensional data is normalised to have unit height and has no absolute scale. The problem is to assign an absolute scale to normalised three-dimensional data.
Prior Knowledge: A database of similar (but different) reference objects with known scales. An imperfect 1:1 mapping from the input coordinates to the coordinates of each object within the reference database. A projection matrix mapping the three-dimensional data to the two-dimensional space of the photograph (a non-linear and non-invertible transform: x = (Mv)_x / (Mv)_z, y = (Mv)_y / (Mv)_z).
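For concreteness, the projection quoted above is the usual pinhole model, and one hedged way to recover an absolute scale is to search over plausible scales and score the reprojection error against 2D landmarks in the photograph. The sketch below is illustrative only; M, unit_vertices and image_points are hypothetical inputs.

```python
# Sketch of the projection x = (Mv)_x/(Mv)_z, y = (Mv)_y/(Mv)_z and a 1D scale search.
import numpy as np

def project(M, vertices):
    """Apply a 3x4 projection matrix M to 3D points (homogeneous coordinates)."""
    vh = np.hstack([vertices, np.ones((len(vertices), 1))])    # (N, 4)
    p = vh @ M.T                                               # (N, 3)
    return p[:, :2] / p[:, 2:3]                                # perspective divide

def scale_error(s, M, unit_vertices, image_points):
    """Reprojection error if the unit-height body is scaled to s metres."""
    return np.sum((project(M, s * unit_vertices) - image_points) ** 2)

# A search over plausible human heights then recovers the absolute scale, e.g.
#   best = min(np.linspace(1.4, 2.1, 71), key=lambda s: scale_error(s, M, unit_vertices, image_points))
```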
Problem #2: (improved silhouette fitting) Poikos Ltd. has created algorithms for converting RGB photographs of humans in (approximate) poses into silhouettes. Currently, a multivariate Gaussian mixture model is used as a first pass. This is imperfect and would benefit from an improved statistical method. The problem is to determine the probability that a given three-component colour at a given two-component location should be considered "foreground" or "background".
Prior Knowledge: A sparse set of colours which are very likely to be skin (foreground), and their locations. May include some outliers. A (larger) sparse set of colours which are very likely to be clothing (foreground), and their locations. May include several distributions in the case of multi-coloured clothing, and will probably include vast variations in luminosity. A (larger still) sparse set of colours which are very likely to be background. Will probably overlap with skin and/or clothing colours. A very approximate skeleton for the subject.
Limitations: Sample colours are chosen "safely"; that is, they are chosen in areas known to be away from edges. This causes two problems: highlights and shadows are not accounted for, and colours from arms and legs are under-represented in the model. All colours may be "saturated"; that is, information is lost about colours which are "brighter than white". All colours are subject to noise; each colour can be considered as a true colour plus a random variable drawn from a Gaussian distribution. The magnitude of this Gaussian noise is constant across all luminosities, so darker colours contain relatively more noise than brighter colours.
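As one hedged starting point (this is not Poikos's existing algorithm), the labelled samples could be used to fit separate mixtures for foreground and background in a joint colour-position space, with Bayes' rule giving the per-pixel foreground probability. The sketch below uses scikit-learn; the component count, prior and array layout are illustrative assumptions.

```python
# Sketch: per-pixel foreground probability from sparse labelled samples.
# fg_samples / bg_samples / pixels are (N, 5) arrays of (R, G, B, x, y); illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_models(fg_samples, bg_samples, n_components=5):
    fg = GaussianMixture(n_components, covariance_type="full").fit(fg_samples)
    bg = GaussianMixture(n_components, covariance_type="full").fit(bg_samples)
    return fg, bg

def foreground_probability(pixels, fg, bg, prior_fg=0.3):
    """Bayes' rule on the two mixture likelihoods."""
    log_fg = fg.score_samples(pixels) + np.log(prior_fg)
    log_bg = bg.score_samples(pixels) + np.log(1.0 - prior_fg)
    return 1.0 / (1.0 + np.exp(log_bg - log_fg))   # P(foreground | colour, position)
```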
A SMEC device is an array of aerofoil-shaped parallel hollow vanes forming linear venturis, perforated at the narrowest point where the vanes most nearly touch. When placed across a river or tidal flow, the water accelerates through the venturis between each pair of adjacent vanes and its pressure drops in accordance with Bernoulli’s Theorem. The low pressure zone draws a secondary flow out through the perforations in the adjacent hollow vanes which are all connected to a manifold at one end. The secondary flow enters the manifold through an axial flow turbine.
SMEC creates a small upstream head uplift of, say, 1.5 m to 2.5 m, thereby converting some of the primary flow's kinetic energy into potential energy. This head difference across the device drives around 80% of the flow between the vanes, which can be seen to act as a no-moving-parts venturi pump, lowering the head on the back face of the turbine through which the other 20% of the flow is drawn. The head drop across this turbine, however, is amplified from, say, 2 m up to, say, 8 m. So SMEC is analogous to a step-up transformer, converting a high-volume, low-pressure flow into a lower-volume, higher-pressure flow. It has the same functional advantages as a step-up transformer, and the inevitable transformer losses as well.
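Taking the quoted figures at face value, a back-of-envelope check of the transformer analogy (an illustration only, not SMEC design data) uses the hydraulic power P = ρgQH:

```latex
% Hydraulic power P = \rho g Q H, with total flow Q and upstream uplift of about 2 m:
P_{\mathrm{available}} = \rho g \, Q \, (2\,\mathrm{m}), \qquad
P_{\mathrm{turbine}}   = \rho g \, (0.2\,Q) \, (8\,\mathrm{m}) = 0.8\, P_{\mathrm{available}},
% i.e. roughly 80% of the available power reaches the turbine path at four times the
% head, before the mixing (transformer) losses are accounted for.
```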
The key benefit is that a conventional turbine (or Archimedes screw) designed to work efficiently at a 1.5 m to 2.5 m driving head has to be of very large diameter with a large step-up gearbox. In many real-world locations this makes it too expensive or simply impractical, for example in shallow water.
The work we did in 2009-10 for DECC on a SMEC across the Severn Estuary concluded that compared to a conventional barrage, SMEC would output around 80% of the power at less than half the capital cost. Crucially, however, this greatly superior performance is achieved with minimal environmental impact as the tidal signal is preserved in the upstream lagoon, avoiding the severe damage to the feeding grounds of migratory birdlife that is an unwelcome characteristic of a conventional barrage.
To help successfully commercialise the technology, however, we will eventually want to build a reliable (CFD?) computer model of SMEC which even if partly parametric, would benefit hugely from an improved understanding of the small-scale turbulence and momentum transfer mechanisms in the mixing section.
DNA double strand breaks (DSB) are the most deleterious type of DNA damage induced by ionizing radiation and the cytotoxic agents used in the treatment of cancer. When DSBs are formed, the cell attempts to repair the DNA damage through activation of a variety of molecular repair pathways. One of the earliest events in response to the presence of DSBs is the phosphorylation of a histone protein, H2AX, to form γH2AX. Many hundreds of copies of γH2AX form, occupying several megabases of DNA at the site of each DSB. These large collections of γH2AX can be visualized using a fluorescence microscopy technique and are called 'γH2AX foci'. γH2AX serves as a scaffold to which other DNA damage repair proteins adhere and so facilitates repair. Following re-ligation of the DNA DSB, the γH2AX is dephosphorylated and the foci disappear.
We have developed a contrast agent, 111In-anti-γH2AX-Tat, for nuclear medicine (SPECT) imaging of γH2AX which is based on an anti-γH2AX monoclonal antibody. This agent allows us to image DNA DSB in vitro in cells, and in in vivo model systems of cancer. The ability to track the spatiotemporal distribution of DNA damage in vivo would have many potential clinical applications, including as an early read-out of tumour response or resistance to particular anticancer drugs or radiation therapy.
The imaging tracer principle states that a contrast agent should not interfere with the physiology of the process being imaged. We have therefore investigated the influence of the contrast agent itself on the kinetics of DSB formation and repair, and on γH2AX foci formation and resolution, and now wish to synthesise these data into a coherent kinetic-dynamic model.
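As a hedged illustration of what such a kinetic-dynamic model might look like (an assumed first-order scheme, not the group's actual model), one could track DSBs and foci with a small ODE system; all rate constants, and the factor representing any perturbation by the contrast agent, are hypothetical.

```python
# Minimal sketch: DSB repair and gamma-H2AX focus formation/resolution kinetics.
import numpy as np
from scipy.integrate import odeint

def rhs(y, t, k_form, k_repair, k_dephos, alpha=1.0):
    dsb, foci = y
    d_dsb = -k_repair * dsb                           # re-ligation of the break
    d_foci = alpha * k_form * dsb - k_dephos * foci   # foci form with DSBs, then resolve
    return [d_dsb, d_foci]

t = np.linspace(0.0, 24.0, 200)                       # hours after irradiation
sol = odeint(rhs, [30.0, 0.0], t, args=(1.0, 0.3, 0.5))
# sol[:, 1] is the predicted number of visible gamma-H2AX foci over time, to be
# compared with imaging data with and without the contrast agent (alpha != 1).
```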
Please note that this is taking place in the afternoon - partly to avoid a clash with the OCCAM group meeting in the morning.
The standard mathematical treatment of risk combines numerical measures of uncertainty (usually probabilistic) and loss (money and other natural estimators of utility). There are significant practical and theoretical problems with this interpretation. A particular concern is that the estimation of quantitative parameters is frequently problematic, particularly when dealing with one-off events such as political, economic or environmental disasters. Practical decision-making under risk, therefore, frequently requires extensions to the standard treatment.
An intuitive approach to reasoning under uncertainty has recently become established in computer science and cognitive science in which general theories (formalised in a non-classical first-order logic) are applied to descriptions of specific situations in order to construct arguments for and/or against claims about possible events. Collections of arguments can be aggregated to characterize the type or degree of risk, using the logical grounds of the arguments to explain, and assess the credibility of, the supporting evidence for competing claims. Discussions about whether a complex piece of equipment or software could fail, the possible consequences of such failure and their mitigation, for example, can be based on the balance and relative credibility of all the arguments. This approach has been shown to offer versatile risk management tools in a number of domains, including clinical medicine and toxicology (e.g. www.infermed.com; www.lhasa.com). Argumentation frameworks are also being used to support open discussion and debates about important issues (e.g. see debate on environmental risks at www.debategraph.org).
Despite the practical success of argument-based methods for risk assessment and other kinds of decision making, they typically ignore measurement of uncertainty even when some quantitative data are available, or combine logical inference with quantitative uncertainty calculations in ad hoc ways. After a brief introduction to the argumentation approach, I will demonstrate medical risk management applications of both kinds and invite suggestions for solutions which are mathematically more satisfactory.
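Purely to fix ideas for readers new to the field (the logic-based approach described in the talk is considerably richer), an abstract argumentation framework in the style of Dung can be evaluated mechanically; the sketch below computes the grounded extension of a toy attack graph with invented argument names.

```python
# Sketch: grounded extension of an abstract argumentation framework (toy example).
def grounded_extension(arguments, attacks):
    """Iterate: accept arguments whose attackers are all rejected; reject arguments
    attacked by an accepted argument; stop at a fixed point."""
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in rejected:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= rejected:
                accepted.add(a); changed = True
            elif attackers & accepted:
                rejected.add(a); changed = True
    return accepted

args = {"equipment_fails", "redundant_backup", "backup_untested"}
atts = {("redundant_backup", "equipment_fails"), ("backup_untested", "redundant_backup")}
print(grounded_extension(args, atts))   # {'backup_untested', 'equipment_fails'}
```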
Definitions (Hubbard: http://en.wikipedia.org/wiki/Risk)
Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The "true" outcome/state/result/value is not known.
Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: "There is a 60% chance this market will double in five years."
Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
Measurement of risk: A set of possibilities each with quantified probabilities and quantified losses. Example: "There is a 40% chance the proposed oil well will be dry with a loss of $12 million in exploratory drilling costs".
The conceptual background to the argumentation approach to reasoning under uncertainty is reviewed in the attached paper “Arguing about the Evidence: a logical approach”.
The following two topics are likely to be discussed.
A) Modelling the collective behaviour of chicken flocks. Marian Dawkins has a joint project with Steve Roberts in Engineering studying the patterns of optical flow in large flocks of commercial broiler chickens. They have found that various measurements of flow (such as skew and kurtosis) are predictive of future mortality. Marian would be interested in seeing whether we can model these effects.
B) Asymmetrical prisoners' dilemma games. Despite massive theoretical interest, there are very few (if any) actual examples of animals showing the predicted behaviour of reciprocity with delayed reward. Marian Dawkins suspects that the reason for this is that the assumptions made are unrealistic, and she would like to explore some ideas about this.
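To make topic B concrete, here is a hedged toy simulation of an asymmetric iterated prisoners' dilemma (the payoff values and strategies are invented for illustration and are not Marian Dawkins's parameters); it shows where reciprocity with delayed reward would, and would not, pay.

```python
# Toy asymmetric iterated prisoners' dilemma: the players have different payoff
# matrices, keyed (my_move, their_move). Values are invented for illustration.
PAYOFF_A = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
PAYOFF_B = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 6, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_A, strategy_B, rounds=50):
    hist_A, hist_B, score_A, score_B = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_A(hist_B), strategy_B(hist_A)   # each sees the other's history
        score_A += PAYOFF_A[(a, b)]
        score_B += PAYOFF_B[(b, a)]
        hist_A.append(a); hist_B.append(b)
    return score_A, score_B

print(play(tit_for_tat, tit_for_tat))     # mutual reciprocity with a one-round delay
print(play(tit_for_tat, always_defect))   # reciprocity collapses against a defector
```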
Please note the slightly early start to accommodate the OCCAM group meeting that follows.
10am Radius Health - Mark Evans
10:30am NAG - Mick Pont and Lawrence Mulholland
Please note that Thales are also proposing several projects, but the academic supervisors have already been allocated.
There will be a BP workshop but we are waiting for some suggested alternative dates.
Many radar designs transmit trains of pulses to estimate the Doppler shift from moving targets, in order to distinguish them from the returns from stationary objects (clutter) at the same range. The design of these waveforms is a compromise, because when the radar's pulse repetition frequency (PRF) is high enough to sample the Doppler shift without excessive ambiguity, the range measurements often also become ambiguous. Low-PRF radars are designed to be unambiguous in range, but are highly ambiguous in Doppler. High-PRF radars are, conversely, unambiguous in Doppler but highly ambiguous in range. Medium-PRF radars have a moderate degree of ambiguity (say, five times) in both range and Doppler and give better overall performance.
Multiple PRFs must therefore be used to resolve these ambiguities (using the principle of the Chinese Remainder Theorem). A more serious issue, however, is that each PRF is now 'blind' at certain ranges, where the received signal arrives at the same time as the next pulse is transmitted, and at certain Doppler shifts (target speeds), where the return is 'folded' in Doppler so that it is hidden under the much larger clutter signal.
A practical radar therefore transmits successive bursts of pulses at different PRFs to overcome the 'blindness' and to resolve the ambiguities. Analysing the performance, although quite complex if done in detail, is possible using modern computer models, but the inverse problem of synthesising waveforms with a given performance remains difficult. Even more difficult is the problem of gaining intuitive insights into the likely effect of altering the waveforms. Such insights would be extremely valuable for the design process.
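To illustrate just the ambiguity-resolution step (a textbook toy, not Thales's processing chain), the true range can be recovered from the folded ranges measured at two PRFs whose unambiguous ranges are co-prime multiples of a common unit, in the spirit of the Chinese Remainder Theorem:

```python
# Sketch: resolving range ambiguity from folded measurements at several PRFs.
# Numbers are illustrative; a real radar must also handle blind zones and noise.
def resolve_range(folded, unambiguous, max_range):
    """folded[i] = true_range % unambiguous[i]; search for consistent true ranges."""
    candidates, r = [], folded[0]
    while r <= max_range:
        if all(abs((r % u) - f) < 1e-6 for f, u in zip(folded, unambiguous)):
            candidates.append(r)
        r += unambiguous[0]
    return candidates

# Two PRFs with unambiguous ranges of 15 km and 22 km:
true_range = 87.0
folded = [true_range % 15.0, true_range % 22.0]
print(resolve_range(folded, [15.0, 22.0], max_range=300.0))   # [87.0]
```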
This problem is well known within the radar industry, but it is hoped that by airing it to an audience with a wider range of skills, some new ways of looking at the problem might be found.
Emma Warneford: "Formation of Zonal Jets and the Quasigeostrophic Theory of the Thermodynamic Shallow Water Equations"
Georgios Anastasiades: "Quantile forecasting of wind power using variability indices"
Due to illness the speaker has been forced to postpone at short notice. A new date will be announced as soon as possible.
This will be on the topic of the CASE project Thales will be sponsoring from Oct '11.
We apply the novel method of potential analysis to study climatic records. The method comprises (i) derivation of the number of climate states from the time series, and (ii) derivation of the potential coefficients. Dynamically monitoring patterns of potential analysis yields indications of possible bifurcations and transitions of the system.
The method is tested on artificial data and then applied to various climatic records [1,2]. It can be applied to a wide range of stochastic systems where time series of sufficient length and temporal resolution are available and transitions or bifurcations are surmised. A recent application of the method in a model of globally coupled bistable systems [3] confirms its general applicability for studying time series in statistical physics.
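For readers unfamiliar with the method, a much-simplified version of steps (i)-(ii) can be sketched as follows (the published procedure in [1,2] differs in detail): for an overdamped stochastic system the stationary density p(x) satisfies U(x) ≈ -(σ²/2) ln p(x), so a polynomial fitted to -ln p of the time series reveals the number of potential wells, i.e. the number of states.

```python
# Sketch of potential analysis: fit a polynomial potential to a time series and
# count its wells. A simplification of the published method; details differ.
import numpy as np

def empirical_potential(x, bins=50, poly_order=6):
    hist, edges = np.histogram(x, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    mask = hist > 0
    U = -np.log(hist[mask])                       # potential up to the sigma^2/2 factor
    return np.poly1d(np.polyfit(centres[mask], U, poly_order))

def count_states(U_poly, lo, hi):
    """Number of local minima (states) of the fitted potential inside [lo, hi]."""
    roots = [r.real for r in U_poly.deriv().roots if abs(r.imag) < 1e-8 and lo < r.real < hi]
    return sum(1 for r in roots if U_poly.deriv(2)(r) > 0)

rng = np.random.default_rng(1)                    # synthetic bistable test signal
x = np.concatenate([rng.normal(-1, 0.3, 5000), rng.normal(1, 0.3, 5000)])
U = empirical_potential(x)
print(count_states(U, x.min(), x.max()))          # expect 2
```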
[1] Livina et al., Climate of the Past, 2010.
[2] Livina et al., Climate Dynamics (submitted).
[3] Vaz Martins et al., Phys. Rev. E, 2010.
There will be three problems discussed, all of which are open for consideration as MSc projects.
1. Reduction of Ndof in Adaptive Signal Processing
2. Calculus of Convex Sets
3. Dynamic Response of a Disc with Off-Centre Hole(s)
This is the session for industrial sponsors of the MSc in MM and SC to present the project ideas for 2010-11 academic year. Potential supervisors should attend to clarify details of the projects and meet the industrialists.
The schedule is 10am: Introduction; 10:05am David Sayers for NAG; 10:35am Andy Stove for Thales. Please note the earlier-than-usual start time!
PLEASE NOTE THAT THIS WORKSHOP IS TO BE HELD IN 21 BANBURY ROAD BEGINNING AT 9AM! \\
We will give three short presentations of current work here on small-scale mechanics:
1) Micron-scale cantilever testing and nanoindentation – Dave Armstrong
2) Micron-scale pillar compression – Ele Grieveson
3) Dislocation loop shapes – Steve Fitzgerald
These should all provide fuel for discussion and, I hope, ideas for future collaborative work.\\
The meeting will be in the committee room in 21 Banbury Rd (1st floor, West end).
John Allen: The Bennett Pinch revisited
Abstract: The original derivation of the well-known Bennett relation is presented. Willard H. Bennett developed a theory, considering both electric and magnetic fields within a pinched column, which is completely different from that found in the textbooks. The latter theory is based on simple magnetohydrodynamics which ignores the electric field.
The discussion leads to the interesting question as to whether the possibility of purely electrostatic confinement should be seriously considered.
Angela Mihai: A mathematical model of coupled chemical and electrochemical processes arising in stress corrosion cracking
Abstract: A general mathematical model for the electrochemistry of corrosion in a long and narrow metal crack is constructed by extending classical kinetic models to incorporate physically realistic kinematic conditions of metal erosion and surface film growth. In this model, the electrochemical processes are described by a system of transport equations coupled through an electric field, and the movement of the metal surface is caused, on the one hand, by the corrosion process and, on the other hand, by the undermining action of a hydroxide film, which forms by consuming the metal substrate. For the model problem, approximate solutions obtained via a combination of analytical and numerical methods indicate that if the diffusivity of the metal ions across the film increases, a thick unprotective film forms, while if the rate at which the hydroxide is produced increases, a thin passivating film develops.
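For readers who want the generic form of such a model (the speaker's precise formulation may differ), the transport of each ionic species with concentration c_i is typically written as a Nernst-Planck system coupled through the electric potential φ:

```latex
% Generic Nernst-Planck transport for species i with diffusivity D_i and charge z_i;
% \phi is closed by electroneutrality (or a Poisson equation), and the moving
% metal/film boundaries supply the kinematic conditions mentioned above.
\frac{\partial c_i}{\partial t}
  = \nabla \cdot \left( D_i \nabla c_i + \frac{z_i F D_i}{R T}\, c_i \nabla \phi \right),
\qquad \sum_i z_i c_i = 0 .
```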
We will try to cover the following problems in the workshop:
(1) Modelling of aortic aneurysms, showing the changes in blood flow / wall loads before and after placement of aortic stents;
(2) Modelling of blood flow / wall loads in intracranial aneurysms when flow diverters are used;
(3) Metal artefact reduction in computed tomography (CT).
If we run out of time the third topic may be postponed.
This workshop is half-seminar, half-workshop. \\ \\ HSBC have an on-going problem and they submitted a proposal for an MSc in Applied Stats project on this topic. Unfortunately, the project was submitted too late for this cohort of students. Eurico will talk about "the first approach at the problem" but please be aware that it is an open problem which requires further work. Eurico's abstract is as follows. \\ \\
This article examines modelling yield curves through chaotic dynamical systems whose dynamics can be unfolded using non-linear embeddings in higher dimensions. We then refine recent techniques used in the state space reconstruction of spatially extended time series in order to forecast the dynamics of yield curves.
We use daily LIBOR GBP data (January 2007-June 2008) in order to perform forecasts over a 1-month horizon. Our method seems to outperform random walk and other benchmark models on the basis of mean square forecast error criteria.
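A stripped-down version of the state-space-reconstruction idea (not Eurico's actual model) is a Takens delay embedding of a scalar series followed by an analogue, nearest-neighbour forecast; the embedding dimension, lag and neighbour count below are illustrative.

```python
# Sketch: delay embedding and a one-step analogue (nearest-neighbour) forecast.
import numpy as np

def delay_embed(x, dim=3, lag=1):
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

def forecast_next(x, dim=3, lag=1, k=5):
    """Average the successors of the k embedded states closest to the current one."""
    emb = delay_embed(x, dim, lag)
    current, history = emb[-1], emb[:-1]
    successors = x[(dim - 1) * lag + 1:]          # value following each history vector
    idx = np.argsort(np.linalg.norm(history - current, axis=1))[:k]
    return successors[idx].mean()

# Example on a noisy deterministic series (a stand-in for a yield-curve factor):
t = np.arange(2000)
x = np.sin(0.07 * t) + 0.05 * np.random.default_rng(2).standard_normal(2000)
print(forecast_next(x))
```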
Puck Rombach: "Weighted Generalization of the Chromatic Number in Networks with Community Structure";
Christopher Lustri: "Exponential Asymptotics for Time-Varying Flows";
Alex Shabala: "Mathematical Modelling of Oncolytic Virotherapy";
Martin Gould: "Foreign Exchange Trading and The Limit Order Book".
'Compressive sampling' is a topic of current interest. It relies on data being sparse in some domain, which allows what is apparently 'sub-Nyquist' sampling, so that the quantities of data which must be handled become more closely related to the information rate. This principle would appear to have (at least) three applications for radar and electronic warfare: \\
The most modest application is to reduce the amount of data which we must handle: radar and electronic warfare receivers generate vast amounts of data (up to 1 Gbit/s or even 10 Gbit/s). It is desirable to be able to store this data for future analysis, and it is also becoming increasingly important to be able to share it between different sensors, which, prima facie, requires vast communication bandwidths. It would therefore be valuable to find ways to handle this data more efficiently. \\
The second advantage is that, if suitable data domains can be identified, it may also be possible to pre-process the data before the analogue-to-digital converters in the receivers, to reduce the demands on these critical components. \\
The most ambitious use of compressive sensing would be to find ways of modifying the radar waveforms, and the electronic warfare receiver sampling strategies, to change the domain in which the information is represented to reduce the data rates at the receiver 'front ends', i.e. make the data at the front end better match the information we really want to acquire.\\
The aim of the presentation will be to describe the issues with which we are faced, and to discuss how compressive sampling might be able to help. A particular issue which will be raised is how we might find domains in which the data is sparse.
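To make the idea concrete (a generic noiseless toy, not a Thales design), the sketch below recovers a spectrally sparse signal from a quarter of its Nyquist-rate samples using orthogonal matching pursuit; the sparsity basis here is simply the DFT, which sidesteps exactly the open question raised above of finding a domain in which real radar and EW data are sparse.

```python
# Toy compressive sampling: recover a k-sparse spectrum from m << n random
# time-domain samples via orthogonal matching pursuit. Sizes are illustrative.
import numpy as np

n, m, k = 256, 64, 3
rng = np.random.default_rng(3)

spectrum = np.zeros(n, dtype=complex)                      # k-sparse DFT-domain signal
spectrum[rng.choice(n, k, replace=False)] = 1.0 + 1.0j
signal = np.fft.ifft(spectrum) * n

rows = np.sort(rng.choice(n, m, replace=False))            # random sub-Nyquist samples
A = np.fft.ifft(np.eye(n), axis=0)[rows] * n               # partial inverse-DFT matrix
y = signal[rows]

support, residual = [], y.copy()                           # orthogonal matching pursuit
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

recovered = np.zeros(n, dtype=complex)
recovered[support] = coef
print(np.allclose(recovered, spectrum, atol=1e-6))         # should print True
```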
9:50am Welcome \\
10:00am Malcolm McCulloch (Engineering, Oxford), "Dual usage of land: Solar power and cattle grazing"; \\
10:45am Jonathan Moghal (Materials, Oxford), “Anti-reflectance coatings: ascertaining microstructure from optical properties”; \\
11:15am (approx) Coffee \\
11:45am Agnese Abrusci (Physics, Oxford), "P3HT based dye-sensitized solar cells"; \\
12:15pm Peter Foreman (Destertec UK), "Concentrating Solar Power and Financial Issues" \\
1:00pm Lunch.
Andrew Stewart - The role of the complete Coriolis force in ocean currents that cross the equator
Large scale motions in the atmosphere and ocean are dominated by the Coriolis force due to the Earth's rotation. This tends to prevent fluid crossing the equator from one hemisphere to the other. We investigate the flow of a deep ocean current, the Antarctic Bottom Water, across the equator using a shallow water model that includes the Earth's complete Coriolis force. By contrast, most theoretical models of the atmosphere and ocean use the so-called traditional approximation that neglects the component of the Coriolis force associated with the locally horizontal component of the Earth's rotation vector. Using a combination of analytical and numerical techniques, we show that the cross-equatorial transport of the Antarctic Bottom Water may be substantially influenced by the interaction of the complete Coriolis force with bottom topography.
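For context (standard textbook material rather than the speaker's results), the components of the full Coriolis acceleration for a velocity (u, v, w) at latitude φ are

```latex
% f = 2\Omega\sin\phi, \tilde{f} = 2\Omega\cos\phi, with \Omega the Earth's rotation rate:
-2\,\boldsymbol{\Omega}\times\mathbf{u}
   = \bigl( f v - \tilde{f} w,\; -f u,\; \tilde{f} u \bigr).
% The traditional approximation drops the \tilde{f} terms; but at the equator f = 0
% while \tilde{f} is largest, which is why the complete force matters for
% cross-equatorial flows such as the Antarctic Bottom Water.
```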
NO WORKSHOP - 09:45 coffee in DH Common Room for those attending the OCIAM Meeting
• Amy Smith presents: “Multiscale modelling of coronary blood flow derived from the microstructure”
• Laura Gallimore presents: “Modelling Cell Motility”
• Jean-Charles Seguis presents: “Coupling the membrane with the cytosol: a first encounter”
This will not be a normal workshop with a single scientist presenting an unsolved problem where mathematics may help. Instead, it is more of a discussion meeting with a few speakers all interested in a single theme. So far we have:
Lenny Smith (LSE) on Using Empirically Inadequate Models to inform Your Subjective Probabilities: How might Solvency II inform climate change decisions?
Dan Rowlands (AOPP, Oxford) on "objective" climate forecasting;
Tim Palmer (ECMWF and AOPP, Oxford) on Constraining predictions of climate change using methods of data assimilation;
Chris Farmer (Oxford) about the problem of how to ascertain the error in the equations of a model when in the midst of probabilistic forecasting and prediction.
Synthetic Aperture Radars (SARs) produce high resolution images over large areas at high data rates. An aircraft flying at 100 m/s can easily image an area at a rate of 1 square kilometre per second at a resolution of 0.3 m x 0.3 m, i.e. 10 Mpixels/s with a dynamic range of 60-80 dB (10-13 bits). Unlike optical images, the SAR image is also coherent, and this coherence can be used to detect changes in the terrain from one image to another, for example to detect the distortions in the ground surface which precede volcanic eruptions.
It is clearly very desirable to be able to compress these images before they are relayed from one place to another, most particularly down to the ground from the aircraft in which they are gathered.
Conventional image compression techniques superficially work well with SAR images; for example, JPEG 2000 was created for the compression of traditional photographic images and optimised on that basis. However, the conventional wisdom is that SAR data is generally much less correlated in nature and is therefore unlikely to achieve the same compression ratios using the same coding schemes unless significant information is lost.
Features which typically need to be preserved in SAR images are:
o texture to identify different types of terrain
o boundaries between different types of terrain
o anomalies, such as military vehicles in the middle of a field, which may be of tactical importance and
o the fine details of the pixels on a military target so that it might be recognised.
The talk will describe how Synthetic Aperture Radar images are formed, the features which make the requirements for compression algorithms different from those for electro-optical images, and the properties of wavelets which may make them appropriate for addressing this problem. It will also discuss what is currently known about the compression of radar images in general.
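As a simple baseline the audience could experiment with (not a Thales algorithm, and applied here to a real-valued synthetic image rather than complex SAR data), a wavelet-thresholding compressor can be written in a few lines with PyWavelets; the wavelet, decomposition level and retained-coefficient fraction are illustrative choices.

```python
# Sketch: wavelet-thresholding compression of a speckled, SAR-like amplitude image.
import numpy as np
import pywt

rng = np.random.default_rng(4)
terrain = np.kron(rng.uniform(0.5, 2.0, (16, 16)), np.ones((16, 16)))  # blocky 'terrain'
image = terrain * rng.exponential(1.0, terrain.shape)                  # multiplicative speckle

coeffs = pywt.wavedec2(image, "db4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)

keep = 0.05                                      # retain the largest 5% of coefficients
threshold = np.quantile(np.abs(arr), 1.0 - keep)
arr_small = pywt.threshold(arr, threshold, mode="hard")

reconstructed = pywt.waverec2(pywt.array_to_coeffs(arr_small, slices,
                                                   output_format="wavedec2"), "db4")
rmse = np.sqrt(np.mean((reconstructed - image) ** 2))
print(f"kept {keep:.0%} of coefficients, RMSE = {rmse:.3f}")
```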
Heike Gramberg - Flagellar beating in trypanosomes
Robert Whittaker - High-Frequency Self-Excited Oscillations in 3D Collapsible Tube Flows