Fri, 26 Apr 2013

10:00 - 11:15
DH 3rd floor SR

Analysis of travel patterns from departure and arrival times

Charles Offer
(Thales UK)
Abstract

Please note the change of venue!

Suppose there is a system in which certain objects move through a network. The objects are detected only when they pass through a sparse set of points in the network. For example, the objects could be vehicles moving along a road network, observed by a radar or other sensor as they pass through (or originate or terminate at) certain key points, but not observed continuously or tracked as they travel from one point to another. Alternatively, they could be data packets in a computer network. The detections record only the time at which an object passes by, and contain no identity information that would trivially allow the movement of an individual object from one point to another to be deduced. It is desired to determine the statistics of the movement of the objects through the network. That is, if an object passes through point A at a certain time, it is desired to determine the probability density for the same object passing through a point B at a certain later time.

The system might perhaps be represented by a graph, with a node at each point where detections are made. The detections at each node can be represented by a signal as a function of time, where the signal is a superposition of delta functions (one per detection). The statistics of the movement of objects between nodes must be deduced from the correlations between the signals at each node. The problem is complicated by the possibility that a given object might move between two nodes along several alternative routes (perhaps via other nodes or perhaps not), or might travel along the same route but with several alternative speeds.
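
As a rough illustration of the correlation idea (a minimal sketch in Python, assuming sparse, independently moving objects; the function and variable names are purely illustrative), the transition-time density from node A to node B can be approximated by histogramming the positive time differences between every pair of detections at the two nodes:

    import numpy as np

    def transit_time_histogram(times_a, times_b, max_lag=600.0, bin_width=5.0):
        # Histogram of positive time differences t_b - t_a between detections
        # at two nodes.  For sparse, independent traffic the peak of this
        # cross-correlogram approximates the A-to-B transition-time pdf.
        lags = []
        for ta in times_a:
            for tb in times_b:
                dt = tb - ta
                if 0.0 < dt <= max_lag:        # causality: positive delays only
                    lags.append(dt)
        bins = np.arange(0.0, max_lag + bin_width, bin_width)
        return np.histogram(lags, bins=bins)

    # Toy example: objects pass A at random times and reach B roughly 120 s later.
    rng = np.random.default_rng(0)
    t_a = np.sort(rng.uniform(0, 3600, 50))
    t_b = np.sort(t_a + rng.normal(120, 10, size=t_a.size))
    counts, edges = transit_time_histogram(t_a, t_b)
    print("peak lag bin starts at", edges[np.argmax(counts)], "s")

When detections are dense, or clutter is present, the histogram acquires a large pedestal of accidental coincidences, which is one way of seeing why the questions below about signal sparsity and false detections matter.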

What prior knowledge about the network, or constraints on the signals, are needed to make this problem solvable? Is it necessary to know the connections between the nodes, or the pdfs for the transition time between nodes, a priori, or can these be deduced? What conditions are needed on the information content of the signals? (That is, if detections are very sparse on the timescale for passage through the network, the transition probabilities can be built up by considering each cascade of detections independently, whereas if detections are dense it will presumably be necessary to assume that objects do not move through the network independently, but instead tend to form convoys that are apparent as a pattern of detections persisting, on average, for some distance.) What limits are there on the noise in the signal or the amount of unwanted signal, i.e. false detections, objects which randomly fail to be detected at a particular node, or objects which are detected at one node but do not pass through any other node? Is any special action needed to enforce causality, i.e. positive time delays for transitions between nodes?

Fri, 01 Jun 2012

10:00 - 12:30
DH 1st floor SR

Sensor Resource Management

Andy Stove
(Thales UK)
Abstract

The issue of resource management arises with any sensor which is capable either of sensing only a part of its total field of view at any one time, or which is capable of having a number of operating modes, or both.

A very simple example is a camera with a telephoto lens.  The photographer has to decide what he is going to photograph, and whether to zoom in to get high resolution on a part of the scene, or zoom out to see more of the scene.  Very similar issues apply, of course, to electro-optical sensors (visible light or infra-red 'TV' cameras) and to radars.

The subject has, perhaps, been most extensively studied in relation to multi-mode/multi-function radars, where approaches such as neural networks, genetic algorithms and auction mechanisms have been proposed, as well as more deterministic management schemes, but the methods which have actually been implemented have been much more primitive.
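
As a toy illustration only (not a description of any fielded scheme), the simplest resource managers amount to a greedy allocation of the sensor's time budget to the highest-value tasks; the auction and optimisation approaches mentioned above refine this idea. A minimal Python sketch, with entirely invented task names and numbers:

    def allocate_dwells(tasks, time_budget):
        # Greedily allocate a sensor time budget to candidate tasks,
        # ranked by expected benefit per unit dwell time.
        ranked = sorted(tasks, key=lambda t: t["benefit"] / t["dwell"], reverse=True)
        schedule, used = [], 0.0
        for task in ranked:
            if used + task["dwell"] <= time_budget:
                schedule.append(task["name"])
                used += task["dwell"]
        return schedule

    tasks = [
        {"name": "track update T1", "dwell": 0.01, "benefit": 5.0},
        {"name": "track update T2", "dwell": 0.01, "benefit": 2.0},
        {"name": "surveillance, sector 3", "dwell": 0.05, "benefit": 8.0},
    ]
    print(allocate_dwells(tasks, time_budget=0.05))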

The use of multiple, disparate sensors on multiple mobile, especially airborne, platforms adds further degrees of freedom to the problem, and this extension is of growing interest.

The presentation will briefly review the problem for both the single-sensor and the multi-platform cases, together with some of the approaches which have been proposed, and will highlight the problems which remain open.

Fri, 02 Mar 2012

10:00 - 13:30
DH 1st floor SR

"Pattern of Life" and traffic

Charles Offer
(Thales UK)
Abstract

'Pattern-of-life' is a current buzz-word in sensor systems. One aspect of this is the automatic estimation of traffic flow patterns, perhaps where existing road maps are not available. For example, a sensor might measure the position of a number of vehicles in 2D, with a finite time interval between each observation of the scene. It is desired to estimate the time-averaged spatial density, current density, sources and sinks, etc. Are there practical methods to do this without tracking individual vehicles, given that there may also be false 'clutter' detections, the density of vehicles may be high, and each vehicle may not be detected in every timestep? And what if the traffic flow has periodicity, e.g. variations on the timescale of a day?
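
One baseline that avoids explicit tracking (a minimal sketch, assuming detections arrive as per-timestep lists of 2D positions; the names are illustrative) is simply to accumulate detections into a spatial histogram, which estimates the time-averaged density directly; clutter, missed detections and diurnal periodicity would all need further treatment:

    import numpy as np

    def occupancy_map(frames, x_edges, y_edges):
        # Time-averaged spatial density of detections, accumulated over all
        # timesteps without associating detections into tracks.
        # 'frames' is a list of (N_k, 2) arrays of detection positions.
        grid = np.zeros((len(x_edges) - 1, len(y_edges) - 1))
        for pts in frames:
            h, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=[x_edges, y_edges])
            grid += h
        return grid / len(frames)   # mean detections per cell per timestep

    # Toy usage: detections concentrated along a notional east-west road at y ~ 50.
    rng = np.random.default_rng(1)
    frames = [np.column_stack([rng.uniform(0, 100, 20),
                               rng.normal(50, 2, 20)]) for _ in range(200)]
    density = occupancy_map(frames, np.linspace(0, 100, 21), np.linspace(0, 100, 21))
    print(density.shape, density.max())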

Fri, 24 Jun 2011

10:00 - 13:00
DH 1st floor SR

Medium-PRF Radar Waveform Design and Understanding

Andy Stove
(Thales UK)
Abstract

Many radar designs transmit trains of pulses to estimate the Doppler shift from moving targets, in order to distinguish them from the returns from stationary objects (clutter) at the same range. The design of these waveforms is a compromise, because when the radar's pulse repetition frequency (PRF) is high enough to sample the Doppler shift without excessive ambiguity, the range measurements often also become ambiguous. Low-PRF radars are designed to be unambiguous in range but are highly ambiguous in Doppler. High-PRF radars are, conversely, unambiguous in Doppler but highly ambiguous in range. Medium-PRF radars have a moderate degree of ambiguity (say, five-fold) in both range and Doppler and give better overall performance.
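
The trade-off can be made concrete with the standard first-order relations: the unambiguous range is c/(2 PRF) and the unambiguous radial speed is +/- (wavelength x PRF)/4. A small Python sketch with illustrative numbers (the 3 cm wavelength is an assumption, corresponding roughly to an X-band radar):

    c = 3.0e8            # speed of light, m/s
    wavelength = 0.03    # illustrative X-band wavelength, m

    def unambiguous_limits(prf_hz):
        # Ranges beyond c/(2*PRF) fold over in range; radial speeds beyond
        # +/- wavelength*PRF/4 fold over in Doppler.
        r_ua = c / (2.0 * prf_hz)
        v_ua = wavelength * prf_hz / 4.0
        return r_ua, v_ua

    for prf in (1e3, 10e3, 100e3):   # representative low, medium and high PRFs
        r, v = unambiguous_limits(prf)
        print(f"PRF {prf/1e3:5.0f} kHz: range {r/1e3:7.1f} km, speed +/- {v:6.1f} m/s")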

The ambiguities mean that multiple PRFs must be used to resolve them (using the principle of the Chinese Remainder Theorem). A more serious issue, however, is that each PRF is now 'blind' at certain ranges, where the received signal arrives at the same time as the next pulse is transmitted, and at certain Doppler shifts (target speeds), where the return is 'folded' in Doppler so that it is hidden under the much larger clutter signal.
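
The ambiguity-resolution step itself can be illustrated with a brute-force search in the spirit of the Chinese Remainder Theorem (a sketch only, with invented numbers; a real system must also cope with measurement error and multiple targets):

    def resolve_range(apparent, r_ua, max_range, tol=50.0):
        # Hypothesise fold counts on the first PRF and keep the range whose
        # fold on every other PRF matches its apparent (folded) range.
        k = 0
        while True:
            candidate = apparent[0] + k * r_ua[0]
            if candidate > max_range:
                return None
            if all(abs(candidate % ru - ra) < tol
                   for ra, ru in zip(apparent[1:], r_ua[1:])):
                return candidate
            k += 1

    # Two PRFs with unambiguous ranges 15 km and 18 km; a target at 40 km.
    r_ua = [15e3, 18e3]
    true_range = 40e3
    apparent = [true_range % ru for ru in r_ua]   # 10 km and 4 km respectively
    print(resolve_range(apparent, r_ua, max_range=100e3))   # recovers 40000.0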

A practical radar therefore transmits successive bursts of pulses at different PRFs to overcome the 'blindness' and to resolve the ambiguities. Analysing the performance, although quite complex if done in detail, is possible using modern computer models, but the inverse problem of synthesising waveforms with a given performance remains difficult. Even more difficult is the problem of gaining intuitive insight into the likely effect of altering the waveforms. Such insight would be extremely valuable for the design process.
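
The kind of bookkeeping such models perform can be sketched as follows (illustrative numbers throughout; the pulse width, wavelength, clutter-notch width and PRF set are assumptions, not a real design): for each candidate PRF, a range cell is blind if the folded echo delay falls inside the transmit pulse, and a velocity cell is blind if the folded Doppler falls inside the clutter rejection notch.

    import numpy as np

    c = 3.0e8
    wavelength = 0.03      # m, illustrative
    pulse_width = 5e-6     # s, illustrative
    clutter_notch = 2.0    # m/s half-width of rejected Doppler band, illustrative

    def visible(prf, rng_m, vel_ms):
        # True if a target at this range/velocity is neither eclipsed in range
        # nor folded into the clutter notch in Doppler for the given PRF.
        pri = 1.0 / prf
        t = (2.0 * rng_m / c) % pri                   # folded round-trip delay
        eclipsed = t < pulse_width or t > pri - pulse_width
        v_amb = wavelength * prf / 2.0                # unambiguous velocity interval
        v_fold = (vel_ms + v_amb / 2.0) % v_amb - v_amb / 2.0
        return (not eclipsed) and abs(v_fold) >= clutter_notch

    prfs = [8e3, 9e3, 11e3]                           # an illustrative medium-PRF set
    ranges = np.linspace(1e3, 60e3, 120)
    vels = np.linspace(-300, 300, 121)
    coverage = np.array([[sum(visible(p, r, v) for p in prfs) for v in vels]
                         for r in ranges])
    print("cells seen by no PRF:", int(np.sum(coverage == 0)), "of", coverage.size)

Even this crude map shows how the blind zones interleave as the PRFs change, which is the forward problem; choosing a PRF set to achieve a specified coverage is the harder inverse problem referred to above.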

This problem is well known within the radar industry, but it is hoped that by airing it to an audience with a wider range of skills, some new ways of looking at the problem might be found.

Fri, 25 Feb 2011

10:00 - 13:00
DH 1st floor SR

Graph Theoretical Algorithms

Paul Davies, Edward Stansfield and Ian Ellis
(Thales UK)
Abstract

This will be on the topic of the CASE project Thales will be sponsoring from Oct '11.

Fri, 05 Mar 2010

10:00 - 13:00
DH 3rd floor SR

Compression of Synthetic Aperture Radar Images

Ralph Brownie and Andy Stove
(Thales UK)
Abstract

Synthetic Aperture Radars (SARs) produce high resolution images over large areas at high data rates. An aircraft flying at 100 m/s can easily image an area at a rate of 1 square kilometre per second at a resolution of 0.3 m x 0.3 m, i.e. 10 Mpixels/s with a dynamic range of 60-80 dB (10-13 bits). Unlike optical images, the SAR image is also coherent and this coherence can be used to detect changes in the terrain from one image to another, for example to detect the distortions in the ground surface which precede volcanic eruptions.
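
The quoted figures are easy to reconstruct (a back-of-envelope check, with an assumed 13 bits per pixel at the top of the stated dynamic range):

    # Rough check of the data-rate figures quoted above.
    area_rate = 1.0e6          # m^2 imaged per second (1 km^2/s)
    pixel_area = 0.3 * 0.3     # m^2 per pixel at 0.3 m x 0.3 m resolution
    bits_per_pixel = 13        # ~80 dB dynamic range at ~6 dB per bit

    pixel_rate = area_rate / pixel_area            # ~11 Mpixels/s
    raw_rate = pixel_rate * bits_per_pixel         # ~140 Mbit/s before compression
    print(f"{pixel_rate/1e6:.1f} Mpixels/s, {raw_rate/1e6:.0f} Mbit/s")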

It is clearly very desirable to be able to compress these images before they are relayed from one place to another, most particularly down to the ground from the aircraft in which they are gathered.

Conventional image compression techniques superficially work well with SAR images; JPEG 2000, for example, was created for the compression of traditional photographic images and optimised on that basis. However, the conventional wisdom is that SAR data is generally much less correlated in nature, and is therefore unlikely to achieve the same compression ratios with the same coding schemes unless significant information is lost.

Features which typically need to be preserved in SAR images are:

o texture to identify different types of terrain

o boundaries between different types of terrain

o anomalies, such as military vehicles in the middle of a field, which may be of tactical importance, and

o the fine details of the pixels on a military target so that it might be recognised.

The talk will describe how Synthetic Aperture Radar images are formed, the features which make their compression requirements different from those of electro-optical images, and the properties of wavelets which may make them appropriate for addressing this problem. It will also discuss what is currently known about the compression of radar images in general.
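
To make the wavelet connection concrete, here is a minimal thresholding sketch in Python (assuming the PyWavelets package; the wavelet choice, keep fraction and speckle-like test image are arbitrary, and real SAR compression would also need to respect speckle statistics and, for coherent products, phase):

    import numpy as np
    import pywt   # PyWavelets

    def wavelet_compress(image, wavelet="db4", level=3, keep_fraction=0.05):
        # Decompose, keep only the largest-magnitude fraction of detail
        # coefficients, and reconstruct.
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        details = np.concatenate([np.abs(a).ravel()
                                  for lvl in coeffs[1:] for a in lvl])
        thresh = np.quantile(details, 1.0 - keep_fraction)
        new_coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(a, thresh, mode="hard") for a in lvl)
            for lvl in coeffs[1:]
        ]
        return pywt.waverec2(new_coeffs, wavelet)

    # Toy usage on a random speckle-like amplitude image.
    img = np.random.default_rng(2).gamma(1.0, 1.0, size=(256, 256))
    recon = wavelet_compress(img)
    print("mean squared error:", float(np.mean((img - recon[:256, :256]) ** 2)))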

Fri, 05 Jun 2009

10:00 - 11:30
DH 1st floor SR

Radar Multipath

Andy Stove and Mike Newman
(Thales UK)