The issue of resource management arises with any sensor
that can sense only a part of its total field of view at
any one time, or that has a number of operating modes, or
both.
A very simple example is a camera with a telephoto
lens. The photographer has to decide
what to photograph, and whether to zoom in to get high resolution
on part of the scene or to zoom out to see more of it. Very similar issues apply, of course, to
electro-optical sensors (visible-light or infra-red 'TV' cameras) and to
radars.
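
To make the coverage-versus-resolution trade-off concrete, the following minimal Python sketch computes the field of view and the per-pixel ground footprint for a few candidate focal lengths; the sensor width, pixel pitch and focal lengths are illustrative assumptions, not properties of any particular system.

```python
import math

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float = 36.0) -> float:
    """Horizontal field of view of a rectilinear lens (full-frame sensor assumed)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def ground_resolution_m(focal_length_mm: float, range_m: float,
                        pixel_pitch_um: float = 5.0) -> float:
    """Ground sample distance: the footprint of one pixel at the given range."""
    return range_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

for f in (24, 70, 200, 600):  # candidate focal lengths, mm
    print(f"{f:4d} mm: FOV {horizontal_fov_deg(f):5.1f} deg, "
          f"GSD at 1 km {ground_resolution_m(f, 1000):.3f} m")
```

Doubling the focal length roughly halves the field of view (for narrow angles) and halves the pixel footprint, which is exactly the trade the photographer, or a sensor manager, must arbitrate.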
The subject has, perhaps, been most extensively
studied in relation to multi-mode/multi-function radars, where approaches such
as neural networks, genetic algorithms and auction mechanisms have been
proposed, alongside more deterministic management schemes, although the methods which
have actually been implemented have been much more primitive.
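
As an indication of how one of the proposed approaches operates, the sketch below implements a greedy sequential auction in which competing radar tasks bid for slots in the scheduler's timeline; the task names, bid formula and slot budget are entirely hypothetical, and fielded multi-function radar schedulers are considerably more involved.

```python
from dataclasses import dataclass

@dataclass
class RadarTask:
    name: str
    priority: float      # mission-level importance (assumed 0-1 scale)
    urgency: float       # grows as the task's deadline approaches
    dwell_slots: int     # timeline slots the dwell would consume

    def bid(self) -> float:
        # Hypothetical bid: priority weighted by urgency, per slot consumed
        return self.priority * self.urgency / self.dwell_slots

def auction_schedule(tasks, slot_budget):
    """Greedy first-price auction: highest bidder wins slots until budget runs out."""
    schedule = []
    for task in sorted(tasks, key=lambda t: t.bid(), reverse=True):
        if task.dwell_slots <= slot_budget:
            schedule.append(task)
            slot_budget -= task.dwell_slots
    return schedule

tasks = [
    RadarTask("track update A", priority=0.9, urgency=0.8, dwell_slots=2),
    RadarTask("track update B", priority=0.7, urgency=0.9, dwell_slots=1),
    RadarTask("surveillance sector 3", priority=0.5, urgency=0.4, dwell_slots=4),
    RadarTask("missile-warning fence", priority=1.0, urgency=0.6, dwell_slots=3),
]
for task in auction_schedule(tasks, slot_budget=6):
    print(f"scheduled: {task.name} (bid {task.bid():.2f})")
```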
The use of multiple disparate sensors on multiple
mobile, especially airborne, platforms adds further degrees of freedom to the
problem, an extension which is of growing interest.
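
One common way to formalise the multi-platform case is as an assignment problem between sensors and tasks; the sketch below, with an entirely hypothetical utility matrix, uses the Hungarian-algorithm solver from SciPy to pick the jointly best sensor-to-task pairing.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical utility of assigning each platform's sensor (rows) to each
# surveillance task (columns), e.g. combining geometry, range and sensor mode.
utility = np.array([
    [0.9, 0.3, 0.5],   # UAV with EO camera
    [0.4, 0.8, 0.2],   # aircraft with radar
    [0.6, 0.5, 0.7],   # second UAV with IR camera
])

# linear_sum_assignment minimises cost, so negate utility to maximise it
rows, cols = linear_sum_assignment(-utility)

platforms = ["UAV/EO", "aircraft/radar", "UAV/IR"]
tasks = ["area search", "track target", "coastline patrol"]
for r, c in zip(rows, cols):
    print(f"{platforms[r]} -> {tasks[c]} (utility {utility[r, c]:.1f})")
```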
The presentation will briefly review the problem
for both the single-sensor and the multi-platform cases, survey some of the
approaches which have been proposed, and highlight the problems which remain
open.