The successful mission of an autonomous airborne system such as an unmanned aerial vehicle (UAV) strongly depends on an accurate target approach as well as on the real-time acquisition of detailed knowledge about the target area. An automatic 3D scene reconstruction of the overflown ground by a structure-from-motion system makes it possible to interpret the scenario and to react to possible changes by optimizing the flight path or adjusting the mission objectives. Additionally, detection of the target itself can be improved by analyzing the reconstructed 3D target scenario.
In this work a newly developed system for automatic 3D reconstruction of a scene from aerial infrared imagery is presented. To increase the accuracy of the reconstruction and to overcome the drawbacks of feature tracking in IR images, pose information given by an IMU (Inertial Measurement Unit) is used for the computation of 3D structure. Detected 2D image features are tracked image by image to calculate the corresponding 3D information. Each estimated 3D point is assessed by means of its covariance matrix, which represents the respective uncertainty. Finally, a non-linear optimization (Gauss-Newton iteration) of the reconstruction result yields the final 3D point cloud. As possible applications, approaches for target recognition in fused IR images and 3D point clouds as well as registration of point clouds for an image-based navigation update are presented.
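As an illustration of the final optimization step, the sketch below refines a single tracked 3D point by Gauss-Newton iteration over its reprojection errors; the inverse of the normal-equation matrix then serves as the covariance used to assess the point. Camera poses (assumed known here, e.g. from the IMU), normalized image coordinates, and all function names are illustrative assumptions, not the system's actual implementation.

```python
import numpy as np

def project(X, R, t):
    """Project a 3D point into a normalized pinhole camera with pose (R, t)."""
    Xc = R @ X + t
    return Xc[:2] / Xc[2]

def refine_point(X0, observations, n_iter=10):
    """Gauss-Newton refinement of a single 3D point from tracked 2D features.

    observations: list of (R, t, uv) with camera pose and measured image point.
    Returns the refined point and its covariance (up to the noise scale).
    """
    X = X0.astype(float)
    for _ in range(n_iter):
        J_rows, r_rows = [], []
        for R, t, uv in observations:
            x, y, z = R @ X + t
            # Residual: measured minus projected image point.
            r_rows.append(uv - np.array([x / z, y / z]))
            # Jacobian of the projection w.r.t. X (chain rule through R).
            J_proj = np.array([[1 / z, 0.0, -x / z**2],
                               [0.0, 1 / z, -y / z**2]])
            J_rows.append(J_proj @ R)
        J = np.vstack(J_rows)
        r = np.concatenate(r_rows)
        # Normal equations J^T J dX = J^T r; residual is measured - predicted,
        # so the update is added.
        X += np.linalg.solve(J.T @ J, J.T @ r)
    cov = np.linalg.inv(J.T @ J)  # uncertainty measure of the estimate
    return X, cov
```

With exact observations from several views, the iteration converges to the true point, and the covariance shrinks as the baseline between views grows.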
The general demand for the prevention of collateral damage in military operations requires methods for robust automatic
identification of target objects such as vehicles, especially during target approach. This in turn requires the development of
sophisticated techniques for automatic and semi-automatic interpretation of sensor data. In particular, the automatic pre-analysis
of reconnaissance data is important for the human observer as well as for autonomous systems. In the phase of
target approach, fully automatic methods are needed for the recognition of predefined objects. For this purpose
appropriate sensors are used, such as imaging IR sensors suitable for day/night operation and laser radar supplying 3D
information about the scenario. Classical methods for target recognition based on comparison with synthetic IR object
models suffer from certain shortcomings, e.g. sensitivity to unknown weather conditions and to the engine status of vehicles.
We propose a concept for generating efficient 2D templates of IR target signatures based on the evaluation of a precise
3D model of the target generated from real multisensor data. This model is created from recent laser range and IR
data gathered by reconnaissance in advance in order to obtain realistic and up-to-date target signatures. It consists of the visible part
of the object surface textured with measured infrared values, which enables recognition from slightly differing viewing
angles. Our test bed is realized by a helicopter equipped with a multisensor suite (laser radar, imaging IR, GPS, and
IMU). Results are demonstrated by the analysis of a complex scenario with different vehicles.
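The template-generation idea can be sketched as projecting the IR-textured surface points of the reconstructed model into a virtual camera, where a z-buffer keeps only the visible part of the surface. The pinhole model, parameter names, and image layout below are assumptions for illustration, not the paper's actual renderer.

```python
import numpy as np

def render_template(points, intensities, R, t, f, size):
    """Render a 2D IR template from a point cloud textured with IR values.

    points: (N, 3) model surface points; intensities: (N,) measured IR values.
    (R, t): virtual camera pose, f: focal length in pixels, size: (height, width).
    A simple z-buffer keeps only the visible (nearest) surface points.
    """
    h, w = size
    template = np.zeros((h, w))
    zbuf = np.full((h, w), np.inf)
    Xc = points @ R.T + t                  # transform into the camera frame
    in_front = Xc[:, 2] > 0
    for (x, y, z), g in zip(Xc[in_front], intensities[in_front]):
        u = int(round(f * x / z + w / 2))  # pinhole projection, image centre
        v = int(round(f * y / z + h / 2))  # at the middle of the template
        if 0 <= u < w and 0 <= v < h and z < zbuf[v, u]:
            zbuf[v, u] = z                 # nearer surface point wins the pixel
            template[v, u] = g
    return template
```

Rendering the model from a slightly rotated virtual pose yields the templates for recognition under slightly differing viewing angles.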
The increasing demand for the protection of persons and facilities requires the application of sophisticated technologies
for surveillance and object tracking. For this purpose appropriate sensors are used, such as imaging IR sensors suitable for
day/night operation and laser radar supplying 3D information about the scenario. In this context there is a need for
automatic and semi-automatic methods that support the human observer in the decision-making process. A prevalent task
is the automatic tracking of conspicuous objects, such as vehicles or individual persons, in an image sequence over a period of time.
Classical methods are based on template matching, which suffers from certain shortcomings, e.g. with homogeneous backgrounds
or passing objects occluding the target. The authors propose a new concept for generating templates of IR target
signatures based on the interpretation of laser range data in order to optimize the tracking process. The testbed is realized
by a helicopter equipped with a multisensor suite (laser radar, imaging IR, GPS, IMU). Results are demonstrated by the
analysis of an exemplary data set: a vehicle situated in a complex scenario is acquired by a forward-moving sensor
platform and is tracked robustly by the proposed method.
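For context, the classical template-matching baseline that the laser-derived templates are meant to improve upon can be sketched as a brute-force normalized cross-correlation search. The actual tracker is certainly more elaborate; this is a minimal illustration.

```python
import numpy as np

def ncc_match(image, template):
    """Locate a template in an image by normalized cross-correlation (NCC).

    Returns the (row, col) of the best-matching top-left corner and the
    NCC score in [-1, 1]. Brute force over all positions; real trackers
    restrict the search to a window around the previous target position.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat patch or flat template: NCC undefined
            score = (p * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

The failure modes the abstract mentions are visible here: over homogeneous background the denominator collapses, and an occluding object corrupts the patch statistics, which is what the laser-based template generation is meant to counter.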
An automatic target recognition system has been assembled and tested at the Research Institute for Optronics and
Pattern Recognition in Germany over recent years. Its multisensor design comprises off-the-shelf components: an
FPA infrared camera, a scanning laser radar, and an inertial measurement unit. In this paper we describe several
possibilities for the use of this multisensor equipment during helicopter missions. We discuss suitable data processing
methods, for instance the automatic time synchronization of the different imaging sensors, pixel-based data fusion, and
the incorporation of collateral information. The results are visualized in an appropriate way for presentation on a cockpit
display. We also show how our system can act as a landing aid for pilots under brownout conditions (dust clouds
raised by the landing helicopter).
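The automatic time synchronization of the imaging sensors can be illustrated by a minimal nearest-timestamp pairing of IR and laser frames; the tolerance value and function name are assumptions, not the system's actual interface.

```python
import numpy as np

def pair_frames(ts_ir, ts_laser, max_offset=0.02):
    """Pair IR and laser frames by nearest timestamp.

    ts_ir, ts_laser: sorted 1-D arrays of frame timestamps in seconds.
    Returns a list of (ir_index, laser_index) pairs whose time offset is
    within max_offset; frames without a close partner are dropped.
    """
    pairs = []
    for i, t in enumerate(ts_ir):
        j = int(np.searchsorted(ts_laser, t))
        # Candidate neighbours: the laser frame just before and just after t.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(ts_laser)]
        k = min(candidates, key=lambda k: abs(ts_laser[k] - t))
        if abs(ts_laser[k] - t) <= max_offset:
            pairs.append((i, k))
    return pairs
```

Only synchronized frame pairs then enter the pixel-based fusion stage, so residual timing offsets between the sensors do not smear moving objects across channels.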
Future military helicopters will be provided with multiple information sources for self-protection and reconnaissance, e.g. imaging IR, laser radar, and GPS. In addition, knowledge bases such as maps, aerial images, geographic information systems (GIS), and other previously acquired data can be used for the interpretation of the current scenario. To support the mission, the results of data fusion and information management have to be presented to the pilot in an appropriate way. This paper describes concepts and results of our work on IR and laser data fusion for airborne systems. Data are gathered by forward-looking sensors mounted in a helicopter. For further improvement, fusion with collateral information (laser elevation data, aerial images) is used for change detection and for the definition of regions of interest with respect to the stored and continuously updated database. Results are demonstrated by the analysis of an exemplary data set showing a scenario with a group of vehicles. Two moving vehicles are detected automatically in both channels (IR, laser), and the results are combined to achieve improved visualization for the pilot.
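A minimal sketch of elevation-based change detection against the stored database: cells whose laser-measured height deviates from the database beyond a threshold are flagged and summarized as a region of interest. Co-registration of the grids is assumed given, and all names and thresholds are illustrative.

```python
import numpy as np

def change_rois(dem_stored, dem_current, threshold=1.0):
    """Derive a region of interest from elevation change detection.

    dem_stored, dem_current: co-registered elevation grids (metres), e.g.
    the database layer and the current laser measurement. Cells whose
    height changed by more than `threshold` are flagged, and a bounding
    box around the changed area is returned (None if nothing changed).
    """
    mask = np.abs(dem_current - dem_stored) > threshold
    if not mask.any():
        return mask, None
    rows, cols = np.nonzero(mask)
    roi = (rows.min(), rows.max(), cols.min(), cols.max())
    return mask, roi
```

Such regions of interest can then steer the IR channel, so that detailed detection is only run where the laser channel reports a change, e.g. a newly appeared vehicle.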
Remote sensing using unmanned aerial vehicles is gaining more and more importance for military reconnaissance during peace-keeping missions. Nowadays such applications have to take into account that, under civil-war conditions, a mix of sensors may be useful within sensitive urban terrain. These tasks typically have to be fulfilled under adverse weather conditions as well, which can mainly be served by airborne imaging radar sensors. Advanced radar sensors are able to deliver highly resolved images with considerable information content, such as polarimetry and 3D features, and robustness against changing environmental and operational conditions. By extending the knowledge base for an object through the fusion of radar data with ladar or IR information, reliable detection and even identification of objects becomes feasible, allowing optimized signal processing by distributing the tasks among the combined sensors. The contribution describes the different sensors and gives an overview of the image data for the sample scenes. The methods of object discrimination are discussed and representative results are shown.
The successful mission of autonomous airborne systems like unmanned aerial vehicles (UAVs) strongly depends on the performance of the automatic image processing used for navigation, target acquisition, and terminal homing. In this paper we propose a method for the automatic determination of missile position and orientation (pose) during target approach. The flight path is characterized by a forward motion, i.e. an approximately linear motion along the optical axis of the sensor system. Due to the lack of disparity, classical methods based on stereo triangulation are not suitable for 3D object recognition here. To handle this, we applied the SoftPOSIT algorithm, originally proposed by D. DeMenthon, and adapted it to fit our specific needs: the gathering of image points is done by multi-threshold segmentation, texture analysis, 2D tracking, and edge detection. The reference points are updated in each loop, and the calculated pose is smoothed using the quaternion representation of the model's orientation in order to stabilize the computations. We show some results of the image-based determination of trajectories for airborne systems. Terminal homing is demonstrated by tracking the 3D pose of a vehicle in an image sequence taken in oblique view by an infrared sensor mounted on a helicopter.
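The quaternion-based pose smoothing mentioned above can be sketched as exponential smoothing via spherical linear interpolation (slerp); the smoothing factor and function names are assumptions, since the abstract does not specify the exact filter.

```python
import numpy as np

def slerp(q0, q1, alpha):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0:            # q and -q encode the same rotation: take the short arc
        q1, dot = -q1, -dot
    dot = min(dot, 1.0)    # guard against rounding slightly above 1
    theta = np.arccos(dot)
    if theta < 1e-8:       # nearly identical quaternions: avoid sin(0) division
        return q0
    return (np.sin((1 - alpha) * theta) * q0
            + np.sin(alpha * theta) * q1) / np.sin(theta)

def smooth_poses(quats, alpha=0.3):
    """Exponentially smooth a sequence of orientation quaternions.

    alpha weights each new measurement; smaller values give a steadier
    but more sluggish orientation track.
    """
    smoothed = [np.asarray(quats[0], float)]
    for q in quats[1:]:
        smoothed.append(slerp(smoothed[-1], np.asarray(q, float), alpha))
    return smoothed
```

Interpolating on the unit-quaternion sphere keeps every smoothed orientation a valid rotation, which is the point of using the quaternion representation rather than averaging rotation matrices or Euler angles.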
This paper on adaptive image segmentation and classification describes research activities on statistical pattern recognition in combination with methods of object recognition by geometric matching of model and image structures. In addition, aspects of sensor fusion for airborne application systems such as terminal missile guidance are considered, using image sequences of multispectral data from real sensor systems and from computer simulations. The main aspect of the adaptive classification is the support of model-based structural image analysis by the detection of image segments representing specific objects, e.g. forests, rivers, and urban areas. The classifier, based on textural features, is automatically adapted to the changes of the textural signatures during target approach by interpreting the segmentation results of each current frame of the image sequence.
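The adaptation idea can be sketched as a nearest-prototype classifier whose class prototypes are pulled toward each new frame's observations, so the decision boundaries follow the drifting textural signatures during approach. The chosen features and update rule are simplified stand-ins for the paper's actual feature set.

```python
import numpy as np

def texture_features(patch):
    """Simple textural features of an image patch: mean, variance, and
    mean gradient magnitude (stand-ins for the paper's feature set)."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.var(), np.hypot(gx, gy).mean()])

class AdaptiveClassifier:
    """Nearest-prototype classifier whose class prototypes drift with the
    incoming frames, tracking changing textural signatures."""

    def __init__(self, prototypes, rate=0.1):
        self.prototypes = {k: np.asarray(v, float) for k, v in prototypes.items()}
        self.rate = rate

    def classify(self, patch):
        f = texture_features(patch)
        label = min(self.prototypes,
                    key=lambda k: np.linalg.norm(f - self.prototypes[k]))
        # Adapt: pull the winning prototype toward the new observation.
        self.prototypes[label] += self.rate * (f - self.prototypes[label])
        return label
```

Running this per frame mirrors the described loop: each frame's segmentation result both labels the segments and re-adapts the classifier for the next frame.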
Surveillance systems against missile attacks require the automatic detection of targets at a low false alarm rate (FAR). Infrared Search and Track (IRST) systems offer passive detection of threats at long ranges. For maximum reaction time and the arrangement of countermeasures, it is necessary to declare the objects as early as possible. For this purpose the detection and tracking algorithms have to deal with point objects. Conventional object features like shape, size, and texture are usually unreliable for small objects; more reliable features of point objects are their three-dimensional spatial position and velocity. At least two sensors observing the same scene are required for multi-ocular stereo vision. Three main steps are relevant for successful stereo image processing. First, precise camera calibration (estimating the intrinsic and extrinsic parameters) is necessary to satisfy the demand for a high degree of accuracy, especially for long-range targets. Second, the correspondence problem for the detected objects must be solved. Third, the three-dimensional location of the potential target has to be determined by projective transformation. For the evaluation, a measurement campaign to capture image data was carried out with real targets using two identical IR cameras; additionally, synthetic IR image sequences were generated and processed. In this paper a straightforward solution for stereo analysis based on stationary binocular sensors is presented, the current results are shown, and suggestions for future work are given.
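The third step, recovering the 3D location from corresponding detections in two calibrated views, can be sketched with standard linear (DLT) triangulation; the camera matrices in the example are illustrative, not the campaign's actual calibration.

```python
import numpy as np

def triangulate(P0, P1, uv0, uv1):
    """Linear (DLT) triangulation of one 3D point from two views.

    P0, P1: 3x4 camera projection matrices from calibration;
    uv0, uv1: corresponding image points of the detected object.
    Solves A X = 0 in homogeneous coordinates via SVD.
    """
    A = np.vstack([
        uv0[0] * P0[2] - P0[0],
        uv0[1] * P0[2] - P0[1],
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenize
```

For long-range point targets the triangulation angle is tiny, which is why the abstract stresses calibration accuracy: small errors in the projection matrices translate into large range errors.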
In this paper we describe a method for the automatic determination of the sensor pose (position and orientation) relative to a 3D landmark or scene model. The method is based on geometric matching of 2D image structures with projected elements of the associated 3D model. For structural image analysis and scene interpretation, a blackboard-based production system is used, resulting in a symbolic description of the image data. Knowledge of the approximate sensor pose, measured for example by IMU or GPS, makes it possible to estimate an expected model projection that is used for solving the correspondence problem between image structures and model elements. These correspondences are a prerequisite for the pose computation, which is carried out by nonlinear numerical optimization algorithms. We demonstrate the efficiency of the proposed method by a navigation update while approaching a bridge scenario and flying over an urban area, where the data were taken with airborne infrared sensors in high oblique view. In doing so, we simulated image-based navigation for target engagement and midcourse guidance suited to the concepts of future autonomous systems such as missiles and drones.
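The correspondence step can be sketched as follows: the approximate pose from IMU/GPS is used to project the model elements, and each projection is matched to the nearest detected image feature within a gating distance. The names, the gating threshold, and the simple nearest-neighbor rule are assumptions for illustration; the paper's production system works on richer symbolic structures than bare points.

```python
import numpy as np

def match_by_projection(model_pts, image_pts, R, t, f, gate=10.0):
    """Match 3D model points to 2D image features using an approximate pose.

    Projects each model point with the coarse pose (R, t), e.g. from
    IMU/GPS, and assigns it to the nearest detected image point within
    `gate` pixels. Returns a list of (model_index, image_index) pairs,
    which then seed the nonlinear pose refinement.
    """
    Xc = model_pts @ R.T + t               # model points in the camera frame
    proj = f * Xc[:, :2] / Xc[:, 2:3]      # expected model projection
    matches = []
    for i, p in enumerate(proj):
        d = np.linalg.norm(image_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= gate:                   # gate rejects spurious detections
            matches.append((i, j))
    return matches
```

The better the IMU/GPS prior, the tighter the gate can be set, which keeps false correspondences out of the subsequent optimization.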
This paper describes a model-based method for the automatic recognition of high-value targets in multisensor data. A production net is used to represent the knowledge about target structures. The analysis is carried out by a blackboard-based production system with a database stored in an associative memory. The efficiency of the analysis system is illustrated by an example involving the detection of bridges. For this, sensor data were interpreted that had been recorded with an experimental dual-mode sensor (IIR, mmW) mounted on a helicopter while flying over the scenario. Image sequences are taken in oblique view at high frame rates. The analysis starts with an intra-frame process that extracts cues in the current sensor data and combines the orthogonal information of the IR (intensity, direction) and radar (RCS, range) data to estimate the target location in space. To improve detection and reduce false alarms, inter-frame processing is applied that exploits the intra-frame results of overlapping images of the scanning sensor system, resulting in stronger confirmation of the real target and better discrimination of false alarms.
The automatic recognition and interpretation of complex structures in imagery, e.g. radar or IR images, requires a structure-oriented analysis method. This paper describes a syntactic approach to object classification in radar and IR images, in which the geometric structures of targets are analyzed using a knowledge-based production system. Objects that are describable by two- or three-dimensional models have to be acquired independently of changes in illumination, weather conditions, or the state of vegetation. Perspective distortions and model description errors also have to be tolerated. The aim of the implemented method is to classify the targets of interest in uncertain data with uncertain models.