Due to the high availability and easy handling of small drones, the number of reported incidents caused by UAVs, both intentional and accidental, is increasing. To be able to prevent such incidents in the future, it is essential to detect UAVs. However, not every small flying object poses a potential threat, and therefore the object not only has to be detected but also classified or identified. Typical 360° scanning LiDAR systems can be deployed to detect and track small objects in the 3D sensor data at ranges of up to 50 m. Unfortunately, in most cases the verification and classification of the detected objects is not possible due to the low resolution of this type of sensor. In high-resolution 2D images, a differentiation of flying objects seems more practical, and cameras in the visible spectrum, at least, are well established and inexpensive. The major drawback of this type of sensor is its dependence on adequate illumination. Active illumination could be a solution to this problem, but it is usually impossible to illuminate the scene permanently. A more practical way is to select a sensor with a different spectral sensitivity, for example in the thermal IR. In this paper, we present an approach for a complete chain of detection, tracking, and classification of small flying objects such as micro UAVs or birds, using a mobile multi-sensor platform with two 360° LiDAR scanners and pan-and-tilt cameras in the visible and thermal IR spectrum. The flying objects are initially detected and tracked in the 3D LiDAR data. After detection, the cameras (a grayscale camera in the visible spectrum and a bolometer sensitive in the wavelength range of 7.5 µm to 14 µm) are automatically pointed at the object's position, and each sensor records a 2D image. A convolutional neural network (CNN) realizes the identification of the region of interest (ROI) as well as the object classification (we consider classes of eight different types of UAVs and birds). In particular, we compare the classification results of the CNN for the two camera types, i.e., for the different wavelengths. The large amount of training data for the CNN as well as the test data used for the experiments described in this paper were recorded at a field trial of the NATO group SET-260 ("Assessment of EO/IR Technologies for Detection of Small UAVs in an Urban Environment") at CENZUB, Sissonne, France.
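As an illustration of the classification stage, the following is a minimal PyTorch sketch of a CNN for ROI classification, assuming single-channel (grayscale or thermal) image chips; the layer sizes and the class count are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch of a CNN for ROI classification, assuming single-channel
# (grayscale or thermal) image chips. Layer sizes are illustrative only.
import torch
import torch.nn as nn

class RoiClassifier(nn.Module):
    def __init__(self, num_classes: int = 9):  # e.g., 8 UAV types + birds (assumption)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global pooling -> size-independent
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: a batch of 64x64 ROI chips cropped around the LiDAR-derived position.
logits = RoiClassifier()(torch.randn(4, 1, 64, 64))
```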
The number of reported incidents caused by UAVs, intentional as well as accidental, is rising. To avoid such incidents in the future, it is essential to be able to detect UAVs. However, not every small flying object is a potential threat, and therefore the object not only has to be detected but also classified or identified. Typical 360° scanning LiDAR sensors, like those developed for automotive applications, can be deployed for the detection and tracking of small objects at ranges of up to 50 m. Unfortunately, the verification and classification of the detected objects is not possible in most cases, due to the low resolution of this kind of sensor. In visual images, a differentiation of flying objects seems more practical. In this paper, we present a method for distinguishing between UAVs and birds in multi-sensor data (LiDAR point clouds and visual images). The flying objects are initially detected and tracked in the LiDAR data. After detection, a grayscale camera is automatically pointed at the object and an image is recorded. The differentiation between UAV and bird is then realized by a convolutional neural network (CNN). In addition, we investigate the potential of this approach for a more detailed classification of the type of UAV. The paper shows first results of this multi-sensor classification approach. The large amount of training data for the CNN as well as the test data for the experiments were recorded at a field trial of the NATO group SET-260 ("Assessment of EO/IR Technologies for Detection of Small UAVs in an Urban Environment").
In this paper, we describe a method for automatic extrinsic self-calibration of an operational mobile LiDAR sensing system (MLS) that is additionally equipped with a position and orientation subsystem (POS, e.g., GNSS/IMU, odometry). While commercial mobile mapping systems or civil LiDAR-equipped cars can be calibrated on a regular basis using a dedicated calibration setup, we aim at a method for automatic in-field (re-)calibration of such sensor systems, which is even suitable for future military combat vehicles. Part of the intended use of a mobile LiDAR or laser scanning system is 3D mapping of the terrain by POS-based direct georeferencing of the range measurements, resulting in 3D point clouds of the terrain. The basic concept of our calibration approach is to minimize the average scatter of the 3D points, assuming a certain occurrence of smooth surfaces in the scene that are scanned multiple times. The point scatter is measured by local principal component analysis (PCA). Parameters describing the sensor installation are adjusted until the PCA's average smallest eigenvalue reaches a minimum. While sensor displacements (lever arms) are still difficult to correct in this way, our approach succeeds in eliminating misalignments of the 3D sensors (boresight alignment). The focus of this paper is on quantifying the influence of driving maneuvers and, particularly, scene characteristics on the calibration method and its results. One finding is that a curvy driving style in an urban environment provides the best conditions for the calibration of the MLS system, but other structured environments may still be acceptable.
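The scatter metric can be sketched as follows: for each 3D point, the smallest eigenvalue of the local covariance measures the out-of-plane scatter, and the calibration search minimizes the average of these values over the georeferenced cloud. The neighborhood radius and the use of scipy's k-d tree are illustrative assumptions.

```python
# Sketch of the PCA-based scatter metric: the smallest local eigenvalue
# measures scatter normal to a smooth surface. Radius is an assumption.
import numpy as np
from scipy.spatial import cKDTree

def mean_smallest_eigenvalue(points: np.ndarray, radius: float = 0.5) -> float:
    tree = cKDTree(points)
    smallest = []
    for p in points:
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 4:                       # too few neighbors for a stable PCA
            continue
        cov = np.cov(points[idx].T)
        smallest.append(np.linalg.eigvalsh(cov)[0])  # eigvalsh sorts ascending
    return float(np.mean(smallest)) if smallest else np.inf

# The boresight angles would be varied (e.g., by a derivative-free search)
# and the georeferenced cloud recomputed until this value is minimal.
```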
The number of reported incidents caused by UAVs, intentional as well as accidental, is rising. To avoid such incidents in the future, it is essential to be able to detect UAVs. However, not every UAV is a potential threat, and therefore the UAV not only has to be detected but also classified or identified. 360° scanning LiDAR systems can be deployed for the detection and tracking of (micro) UAVs at ranges of up to 50 m. Unfortunately, the verification and classification of the detected objects is not possible in most cases, due to the low resolution of this kind of sensor. In this paper, we propose an automatic alignment of an additional sensor (mounted on a pan-tilt head) for the identification of the detected objects. The classification sensor is directed by the tracking results of the panoramic LiDAR sensor. If the alignable sensor is an RGB or infrared camera, the identification of the objects can be done by state-of-the-art image processing algorithms. If a higher-resolution LiDAR sensor is used for this task, new algorithms have to be developed and implemented; for example, the classification could be realized by a 3D model matching method. After the handoff of the object position from the 360° LiDAR to the verification sensor, this second system can be used for further tracking of the object, e.g., if the trajectory of the UAV leaves the field of view of the primary LiDAR system. The paper shows first results of this multi-sensor classification approach.
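The geometric core of the sensor handoff can be sketched in a few lines: the tracked 3D position is converted into pan and tilt angles for the secondary sensor. This toy model assumes a known head position in the LiDAR frame and ignores mounting offsets and latency compensation.

```python
# Minimal sketch of the hand-off geometry: 3D track position -> pan/tilt.
import numpy as np

def pan_tilt_from_position(target: np.ndarray, head_origin: np.ndarray):
    d = target - head_origin                   # vector from pan-tilt head to object
    pan = np.arctan2(d[1], d[0])               # azimuth around the vertical axis
    tilt = np.arctan2(d[2], np.hypot(d[0], d[1]))  # elevation above the horizon
    return np.degrees(pan), np.degrees(tilt)

# Example: object tracked 30 m ahead and 10 m up, head 2 m above the origin.
print(pan_tilt_from_position(np.array([30.0, 0.0, 10.0]), np.array([0.0, 0.0, 2.0])))
```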
In this paper, we examine crosstalk effects that can arise in multi-LiDAR configurations, and we show a data-based approach to mitigate these effects. Due to their ability to acquire precise 3D data of the environment, LiDAR-based sensor systems (sensors based on “Light Detection and Ranging”, e.g., laser scanners) increasingly find their way into various applications, e.g., in the automotive sector. However, with an increasing number of LiDAR sensors operating within close vicinity, the problem of potential crosstalk between these devices arises. “Crosstalk” describes the following effect: in a typical LiDAR-based sensor, short laser pulses are emitted into the scene and the distance between sensor and object is derived from the time measured until an “echo” is received. If multiple laser pulses of the same wavelength are emitted at the same time, the detector may not be able to distinguish between correct and false matches of laser pulses and echoes, resulting in erroneous range measurements and 3D points. During operation of our own multi-LiDAR sensor system, we were able to observe crosstalk effects in the acquired data. Having compared different spatial filtering approaches for the elimination of erroneous points in the 3D data, we propose a data-based spatio-temporal filtering and show its results, which may be sufficient depending on the application. However, technical solutions are desired for future LiDAR sensors.
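One plausible reading of such a data-based spatio-temporal filter is sketched below: a 3D point is kept only if it has a near neighbor in the preceding or following LiDAR revolution, exploiting the fact that crosstalk artifacts rarely reappear at the same location. The search radius is an illustrative assumption.

```python
# Sketch of a spatio-temporal filter for crosstalk suppression.
import numpy as np
from scipy.spatial import cKDTree

def spatio_temporal_filter(frames: list, radius: float = 0.3) -> list:
    """frames: list of (N_i, 3) arrays of 3D points, one per LiDAR revolution."""
    kept = []
    for i, pts in enumerate(frames):
        support = np.zeros(len(pts), dtype=bool)
        for j in (i - 1, i + 1):               # check temporal neighbors
            if 0 <= j < len(frames) and len(frames[j]):
                dist, _ = cKDTree(frames[j]).query(pts, k=1)
                support |= dist < radius       # crosstalk points rarely recur here
        kept.append(pts[support])
    return kept
```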
The number of reported incidents caused by small UAVs, intentional as well as accidental, is rising. To avoid such incidents in the future, it is essential to be able to detect UAVs. LiDAR sensors (e.g., laser scanners) are well known to be adequate sensors for object detection and tracking.
In this paper, we expand our existing LiDAR-based approach for the tracking and detection of (low-)flying small objects like commercial mini/micro UAVs. We show that UAVs can be detected by the proposed methods, as long as the movements of the UAVs correspond to the LiDAR sensor's capabilities in scanning performance, range, and resolution. The trajectory of the tracked object can further be analyzed to support the classification, meaning that UAVs and non-UAV objects can be distinguished by identifying typical movement patterns. Stable tracking of a UAV is achieved by a precise prediction of its movement. In addition to this precise prediction of the target's position, the object detection, tracking, and classification have to be achieved in real time.
For the algorithm development and a performance analysis, we analyzed LiDAR data acquired during a field trial, in which several different mini/micro UAVs were observed by a system of four 360° LiDAR sensors mounted on a car. For this specific sensor system, the results show that UAVs can be detected and tracked by the proposed methods, allowing protection of the car against UAV threats within a radius of up to 35 m.
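The precise movement prediction mentioned above can be illustrated by a standard constant-velocity Kalman filter; this is a generic sketch, not the exact tracker of the paper, and the noise levels are illustrative assumptions.

```python
# Constant-velocity Kalman sketch; state is [x, y, z, vx, vy, vz].
import numpy as np

def predict(x: np.ndarray, P: np.ndarray, dt: float, q: float = 1.0):
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                 # position advances by velocity * dt
    Q = q * np.eye(6)                          # crude process noise (assumption)
    return F @ x, F @ P @ F.T + Q

def update(x: np.ndarray, P: np.ndarray, z: np.ndarray, r: float = 0.05):
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # only position is measured
    S = H @ P @ H.T + r * np.eye(3)
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)
    return x, (np.eye(6) - K @ H) @ P
```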
The number of reported incidents caused by UAVs, intentional as well as accidental, is rising. To avoid such incidents in the future, it is essential to be able to detect UAVs. LiDAR systems are well known to be adequate sensors for object detection and tracking. In contrast to the detection of pedestrians or cars in traffic scenarios, the challenges of UAV detection lie in the small object size, the various shapes and materials, and the high speed and volatility of the movement. Due to the small size of the object and the limited sensor resolution, a UAV can hardly be detected in a single frame; it rather has to be spotted by its motion in the scene. In this paper, we present a fast approach for the tracking and detection of (low-)flying small objects like commercial mini/micro UAVs. Unlike the typical track-after-detect sequence, we start by looking for clues, i.e., minor 3D details in the 360° LiDAR scans of the scene. If these clues are detectable in consecutive scans (possibly including a movement), the probability of an actual UAV detection rises. For the algorithm development and a performance analysis, we collected data during a field trial with several different UAV types and several different sensor types (acoustic, radar, EO/IR, LiDAR). The results show that UAVs can be detected by the proposed methods, as long as the movements of the UAVs correspond to the LiDAR sensor's capabilities in scanning performance, range, and resolution. Based on the data collected during the field trial, the paper shows first results of this analysis.
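The clue-accumulation idea can be sketched as follows: small, isolated point clusters that recur near each other in consecutive 360° scans raise a UAV hypothesis. The gating distance and the minimum number of hits are illustrative assumptions.

```python
# Sketch of clue accumulation across consecutive scans (track-before-detect idea).
import numpy as np

def confirm_clues(clusters_per_scan: list, gate: float = 2.0, min_hits: int = 3):
    """clusters_per_scan: list of (N_i, 3) arrays of cluster centroids per scan."""
    tracks = []                                # each track: list of centroids
    for centroids in clusters_per_scan:
        for c in centroids:
            for t in tracks:
                if np.linalg.norm(c - t[-1]) < gate:   # clue recurs nearby
                    t.append(c)
                    break
            else:
                tracks.append([c])             # start a new tentative clue
    return [t for t in tracks if len(t) >= min_hits]   # confirmed hypotheses
```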
Using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years, it is easy today to generate dense point clouds of the sensor's environment. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects like persons or vehicles are of particular interest, for instance for automatic collision avoidance, mobile sensor platforms, or surveillance tasks.
In the literature, there are several approaches for automated person detection in point clouds. While most techniques show acceptable detection results, the computation time is often critical, especially due to the amount of data in panoramic 360° point clouds. At the same time, most applications need object detection and classification in real time.
This paper presents a fast, real-time-capable algorithm for person detection, classification, and tracking in panoramic point clouds.
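As a generic illustration of the detection stage (a simplified stand-in, not the paper's exact algorithm), the following sketch removes the ground by a height threshold, clusters the remaining points with DBSCAN, and keeps clusters with person-like extents; all thresholds are assumptions.

```python
# Generic sketch of a person-detection stage in a 3D point cloud.
import numpy as np
from sklearn.cluster import DBSCAN

def person_candidates(points: np.ndarray, ground_z: float = 0.2):
    pts = points[points[:, 2] > ground_z]      # crude ground removal
    labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(pts)
    candidates = []
    for lab in set(labels) - {-1}:             # -1 marks DBSCAN noise
        c = pts[labels == lab]
        ext = c.max(axis=0) - c.min(axis=0)    # bounding-box extents in meters
        if ext[0] < 1.0 and ext[1] < 1.0 and 0.8 < ext[2] < 2.2:
            candidates.append(c)               # plausible human dimensions
    return candidates
```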
The detection of objects, or persons, is a common task in the fields of environment surveillance, object observation, and threat defense. There are several approaches for automated detection with conventional imaging sensors as well as with LiDAR sensors, but for the latter, real-time detection is hampered by the scanning character and therefore by the data distortion of most LiDAR systems.

This paper presents a solution for real-time data acquisition with a flash LiDAR sensor, with synchronous raw data analysis, point cloud calculation, object detection, calculation of the next best view, and steering of the sensor's pan-tilt head. As a result, the attention is always focused on the object, independent of the object's behavior. Even for highly volatile and rapid changes in the direction of motion, the object is kept in the field of view.

The experimental setup used in this paper is realized with an elementary person detection algorithm at medium distances (20 m to 60 m) to show the efficiency of the system for objects with a high angular speed. The detection part can easily be replaced by any other object detection algorithm, making it possible to track nearly any object, for example a car, a boat, or a UAV, at various distances.
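The closed-loop steering idea can be sketched as follows: the object's offset from the image center is converted into incremental pan and tilt commands so that the sensor keeps the target in view. The gain, the image size, and the angular resolution in the example are illustrative assumptions.

```python
# Sketch of closed-loop pan-tilt steering from the object's image position.
import numpy as np

def steering_command(obj_px: tuple, image_size: tuple, ifov_deg: float, gain: float = 0.8):
    """obj_px: detected object center (col, row); ifov_deg: degrees per pixel."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    d_pan = (obj_px[0] - cx) * ifov_deg        # horizontal offset -> pan step
    d_tilt = -(obj_px[1] - cy) * ifov_deg      # image rows grow downwards
    return gain * d_pan, gain * d_tilt         # damped to avoid overshoot

# Example: object 40 px right of center on a 320x256 flash-LiDAR image.
print(steering_command((200, 128), (320, 256), ifov_deg=0.05))
```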
The growing interest in unmanned surface vehicles, accident avoidance for naval vessels, and automated maritime surveillance leads to a growing need for automatic detection, classification, and pose estimation of maritime objects at medium and long ranges. Laser radar imagery is a well-proven tool for near to medium range, but up to now, at longer distances, neither the sensor range nor the sensor resolution has been satisfying. As a consequence of these limitations of laser radar imagery, the potential of laser-illuminated gated viewing for automated classification and pose estimation was investigated. The paper presents new techniques for segmentation, pose estimation, and model-based identification of naval vessels in gated viewing imagery, in comparison with the corresponding results for long-range data acquired with a focal plane array laser radar system. The pose estimation in the gated viewing data is directly connected with the model-based identification, which makes use of the outline of the object. By setting a sufficiently narrow gate, the distance gap between the upper part of the ship and the background leads to an automatic segmentation. Setting the gate also means that the distance to the object is roughly known. With this distance and the imaging properties of the camera, the width of the object perpendicular to the line of sight can be calculated. For each ship in the model library, a set of possible 2D appearances at the known distance is calculated, and the resulting contours are compared with the measured 2D outline. The result is a match error for each reasonable orientation of each model in the library. The results gained from the gated viewing data are compared with the results of target identification in laser radar imagery of the same maritime objects.
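The width computation can be sketched in a few lines: with the gate distance roughly known, the pixel extent of the segmented silhouette converts to a metric width via the camera's angular resolution (IFOV). The numbers in the example are illustrative assumptions.

```python
# Sketch of the metric-width computation from the gate distance and IFOV.
import numpy as np

def object_width(n_pixels: int, distance_m: float, ifov_rad: float) -> float:
    # projection of the silhouette's angular extent onto a plane at the gate range
    return 2.0 * distance_m * np.tan(0.5 * n_pixels * ifov_rad)

# Example: a 180-pixel-wide silhouette at 4 km range with 50 urad per pixel.
print(object_width(180, 4000.0, 50e-6))        # ~36 m
```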
Automatic change detection in 3D environments requires the comparison of multi-temporal data. By comparing current data with past data of the same area, changes can be automatically detected and identified. Volumetric changes in the scene hint at suspicious activities like the movement of military vehicles, the application of camouflage nets, or the placement of IEDs. In contrast to broad research activities in remote sensing with optical cameras, this paper addresses the topic using 3D data acquired by mobile laser scanning (MLS). We present a framework for immediate comparison of current MLS data to given 3D reference data. Our method extends the concept of occupancy grids known from robot mapping, which incorporates the sensor positions in the processing of the 3D point clouds. This allows extracting the information that is included in the data acquisition geometry. For each single range measurement, it becomes apparent that an object reflects laser pulses at the measured range, i.e., space is occupied at that 3D position. In addition, it is obvious that space is empty along the line of sight between the sensor and the reflecting object. Everywhere else, the occupancy of space remains unknown. This approach handles occlusions and changes implicitly, such that the latter are identifiable by conflicts between empty space and occupied space. The presented concept of change detection has been successfully validated in experiments with recorded MLS data streams. Results are shown for test sites at which MLS data were acquired at different time intervals.
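A minimal sketch of the occupancy-grid concept: for every range measurement, voxels along the line of sight are marked empty and the endpoint voxel is marked occupied; a change then shows up as a conflict between the two states across epochs. Grid handling and resolution are illustrative assumptions (e.g., non-negative voxel coordinates).

```python
# Sketch of occupancy-grid integration and conflict-based change detection.
import numpy as np

UNKNOWN, EMPTY, OCCUPIED = 0, 1, 2

def integrate_ray(grid: np.ndarray, origin: np.ndarray, hit: np.ndarray, res: float = 0.2):
    direction = hit - origin
    n = max(int(np.linalg.norm(direction) / res), 1)
    for s in np.linspace(0.0, 1.0, n, endpoint=False):  # along the line of sight
        i, j, k = ((origin + s * direction) / res).astype(int)
        grid[i, j, k] = EMPTY                  # free space between sensor and echo
    i, j, k = (hit / res).astype(int)
    grid[i, j, k] = OCCUPIED                   # reflecting surface

def change_mask(ref: np.ndarray, cur: np.ndarray) -> np.ndarray:
    # a change is a conflict: empty in one epoch, occupied in the other
    return ((ref == EMPTY) & (cur == OCCUPIED)) | ((ref == OCCUPIED) & (cur == EMPTY))
```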
The general demand for the prevention of collateral damage in military operations requires methods for robust automatic identification of target objects like vehicles, especially during target approach. This requires the development of sophisticated techniques for automatic and semi-automatic interpretation of sensor data. In particular, the automatic pre-analysis of reconnaissance data is important for the human observer as well as for autonomous systems. In the phase of target approach, fully automatic methods are needed for the recognition of predefined objects. For this purpose, appropriate sensors are used, like imaging IR sensors suitable for day/night operation and laser radar supplying 3D information of the scenario. Classical methods for target recognition based on comparison with synthetic IR object models imply certain shortcomings, e.g., sensitivity to unknown weather conditions and to the engine status of vehicles.

We propose a concept for generating efficient 2D templates for IR target signatures based on the evaluation of a precise 3D model of the target generated from real multisensor data. This model is created from near-term laser range and IR data gathered by reconnaissance in advance, to gain realistic and up-to-date target signatures. It consists of the visible part of the object surface textured with measured infrared values. This enables recognition from slightly differing viewing angles. Our test bed is realized by a helicopter equipped with a multisensor suite (laser radar, imaging IR, GPS, and IMU). Results are demonstrated by the analysis of a complex scenario with different vehicles.
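Template generation can be sketched by projecting the IR-textured 3D surface points into a virtual pinhole camera at the expected viewing geometry, yielding a 2D template of expected IR values. The camera parameters are assumptions, and occlusion handling is omitted here since the model already contains only the visible part of the surface.

```python
# Sketch of 2D template rendering from an IR-textured 3D point model.
import numpy as np

def render_template(points, ir_values, R, t, f_px, size=(128, 128)):
    """points: (N, 3) model points; ir_values: (N,) measured IR texture."""
    cam = (R @ points.T).T + t                 # model -> camera coordinates
    front = cam[:, 2] > 0.1                    # keep points in front of the camera
    cam, vals = cam[front], ir_values[front]
    u = (f_px * cam[:, 0] / cam[:, 2] + size[0] / 2).astype(int)
    v = (f_px * cam[:, 1] / cam[:, 2] + size[1] / 2).astype(int)
    tmpl = np.zeros(size[::-1])
    ok = (0 <= u) & (u < size[0]) & (0 <= v) & (v < size[1])
    tmpl[v[ok], u[ok]] = vals[ok]              # splat IR texture into the image
    return tmpl
```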
The increasing demand for the protection of persons and facilities requires the application of sophisticated technologies for surveillance and object tracking. For this purpose, appropriate sensors are used, like imaging IR sensors suitable for day/night operation and laser radar supplying 3D information of the scenario. In this context, there is a need for automatic and semi-automatic methods supporting the human observer in the decision-making process. A prevalent task is the automatic tracking of striking objects like vehicles or individual persons in an image sequence over a time slice. Classical methods are based on template matching, which implies certain shortcomings concerning homogeneous backgrounds or passing objects occluding the target object. The authors propose a new concept for generating templates for IR target signatures based on the interpretation of laser range data, in order to optimize the tracking process. The testbed is realized by a helicopter equipped with a multisensor suite (laser radar, imaging IR, GPS, IMU). Results are demonstrated by the analysis of an exemplary data set: a vehicle situated in a complex scenario is acquired by a forward-moving sensor platform and is tracked robustly by the proposed method.
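The matching step of such a tracker can be illustrated with normalized cross-correlation; OpenCV's matchTemplate is used here as a generic stand-in for the matching stage, not as the authors' implementation.

```python
# Generic template-matching sketch via normalized cross-correlation.
# frame and template must share dtype (uint8 or float32).
import cv2

def track(frame, template):
    score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, top_left = cv2.minMaxLoc(score)
    return top_left, best                      # match position and confidence
```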
An automatic target recognition system has been assembled and tested at the Research Institute for Optronics and Pattern Recognition in Germany over the last years. Its multisensor design comprises off-the-shelf components: an FPA infrared camera, a scanning laser radar, and an inertial measurement unit. In this paper, we describe several possibilities for the use of this multisensor equipment during helicopter missions. We discuss suitable data processing methods, for instance the automatic time synchronization of the different imaging sensors, pixel-based data fusion, and the incorporation of collateral information. The results are visualized in an appropriate way to present them on a cockpit display. We also show how our system can act as a landing aid for pilots in brownout conditions (dust clouds caused by the landing helicopter).
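The time synchronization step can be sketched as a nearest-timestamp matching between the sensor streams, assuming that all sensors are stamped with a common clock; the tolerance is an illustrative assumption.

```python
# Sketch of nearest-timestamp frame matching between two sensor streams.
import numpy as np

def match_frames(ir_times: np.ndarray, laser_times: np.ndarray, tol_s: float = 0.02):
    pairs = []
    for i, t in enumerate(ir_times):
        j = int(np.argmin(np.abs(laser_times - t)))  # nearest laser scan in time
        if abs(laser_times[j] - t) <= tol_s:
            pairs.append((i, j))               # accept only close pairs
    return pairs
```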
Future military helicopters will be provided with multiple information sources for self-protection and reconnaissance, e.g. imaging IR, laser radar and GPS. In addition, knowledge bases like maps, aerial images, geographical information (GIS) and other previously acquired data can be used for the interpretation of the current scenario. To support the mission, results of data fusion and information management have to be presented to the pilot in an appropriate way. This paper describes concepts and results of our work on IR and laser data fusion for airborne systems. Data is gathered by forward-looking sensors mounted in a helicopter. For further improvement, fusion with collateral information (laser elevation data, aerial images) is used for change detection and definition of regions of interest with respect to the stored and continuously updated database. Results are demonstrated by the analysis of an exemplary data set, showing a scenario with a group of vehicles. Two moving vehicles are detected automatically in both channels (IR, laser) and the results are combined to achieve improved visualization for the pilot.
A new generation of Scanning Laser Doppler Vibrometer (SLDV) has been realized, based on the experience and results of a former proof-of-concept design and a number of field tests. This new device, SLDV III, comprises a number of technical improvements in the transmitter and receiver sections as well as in the evaluation of the recorded vibration signals. This paper summarizes the main features of the instrument.
The combination of a powerful acoustic transmitter with high-resolution laser spectroscopy has led to a promising approach for the detection of buried landmines, especially those with no or only minor metal content. This paper summarizes current R&D work on this new technology, including a brief sensor overview and a more detailed description of our data processing methods. The SLDV sensor picks up tiny vibrations of the soil surface, on the order of a few μm/s, in a rectangular grid of measuring points. We use a multi-threshold algorithm for the segmentation of mine cues and reduce false alarms by analyzing the stability of object size, contrast, and shape in the frequency domain. In addition to the amplitude of the soil vibration, the phase is investigated as a secondary information channel to optimize the classification procedure.
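The multi-threshold idea can be sketched as follows: the vibration-amplitude map is thresholded at several levels, and a blob that keeps a similar size across the levels (and, in the paper, across frequency bands) is a stable mine cue. The threshold levels and the stability criterion here are illustrative assumptions.

```python
# Sketch of multi-threshold segmentation with a blob-size stability check.
import numpy as np
from scipy.ndimage import label

def largest_blob_sizes(amplitude: np.ndarray, levels=(0.5, 0.6, 0.7)):
    sizes = []
    for q in levels:
        mask = amplitude > np.quantile(amplitude, q)   # one threshold level
        lab, n = label(mask)                           # connected components
        counts = np.bincount(lab.ravel())[1:]          # skip the background
        sizes.append(int(counts.max()) if n else 0)
    return sizes

def is_stable_cue(sizes, max_ratio: float = 1.5) -> bool:
    # a mine cue keeps a similar blob size at every threshold level
    return min(sizes) > 0 and max(sizes) <= max_ratio * min(sizes)
```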
The successful mission of autonomous airborne systems like unmanned aerial vehicles (UAVs) strongly depends on the performance of the automatic image processing used for navigation, target acquisition, and terminal homing. In this paper, we propose a method for the automatic determination of missile position and orientation (pose) during target approach. The flight path is characterized by forward motion, i.e., an approximately linear motion along the optical axis of the sensor system. Due to the lack of disparity, classical methods based on stereo triangulation are not suitable for 3D object recognition in this setting. To handle this, we applied the SoftPOSIT algorithm, originally proposed by D. DeMenthon, and adapted it to our specific needs: the gathering of image points is done by multi-threshold segmentation, texture analysis, 2D tracking, and edge detection. The reference points are updated in each loop, and the calculated pose is smoothed using the quaternion representation of the model's orientation in order to stabilize the computations. We show results of image-based determination of trajectories for airborne systems. Terminal homing is demonstrated by tracking the 3D pose of a vehicle in an image sequence taken in oblique view by an infrared sensor mounted on a helicopter.
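The quaternion-based smoothing can be illustrated with spherical linear interpolation (slerp), which blends the previous orientation toward the newly estimated one and avoids the gimbal problems of Euler-angle filtering; the blending factor is an illustrative assumption.

```python
# Sketch of quaternion pose smoothing via slerp (scipy's Rotation/Slerp).
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def smooth_orientation(q_prev: np.ndarray, q_new: np.ndarray, alpha: float = 0.3):
    """q_*: quaternions in (x, y, z, w) order; alpha weights the new estimate."""
    rots = Rotation.from_quat(np.vstack([q_prev, q_new]))
    return Slerp([0.0, 1.0], rots)(alpha).as_quat()    # blended quaternion
```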
Acoustic landmine detection (ALD) is a technique for the detection of buried landmines, including non-metal mines. An important issue in ALD is the acoustic excitation of the soil. Laser excitation is promising for complete standoff detection, using lasers for both excitation and monitoring. Acoustic excitation is a more common technique that gives good results but requires an acoustic source close to the measured area. In a field test in 2002, both techniques were compared side by side: a number of buried landmines were measured using both types of excitation. Various types of landmines were used, both anti-tank and anti-personnel, buried at various depths in different soil types with varying humidity. Two Laser Doppler Vibrometer (LDV) systems with different wavelengths were used for the two approaches: one based on a He-Ne laser at 0.633 μm with acoustic excitation, and one based on an erbium fiber laser at 1.54 μm in the case of laser excitation. The acoustic excitation gives a good contrast between the buried mine and the surrounding soil at certain frequencies. Laser excitation gives a pulse response that is more difficult to interpret but is potentially a faster technique. In both cases, buried mines could be detected.