Optical imaging, including infrared imaging, has many important applications, both civilian and military. In recent years, technological advances have made multi- and hyperspectral imaging a viable technology for many demanding military applications. The aim of the CEPA JP 8.10 program has been to evaluate the potential benefit of spectral imaging techniques in tactical military applications. This unclassified executive summary describes the activities in the program and outlines some of the results. More specific results are given in classified reports and presentations.
The JP 8.10 program started in March 2002 and ended in February 2005. The participating nations were France, Germany, Italy, the Netherlands, Norway, Sweden and the United Kingdom, each with a contribution of 2 man-years per year. The essential objectives of the program were to:
1) analyze the available spectral information in the optronic landscape from visible to infrared;
2) analyze the operational utility of multi- and hyperspectral imaging for detection, recognition and identification of targets, including low-signature targets;
3) identify applications where spectral imaging can provide a strong gain in performance;
4) propose technical recommendations of future spectral imaging systems and critical components.
Finally, a stated objective of the JP 8.10 program was to "ensure the proper link with the image processing community".
The presentation is organized as follows. First, the two trials (Pirrene and Kvarn) are presented, including a summary of the acquired optical properties of the different landscape materials and of the spectral images. Then a phenomenology study is described that analyzes the spectral behavior of the optical properties, examines the signal at the sensor and, by processing spectroradiometric measurements, evaluates the potential to discriminate spectral signatures.
The Cameo-Sim simulation software is presented, including first validation results and the generation of spectral synthetic images. Results obtained on measured and synthetic images are shown and discussed with reference to two main classes of image processing tasks: anomaly detection and signature-based target detection. Furthermore, preliminary work on band selection is presented, which aims to optimize the spectral configuration of an image sensor. Finally, the main conclusions of the WEAG program CEPA JP 8.10 are given.
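The abstract does not name the anomaly-detection algorithms that were evaluated; as an illustrative baseline (not necessarily the method used in the program), the classic RX detector scores each pixel of a hyperspectral cube by its Mahalanobis distance to the global background statistics:

```python
import numpy as np

def rx_anomaly_scores(cube):
    """Global RX anomaly detector on a hyperspectral cube (rows, cols, bands).

    Each pixel spectrum is scored by its squared Mahalanobis distance to the
    scene mean; large scores flag spectrally anomalous pixels.
    """
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)  # regularize for stability
    cov_inv = np.linalg.inv(cov)
    d = x - mu
    # Quadratic form d_i^T C^{-1} d_i for every pixel i.
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(h, w)
```

In practice, local (sliding-window) background statistics are often preferred over the global estimate used here.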
Exciting developments are taking place in 3-D sensing laser radars. Scanning systems are well established for mapping from airborne and ground sensors. 3-D sensing focal plane arrays (FPAs) enable a full range and intensity image to be captured in one laser shot. Gated viewing systems also produce 3-D target information. Applications for 3-D laser radars are found in robotics, rapid terrain visualization, augmented vision, reconnaissance and target recognition, weapon guidance including aim-point selection, and others. Network-centric warfare will demand high-resolution geo-data for a common description of the environment. At FOI we have a measurement program to collect data relevant for 3-D laser radars, using airborne and tripod-mounted equipment. Data collection spans from single-pixel waveform collection (1-D), over 2-D range gated imaging, to full 3-D imaging using scanning systems. This paper will describe 3-D laser data from different campaigns, with emphasis on range distributions and reflection properties for targets and background under different seasonal conditions. Examples of the use of the data for system modeling, performance prediction and algorithm development will be given. Different metrics to characterize the data sets will also be discussed.
This paper will give an overview of 3-D laser sensing and related activities at the Swedish Defence Research Agency (FOI) in view of system needs and applications. Our activities include the collection of laser signature data for targets and backgrounds at various wavelengths. We will give examples of such measurements. The results are used in building synthetic environments, modelling laser radar systems, and as training sets for the development of algorithms for target recognition and weapon applications. Present work on rapid environment assessment includes the use of data from airborne lasers for terrain mapping and depth sounding. Methods for automatic target detection and object classification (buildings, trees, man-made objects etc.) have been developed, together with techniques for visualisation. This will be described in more detail in a separate paper. The ability to find and correctly identify "difficult" targets, being either at very long ranges, hidden in vegetation, behind windows or under camouflage, is one of the top priorities for any military force. Examples of such work will be given using range gated imagery and 3-D scanning laser radars. Different kinds of signal processing approaches have been studied and will be presented in more detail in two separate papers. We have also developed modeling tools for both 2-D and 3-D laser imaging. Finally, we will discuss the use of 3-D laser radars in some system applications in the light of new component technology, processing needs and sensor fusion.
This paper presents our ongoing research activities on target recognition from data generated by 3-D imaging laser radar. In particular, we focus on future full flash imaging 3-D sensors. Several techniques for laser range imaging are applied for modelling and simulation of data from this kind of 3-D sensor system. Firstly, data from an experimental gated viewing system is used. Processed data from this system is useful in assisting an operator in the target recognition task. Our recent work on target identification at long ranges, using range data from the gated viewing system, provides techniques to handle turbulence, platform motion and illumination variations from scintillation and speckle noise. Moreover, the range data is expanded into 3-D by using a gating technique that provides reconstruction of the target surface structure. This is shown at distances out to 7 km. Secondly, 3-D target data is acquired at short ranges by using different scanning laser radar systems. This provides high-resolution 3-D data from scanning a target from a single view. Several scans from multiple viewing angles can also quite easily be merged for more detailed target representations. This is, for example, very useful for recognizing targets in vegetation. In this way, we obtain simulated 3-D sensor data from both short and long ranges (100 meters out to 7 km) at various spatial resolutions. Thirdly, real data from the 3-D flash imaging system of the US Air Force Research Lab (AFRL/SNJM), Wright-Patterson Air Force Base, has recently been made available to FOI and has also been used as input in the development of aided target recognition methods. High-resolution 3-D target models are used in the identification process and compared to the 3-D target data (point clouds) from the various laser radar systems. Finally, we give some examples from our work that clearly show that future 3-D laser radar systems, in combination with signal and image analysis techniques, have great potential in the non-cooperative target recognition task and will provide several new and interesting capabilities, for example, to reveal targets hidden in vegetation.
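Comparing a measured laser point cloud against a library of high-resolution 3-D target models, as described above, is often scored with a nearest-neighbour fit metric after alignment. The sketch below is an illustrative baseline of that idea, not the specific method used in this work; it assumes the clouds are already registered in a common frame:

```python
import numpy as np

def model_match_score(data_points, model_points):
    """Mean nearest-neighbour distance from measured points (N, 3) to a
    candidate model's surface samples (M, 3); lower means a better fit.
    Brute-force search: adequate for small clouds, use a k-d tree otherwise.
    """
    diff = data_points[:, None, :] - model_points[None, :, :]
    dists = np.linalg.norm(diff, axis=2)          # (N, M) pairwise distances
    return dists.min(axis=1).mean()               # nearest model point per datum

def rank_models(data_points, models):
    """Rank a dict {name: (M, 3) array} of 3-D target models by fit."""
    scores = {name: model_match_score(data_points, pts)
              for name, pts in models.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])
```

A full pipeline would first register the clouds (e.g. with an ICP-style alignment) before scoring.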
Over the years, imaging laser radar systems have been developed for both military and civilian (topographic) applications. Among these applications, 3-D data is used for environment modeling and for object reconstruction and recognition. The data processing methods have mainly been developed separately for military or topographic applications, seldom with both application areas in mind. In this paper, an overview of methods from both areas is presented. First, some of the work on ground surface estimation and classification of natural objects, for example trees, is described. Once natural objects have been detected and classified, we review some of the extensive work on reconstruction and recognition of man-made objects. Primarily, we address the reconstruction of buildings and the recognition of vehicles. Further, some methods for the evaluation of measurement systems and algorithms are described. Models of some types of laser radar systems, based on both physical and statistical approaches, are reviewed for the analysis and evaluation of measurement systems and algorithms. The combination of methods for reconstruction of natural and man-made objects is also discussed. By combining methods originating from civilian and military applications, we believe that the tools to analyze a whole scene become available. In this paper we show examples where methods from both application fields are used to analyze a scene.
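Ground surface estimation from laser point clouds, mentioned above, is often bootstrapped with a simple lowest-point-per-grid-cell filter. The following sketch is an illustrative baseline only, not one of the surveyed methods; the cell size and height threshold are hypothetical parameters:

```python
import numpy as np

def estimate_ground_grid(points, cell=1.0):
    """Estimate the ground surface as the lowest z in each (x, y) grid cell;
    a crude but common starting point for ground/non-ground separation."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ground = {}
    for c, z in zip(map(tuple, ij), points[:, 2]):
        if c not in ground or z < ground[c]:
            ground[c] = z
    return ground  # maps cell index -> estimated ground height

def above_ground(points, ground, cell=1.0, threshold=0.5):
    """Flag points more than `threshold` above the local ground estimate
    (candidate vegetation or man-made objects)."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    return np.array([z - ground[tuple(c)] > threshold
                     for c, z in zip(ij, points[:, 2])])
```

Real pipelines refine this with surface interpolation and iterative outlier rejection, since the lowest return in a cell can itself be an object point.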
We present an approach to a general decision support system. The aim is to cover the complete process for automatic
target recognition, from sensor data to the user interface. The approach is based on a query-based information
system, and includes tasks like feature extraction from sensor data, data association, data fusion and situation
analysis. Currently, we are working with data from laser radar, infrared cameras, and visual cameras, studying target
recognition from cooperating sensors on one or several platforms. The sensors are typically airborne and operate at low altitude.
The processing of sensor data is performed in two steps. First, several attributes are estimated from the (unknown
but detected) target. The attributes include orientation, size, speed, temperature etc. These estimates are
used to select the models of interest in the matching step, where the target is matched with a number of target models,
returning a likelihood value for each model. Several methods and sensor data types are used in both steps.
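The two-step scheme above, attribute estimation followed by model matching with per-model likelihoods, can be sketched as follows. The model library, attribute intervals and feature templates are hypothetical placeholders for illustration, not values from the system described:

```python
import math

# Hypothetical model library: attribute intervals used for gating (step 1)
# and a feature template (length, height in meters) for matching (step 2).
LIBRARY = {
    "tank":  {"length": (6.0, 10.0), "speed": (0.0, 20.0), "template": [9.5, 3.5]},
    "truck": {"length": (5.0, 12.0), "speed": (0.0, 30.0), "template": [8.0, 2.5]},
    "car":   {"length": (3.0, 5.5),  "speed": (0.0, 60.0), "template": [4.5, 1.8]},
}

def candidate_models(est):
    """Step 1: keep only models whose attribute intervals contain the
    estimates (orientation, size, speed, ... from the detected target)."""
    return [m for m, s in LIBRARY.items()
            if s["length"][0] <= est["length"] <= s["length"][1]
            and s["speed"][0] <= est["speed"] <= s["speed"][1]]

def match(features, candidates, sigma=1.0):
    """Step 2: return a likelihood value per remaining model, here a
    Gaussian score on the distance to each model's feature template."""
    out = {}
    for m in candidates:
        t = LIBRARY[m]["template"]
        d2 = sum((f - ti) ** 2 for f, ti in zip(features, t))
        out[m] = math.exp(-d2 / (2 * sigma ** 2))
    return out
```

The gating step keeps the expensive matching step tractable by discarding models that are inconsistent with the coarse attribute estimates.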
The user communicates with the system via a visual user interface, where, for instance, the user can mark an
area on a map and ask for hostile vehicles in the chosen area. The user input is converted to a query in ΣQL, a query
language developed for this type of application, and an ontological system decides which algorithms should be
invoked and which sensor data should be used. The output from the sensors is fused by a fusion module and answers
are given back to the user. The user does not need to have any detailed technical knowledge about the sensors
(or which sensors are available), and new sensors and algorithms can easily be plugged into the system.
The main purpose of the work presented here is to study the potential of an active imaging system for target recognition at long distances. This work is motivated by the fact that there are a number of outdoor imaging needs where conventional passive electro-optical (EO) and infrared (IR) imaging systems are limited due to lack of photons, disturbing backgrounds, obscurants or bad weather. With a pulsed illuminating source, several of these problems are overcome. Using a laser for target illumination, target recognition at tens of km can be achieved. Powerful diode-pumped lasers and camera tubes with high spatial and temporal resolution will make this technique an interesting complement to passive EO imaging. Besides military applications, civilian applications of gated viewing for search and rescue, vehicle enhanced vision and other purposes are in progress. To study the performance limitations of gated viewing systems due to the camera, the optics and the atmosphere, an experimental system was developed. Measurements up to 10 km were made at the wavelength 532 nm. To extrapolate the results to future system performance at an eye-safe wavelength, 1.5 micrometers, a theoretical performance model was developed. This model takes into account the camera and atmospheric influence on resolution and image quality, measured as a signal-to-noise ratio (SNR). The results indicate turbulence influence, in agreement with the modeling. Different techniques were tested for image quality improvement, and the best results were obtained by applying several processing techniques to the images. Moreover, the tests showed that turbulence seriously limits the resolution for horizontal paths close to the ground. A tactical system at 1.5 micrometers should have better performance than the 532 nm system used in atmosphere-limited applications close to ground level.
The potential to use existing laser range finders and the eye safety issue motivate the future use of 1.5 micrometers for gated viewing.
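The theoretical performance model itself is not reproduced in the abstract. A minimal laser-radar link budget of the kind such models build on, assuming an extended Lambertian target that fills the beam and ignoring camera gain, gate timing, turbulence and speckle, can be sketched as:

```python
import math

def received_fraction(range_m, reflectivity, aperture_diam_m, atm_ext_per_km):
    """Illustrative link budget: fraction of transmitted laser energy that
    returns to the receiver aperture from an extended Lambertian target
    filling the beam. Sketch only; a real gated-viewing SNR model adds
    detector noise, gate timing, turbulence MTF and speckle statistics.
    """
    rx_area = math.pi * (aperture_diam_m / 2.0) ** 2
    t_atm = math.exp(-atm_ext_per_km * range_m / 1000.0)  # one-way transmission
    # Lambertian backscatter spread over pi steradians, two-way atmosphere.
    return reflectivity * rx_area / (math.pi * range_m ** 2) * t_atm ** 2
```

The 1/R² geometric loss combined with the squared (two-way) atmospheric transmission is what makes the eye-safe 1.5 micrometer band, with its higher permissible pulse energies, attractive at long range.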
This paper presents some of the image processing techniques that were applied to seek an answer to the question whether agents of the Federal Bureau of Investigation (FBI) directed gunfire against the Branch Davidian complex in the tragic event that took place in Waco, Texas, U.S., in 1993. The task for this investigation was to provide a scientific opinion that clarified the cause of the questioned events, or flashes, that can be seen on one of the surveillance videotapes. Several experts had concluded that these flashes were evidence of gunfire. However, there were many reasons to question the correctness of that conclusion, such as the fact that some of the flashes appeared on a regular basis. The main hypothesis for this work was that the flashes were instead caused by specular solar reflections. The technical approach was to analyze and compare the appearance of the flashes. By reconstructing the spatial and temporal positions of the sensor, the complex and the sun, the geometrical properties were compared to the theoretical appearance of specular solar reflections. The result showed that the flashes seen on the FLIR videotape were caused by solar or heat reflections from single or multiple objects. Consequently, they could not form evidence of gunfire. Further, the result highlights the importance of considering the characteristics of the imaging system in investigations that utilize images as an information source, because of the need to correctly separate real data from other phenomena (such as solar reflections), distortions and artifacts.
Video generates a rich set of image information, and often the useful information is only a very limited subset of what is available. Another well-known fact is that visually reviewing long video recordings is a time-demanding task. In combination with the continuously increasing number of video surveillance systems, this leads to an increasing need for automated analysis of long image sequences. The goal of this work is to develop and evaluate a method for automatic detection and tracking of events recorded on surveillance video, such as the appearance of persons or vehicles in a surveyed area, and to evaluate its usefulness for forensic applications and real-time applications.
Person identification using biometric methods based on image sequences, or still images, often requires a controllable and cooperative environment during the image capturing stage. In the forensic case, the situation is more likely to be the opposite. In this work we propose a method that makes use of the anthropometry of the human body and human actions as cues for identification. Image sequences from surveillance systems are used, which can be seen as monocular image sequences. A 3-D deformable wireframe body model is used as a platform to handle the non-rigid information about the 3-D shape and 3-D motion of the human body from the image sequence. A recursive method for estimating global motion and local shape variations is presented, using two recursive feedback systems.
Video is used as the recording medium in surveillance systems and, increasingly, by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are the focus of a research project at Linkoping University, Image Coding Group. The accuracy of the results of these forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way of getting reliable analysis results. The surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers for selecting one of many camera outputs for recording are another problem, as they often filter the video signal, and recording is limited to only one of the available cameras connected to the VCR. A way to get around the problem of poor recording is to record all camera outputs digitally and simultaneously. It is also very important to build such a system bearing in mind that image processing analysis methods become more important as a complement to the human eye. Using one or more cameras gives a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less useful. Our goal is to propose a possible system, being the best compromise with respect to what needs to be recorded, movements in the recorded scene, loss of information, resolution etc., to secure the efficient recording of the crime and enable forensic analysis. The preventive effect of having a well-functioning surveillance system and well-established image analysis methods is not to be neglected.
Aspects of this next generation of digital surveillance systems are discussed in this paper.
Anthropometry and movements are unique for every individual human being. We identify persons we know by recognizing the way they look and move. By quantifying these measures using image processing methods, they can serve as a tool in police work, as a complement to the human eye. The idea is to use virtual 3-D parameterized models of the human body to measure the anthropometry and movements of a crime suspect. The Swedish National Laboratory of Forensic Science, in cooperation with SAAB Military Aircraft, has developed methods for measuring the heights of persons from video sequences. However, much of the information in a digital image sequence from a crime scene remains unused. The main aim of this paper is to give an overview of the current research project at Linkoping University, Image Coding Group, where methods to measure anthropometrical data and movements using virtual 3-D parameterized models of the person in the crime scene are being developed. The measured height of an individual may vary by up to plus or minus 10 cm depending on whether the person is in an upright position or not. When measuring under the best available conditions, the height still varies within plus or minus 1 cm. Using a full 3-D model provides a rich set of anthropometric measures describing the person in the crime scene. Once such a model has been obtained, the movements can be quantified as well. The results depend strongly on the accuracy of the 3-D model, and the strategy for obtaining such an accurate model is to make one estimate per image frame using 3-D scene reconstruction, with an averaged 3-D model as the final result from which the anthropometry and movements are calculated.
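The averaging strategy described above, one 3-D model estimate per image frame combined into a final model, suppresses per-frame pose and reconstruction noise. A trivial sketch of that combination step, using hypothetical per-frame parameter vectors (e.g. limb lengths in meters):

```python
import numpy as np

def average_model(per_frame_estimates):
    """Combine per-frame 3-D body-model parameter estimates into a final
    model (the mean) plus a per-parameter spread that indicates how much
    each measure fluctuated over the sequence."""
    est = np.asarray(per_frame_estimates, dtype=float)
    return est.mean(axis=0), est.std(axis=0)
```

Robust alternatives (median, or pose-weighted averaging that down-weights frames where the person is not upright) would address the ±10 cm posture-dependent variation noted above.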