Feature extraction and object recognition in multi-modal forward looking imagery (29 April 2010)
The U.S. Army Night Vision and Electronic Sensors Directorate (NVESD) recently tested an explosive-hazards detection vehicle that combines a pulsed forward-looking ground-penetrating radar (FLGPR) with a visible-spectrum color camera. NVESD also tested a human-in-the-loop multi-camera system with the same goal: it contains wide field-of-view color and infrared cameras as well as zoomable narrow field-of-view cameras in both modalities. Although the two systems are mounted on separate vehicles, combining their information offers great potential for fusion. Based on previous work at the University of Missouri, we can register the UTM-based positions of the FLGPR alarms not only to the color image sequences on the first system, but also to the corresponding image frames of all sensors on the human-in-the-loop platform. This paper presents our approach: we first generate libraries of multi-sensor information across these platforms, then develop feature extraction and recognition algorithms based on the multi-sensor signatures. Our goal is to tailor specific algorithms to recognize and eliminate different categories of clutter and to identify particular explosive hazards. We demonstrate our library creation, feature extraction, and object recognition results on a large data collection at a U.S. Army test site.
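The abstract does not spell out how the UTM-based FLGPR alarm positions are registered to camera frames. As an illustration only, the core geometric step can be sketched as a standard pinhole projection: transform a UTM ground point into the camera's coordinate frame, then apply the intrinsic matrix. All names, the rotation `R`, and the intrinsics `K` below are assumptions for the sketch, not the authors' actual registration method.

```python
import numpy as np

def utm_to_pixel(utm_point, cam_utm, R, K):
    """Project a UTM position (easting, northing, altitude) into pixel
    coordinates of a forward-looking camera.

    cam_utm : camera position in the same UTM frame
    R       : 3x3 rotation taking UTM axes into camera axes
    K       : 3x3 camera intrinsic matrix
    Returns (u, v) pixel coordinates, or None if the point lies behind
    the image plane.
    """
    # Express the ground point in the camera's coordinate frame.
    p_cam = R @ (np.asarray(utm_point, float) - np.asarray(cam_utm, float))
    if p_cam[2] <= 0:  # behind the camera: no valid projection
        return None
    # Perspective divide, then map through the intrinsics.
    uv = K @ (p_cam / p_cam[2])
    return float(uv[0]), float(uv[1])

# Example with a hypothetical camera at the UTM origin, axes aligned
# (R = identity), focal length 1000 px, principal point (320, 240):
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
print(utm_to_pixel((1.0, 0.0, 10.0), (0.0, 0.0, 0.0), R, K))  # (420.0, 240.0)
```

In practice, such a registration also needs per-frame vehicle pose (from GPS/INS) to build `R` and `cam_utm`, plus lens-distortion correction; those details are outside what the abstract reports.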
© (2010) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
G. Greenwood, S. Blakely, D. Schartman, B. Calhoun, J. M. Keller, T. Ton, D. Wong, and M. Soumekh, "Feature extraction and object recognition in multi-modal forward looking imagery", Proc. SPIE 7664, Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XV, 76641T (29 April 2010).

