The U.S. Army Research Laboratory (ARL) and McQ Inc. are developing a generic sensor fusion architecture in which several diverse processes work in combination to create a dynamic, task-oriented, real-time information capability. These processes include sensor data collection; persistent and observational data storage; and multimodal, multisensor fusion with the flexibility to modify the fusion rules for each mission. Such a fusion engine lends itself to a diverse set of sensing applications and architectures while using open-source software technologies. In this paper, we describe a fusion engine architecture that combines multimodal and multisensor fusion within the Open Standard for Unattended Sensors (OSUS) framework. The modular, plug-and-play architecture of OSUS allows future fusion plugin methodologies to integrate seamlessly into the fusion architecture at both the conceptual and the implementation level. Although beyond the scope of this paper, this architecture also allows data and information manipulation and filtering for an array of applications.
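To make the idea of mission-modifiable fusion rules concrete, the sketch below shows one plausible shape such a plugin interface could take. This is purely illustrative and not the ARL/McQ implementation (OSUS plugins are actually Java-based); the `Detection`, `FusionRule`, and `noisy_or` names are hypothetical, chosen only to show how a combination rule can be swapped per mission.

```python
# Illustrative sketch only -- NOT the ARL/McQ fusion engine. It shows how a
# fusion engine might accept mission-selectable rules as plugins.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Detection:
    sensor_id: str     # e.g. "seismic-3" (hypothetical identifier)
    modality: str      # "seismic", "acoustic", "imaging", ...
    confidence: float  # 0.0 - 1.0
    timestamp: float   # seconds since mission start


# A fusion rule maps the current window of detections to a fused confidence.
FusionRule = Callable[[List[Detection]], float]


def noisy_or(dets: List[Detection]) -> float:
    """Combine independent detections with a noisy-OR rule."""
    miss = 1.0
    for d in dets:
        miss *= (1.0 - d.confidence)
    return 1.0 - miss


def require_two_modalities(dets: List[Detection]) -> float:
    """Report a target only if two different modalities agree."""
    modalities = {d.modality for d in dets if d.confidence > 0.5}
    return noisy_or(dets) if len(modalities) >= 2 else 0.0


class FusionEngine:
    """Holds the mission-selected rule; sensor plugins feed it detections."""

    def __init__(self, rule: FusionRule):
        self.rule = rule
        self.window: List[Detection] = []

    def report(self, det: Detection) -> float:
        """Accept a new detection and return the fused confidence so far."""
        self.window.append(det)
        return self.rule(self.window)
```

Under this sketch, changing mission behavior is a one-line swap of the rule object: an engine built with `require_two_modalities` reports 0.0 for a lone seismic detection, then jumps to the noisy-OR value once an acoustic detection corroborates it.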
The Open Standard for Unattended Sensors (OSUS) program, formerly named Terra Harvest, was launched in 2009 to develop an open, integrated battlefield unattended ground sensor (UGS) architecture that ensures interoperability among disparate UGS components and systems. McQ has developed a power-managed controller for OSUS: a rugged, fielded device that runs an embedded Linux operating system with an open Java software architecture, operates for over 30 days on a small battery pack, and provides critical functions including the required management, monitoring, and control functions. We present a system overview of the OSUS power-managed controller and discuss its design and compatibility with other systems.
Imagery has proven to be a valuable complement to Unattended Ground Sensor (UGS) systems. It provides ultimate verification of the nature of detected targets. However, due to the power, bandwidth, and technological limitations inherent to UGS, sacrifices have been made to the imagery portion of such systems. The result is that these systems produce lower resolution images in small quantities. Currently, a high resolution, wireless imaging system is being developed to bring megapixel, streaming video to remote locations to operate in concert with UGS. This paper will provide an overview of how using Wifi radios, new image based Digital Signal Processors (DSP) running advanced target detection algorithms, and high resolution cameras gives the user an opportunity to take high-powered video imagers to areas where power conservation is a necessity.
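The power-conservation idea above can be illustrated with a minimal sketch: a DSP-side trigger keeps the Wi-Fi radio off until the scene actually changes. This is a hypothetical frame-differencing example, not the advanced detection algorithms the paper describes; the function names and thresholds are invented for illustration.

```python
# Hypothetical sketch of a DSP-side transmit gate: frames are flattened
# grayscale pixel lists, and the radio wakes only when enough pixels change.
from typing import List


def changed_fraction(prev: List[int], curr: List[int], thresh: int = 25) -> float:
    """Fraction of pixels whose grayscale value changed by more than thresh."""
    changed = sum(1 for p, c in zip(prev, curr) if abs(p - c) > thresh)
    return changed / len(curr)


def should_transmit(prev: List[int], curr: List[int], area: float = 0.02) -> bool:
    """Wake the Wi-Fi radio only if enough of the scene changed."""
    return changed_fraction(prev, curr) > area
```

A static scene never crosses the `area` threshold, so the radio stays down and the battery budget is spent on sensing rather than streaming; a real DSP implementation would of course use far more robust detection than raw pixel differencing.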
Target images are very important for evaluating the situation when Unattended Ground Sensors (UGS) are deployed. These images add a significant amount of information for distinguishing hostile from non-hostile activities, counting the targets in an area, telling animals from people, characterizing the movement dynamics of targets, and determining when specific activities of interest are taking place. The imaging capability of a UGS system should deliver only images of target activity, not images with empty fields of view. Current UGS remote imaging systems are neither optimized for target processing nor low cost. In this paper, McQ describes an architectural and technological approach that significantly improves the processing of images to provide target information while reducing the cost of the intelligent remote imaging capability.
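The "target activity only" policy above can be sketched as a simple background-subtraction filter: the imager maintains a slowly adapting background estimate and forwards only the frames that deviate from it. This is an assumed, illustrative technique, not McQ's approach; the class name, thresholds, and update rate are all hypothetical.

```python
# Illustrative sketch (not McQ's algorithm): forward only frames that show
# apparent target activity, suppressing empty-field-of-view images.
from typing import List, Optional


class TargetFrameFilter:
    """Keeps a slow-moving background estimate of the scene and flags
    frames that deviate from it, so empty frames never use bandwidth."""

    def __init__(self, alpha: float = 0.05, pix_thresh: int = 30,
                 area_thresh: float = 0.01):
        self.alpha = alpha              # background update rate (assumed)
        self.pix_thresh = pix_thresh    # per-pixel change threshold
        self.area_thresh = area_thresh  # fraction of frame that must change
        self.background: Optional[List[float]] = None

    def has_target(self, frame: List[int]) -> bool:
        """True if the frame deviates enough from the learned background."""
        if self.background is None:
            self.background = [float(p) for p in frame]
            return False  # first frame only initializes the background
        fg = sum(1 for b, p in zip(self.background, frame)
                 if abs(p - b) > self.pix_thresh)
        # Nudge the background toward the current frame.
        self.background = [(1 - self.alpha) * b + self.alpha * p
                           for b, p in zip(self.background, frame)]
        return fg / len(frame) > self.area_thresh

    def filter(self, burst: List[List[int]]) -> List[List[int]]:
        """Forward only the frames containing apparent target activity."""
        return [f for f in burst if self.has_target(f)]
```

Given a burst of mostly static frames with one frame containing a bright intruder, the filter forwards only that frame, which is exactly the bandwidth- and cost-saving behavior the abstract calls for.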