The purpose of the NATO SET-153 field experiment was to provide an opportunity to demonstrate multiple sensor
technologies in an urban environment and determine integration capabilities for future development. The Army Research
Laboratory (ARL) experimental aerostat was used primarily as a persistent overwatch capability, substituting for a
UAV. Continuous video was recorded on the aerostat, and segments were captured of the ground scenarios that the
camera was manually tracking. Some of the segments showing scenario activities will be presented.
The captured pictures and video frames have telemetry in the headers that provides the UTC time, the Inertial
Navigation System (INS)/GPS location, the inertial roll, pitch, and yaw, and the camera gimbal pan and tilt
angles. The timing is useful for synchronizing the images with the scenario events, providing activity ground truth. The INS,
GPS, and camera gimbal angle values can be used with the acoustic solution for the location of a sound source to
determine the relative accuracy of the solution if the camera is pointed at the sound source. This method will be
confirmed by the use of a propane cannon whose GPS location is logged. During the field experiment, other interesting
acoustic events such as vehicle convoys, platoon level firefights with vehicles using blanks, and a UAV helicopter were
recorded and will be presented in a quick analysis.
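The pointing-accuracy check described above can be sketched in a few lines. The sketch below (Python; the function names and telemetry values are invented for illustration, and the actual ARL processing chain is not described here) compares the camera's absolute azimuth, formed from INS yaw plus gimbal pan, against the bearing from the aerostat's GPS position to the logged propane-cannon position, using a flat-earth approximation and ignoring roll/pitch coupling:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate bearing (degrees clockwise from north) from point 1
    to point 2 using a local flat-earth approximation -- adequate for
    the short ranges involved here."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
    return math.degrees(math.atan2(dlon, dlat)) % 360.0

def camera_azimuth_deg(ins_yaw_deg, gimbal_pan_deg):
    """Absolute camera azimuth: platform yaw plus gimbal pan, wrapped
    to [0, 360).  Roll/pitch coupling is ignored in this sketch."""
    return (ins_yaw_deg + gimbal_pan_deg) % 360.0

def azimuth_error_deg(a, b):
    """Smallest signed angular difference between two azimuths."""
    return (a - b + 180.0) % 360.0 - 180.0

# Hypothetical telemetry: aerostat camera pointed at a logged
# propane-cannon position.
cam_az = camera_azimuth_deg(ins_yaw_deg=87.0, gimbal_pan_deg=15.5)
true_az = bearing_deg(33.3000, -114.4000, 33.3050, -114.3940)
pointing_err = azimuth_error_deg(cam_az, true_az)
```

If the camera is centered on the cannon, `pointing_err` bounds the relative accuracy of the acoustic bearing solution.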
The U.S. Army Research Laboratory (ARL) has recently concluded a research experiment to study the benefits of
multimodal sensor fusion for improved hostile-fire-defeat (HFD) in an urban setting. This joint effort was led by ARL
in partnership with other R&D centers and private industry. The primary goals were to detect hostile-fire events (small
arms, mortars, rockets, IEDs) and hostile human activities by providing solutions before, during, and after the events;
to improve sensor networking technologies; to develop multimodal sensor data fusion; and to determine effective
dissemination techniques for the resultant actionable intelligence. The technologies evaluated included ultraviolet,
infrared, retro-reflection, visible, glint, Laser Detection and Ranging (LADAR), radar, acoustic, seismic, E-field,
magnetic, and narrowband emission sensing; all were found to provide useful performance. The experiment
demonstrated that combining data and information from diverse sensor modalities can significantly improve the accuracy of threat detections and the
effectiveness of the threat response. It also demonstrated that dispersing sensors over a wide range of platforms (fixed
site, ground vehicles, unmanned ground and aerial vehicles, aerostat, Soldier-worn) added flexibility and agility in
tracking hostile actions. In all, the experiment demonstrated that multimodal fusion will improve hostile event responses,
strike force efficiency, and force protection effectiveness.
A research-oriented Army Technology Objective (ATO) named Sensor and Information Fusion for
Improved Hostile Fire Situational Awareness uniquely focuses on the underpinning technologies to detect
and defeat any hostile threat before, during, and after its occurrence. This is a joint effort led by the
Army Research Laboratory, with the Armament Research, Development, and Engineering Center (ARDEC)
and the Communications-Electronics Research, Development, and Engineering Center (CERDEC) as
partners. It addresses distributed sensor
fusion and collaborative situational awareness enhancements, focusing on the underpinning technologies
to detect/identify potential hostile shooters prior to firing a shot and to detect/classify/locate the firing point
of hostile small arms, mortars, rockets, RPGs, and missiles after the first shot. A field experiment was
conducted that not only addressed diverse sensor-modality performance and sensor-fusion benefits, but
also gathered useful data to develop and demonstrate the ad hoc networking and dissemination of relevant
data and actionable intelligence. Represented at this field experiment were various sensor platforms
such as UGS, soldier-worn, manned ground vehicles, UGVs, UAVs, and helicopters. This ATO continues
to evaluate applicable technologies to include retro-reflection, UV, IR, visible, glint, LADAR, radar,
acoustic, seismic, E-field, narrow-band emission and image processing techniques to detect the threats
with very high confidence. Networked fusion of multi-modal data will reduce false alarms and improve
actionable intelligence by distributing grid coordinates, detection report features, and imagery of threats.
The detection and localization of hostile weapons firing has been demonstrated successfully with acoustic sensor
arrays on unattended ground sensors (UGS), ground vehicles, and unmanned aerial vehicles (UAVs). Some of the
more mature systems have demonstrated significant capabilities and provide direct support to ongoing counter-sniper
operations. The Army Research Laboratory (ARL) is conducting research and development for a helmet-mounted
system to acoustically detect and localize small arms firing and other events such as RPGs, mortars, and explosions, as
well as other non-transient signatures. Since today's soldier is quickly being asked to take on more and more
reconnaissance, surveillance, and target acquisition (RSTA) functions, sensor augmentation enables him to become a
mobile and networked sensor node on the complex and dynamic battlefield. Having a body-worn threat detection and
localization capability for events that pose an immediate danger to the soldiers around him can significantly enhance
their survivability and lethality, as well as enable him to provide and use situational awareness clues on the networked
battlefield. This paper addresses some of the difficulties encountered by an acoustic system in an urban environment.
Complex reverberation, multipath, diffraction, and signature masking by building structures make this a very harsh
environment for robust detection and classification of shockwaves and muzzle blasts. Multifunctional acoustic
detection arrays can provide persistent surveillance and enhanced situational awareness for every soldier.
The Army Research Laboratory (ARL) has conducted experiments using acoustic sensor arrays
suspended below tethered aerostats to detect and localize transient signals from mortars, artillery, and small
arms fire. The airborne acoustic sensor array calculates an azimuth and elevation to the originating transient,
and immediately cues a collocated imager to capture the remaining activity at the site of the acoustic
transient. This single array's vector solution defines a ground-intersect region or grid coordinate for threat
reporting. Unattended ground sensor (UGS) systems can augment aerostat arrays by providing additional
solution vectors from several ground-based acoustic arrays to perform a 3D triangulation on a source
location. The aerostat array's advantage over ground systems is that it is not as affected by diffraction and
reflection from man-made structures, trees, or terrain, and has direct line-of-sight to most events.
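The ground-intersect calculation described above is a short geometry exercise: intersect the line of bearing from the elevated array with the ground plane. The sketch below (Python; the function name, local-coordinate frame, and flat-ground assumption are mine, not from the source) illustrates it:

```python
import math

def ground_intersect(x, y, alt_m, az_deg, el_deg):
    """Intersect a line-of-bearing from an elevated acoustic array with
    flat ground (z = 0).  az_deg is clockwise from north; el_deg is the
    depression angle below horizontal (positive looking down).  Returns
    (x, y) in the same local frame as the array position, or None if
    the ray never reaches the ground."""
    if el_deg <= 0:
        return None  # ray is horizontal or pointing up
    # Horizontal range from the array to the ground intersection.
    slant = alt_m / math.tan(math.radians(el_deg))
    az = math.radians(az_deg)
    return (x + slant * math.sin(az), y + slant * math.cos(az))
```

For example, an array at 1000 m altitude reporting an event due east at a 45-degree depression angle places the source 1000 m east of the tether point; UGS-supplied bearings can then refine this single-array estimate by triangulation.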
In this paper, we discuss the NATO Task Group 53 (TG-53) joint field experiment on acoustic detection of weapon firing, held at Yuma Proving Ground from 31 October to 4 November 2005. The participating NATO countries included France, the Netherlands, the UK, and the US. The objectives of the joint experiment were: (i) to collect acoustic signatures of direct and indirect firings from weapons such as sniper rifles, mortars, artillery, and C4 explosives and (ii) to share signatures among NATO partners from a variety of acoustic sensing platforms on the ground and in the air distributed over a wide area.
Acoustic sensors mounted to a tethered aerostat detect and localize transient signals from mortars, artillery, C-4, propane cannons, and small arms fire. Significant enhancements to soldier lethality and survivability can be gained when using the aerostat array to detect, localize, and cue an aerial imager to a weapon's launch site, or when using the aerostat's instantaneous position and orientation to calculate a vector solution to the ground coordinates of the launch site for threat neutralization. The prototype aerostat-mounted array was tested at Yuma Proving Ground (YPG) as part of the NATO TG-53 signature collection exercise. Acoustic waveform data was collected simultaneously with aerostat and ground-based sensor arrays for comparing wind noise, signal-to-noise-related parameters, and atmospheric effects on propagation to an elevated array. A test description and summary of localization accuracy will be presented for various altitudes, ranges to target, and differing meteorological conditions.
As the Army transforms to the Future Force, particular attention must be paid to operations in Complex and Urban Terrain. Because our adversaries realize that we don't have battlefield dominance in the urban environment, and because population growth and migration to urban environments are still on the increase, our adversaries will continue to draw us into operations in the urban environment. The Army Research Laboratory (ARL) is developing technology to equip our soldiers for the urban operations of the future. Sophisticated small robotic platforms with diverse sensor suites will be an integral part of the Future Force, and must be able to collaborate not only amongst themselves but also with their manned partners. The use of acoustic sensors on robotic platforms, as shown in this paper, will greatly aid the soldiers of the future force in performing numerous types of missions, including Reconnaissance, Surveillance, and Target Acquisition (RSTA), by providing situational awareness, particularly to the dismounted soldier operating in the urban environment. The work conducted by the Army Research Laboratory and discussed in this paper will be transitioned to the FCS Small Unmanned Ground Vehicle (SUGV) program and FFW. The Army Research Laboratory is already working with these programs to ensure a feasible migration path. This paper focuses on four areas relating to acoustic sensing on robots for the urban environment, as demonstrated at the DoD Horizontal Fusion Portfolio's Warriors Edge (WE) Quantum Leap II (QL II) demonstration at Ft. Benning, GA, in August 2004: small (man-portable) robot detection, mule-sized robot detection, sensor fusion across multiple platforms, and soldier/robot team interaction.
The future battlefield will require an unprecedented level of automation in which soldier-operated, autonomous, and semi-autonomous ground, air, and sea platforms along with mounted and dismounted soldiers will function as a tightly coupled team. Sophisticated robotic platforms with diverse sensor suites will be an integral part of the Objective Force, and must be able to collaborate not only amongst themselves but also with their manned partners. The Army Research Laboratory has developed a robot-based acoustic detection system that will detect and localize an impulsive noise event, such as a sniper's weapon firing. Additionally, acoustic sensor arrays worn on a soldier's helmet or equipment can enhance his situational awareness and RSTA capabilities. The Land Warrior or Objective Force Warrior body-worn computer can detect tactically significant impulsive signatures from bullets, mortars, artillery, and missiles, or spectral signatures from tanks, helicopters, UAVs, and mobile robots. Time-difference-of-arrival techniques can determine a sound's direction of arrival, while head attitude sensors can instantly determine the helmet orientation at time of capture. With precision GPS location of the soldier, along with the locations of other soldiers, robots, or unattended ground sensors that heard the same event, triangulation techniques can produce an accurate location of the target. Data from C-4 explosions and 0.50-caliber shots show that both helmet and robot systems can localize on the same event. This provides an awesome capability: mobile robots and soldiers working together on an ever-changing battlespace to detect the enemy and improve the survivability, mobility, and lethality of our future warriors.
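The time-difference-of-arrival idea mentioned above can be illustrated in its simplest form with a two-microphone baseline. The sketch below (Python/NumPy; a far-field, single-source toy with names of my choosing, not the fielded helmet or robot algorithm) estimates the delay by cross-correlation and converts it to an angle of arrival:

```python
import numpy as np

def tdoa_bearing(sig_a, sig_b, fs, mic_sep_m, c=343.0):
    """Estimate direction of arrival from the time-difference-of-arrival
    between two microphones separated by mic_sep_m along a baseline.
    Returns the angle (degrees) between the baseline and the incoming
    wavefront.  Far-field, single-source sketch only."""
    # Full cross-correlation; the peak index gives the sample delay.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    tau = lag / fs                                  # delay in seconds
    # Far-field geometry: tau * c = mic_sep * cos(theta).
    cos_theta = np.clip(tau * c / mic_sep_m, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```

A zero delay means the wavefront arrived broadside (90 degrees to the baseline); with several such baselines at known GPS positions, the bearings can be triangulated to a point, as the abstract describes.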
The Army Research Laboratory has developed body-contacting acoustic sensors that detect diverse physiological sounds such as heartbeats and breaths, high-quality speech, and activity. These sensors use an acoustic impedance-matching gel contained in a soft, compliant pad to enhance the body-borne sounds, yet significantly repel airborne noises due to an acoustic impedance mismatch. The signals from such a sensor can be used as a microphone with embedded physiology, or a dedicated digital signal processor can process packetized data to separate physiological parameters from voice, and log parameter trends for performance surveillance. Acoustic sensors were placed inside soldier helmets to monitor voice, physiology, activity, and situational awareness clues such as bullet shockwaves from sniper activity and explosions. The sensors were also incorporated into firefighter breathing masks, neck and wrist straps, and other protective equipment. Heart rate, breath rate, blood pressure, voice, and activity can be derived from these sensors (reports at www.arl.army.mil/acoustics). Having numerous sensors at various locations provides a means for array processing to reduce motion artifacts, calculate pulse transit time for passive blood pressure measurement, and locate the origin of blunt/penetrating traumas such as ballistic wounding. These types of sensors give us the ability to monitor soldiers and civilian emergency first-responders in demanding environments, and provide vital signs information to assess their health status and how that person is interacting with the environment and mission at hand. The Objective Force Warrior, Scorpion, Land Warrior, Warrior Medic, and other military and civilian programs can potentially benefit from these sensors.
Acoustic sensors have been used to monitor firefighter and soldier physiology to assess health and performance. The Army Research Laboratory has developed a unique body-contacting acoustic sensor that can monitor the health and performance of firefighters and soldiers while they perform their mission. A gel-coupled sensor has acoustic impedance properties similar to the skin that facilitate the transmission of body sounds into the sensor pad, yet significantly repel ambient airborne noises due to an impedance mismatch. This technology can monitor heartbeats, breaths, blood pressure, motion, voice, and other indicators that can provide vital feedback to the medics and unit commanders. Diverse physiological parameters can be continuously monitored with acoustic sensors and transmitted for remote surveillance of personnel status. Body-worn acoustic sensors located at the neck, breathing mask, and wrist do an excellent job of detecting heartbeats and activity. However, they have difficulty extracting physiology during rigorous exercise or movement due to the motion artifacts sensed. Rigorous activity often indicates that the person is healthy by virtue of being active, and injury often causes the subject to become less active or incapacitated, making the detection of physiology easier. One important measure of performance, heart rate variability, is the measure of beat-to-beat timing fluctuations derived from the intervals between adjacent beats. The Lomb periodogram is optimized for non-uniformly sampled data, and can be applied to non-stationary acoustic heart rate features (such as 1st and 2nd heart sounds) to derive heart rate variability and help eliminate errors created by motion artifacts. Simple peak detection above or below a certain threshold, or waveform derivative parameters, can produce the timing and amplitude features necessary for the Lomb periodogram and cross-correlation techniques.
High-amplitude motion artifacts may contribute to a different frequency or baseline noise due to the timing differences between the noise artifacts and heartbeat features. Data from a firefighter experiment is presented.
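As a concrete illustration of the Lomb approach described above, the sketch below (Python, using SciPy's `lombscargle`; the function name, frequency band, and parameter choices are mine, not taken from the experiment) computes a spectrum of the inter-beat-interval series directly from non-uniform beat times, with no resampling:

```python
import numpy as np
from scipy.signal import lombscargle

def hrv_spectrum(beat_times_s, f_lo=0.04, f_hi=0.4, n_freqs=200):
    """Lomb periodogram of the inter-beat-interval series built from
    (non-uniform) beat arrival times -- no resampling needed.
    Returns (freqs_hz, power) over the conventional LF/HF band."""
    beat_times_s = np.asarray(beat_times_s, dtype=float)
    ibi = np.diff(beat_times_s)          # inter-beat intervals, seconds
    t = beat_times_s[1:]                 # time tag of each interval
    y = ibi - ibi.mean()                 # remove the DC term before Lomb
    freqs_hz = np.linspace(f_lo, f_hi, n_freqs)
    # lombscargle expects angular frequencies (rad/s).
    power = lombscargle(t, y, 2 * np.pi * freqs_hz)
    return freqs_hz, power
```

Because the periodogram operates on the irregular beat times directly, beats missed or mistimed by motion artifacts perturb the spectrum less than they would a resampled, uniformly spaced series.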
Sophisticated robotic platforms with diverse sensor suites are quickly replacing the eyes and ears of soldiers on the complex battlefield. The Army Research Laboratory (ARL) in Adelphi, Maryland has developed a robot-based acoustic detection system that will detect an impulsive noise event, such as a sniper's weapon firing or a door slam, and activate a pan-tilt to orient a visible and infrared camera toward the detected sound. Once the cameras are cued to the target, onboard image processing can then track the target and/or transmit the imagery to a remote operator for navigation, situational awareness, and target detection. Such a vehicle can provide reconnaissance, surveillance, and target acquisition for soldiers, law enforcement, and rescue personnel, and remove these people from hazardous environments. ARL's primary robotic platforms contain 16-in.-diameter, eight-element acoustic arrays. Additionally, a 9-in. array is being developed in support of DARPA's Tactical Mobile Robot program. The robots have been tested in both urban and open terrain. The current acoustic processing algorithm has been optimized to detect the muzzle blast from a sniper's weapon, and reject many interfering noise sources such as wind gusts, generators, and self-noise. However, other detection algorithms for speech and vehicle detection/tracking are being developed for implementation on this and smaller robotic platforms. The collaboration between two robots, both with known positions and orientations, can provide useful triangulation information for more precise localization of the acoustic events. These robots can be mobile sensor nodes in a larger, more expansive sensor network that may include stationary ground sensors, UAVs, and other command and control assets. This report will document the performance of the robot's acoustic localization, describe the algorithm, and outline future work.
An acoustic sensor attached to a person's neck can extract heart and breath sounds, as well as voice and other physiology related to their health and performance. Soldiers, firefighters, law enforcement, and rescue personnel, as well as people at home or in health care facilities, can benefit from being remotely monitored. ARL's acoustic sensor, when worn around a person's neck, picks up the carotid artery and breath sounds very well by matching the sensor's acoustic impedance to that of the body via a gel pad, while airborne noise is minimized by an impedance mismatch. Although the physiological sounds have high SNR, the acoustic sensor also responds to motion-induced artifacts that obscure the meaningful physiology. To exacerbate signal extraction, these interfering signals are usually covariant with the heart sounds, in that as a person walks faster the heart tends to beat faster, and motion noises tend to contain low-frequency components similar to the heart sounds. A noise-canceling configuration developed by ARL uses two acoustic sensors on the front sides of the neck as physiology sensors, and two additional acoustic sensors on the back sides of the neck as noise references. Breath and heart sounds, which occur with near symmetry and simultaneously at the two front sensors, will correlate well. The motion noise present on all four sensors will be used to cancel the noise on the two physiology sensors. This report will compare heart rate variability derived from both the acoustic array and from ECG data taken simultaneously during a treadmill test. Acoustically derived breath rate and volume approximations will be introduced as well. A miniature 3-axis accelerometer on the same neckband provides additional noise references to validate footfall and motion activity.
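The front/back noise-cancelling configuration can be illustrated with a standard least-mean-squares (LMS) adaptive canceller, in which a reference (noise-only) channel is filtered to estimate and subtract the noise on a physiology channel. The sketch below (Python/NumPy; a generic textbook LMS with my own names and step size, not ARL's actual processing) shows the core update:

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.01):
    """Adaptive noise cancellation: an LMS filter learns the transfer
    from the reference (noise-only) sensor to the noise component of
    the primary sensor; the filter error is the cleaned physiology."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps - 1, len(primary)):
        # Tap-delay vector, most recent reference sample first.
        x = reference[n - n_taps + 1:n + 1][::-1]
        noise_est = w @ x
        e = primary[n] - noise_est      # error = physiology estimate
        w += mu * e * x                 # LMS weight update
        out[n] = e
    return out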
This acoustic mine detection system uses an acoustic array of hydrophones embedded within a unique fluid-coupling structure that deforms to the ground contours and has an acoustic impedance comparable to that of the ground to facilitate energy transfer and eliminate losses at the air ground interface. Broadband and impulsive acoustic array techniques are used to localize buried objects and interpret the buried object's surrounding. The goal of this system is a low-cost, hand-held mine detector that rolls or slides across the ground, suitable for a soldier ti inspect and clear a two-foot wide path. The array contains sensor and sound source, which send out various acoustic waveforms and analyzes the returning echoes and emissions to determine if an object buried below the surface has affected the propagating sound. The sensor on the array remain in a fixed linear geometry hovering over the ground to facilitate beamforming while eliminating the huge losses associated with coupling airborne sounds to the ground. Reflections at material discontinuities, as well as mine shape, materials, and depth contribute to the variations of the induced and resultant sound field. Preliminary data is presented that shows detections of underground objects, and a discussion of future efforts, to include further processing of the data introduced in this report.
An acoustic sensor array that cues an imaging system on a small tele- operated robotic vehicle was used to detect human voice and activity inside a building. The advantage of acoustic sensors is that it is a non-line of sight (NLOS) sensing technology that can augment traditional LOS sensors such as visible and IR cameras. Acoustic energy emitted from a target, such as from a person, weapon, or radio, will travel through walls and smoke, around corners, and down corridors, whereas these obstructions would cripple an imaging detection system. The hardware developed and tested used an array of eight microphones to detect the loudest direction and automatically setter a camera's pan/tilt toward the noise centroid. This type of system has applicability for counter sniper applications, building clearing, and search/rescue. Data presented will be time-frequency representations showing voice detected within rooms and down hallways at various ranges. Another benefit of acoustics is that it provides the tele-operator some situational awareness clues via low-bandwidth transmission of raw audio data for the operator to interpret with either headphones or through time-frequency analysis. This data can be useful to recognize familiar sounds that might indicate the presence of personnel, such as talking, equipment, movement noise, etc. The same array also detects the sounds of the robot it is mounted on, and can be useful for engine diagnostics and trouble shooting, or for self-noise emanations for stealthy travel. Data presented will characterize vehicle self noise over various surfaces such as tiles, carpets, pavement, sidewalk, and grass. Vehicle diagnostic sounds will indicate a slipping clutch and repeated unexpected application of emergency braking mechanism.
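The "loudest direction" cue described above can be approximated with a simple energy centroid over the circular array. The sketch below (Python/NumPy; a toy illustration of the steering idea, not the deployed algorithm) weights a unit vector at each microphone's angle by that channel's short-time energy and steers toward the vector sum:

```python
import numpy as np

def loudest_direction(frames, mic_angles_deg):
    """Estimate the bearing of the dominant noise source from the
    per-channel short-time energy of a circular microphone array.
    Each channel's energy weights a unit vector at its mic angle; the
    vector sum (circular centroid) gives the pan/tilt steering command."""
    energy = np.sum(np.asarray(frames, dtype=float) ** 2, axis=1)
    ang = np.radians(mic_angles_deg)
    cx = np.sum(energy * np.cos(ang))
    cy = np.sum(energy * np.sin(ang))
    return float(np.degrees(np.arctan2(cy, cx))) % 360.0
```

Summing vectors rather than simply picking the loudest channel interpolates between microphones, so the camera can be steered to a bearing finer than the 45-degree spacing of an eight-element array.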