The diagnostics capability of combat systems shall be compatible with the Army Diagnostic Improvement Program. Present systems are capable of performing health monitoring and health checks using internal embedded resources. They employ standard sensors and data buses that monitor data signals and built-in test (BIT). These devices provide a comprehensive source of data for accurate system-level diagnostics and fault isolation at the line replaceable unit (LRU) level. Prognostics routines provide the capability to identify the cause of a predicted failure and the corrective action needed to prevent an unscheduled maintenance action. The combat system's health status and prognostic information are displayed to the operator, crew, and maintenance personnel. Present systems use a common data/information interchange network, in accordance with standards defined in the Joint Technical Architecture (JTA), to provide access to the vehicle's health data. Technologies utilized in present systems include embedded diagnostics, the combat maintainer, and the schematic viewer; implementation of these technologies has significantly reduced the maintenance hours of combat systems. Health monitoring, diagnostics, and prognostics in future systems will use a federated software-and-probes approach. Gauges will determine whether the system operates within acceptable performance bands by monitoring data provided by the probes. The health monitoring system will use mission models to make intelligent choices that consider task criticality.
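The probe/gauge scheme described above can be sketched as follows. This is an illustrative model only, not the fielded implementation: the gauge names, performance bands, and criticality weights are hypothetical stand-ins for mission-derived values.

```python
# Hypothetical sketch of a probe/gauge health-monitoring loop: probes
# report sensor readings, gauges check them against acceptable
# performance bands, and mission criticality weights the overall score.
from dataclasses import dataclass

@dataclass
class Gauge:
    name: str
    low: float          # lower bound of the acceptable performance band
    high: float         # upper bound of the acceptable performance band
    criticality: float  # mission-derived weight in [0, 1]

    def in_band(self, reading: float) -> bool:
        return self.low <= reading <= self.high

def health_score(gauges, probe_readings):
    """Criticality-weighted fraction of gauges whose reading is in band."""
    total = sum(g.criticality for g in gauges)
    ok = sum(g.criticality for g in gauges
             if g.in_band(probe_readings[g.name]))
    return ok / total if total else 1.0

gauges = [
    Gauge("engine_temp_C", 60, 110, criticality=0.9),
    Gauge("bus_voltage_V", 24, 30, criticality=0.6),
]
readings = {"engine_temp_C": 95.0, "bus_voltage_V": 22.5}
# Engine is in band, bus voltage is not: score = 0.9 / 1.5
print(health_score(gauges, readings))
```

A mission model would adjust each gauge's criticality weight as the task changes, so the same out-of-band reading can matter more or less depending on the current mission phase.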
Visible, infrared (IR), and sensor-fused imagery of scenes containing occluded, camouflaged threats is compared on a two-dimensional (2D) display and a three-dimensional (3D) display. The 3D display is compared alongside a 2D monitor for hit-and-miss differences in the probability of detecting objects; response times are also measured. Image fusion is achieved using a Gaussian-Laplacian pyramidal approach with wavelets for edge enhancement. Detecting potential threats that are camouflaged or difficult to see is important not only for military acquisition problems but also for crowd surveillance and tactical uses such as border patrol. Imaging and display technologies that take advantage of 3D and sensor fusion will be discussed.
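The pyramidal fusion idea can be illustrated with a minimal one-level sketch: decompose each image into a coarse base and a detail (Laplacian) band, average the bases, and keep the stronger detail at each pixel. This is only a toy version, assuming a 2x2 box filter and nearest-neighbour upsampling; the paper's multi-level pyramid and wavelet edge enhancement are omitted.

```python
# Minimal one-level Laplacian-pyramid fusion sketch (illustrative only).
import numpy as np

def down2(img):
    """Crude pyramid step: 2x2 box-filter downsampling."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def up2(img, shape):
    """Nearest-neighbour upsampling back to `shape`."""
    return np.repeat(np.repeat(img, 2, 0), 2, 1)[:shape[0], :shape[1]]

def fuse(a, b):
    """Average the coarse bases; keep the larger-magnitude detail."""
    base_a, base_b = down2(a), down2(b)
    det_a = a - up2(base_a, a.shape)
    det_b = b - up2(base_b, b.shape)
    base = up2((base_a + base_b) / 2.0, a.shape)
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return base + detail

vis = np.random.rand(8, 8)  # stand-ins for visible and IR frames
ir = np.random.rand(8, 8)
fused = fuse(vis, ir)
print(fused.shape)  # (8, 8)
```

Selecting the maximum-magnitude detail coefficient is what lets edges visible in only one sensor band (e.g. a warm target in IR) survive into the fused image.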
The proposed technology is Dependable Automated Reconfigurable Software (DARTS). The DARTS health and situation control continually tests the processing elements with probe/agent technology. Algorithms within the Health & Situation Control assess the health of the processors using a criticality scoring system that considers mission requirements. Probes launched by the DARTS Controller query processing elements; the probed data is sent to a gauge that has a variable sensitivity, or gain. Statistical usage models and criticality scoring control the sensitivity of the gauge. In response to the gauge, the replicating process launches agents that can insert anomalous events for diagnostic purposes. In this context, a probe is a subset of an agent, having only the ability to query without affecting the framework, I/O protocol, or quality of service. Each weapon system fitted with a DARTS Controller will control self-repair and reconfiguration of on-board processors using a statistically based intelligent scoring system that considers the criticality of each function in the current battlefield situation. DARTS is a software system that enhances the performance of a weapon system by providing on-the-fly reconfiguration to accommodate the loss or malfunction of processing elements or to optimize onboard performance capability.
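The reconfiguration step can be sketched as a simple criticality-driven reassignment: when a processing element fails, the functions it hosted are moved to surviving elements, most critical first. This is a toy illustration under assumed data structures (function-to-processor maps and slot capacities), not the DARTS Controller's actual algorithm.

```python
# Illustrative criticality-driven reconfiguration after a processor failure.
def reconfigure(assignments, capacity, failed, criticality):
    """assignments: {function: processor}; capacity: {processor: slots}.
    Reassigns functions hosted on `failed`, highest criticality first."""
    free = dict(capacity)
    for func, proc in assignments.items():
        if proc != failed:
            free[proc] -= 1
    displaced = sorted((f for f, p in assignments.items() if p == failed),
                       key=lambda f: criticality[f], reverse=True)
    new = {f: p for f, p in assignments.items() if p != failed}
    for func in displaced:
        # Place the most critical functions first; lower-priority
        # functions are shed if no slots remain.
        target = max((p for p in free if p != failed and free[p] > 0),
                     default=None, key=lambda p: free[p])
        if target is not None:
            new[func] = target
            free[target] -= 1
    return new

assignments = {"fire_control": "P1", "nav": "P2", "comms": "P2"}
capacity = {"P1": 2, "P2": 2, "P3": 1}
crit = {"fire_control": 0.95, "nav": 0.7, "comms": 0.4}
# P2 fails: nav (more critical) takes P1's spare slot, comms moves to P3.
print(reconfigure(assignments, capacity, "P2", crit))
```

In the actual system the criticality weights would come from the battlefield-situation scoring described above, so the same failure can produce different reassignments in different mission phases.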
The fusion of visual and infrared sensor images of potential driving hazards in static IR and visual scenes is computed using the Fuzzy Logic Approach (FLA). The FLA is presented as a new method for combining images from different sensors to achieve an image that displays more information than either image separately. Fuzzy logic is a modeling approach that encodes expert knowledge directly and easily using rules; with membership functions designed for the data set under study, the FLA can model and interpolate to enhance the contrast of the imagery. The Mamdani model is used to combine the images. The fused sensor images are evaluated with metrics that measure the increased perception of a driving hazard in the sensor-fused image, and the metrics are correlated with experimental rankings of image quality. A data set containing IR and visual images of driving hazards under different atmospheric contrast conditions is fused using the FLA. A holographic matched-filter method (HMFM) is used to scan some of the more difficult images for automated detection. The image rankings are obtained by presenting the imagery to subjects in the TARDEC Visual Perception Lab (VPL), and the probability of detecting a driving hazard is computed from the observer-test data. The matched filter is implemented for driving-hazard recognition with a spatial filter designed to emulate holographic methods. One possible automatic target recognition device implements a digital/optical cross-correlator that would process sensor-fused images of targets; such a device may be useful for enhanced automotive vision or military signature recognition of camouflaged vehicles. A textured clutter metric is also compared to the experimental rankings.
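The flavor of rule-based fusion can be shown with a toy per-pixel example: fuzzify the local contrast of each sensor with triangular membership functions, apply rules of the form "if IR contrast is high, favor IR", and defuzzify to a blending weight. This is not TARDEC's FLA; the membership-function breakpoints are invented, and a simple weighted average replaces full Mamdani centroid defuzzification.

```python
# Toy fuzzy-rule pixel fusion (illustrative only, simplified from Mamdani).
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuse_pixel(vis, ir, vis_contrast, ir_contrast):
    """Blend one visual and one IR pixel by fuzzified local contrast."""
    high_ir = tri(ir_contrast, 0.3, 1.0, 1.7)   # membership in "high contrast"
    high_vis = tri(vis_contrast, 0.3, 1.0, 1.7)
    # Rule 1: IR contrast high  -> weight toward the IR pixel.
    # Rule 2: visual contrast high -> weight toward the visual pixel.
    if high_ir + high_vis == 0:
        w = 0.5  # no rule fires: fall back to an even blend
    else:
        w = high_ir / (high_ir + high_vis)
    return w * ir + (1 - w) * vis

# IR contrast is fully "high", visual is not, so the IR pixel dominates.
print(fuse_pixel(0.2, 0.8, vis_contrast=0.3, ir_contrast=1.0))  # -> 0.8
```

The appeal of the fuzzy formulation noted in the abstract is visible even here: the rules read as expert statements about when each sensor is trustworthy, and the membership functions interpolate smoothly between them rather than switching at a hard threshold.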