We are developing a multi-sensor, multi-look Artificial Intelligence Enhanced Information Processor (AIEIP) that synergistically combines classification elements of geometric hashing, neural networks, and evolutionary algorithms. The fusion is coordinated using a piecewise level fusion algorithm that operates on probability data derived from the statistics of the individual classifiers. Further, the AIEIP incorporates a knowledge-based system to aid a user in evaluating target data dynamically. The AIEIP is intended as a semi-autonomous system that not only fuses information from electronic data sources but can also incorporate human input derived from battlefield awareness and intelligence sources. The system would be useful either in advanced reconnaissance information fusion tasks, where multiple fixed sensors and human observer inputs must be combined, or in a dynamic fusion scenario incorporating an unmanned vehicle swarm with multiple dynamic sensor data inputs. This paper presents our initial results from experiments and data analysis using the individual components of the AIEIP on FLIR target sets of ground vehicles.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/Open Athens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
A wide variety of concealed weapon detection systems are being investigated to determine the potential payoffs of employing these sensors to detect weapons concealed under a person's clothing. The enabling sensing mechanisms being studied include infrared, acoustic, millimeter wave, and X-ray sensors. The primary emphasis of this paper is on infrared. A new technique for processing sensor data by partitioning non-homogeneous images into homogeneous regions for later detection and identification processing is presented. This method is named Automated Statistical Characterization and Partitioning of Environments (A'SCAPE). Through its mapping process, A'SCAPE enables image enhancement for reliable detection and identification of weapons concealed under varying layers of clothing. Sensor fusion, employing a variety of sensors, is another enabling technology for concealed weapon detection (CWD). Concepts for experiments and analysis are discussed to determine the feasibility of sensor fusion approaches for CWD.
In an aircraft application there are a number of limitations on a sensor system (limited computational capability, limited data bus bandwidth, etc.), as well as a number of requirements to fulfill to make the sensor system robust against jamming and other kinds of disturbances. We present a new approach to decentralized sensor fusion and tracking, where `decentralized' means that a considerable amount of pre-computation is performed in the sensors. The novelty of the approach lies in the way information is fed back from the central fusion unit that handles the information delivered by the sensors. The information is fed back to the sensors to enable improved tracking performance, while not contaminating the sensors in case the central fusion information is distorted by jamming. The algorithm is realized using an extended version of the multiple-model algorithm AFMM (the same class of algorithms as the IMM), in combination with a decentralized Kalman filtering scheme. The extension of AFMM consists of the possibility to include external hypotheses in the regular split-and-prune mechanism. In this way the external information is treated the same way as a prior in a Kalman filtering scheme, and thus the global track estimate is confronted with the sensor's own view before being accepted by the sensor. In the paper the performance of the tracking algorithm is evaluated in a system consisting of a radar and an infrared search and track sensor.
The Western European Union Satellite Center (WEUSC) operationally exploits multisensor data for security-oriented applications using data fusion techniques. Fused data can contribute to improved interpretation capabilities and more reliable results, since data with different characteristics are combined. The input images vary in spectral and spatial resolution as well as in time, and therefore give a more complete view of the objects observed. This paper outlines a research project initiated by WEUSC with the aim of demonstrating the benefit of data fusion using data from visible/infrared and synthetic aperture radar satellite sensors with regard to improved visual image interpretation. All three processing levels of data fusion are considered, i.e., pixel-based, feature-based, and decision-based. Using advanced analytical or numerical data fusion techniques, the data are processed for visual and semi-automatic interpretation to extract and analyze features of interest, in particular man-made objects such as airfields, vehicles, and infrastructure. After a short description of the WEUSC framework, an introduction to the data fusion demonstrator is given. The paper continues with a description of the methodology, implementation, and first results obtained. It concludes with an evaluation of the experiences gained.
In several recent papers we have shown how random set theory provides a theoretically rigorous foundation for much of data fusion. An important missing piece in our approach has been the problem of how to incorporate observations which are ambiguous (e.g., imprecise, fuzzy/vague, contingent, etc.) into conventional Bayesian estimation and filtering theory. If one can do this, the fusion of precise observations with ambiguous observations, generated by dynamic (i.e., moving) targets, becomes possible using a familiar Bayes-Markov nonlinear filtering approach. This paper sketches the basis for such fusion if one assumes that both the observation space and the state space are finite.
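As a concrete illustration of filtering on finite spaces, a single predict-update cycle of a discrete Bayes-Markov filter can be sketched as follows. The transition and likelihood numbers are invented for the example; nothing here is taken from the paper's models.

```python
# One predict-update cycle of a Bayes-Markov filter when both the state
# space and the observation space are finite (illustrative numbers only).

def bayes_filter_step(prior, transition, likelihood_col):
    """prior[i]          : P(x_{k-1} = i)
       transition[i][j]  : P(x_k = j | x_{k-1} = i)
       likelihood_col[j] : P(z_k | x_k = j)"""
    n = len(prior)
    # Predict: propagate the prior through the Markov transition model.
    predicted = [sum(prior[i] * transition[i][j] for i in range(n))
                 for j in range(n)]
    # Update: weight by the observation likelihood and renormalize.
    unnorm = [predicted[j] * likelihood_col[j] for j in range(n)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Two-state example: the target is in cell A or cell B.
prior = [0.5, 0.5]
transition = [[0.9, 0.1], [0.2, 0.8]]   # the target mostly stays put
likelihood = [0.7, 0.2]                 # the sensor return looks "A-like"
posterior = bayes_filter_step(prior, transition, likelihood)
```

With an ambiguous observation, the point likelihoods above would be replaced by a generalized likelihood over subsets of the observation space, but the normalization step is unchanged.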
This paper is concerned with model-based parameter estimation for noisy processes when the process models are incomplete or imprecise. The underlying representation of our models is qualitative in the sense of Interval Arithmetic and of Qualitative Reasoning and Qualitative Physics from the Artificial Intelligence literature. We adopt a specific qualitative representation, namely that advocated by Kuipers, in which a well-defined mathematical description of a qualitative model is given in terms of operations on intervals of the reals. We investigate a weighted opinion pool formalism for multi-sensor data fusion, develop a definition of unbiased estimation on quantity spaces, and derive a consistent mass assignment function for mean estimators for two-state systems. This is extended to representations involving more than two states by utilizing the relationships between coarse (i.e., two-state) and fine (i.e., N-state) representations explored by Shafer. We then generalize the Dempster-Shafer theory of evidence to a finite set of theories and show how an extreme theory can be used to develop minimum-mean-square-error mean estimators applicable to situations with correlated noise. We demonstrate our theory using real data from a mobile robot application which utilizes sonar and laser time-of-flight and gyroscope information to determine surface curvature.
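The weighted opinion pool mentioned above can be illustrated, in its simplest linear form, by a convex combination of per-sensor probability distributions. The weights, class labels, and numbers below are hypothetical, not taken from the paper.

```python
# A minimal linear (weighted) opinion pool over sensor reports.

def linear_opinion_pool(distributions, weights):
    """Fuse per-sensor probability distributions by a convex combination."""
    total_w = sum(weights)
    n = len(distributions[0])
    return [sum(w * d[k] for d, w in zip(distributions, weights)) / total_w
            for k in range(n)]

# Three sensors report beliefs over two hypothetical surface-curvature
# classes ("planar" vs "curved").
reports = [[0.8, 0.2], [0.6, 0.4], [0.9, 0.1]]
weights = [0.5, 0.2, 0.3]   # e.g. chosen inversely to sensor noise
fused = linear_opinion_pool(reports, weights)
```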
Communications management applied to a simulated decentralized target identification system is described. Two communications management algorithms are compared: one based on an intelligent, information-theoretic approach, the other on a non-intelligent algorithm. The investigation is concerned with the performance of the algorithms as the communication bandwidth between the nodes of the decentralized system is constrained and the number of targets being observed is varied. The results show that the intelligent algorithm outperforms the non-intelligent algorithm, at the cost of increased computational requirements.
A recursive multisensor association algorithm has been developed based on fuzzy logic. It simultaneously determines fuzzy grades of membership and fuzzy cluster centers. It is capable of associating data from various sensor types and, in its simplest form, makes no assumption about noise statistics as many association algorithms do. The algorithm is capable of performing without operator intervention, i.e., it is unsupervised. It associates data from the same target for multiple sensor types. The algorithm also provides an estimate of the number of targets present, reduced-noise estimates of the quantities being measured, and a measure of confidence to assign to the data association. The fuzzy logic formalism used offers the opportunity to incorporate additional information or heuristic rules easily. A comparison of the algorithm to a more conventional Bayesian association algorithm is provided. Procedures for defuzzification, i.e., mapping fuzzy results to hard results, are discussed, as well as the method of determining target validity. Various simulated and experimentally measured real-time data sets are analyzed and provide a basis for comparison of the fuzzy and Bayesian association algorithms.
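The simultaneous estimation of membership grades and cluster centers described above is in the spirit of fuzzy c-means. A minimal 1-D sketch, with fuzzifier m = 2 and invented measurement values (not the paper's algorithm in detail), is:

```python
# Fuzzy c-means core: alternate between membership grades and centers.

def fuzzy_c_means(points, centers, iters=20):
    c = len(centers)
    for _ in range(iters):
        # Membership of each point in each cluster (fuzzifier m = 2).
        U = []
        for x in points:
            d = [abs(x - v) + 1e-12 for v in centers]   # guard against d = 0
            U.append([1.0 / sum((d[i] / d[j]) ** 2 for j in range(c))
                      for i in range(c)])
        # Centers as membership-weighted means.
        centers = [sum(U[k][i] ** 2 * points[k] for k in range(len(points))) /
                   sum(U[k][i] ** 2 for k in range(len(points)))
                   for i in range(c)]
    return centers, U

# 1-D "measurements" from two targets observed by different sensors.
data = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
centers, U = fuzzy_c_means(data, centers=[0.0, 6.0])
```

The grades in `U` double as the confidence measure for each association, and thresholding them is one simple defuzzification rule.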
Although the emergence of fuzzy set theory and its application to pattern recognition has greatly advanced the field, fuzzy pattern recognition has a defect of its own: it lacks an information fusion capability, i.e., it cannot effectively fuse the information supplied by multiple sensors in order to improve the reliability and rate of recognition. A fuzzy reasoning method based on the combination of fuzzy set theory and Dempster-Shafer evidence theory is introduced in this paper. In this method, evidence theory is used to add an information fusion capability to the traditional fuzzy reasoning method so that the validity and reliability of recognition are improved.
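Dempster's rule of combination, the evidence-fusion step such a method relies on, can be sketched for a two-element frame of discernment. The mass values below are illustrative only.

```python
# Dempster's rule of combination for two bodies of evidence over a small
# frame of discernment (masses are dicts frozenset -> mass).

def dempster_combine(m1, m2):
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass on disjoint hypotheses
    # Normalize away the conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Frame {tank, truck}; two sensors report fuzzy-derived masses.
T, R = frozenset({'tank'}), frozenset({'truck'})
theta = T | R                                # total ignorance
m1 = {T: 0.6, R: 0.1, theta: 0.3}
m2 = {T: 0.5, R: 0.2, theta: 0.3}
fused = dempster_combine(m1, m2)
```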
To achieve robust and efficient model-based object recognition, particularly from real outdoor images, we must extract salient information about the objects. Unfortunately, low-level processing procedures most of the time produce erroneous or incomplete primitives. Toward this end, we present an original technique to extract salient segments taking into account the geometrical specificity of the model: parallelism, T-junctions, and main directions, for example. Our method uses a Markovian model defined on the spatial adjacency graph formed by structured edge primitives, on extracted feature measurements, and on domain knowledge. In this paper, we describe the Gibbs distribution associated with the proposed model (the sites and their components, and the cliques representing the domain knowledge). We use the deterministic ICM (Iterated Conditional Modes) algorithm to generate a sub-optimal configuration. We also describe the energy function to be minimized and how we initialize the Markov field. We specify automatic convergence criteria depending on the model. Experimental results on real-world images for different model-based recognition tasks are presented. Finally, we touch on implementation aspects and the computational time for real-time applications.
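The ICM relaxation step can be sketched on a 1-D chain of sites, with a quadratic data term and a pairwise smoothness term. The energies and observations below are invented, and the real method operates on an adjacency graph of edge primitives rather than a chain.

```python
# ICM (Iterated Conditional Modes): each site is labeled salient (1) or
# not (0) by greedily minimizing a data term plus a pairwise clique term.

def icm(data, beta=1.0, iters=10):
    labels = [1 if d > 0.5 else 0 for d in data]        # initial field
    for _ in range(iters):
        changed = False
        for i in range(len(data)):
            best, best_e = labels[i], float('inf')
            for lab in (0, 1):
                e = (data[i] - lab) ** 2                # data term
                for j in (i - 1, i + 1):                # pairwise cliques
                    if 0 <= j < len(data):
                        e += beta * (lab != labels[j])  # smoothness penalty
                if e < best_e:
                    best, best_e = lab, e
            if best != labels[i]:
                labels[i], changed = best, True
        if not changed:                                 # convergence
            break
    return labels

# Noisy "saliency" measurements along a contour; the isolated dip at
# index 2 gets smoothed over by the pairwise term.
labels_out = icm([0.9, 0.8, 0.4, 0.85, 0.1, 0.2, 0.15])
```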
In this paper, a novel approach to feature extraction for rotationally invariant object classification is proposed, based directly on a discrete wavelet transformation. This form of feature extraction is equivalent to retaining informative features while eliminating redundant features from images, which is a critical property when analyzing large, high-dimensional images. Usually, researchers have resorted to a data pre-processing method to reduce the size of the feature space prior to classification. The proposed method employs statistical features extracted directly from the wavelet coefficients generated by a three-level subband decomposition system using a set of orthogonal and regular quadrature mirror filters. This algorithm has two desirable properties: (1) it reduces the number of dimensions of the feature space necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; (2) regardless of the target orientation, the algorithm can perform classification with low error rates. Furthermore, the filters used have performed well in the image compression regime, but they have not previously been applied to target classification, which will be demonstrated in this paper. The results of several classification experiments on variously oriented samples of visible-wavelength targets are presented.
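The idea of computing statistical features from subband coefficients can be sketched with a one-level 2-D Haar decomposition. The paper uses a three-level QMF bank; Haar and the two features below are a simplified stand-in.

```python
# One-level 2-D Haar decomposition plus per-subband statistical features.

def haar_2d(img):
    """Split a 2-D image (even dimensions) into LL, LH, HL, HH subbands."""
    LL, LH, HL, HH = [], [], [], []
    for r in range(0, len(img), 2):
        ll, lh, hl, hh = [], [], [], []
        for c in range(0, len(img[0]), 2):
            a, b = img[r][c], img[r][c + 1]
            d, e = img[r + 1][c], img[r + 1][c + 1]
            ll.append((a + b + d + e) / 4.0)   # approximation
            lh.append((a - b + d - e) / 4.0)   # horizontal detail
            hl.append((a + b - d - e) / 4.0)   # vertical detail
            hh.append((a - b - d + e) / 4.0)   # diagonal detail
        LL.append(ll); LH.append(lh); HL.append(hl); HH.append(hh)
    return LL, LH, HL, HH

def subband_features(band):
    vals = [v for row in band for v in row]
    n = len(vals)
    return (sum(abs(v) for v in vals) / n,     # mean magnitude
            sum(v * v for v in vals) / n)      # energy

img = [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
features = [subband_features(b) for b in haar_2d(img)]
```

Iterating the transform on the LL band gives the multi-level decomposition; the per-subband statistics form the reduced feature vector handed to the classifier.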
This paper investigates the model-based aspects of the estimator correlator (EC) detector in the wavelet domain. Ideas from group theory are used to develop and describe underlying properties. By applying group representation theory to the detector development, insight into the optimal processing structure of the EC is gained. In the absence of a priori model information, the EC detector reduces to the wavelet domain or matched filter detector. With a priori information incorporated into the model, the EC becomes a weighted wavelet detector. Implementing the EC in the wavelet domain provides range-Doppler (wavelet) images at different stages of processing. This allows the opportunity to simultaneously exploit the vast body of knowledge of wavelets, scattering function theory, and range-Doppler processing techniques. Ambiguity function theory is used to evaluate performance capabilities of these wavelet-based detectors using various narrowband and wideband transmit signals. This paper shows that the weighted wavelet detector serves as a classifier as well by using the scattering function model as a basis for pattern recognition. An example of the effectiveness of the weighted wavelet detector with narrowband and wideband signals for a multi- highlight image model is presented and results are discussed.
A wavelet transformation is introduced as a new method to extract sideview face features for human face recognition. Using the wavelet transformation, a sideview profile is decomposed into high-frequency and low-frequency components. Signal reconstruction, autocorrelation, and energy distribution are used to decide an optimal decomposition level in the wavelet transformation without losing sideview features. To evaluate the feasibility of the wavelet transformation features in human sideview face recognition, the tie statistic is used to compute the complexity of the wavelet transform features. Using the wavelet transformation, the sideview data size is reduced. The reduced features have almost the same ability as the original sideview face profile data in terms of distinguishing different people, and the computational expense is greatly decreased. The results of the experiments are also shown in this paper.
A model for analyzing issues involving monospectral target recognition is presented. These issues include modeling target detection, recognition, and identification thresholds, and predicting the functional parametric dependencies of the results of observation experiments by human observers. The model makes extensive use of concepts from Information Theory. An image of a certain scene is treated as a sample of an entire set of images of that particular scene. A difference measure, called the Informational Difference (InDif), between two image sets is defined. The main assertion is that accomplishing target recognition tasks is equivalent to setting thresholds for the InDif. The applicability of the InDif to the performance of the Human Visual System (HVS) is shown both analytically, in very simple situations, and in computer calculations involving noisy images. Finally, a single framework for dealing with the HVS and Artificial Intelligence systems in target recognition applications is shown to result naturally from the InDif formalism.
An imaging infrared (IR) autonomous target recognition system uses a prebrief geoposition location and IR image data to locate and track targets. On some occasions, the image processing target recognition algorithm identifies a non-target object as the desired prebriefed target. This paper derives a position match quality (PMQ) algorithm which monitors the convergence of the IR Kalman filter target estimation system and attempts to decide when the IR target being attacked should be rejected as a false target. Imaging guidance simulation results of the PMQ algorithm for Type I and Type II errors are shown.
Progress is reviewed on the development of an all-source image interpretation system which exploits complementary evidence from a range of experts. This co-operation may occur between feature detectors in different bands, between detectors searching for different types of feature, or between different types of detector of the same feature. Algorithms for detecting vehicles in infrared linescan imagery give a low missed-detection rate but have been found to respond falsely to roads fragmented by trees, to structures such as cylindrical storage tanks, and to corners of man-made objects such as buildings. False alarms are reduced by applying algorithms which reliably detect subclasses of false alarms, i.e., buildings and storage tanks. In addition, both are features of interest in themselves, and are useful primitives in the identification of sites. The integration of depth (in the form of disparity maps) is examined as a means of reducing false building detections. Outputs from the feature detectors are combined using a simple rule-based approach. A surface-based model matching technique is examined as a means of classifying the remaining vehicle candidates.
Multisensor Fusion, Tracking, and Resource Management II
The Kalman Filter (KF) is one of the most widely used methods for tracking and estimation due to its simplicity, optimality, tractability and robustness. However, the application of the KF to nonlinear systems can be difficult. The most common approach is to use the Extended Kalman Filter (EKF) which simply linearizes all nonlinear models so that the traditional linear Kalman filter can be applied. Although the EKF (in its many forms) is a widely used filtering strategy, over thirty years of experience with it has led to a general consensus within the tracking and control community that it is difficult to implement, difficult to tune, and only reliable for systems which are almost linear on the time scale of the update intervals. In this paper a new linear estimator is developed and demonstrated. Using the principle that a set of discretely sampled points can be used to parameterize mean and covariance, the estimator yields performance equivalent to the KF for linear systems yet generalizes elegantly to nonlinear systems without the linearization steps required by the EKF. We show analytically that the expected performance of the new approach is superior to that of the EKF and, in fact, is directly comparable to that of the second order Gauss filter. The method is not restricted to assuming that the distributions of noise sources are Gaussian. We argue that the ease of implementation and more accurate estimation features of the new filter recommend its use over the EKF in virtually all applications.
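The core idea above, propagating a small set of deterministically chosen sample points through the nonlinearity instead of linearizing it, can be sketched for a scalar state. The scaling parameter and the linear test function are illustrative choices.

```python
import math

# Unscented-transform sketch for a scalar state: 2n+1 = 3 sigma points
# capture the mean and variance exactly and are pushed through f directly,
# with no Jacobian computed.

def unscented_transform_1d(mean, var, f, kappa=2.0):
    n = 1
    spread = math.sqrt((n + kappa) * var)
    points = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    y = [f(p) for p in points]
    y_mean = sum(w * v for w, v in zip(weights, y))
    y_var = sum(w * (v - y_mean) ** 2 for w, v in zip(weights, y))
    return y_mean, y_var

# For a linear map the transform is exact, matching the linear KF:
# mean 2 -> 7, variance 0.5 -> 9 * 0.5 = 4.5.
m, v = unscented_transform_1d(2.0, 0.5, lambda x: 3.0 * x + 1.0)
```

For an n-dimensional state the spread terms become columns of a matrix square root of (n + kappa) P, but the weighted-sum recovery of mean and covariance is the same.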
In practice, multisensor systems use dissimilar sensors having different data rates. Such sensors may also have inherent delays as well as communication delays. Recently the authors developed a track fusion algorithm that attempts to account for realistic constraints of sensor fusion. The objectives of this paper are twofold. First, it shows that the synchronous track fusion problem can be derived as a special case of the developed track fusion algorithm. Second, using simulated target tracks, the performance of the asynchronous track fusion algorithm is analyzed and compared to an existing fusion algorithm. Different sensor data rates and communication delays are used in the simulations.
This paper describes an algorithm for fusion of tracks created by radar and IR sensors at remote sites. It is assumed that these sensors are synchronous and that the tracks are transmitted to a central station at the same rate. Since these sensors have non-unity probability of detection (Pd < 1), the transmitted tracks contain gaps due to missed detections. In addition, false tracks may be created, which will result in the number of tracks created by each sensor being greater than the number of targets. This paper describes a track fusion algorithm in which the remote tracks are processed at a central station, where track-to-track correlation is performed using a track matching algorithm and false tracks are eliminated. At any update time, if it is found that a track has not been updated, it is propagated from its last update time. Correlated tracks are fused, and the library of track files at the central station is updated with the newly created fused track. Results of preliminary algorithm testing are presented.
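The track-to-track correlation step can be sketched as a chi-square gate on the Mahalanobis distance between two position estimates. This simplified version sums the covariances and ignores cross-covariance, and all numbers are invented.

```python
# Chi-square gating test for track-to-track correlation in 2-D position.

def tracks_match(x1, P1, x2, P2, gate=9.21):   # ~chi^2, 2 dof, 99% point
    dx = [a - b for a, b in zip(x1, x2)]
    # Combined covariance S = P1 + P2 (2x2), inverted directly.
    S = [[P1[i][j] + P2[i][j] for j in range(2)] for i in range(2)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]
    d2 = sum(dx[i] * Sinv[i][j] * dx[j]
             for i in range(2) for j in range(2))
    return d2 < gate

# Radar and IR position estimates of (probably) the same target.
same = tracks_match([100.0, 50.0], [[4.0, 0.0], [0.0, 4.0]],
                    [101.5, 49.0], [[5.0, 0.0], [0.0, 5.0]])
```

Tracks that pass the gate are candidates for fusion; tracks that never pair with anything across sensors are candidates for elimination as false tracks.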
Tracking multiple maneuvering targets in clutter is a challenging problem. Using only the measured kinematic quantities is usually not adequate to meet the requirements on a multiple target tracking (MTT) system, i.e., to partition the sensor data into tracks of the targets while suppressing the clutter and false alarms. The efficient use of attribute data in addition to the kinematic measurements can greatly enhance the capability of an MTT system to discriminate against false tracks. In this paper, the friend-foe identification information, the target run-length (which measures the number of hits by a radar for a target), and the estimated target speed at each update of a track are used for true and false track identification. Since not all of this information is available for all the tracks at all times, Dempster-Shafer evidential reasoning is employed to combine these pieces of uncertain information with different levels of abstraction. Real air surveillance radar data were collected to evaluate the effectiveness of this combined tracking and identification approach. Results show that the fusion of track attribute data with the kinematic estimates by Dempster-Shafer reasoning provides very satisfactory discrimination between true and false tracks, thus greatly improving the system's surveillance capability over a system that uses only the kinematic data.
The Jonker-Volgenant-Castanon (JVC) assignment algorithm was used by Lockheed Martin Advanced Technology Laboratories (ATL) for track association in the Rotorcraft Pilot's Associate (RPA) program. RPA is Army Aviation's largest science and technology program, involving an integrated hardware/software system approach for a next-generation helicopter containing advanced sensor equipment and applying artificial intelligence `associate' technologies. ATL is responsible for the multisensor, multitarget, onboard/offboard track fusion. McDonnell Douglas Helicopter Systems is the prime contractor, and Lockheed Martin Federal Systems is responsible for developing much of the cognitive decision aiding and controls-and-displays subsystems. RPA is scheduled for flight testing beginning in 1997. RPA is unique in requiring real-time tracking and fusion for large numbers of highly maneuverable ground (and air) targets in a target-dense environment. It uses diverse sensors and is concerned with a large area of interest. Target class and identification data are tightly integrated with spatial and kinematic data throughout the processing. Because of platform constraints, processing hardware for track fusion was quite limited. No previous experience using the JVC in this type of environment had been reported. ATL performed extensive testing of the JVC, concentrating on error rates and run-times under a variety of conditions. These included wide-ranging numbers and types of targets, sensor uncertainties, target attributes, differing degrees of target maneuverability, and diverse combinations of sensors. Testing utilized Monte Carlo approaches as well as many kinds of challenging scenarios. Comparisons were made with a nearest-neighbor algorithm and a new, proprietary algorithm (the `Competition' algorithm). The JVC proved to be an excellent choice for the RPA environment, providing a good balance between speed of operation and accuracy of results.
Several assignment methods are compared in terms of problem size, computational complexity, and misassignment as a function of sparsity and gating. Specific real-world applications include multi-target multi-sensor tracking/fusion and resource management with sparse cost matrices. The computational complexity of forming the cost matrix is also addressed. Both randomly generated cost matrices and measured data sets are used to test the algorithms. It is shown that both standard and some new greedy assignment algorithms significantly degrade in performance with fully gated columns and/or rows. However, it is shown that it is possible to modify specific algorithms to regain the lost optimality.
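A minimal sketch of the fully-gated-row problem and one common repair: gated pairings get a placeholder cost, and augmenting the matrix with dummy "no assignment" columns lets an ungated solver fall through gracefully. The `GATE` and dummy-cost values are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

GATE = 1e6  # placeholder cost for gated (disallowed) pairings

# Row 1 is fully gated: that track matches no report within the gate.
cost = np.array([
    [1.0,  GATE],
    [GATE, GATE],
])

# Augment with one dummy column per row so fully gated rows fall through
# to a "no assignment" outcome at a fixed cost instead of being forced
# into a meaningless gated pairing.
dummy = np.full((2, 2), 50.0)
aug = np.hstack([cost, dummy])
rows, cols = linear_sum_assignment(aug)
assigned = {r: (c if c < 2 else None) for r, c in zip(rows, cols)}
print(assigned)
```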
Within the framework of a command and control system, vast amounts of data are being collected and processed from a variety of dissimilar sensors. Through sensor management, sensor usage is integrated to accomplish specific and often dynamic mission objectives. Every opportunity a sensor has to measure the environment can be equated to a reduction in uncertainty in its state, and hence a quantifiable amount of information. A difficulty arises when the data from sensors is not directly comparable as in the case of kinematic and nonkinematic sensors. This paper expands on our previous work, in which a modest multiple sensor, multiple threat simulation model was built to demonstrate the use of Information Theory in sensor management. The simulation model was used to demonstrate the use of Information Theory to effectively deal with the target tracking and target search decision problem. This paper builds upon that work by implementing the OGUPSA sensor scheduling algorithm in the simulation model with more fidelity by replacing the unit interval tasks by appropriate non-unit interval tasks and compares several sensor management methods including minimum position error and maximum information.
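One way to quantify "measurement opportunity as uncertainty reduction" is the log-determinant ratio of prior to posterior covariance in a Kalman update. The sketch below, with invented sensor-noise values, picks the sensor with the largest expected information gain; it illustrates the general idea, not the OGUPSA algorithm itself:

```python
import numpy as np

def info_gain(P, H, R):
    """Expected information gain (nats) of one measurement: half the
    log-ratio of prior to posterior covariance determinants."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    P_post = (np.eye(len(P)) - K @ H) @ P
    return 0.5 * np.log(np.linalg.det(P) / np.linalg.det(P_post))

P = np.diag([4.0, 4.0])            # prior state uncertainty (position x, y)
H = np.eye(2)                      # both sensors observe position directly
sensors = {"coarse": np.diag([9.0, 9.0]),   # hypothetical noise covariances
           "fine":   np.diag([1.0, 1.0])}

# Schedule the sensor whose measurement buys the most information.
best = max(sensors, key=lambda s: info_gain(P, H, sensors[s]))
print(best)
```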
Communication management is discussed in the context of a decentralized target tracking system, based on a simulated aircraft scenario. An intelligent algorithm, motivated by information theory, is generally found to outperform a simple non-intelligent algorithm. These results depend, for example, on the maneuverability of the targets that are being tracked. They are indicative of the performance trade-offs (e.g. tracking accuracy vs. numerical processing load) that will be required in the design of future decentralized tracking systems.
Neutralizing the threat of an incoming ballistic missile is a difficult task. Often the missile disintegrates, leaving the warhead surrounded by a number of ballistic fragments. Thus, the challenge is for the interceptor to identify and track the targets so that a successful strike can be achieved. This paper addresses the problem of using noisy image sequence data captured by on-board sensors in the nose cone of the interceptor to track target fragments. We propose an approach based on a generalized motion model that can accurately represent motion caused by rotation, translation, and scaling. An efficient iterative estimation algorithm is presented to calculate motion parameters, and the computational complexity of the tracking system is evaluated. Results show the proposed scheme performs significantly better than simple block-based motion estimation approaches.
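A motion model covering rotation, translation, and scaling can be fit by linear least squares when the transform is parameterized as a = s·cosθ, b = s·sinθ. This toy sketch (synthetic points, not the paper's iterative estimator) recovers the parameters from point correspondences:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares fit of x' = s*R(theta)*x + t to 2-D point pairs,
    using the linear parameterization a = s*cos(theta), b = s*sin(theta)."""
    A, y = [], []
    for (x1, y1), (x2, y2) in zip(src, dst):
        A.append([x1, -y1, 1, 0]); y.append(x2)   # x' = a*x - b*y + tx
        A.append([y1,  x1, 0, 1]); y.append(y2)   # y' = b*x + a*y + ty
    a, b, tx, ty = np.linalg.lstsq(np.array(A, float),
                                   np.array(y, float), rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a), (tx, ty)  # scale, angle, shift

# Synthetic check: scale 2, rotation 30 deg, translation (1, -1).
th = np.deg2rad(30.0)
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
src = np.array([[0, 0], [1, 0], [0, 1], [2, 3.0]])
dst = (2.0 * src @ R.T) + np.array([1.0, -1.0])
s, theta, t = estimate_similarity(src, dst)
print(round(s, 3), round(np.rad2deg(theta), 1), np.round(t, 3))
```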
Matched-pursuits is a nonlinear algorithm that iteratively projects a given signal onto a complete dictionary of vectors. The dictionary is constructed such that it is well matched to the signals of interest and poorly matched to the noise, thereby affording the potential for denoising by adaptively extracting an underlying signature from a noisy waveform. In the context of wave scattering and propagation, there are basic constituents that can be used to construct most measured waveforms. A dictionary of such constituents is used here, in the context of wave-based matched-pursuit processing of acoustic waves scattered from submerged elastic targets. It is demonstrated how wave-based matched-pursuits can be utilized for denoising as well as to effect a detector, the latter being parametrized via its receiver operating characteristic.
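The iterative projection loop can be sketched in a few lines: at each step the residual is correlated against every dictionary atom, and the best-matching atom is subtracted. The sinusoidal dictionary below is a toy stand-in for the wave-based constituents the paper uses:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter):
    """Greedy matched-pursuit: at each step project the residual onto
    every (unit-norm) dictionary atom and subtract the best match."""
    residual = signal.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_iter):
        corr = dictionary @ residual          # atom/residual inner products
        k = np.argmax(np.abs(corr))
        approx += corr[k] * dictionary[k]
        residual -= corr[k] * dictionary[k]
    return approx, residual

# Toy dictionary: unit-norm sinusoidal atoms; noisy two-tone signal.
rng = np.random.default_rng(0)
t = np.arange(128)
atoms = np.array([np.cos(2*np.pi*f*t/128) for f in range(1, 20)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
clean = 3*atoms[4] + 2*atoms[9]
noisy = clean + 0.1*rng.standard_normal(128)
denoised, _ = matching_pursuit(noisy, atoms, n_iter=2)
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```

Because the noise projects weakly onto the atoms, stopping after a few iterations extracts the signature while leaving most of the noise in the residual.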
This paper develops the pulse train probabilistic data association filter (PT-PDAF) for use in pulse train analysis and deinterleaving applications. The approach is based on a state-space formulation of the pulse train evolution model. The PDA approach overcomes real-world problems of false and missing pulses which cause the basic Kalman filter to break down. Simulations are developed to show that the PT-PDAF approach is superior to a nearest neighbor filter. An augmented PDA approach which incorporates available pulse parameter measurements, such as angle of arrival, into the PDA algorithms is shown to further improve the filter performance.
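The core PDA step, weighting each gated pulse measurement by its likelihood while reserving probability for the all-clutter hypothesis, can be sketched for a scalar state as follows. The detection/gate probabilities and clutter density are illustrative values, not the paper's:

```python
import numpy as np

def pda_update(z_pred, S, measurements, P_D=0.9, P_G=0.99, clutter=1e-3):
    """One scalar-state PDA step: weight each gated measurement by its
    Gaussian likelihood, reserve probability for 'all are clutter',
    and combine into a single weighted innovation."""
    innovations = np.array(measurements, float) - z_pred
    likes = np.exp(-0.5 * innovations**2 / S) / np.sqrt(2*np.pi*S)
    weights = P_D * likes / clutter
    beta0 = 1.0 - P_D * P_G                  # no-detection hypothesis weight
    norm = beta0 + weights.sum()
    betas = weights / norm
    combined = (betas * innovations).sum()   # clutter term contributes zero
    return combined, betas, beta0 / norm

# Hypothetical pulse-time prediction with one true and one spurious pulse.
v, b, b0 = pda_update(z_pred=10.0, S=1.0, measurements=[10.2, 14.0])
print(round(v, 3))
```

The near-gate pulse dominates the combined innovation, so a single false pulse barely perturbs the track, which is exactly the failure mode of the basic Kalman filter that PDA repairs.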
Automation is the future trend for the target tracking systems of the Smart Munitions Test Suite at White Sands Missile Range. Resonances often appear in electro-mechanical systems and tend to reduce the performance of the control tracking algorithms. Hence, automatic resonant cancellation is one of the algorithms that should be considered. In this paper the concept and implementation of automatic resonant cancellation using Genetic Algorithms (GAs) in conjunction with the system identification package and the notch filter developed by the DSP Control Group is presented. A simple GA is used to search for the highest resonant peaks caused by imperfect coupling between motor and load. The search is guided by a fitness function which is the transfer function obtained from a system identification method.
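A minimal real-coded GA of this flavor can search a hypothetical identified transfer-function magnitude for its highest resonant peak. The plant model and GA settings below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def magnitude(f):
    """Hypothetical identified plant: two resonances, the stronger near 180 Hz."""
    return (1.0 / np.abs(1 - (f/180.0)**2 + 0.05j*(f/180.0))
            + 0.4 / np.abs(1 - (f/60.0)**2 + 0.1j*(f/60.0)))

# Minimal real-coded GA: tournament selection, blend crossover, mutation.
pop = np.linspace(1.0, 400.0, 40)            # initial frequency guesses (Hz)
for _ in range(60):
    fit = magnitude(pop)                     # fitness = transfer-function gain
    pairs = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = np.where(fit[pairs[:, 0]] > fit[pairs[:, 1]],
                       pop[pairs[:, 0]], pop[pairs[:, 1]])
    partners = rng.permutation(parents)
    alpha = rng.uniform(size=len(pop))
    pop = alpha*parents + (1 - alpha)*partners + rng.normal(0, 2.0, len(pop))
    pop = np.clip(pop, 1.0, 400.0)
best = pop[np.argmax(magnitude(pop))]
print(round(float(best), 1))
```

Once the peak frequency is located, a notch filter centered there can cancel the resonance, which is the role of the DSP Control Group's filter in the paper.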
This paper describes a new way of recovering FM stereo signals after they have been subjected to multipath interference. This approach relies on the detection of out-of-band noise to signal the occurrence of multipath events. The correction is done by controlling the cutoff frequencies of two adaptive lowpass filters used to pass the desired left and right stereo signals simultaneously. The cutoff frequency of each filter is determined by measuring the peak frequency found in the in-band audio signal falling within a moving time window. This measurement is enabled only in the absence of interference. The duration of correction is initially determined by the energy of the out-of-band noise but is later controlled by a fuzzy controller whose inputs are the current duration and the duration error.
Results of theoretical and experimental research to develop a locator with antennas performing beam scanning during both the emission and reception of pulses are presented. The reflected signals are received within discrete 'visibility' layers formed due to beam scanning during reception. The locator is shown to have a number of advantages over the conventional locator. The known distribution of visibility layers in space allows one to create adaptive systems ensuring reception of the required information with minimum energy expense; to attain 'superresolution' of objects (at distances smaller than required by the Rayleigh criterion); and to improve the noise immunity of the locator. It is demonstrated that a 'quasi-holographic' data-processing system can be developed, similar to the synthesized-antenna-aperture system proposed in the 1950s by Emmett Leith. An ultrasonic version of the 'superscanning' locator is described. The experimental results fully confirm the theoretical predictions.
In this paper we address a new approach to remote sensing imaging problems for radar/sonar array imaging systems, stated and treated as ill-posed inverse problems of restoring extended-object reflected signals distorted in a stochastic scattering medium. The developed approach is based on combining the Bayesian estimation technique for signal restoration problems with a constrained regularization technique for inversion of the signal formation operator of the stochastic measurement channel. To reduce the generic ill-posed imaging problem to its radar/sonar system-oriented numerical version, the experiment design methodology is applied. This results in a projection-dependent scheme for measured data that originates from a limited number of sensors of a sparse array. Next, to alleviate the limitations imposed by the absence of prior knowledge of the object signal, model-based assumptions for the desired image space are introduced. Model-based fusion of such diverse information on data sets and image space in a generalized constrained array imaging inverse problem is the first issue addressed in the paper. An optimal/suboptimal solution of this problem in the mixed Bayesian-regularization setting, resulting in the development of a numerical technique for extended-object imaging in scattering random media with improved spatial resolution, is the second issue this paper addresses. Some computer simulation results are also provided to illustrate the proposed approach.
A two-step adaptive beamforming approach in subarrays is introduced in this paper. A main beam is formed by a conventional digital beamforming algorithm. The spatial positions of M jammers are estimated, and then M beams pointing at the directions of the M jammers are formed. An adaptive processing of order M+1 is implemented with an adaptive algorithm. This algorithm has excellent performance in jamming cancellation, stability, and computational complexity.
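The flavor of the two-step scheme, forming a conventional main beam and then cancelling beams aimed at the estimated jammer directions, can be sketched by projecting the main-beam weights away from the jammer subspace. The array size and angles are arbitrary, and this projection stand-in is not the paper's order-M+1 adaptive algorithm:

```python
import numpy as np

def steering(theta_deg, n=16, d=0.5):
    """Uniform-linear-array steering vector (half-wavelength spacing)."""
    k = np.arange(n)
    return np.exp(2j*np.pi*d*k*np.sin(np.deg2rad(theta_deg)))

# Step 1: conventional main beam at 0 degrees.
w = steering(0.0)

# Step 2: null two hypothetical jammers by projecting the weights onto
# the orthogonal complement of the jammer-direction subspace.
J = np.column_stack([steering(30.0), steering(-45.0)])
P = np.eye(16) - J @ np.linalg.pinv(J)       # projection away from jammers
w_adapt = P @ w

def gain(weights, theta):
    return abs(np.vdot(weights, steering(theta)))

print(gain(w_adapt, 30.0) < 1e-6 * gain(w_adapt, 0.0))
```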
An associative memory, unlike an addressed memory used in conventional computers, is content addressable. That is, storing and retrieving information are not based on the location of the memory cell but on the content of the information. There are a number of approaches to implement an associative memory, one of which is to use a neural dynamical system where objects being memorized or recognized correspond to its basic attractors. The work presented in this paper is the investigation of applying a particular type of neural dynamical associative memory, namely the projection network, to pattern recognition and data fusion. Three types of attractors, which are fixed-point, limit-cycle, and chaotic, have been studied, evaluated and compared.
This paper presents a comparative analysis of two evolved neural networks for control. Traditionally, the structures of Radial Basis Functions Networks (RBFNs) and Multilayer Feedforward Networks (MFNs) are found by a trial-and-error process. This process consists of finding an appropriate network structure such that the unknown nonlinearities of the plant can be estimated to some desired accuracy. In general, a neural network is composed of two elements: structural and learning parameters. The structural parameters are all those elements that determine the size of the network. The learning parameters are all those elements that determine learning and convergence of the network. The approach presented in this work uses a Genetic Algorithm (GA) to evolve the structure, and uses a gradient descent algorithm to adjust the weights in the network. An analysis of the evolution of RBFNs and MFNs by means of a GA is examined in detail. It is shown that the networks can be encoded in a chromosome for their evolution. Experimental results show the performance of Evolved Radial Basis Functions Networks and Evolved Multilayer Feedforward Networks in the identification and control of a nonlinear plant.
An optical diffraction method is described for inspecting periodic structures such as combs or semiconductor leads. Coherent light passing between the prongs of the structure self-interferes at the fractional Talbot plane to provide a simple method of inspection. Computer simulation and laboratory experiments show the viability of this approach. The theory assumes infinite structures; in practice, large end-effect signals arise due to the finiteness of the periodic structure. A neural network is demonstrated that learns to distinguish end-effect signals from prong-damage signals. The variability of the measuring process in a production environment makes neural networks an appropriate approach for this task.
The duration of an optical signal is defined as the time interval in which a certain part of its energy is contained. It has been proved that under such a definition, the form of a signal with a given duration and maximum length of dispersion broadening is not Gaussian. The time profile of such signals has been obtained as a superposition of orthogonal symmetric Hermite modes. Such signals have also been shown to be smooth functions of time. Furthermore, it has been shown that the expansion converges rapidly and is well represented by the first few modes.
There is a growing need for sensors to monitor performance in modern quality products. In electronics, sensors monitor heat build-up, substrate delaminations, and thermal runaway. In processing instruments, intelligent sensors are needed to measure deposited layer thicknesses and resistivities for process control, and in environmental electrical enclosures they are used for climate monitoring and control. A yaw sensor for skid prevention utilizes very fine moveable components, and an automobile engine controller blends a microprocessor and sensor on the same chip. An Active-Pixel Image Sensor is integrated with a digital readout circuit to perform most of the functions in a video camera. Magnetostrictive transducers sense and damp vibrations. Improved acoustic sensors will be used in flow detection of air and other fluids, even at subsonic speeds. Optoelectronic sensor systems are being developed for installation on rocket engines to monitor exhaust gases for signs of wear in the engines. With new freon-free coolants now available, the problems of A/C-system corrosion in automobiles have increased and need to be monitored more frequently. Defense cutbacks compel the storage of hardware in safe custody for an indeterminate period of time, which makes monitoring more essential. Just-in-time customized manufacturing in modern industries also requires dramatic adjustments in the productivity of selected items, leaving some manufacturing equipment idle for long periods and therefore prone to more corrosion, so corrosion sensors are needed. In the medical device industry, the development of implantable medical devices using both potentiometric and amperometric determination of parameters has until now suffered from insufficient microminiaturization, and thus requires surgical implantation. In many applications, high-aspect-ratio devices, made possible by the use of synchrotron radiation lithography, allow more useful devices to be produced.
High-aspect-ratio sensors will permit industries and various other users to attain more accurate measurements of physical properties and chemical compositions in many systems. Considerable engineering research has recently been focused on this type of fabrication effect. This paper looks at a high-aspect-ratio sensor bus thermorestrictive device with increased aspect ratio of the interconnects to the device, using unique simulation software resources.
We describe a model-based image analysis system which automatically estimates the 3D orientation vector of satellites and their sub-components by analyzing images obtained from a ground-based optical surveillance system. We adopt a two-step approach: pose estimates are derived from comparisons with a model database; pose refinements are derived from photogrammetric information. The model database is formed by representing each available training image by a set of derived geometric primitives. To obtain fast access to the model database and to increase the probability of early successful matching, a novel index hashing method is introduced. We present recent results which include our efforts at isolating and estimating orientation vectors from degraded imagery on a significant database of satellites. We also discuss the problems our system encounters with some of the images, and the solutions we are implementing to significantly improve the system.
The cost of processing imagery that exhibits a large data burden can be reduced by compressive processing, which computes over a compressed image versus the corresponding (uncompressed) source image. When the compressive result is decompressed, one obtains an approximation to the corresponding operation over uncompressed imagery. In previous publications, we have shown that compressive processing can lead to computational efficiency (e.g., a sequential speedup) that approaches the compression ratio. In certain cases, computational speedup that exceeds the compression ratio can be achieved with sufficient parallelism. We have also derived techniques for computing pointwise arithmetic operations, global reduce operations such as image sum or image maximum, and selected image-template operations over imagery compressed by several blockwise transformations. In particular, our previous research has emphasized the processing of imagery compressed by block truncation coding, vector quantization (VQ), and visual pattern image coding (VPIC). In this paper, we extend our previous work by deriving algorithms that more accurately simulate the operations of Prewitt, Sobel, and Kirsch edge detection over imagery compressed by VQ and VPIC transforms. We also derive morphological operations of erosion and dilation with the von Neumann template over VQ- and VPIC-compressed Boolean imagery. Analysis of each operation includes a model of computational complexity and theoretical/experimental assessment of information loss incurred by computing over a lossy compressed image representation. We show that edge detection over VPIC-compressed imagery can be implemented in terms of a simple codebook transformation. Thus, if an uncompressed (source) image has N pixels and the compression ratio is denoted by CR, approximately O(N/CR) substitutions of exemplars from the transformed codebook are required.
This technique is extensible to certain morphological operations with the von Neumann template and can be implemented in SIMD-parallel fashion.
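The codebook-transformation idea can be sketched directly: edge detection is run once per codebook exemplar, and the compressed image is then "decompressed" straight into an edge map by index substitution, one lookup per block. The 4x4 exemplars are toy values, and block-boundary effects are ignored, which is one source of the information loss the analysis quantifies:

```python
import numpy as np
from scipy.ndimage import sobel

def edges_via_codebook(indices, codebook):
    """Compressive edge detection sketch: Sobel is applied once per
    codebook exemplar; the edge image is rebuilt by index substitution,
    i.e. one lookup per compressed block (roughly O(N/CR) work)."""
    edge_book = np.array([np.hypot(sobel(b, 0), sobel(b, 1)) for b in codebook])
    h, w = indices.shape
    bs = codebook.shape[1]
    out = np.zeros((h*bs, w*bs))
    for i in range(h):
        for j in range(w):
            out[i*bs:(i+1)*bs, j*bs:(j+1)*bs] = edge_book[indices[i, j]]
    return out

# Two hypothetical 4x4 exemplars: a flat block and a vertical-step block.
flat = np.zeros((4, 4))
step = np.zeros((4, 4)); step[:, 2:] = 1.0
codebook = np.stack([flat, step])
indices = np.array([[0, 1], [1, 0]])     # 2x2 grid of block indices
edge = edges_via_codebook(indices, codebook)
print(edge.shape, edge.max() > 0)
```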
In this paper, we discuss methods of enhancing such compressive edge detectors to achieve greater accuracy, then derive morphological operations of erosion and dilation with the von Neumann template over VQ- and VPIC-compressed Boolean imagery. Additional analysis pertains to the formulation and testing of compressive component labeling algorithms. Analyses include a model of computational complexity as well as theoretical and experimental assessment of information loss incurred by computing over lossy compressed imagery. We show that numerous operations over VPIC-compressed imagery can be implemented in terms of one invocation per compressed pixel of a small lookup table. This facilitates fast computation on workstations and parallel processors, which our algorithms are designed to support.
A method for the restoration of defocused images is presented. Based on the principles of geometrical optics, the point-spread function of a digital image lying outside the focal plane is described. Using this point-spread function, we develop new algorithms for deconvolution and filtering in the frequency domain. Simulated images are employed in the examples, where successful recovery of the defocused image is demonstrated.
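Under geometrical optics, the defocus PSF is a uniform disk ("pillbox") whose radius grows with distance from the focal plane, and a regularized frequency-domain inverse then recovers the image. This sketch uses a Wiener-style filter as a stand-in for the paper's algorithms:

```python
import numpy as np

def pillbox_psf(shape, radius):
    """Geometrical-optics defocus PSF: a uniform disk whose radius
    grows with distance from the focal plane."""
    yy, xx = np.indices(shape)
    cy, cx = shape[0]//2, shape[1]//2
    psf = ((yy-cy)**2 + (xx-cx)**2 <= radius**2).astype(float)
    return np.fft.ifftshift(psf / psf.sum())   # origin-centered, unit sum

def wiener_deblur(blurred, psf, k=1e-3):
    """Frequency-domain restoration: H* / (|H|^2 + k) regularized inverse."""
    H = np.fft.fft2(psf)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(H) /
                                (np.abs(H)**2 + k)))

# Simulated test image blurred by the defocus PSF, then restored.
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
psf = pillbox_psf(img.shape, radius=3)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deblur(blurred, psf)
print(np.abs(restored - img).mean() < np.abs(blurred - img).mean())
```

The regularization constant `k` keeps the inverse bounded at the zeros of the pillbox's frequency response, where a naive inverse filter would blow up.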
Fast implementation of convolution and discrete Fourier transform computations are frequent problems in signal and image processing. These operations typically use fast Fourier transform (FFT) algorithms. Number Theoretic Transforms (NTTs) over finite fields of prime order can also be used for this purpose. Using the NTT over Fermat primes (2^q + 1) in field programmable gate array (FPGA) designs is advantageous, because the arithmetic can be realized efficiently and quickly. By using Fermat primes, all multiplications, which have an O(q^2) area requirement, can be replaced by a q-bit rotation (shift) operation, which has only an O(q·log(q)) area requirement. The area requirement reduces to O(q) for a fully pipelined realization with hardwired shifts. Using the Eisenstein Residue Number System (ERNS), which defines complex numbers over the polynomial j^2 + j + 1 = 0, instead of j^2 + 1 = 0 for Gaussian integers, gives the additional advantage that the transform length is extended from q to 6q at the cost of only one addition for each complex multiplication. An RNS-based multiple-FPGA-board implementation is presented which demonstrates both the performance and packaging advantages of the new ERNS-FPGA-NTT paradigm.
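A toy NTT over the Fermat prime 257 (= 2^8 + 1) shows exact integer convolution with no floating-point round-off, which is what makes the hardwired-shift FPGA arithmetic attractive. A naive O(n^2) transform is used for clarity; FFT-style butterflies apply unchanged:

```python
# Integer convolution via a number-theoretic transform over the Fermat
# prime P = 2^8 + 1 = 257 (3 is a primitive root mod 257).
P = 257
N = 8
W = pow(3, (P - 1) // N, P)      # principal N-th root of unity mod P

def ntt(a, w):
    """Naive O(N^2) number-theoretic transform over GF(P)."""
    return [sum(a[j] * pow(w, i*j, P) for j in range(N)) % P
            for i in range(N)]

def cyclic_convolution(a, b):
    """Exact length-N cyclic convolution via forward/inverse NTT."""
    A, B = ntt(a, W), ntt(b, W)
    C = [(x * y) % P for x, y in zip(A, B)]
    inv_n = pow(N, P - 2, P)                 # N^(-1) mod P (Fermat's little thm)
    return [(c * inv_n) % P for c in ntt(C, pow(W, P - 2, P))]

# Zero-padded sequences: cyclic convolution equals linear convolution here.
a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
print(cyclic_convolution(a, b))
```

All products stay below 257 in this example, so the modular result equals the ordinary integer convolution exactly.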
Object segmentation is the process by which a mask is generated that identifies the area of an image occupied by an object. Many object recognition techniques depend on the quality of such masks for shape and underlying brightness information; however, segmentation remains notoriously unreliable. This paper considers how the image restoration technique of Geman and Geman can be applied to the improvement of object segmentations generated by a locally adaptive background subtraction technique. Also presented is how an artificial neural network hybrid, consisting of a single-layer Kohonen network with each of its nodes connected to a different multi-layer perceptron, can be used to approximate the image restoration process. It is shown that the restoration techniques are very well suited for parallel processing, and in particular the artificial neural network hybrid has the potential for near real-time image processing. Results are presented for the detection of ships in SPOT panchromatic imagery and the detection of vehicles in infrared linescan images, these being a fair representation of the wider class of problem.
In this paper, we introduce an object recognition method that we have designed for target localization in aerial images. Our research has been oriented from the beginning toward low-level feature-based methods, since our aim was to design a real-time on-board detection system. Due to their repetitiveness, some of these methods are very well suited for a parallel implementation. Not being an algorithm for general matching, our method takes advantage of the common properties of the aerial images we study, and those of the target objects, to simplify the processing and thus the computational load. Special attention has also been given to the preprocessing interest-point detection algorithms. In our paper, we first define the type of aerial images that we are working on. An emphasis is put on the properties of the scenes they represent and the polygonal objects that the matching algorithm has to recognize. We then review the different interest-point detection algorithms that we have selected, and compare their respective accuracy and performance. Finally, we present the matching algorithms which we have chosen as the basis of our final work. Special attention is given to the accuracy and speed of the algorithm.
The calibration of impact disdrometers has traditionally been a tedious process whereby single water droplets of known diameter are generated and allowed to fall from a height of 10 meters or more in order to reach terminal velocity. An alternate method of calibration is proposed that eliminates the need for a single-drop generator and its associated drop shaft. The strategy behind this technique is to use an accumulation rain measurement instrument, such as a tipping-bucket rain gauge, to provide a known signal for training an adaptive calibration algorithm. The reference signal to this digital signal processing algorithm is the output of the impact disdrometer, preprocessed by an impulse amplitude estimation algorithm. This calibration technique has been evaluated using data from UCF's Acoustic Rain Gauge Array, which estimates raindrop size distributions (drop diameters of 1 mm or more) by digitally sampling the acoustic signal from an array of acoustic impact sensors. The technique should be applicable to other impact disdrometers as well.
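The abstract does not specify the adaptive algorithm. One minimal illustration of the idea of training a calibration gain against a tipping-bucket accumulation signal is a scalar LMS update; everything below (function name, step size, epoch count) is a hypothetical sketch, not the paper's method.

```python
def lms_calibrate(interval_sums, gauge_totals, mu=0.05, epochs=50):
    """Fit a scalar gain g so that g * (summed impulse amplitudes per
    interval) tracks the tipping-bucket accumulation for that interval.
    Plain LMS; mu must be small relative to 1/max(interval_sums)^2
    for the recursion to be stable."""
    g = 0.0
    for _ in range(epochs):
        for a, target in zip(interval_sums, gauge_totals):
            err = target - g * a      # accumulation mismatch for this interval
            g += mu * err * a         # LMS gradient step
    return g
```

A real calibration would fit a drop-size-dependent transfer function rather than one scalar, but the training loop against the gauge reference has the same shape.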
Determining a combat identification (CID) architecture for the future requires knowledge and understanding of the current CID situation. The purpose of this paper is to present some of the issues and misconceptions surrounding CID, and to show how future CID technologies can overcome the limitations of today's CID architecture. The common perception of the CID problem is illustrated in Figure 1. For air-to-air missions, it is perceived that cooperative CID systems, like the Mark XII IFF system, will provide the warfighter with the identity of all friendly platforms in the area, thus making it fairly easy to determine which aircraft are hostile. For air-to-ground missions, the warfighter continues to rely on voice communications from a forward air controller (FAC) to obtain target CID information; as long as the ground targets retain some separability, the FAC's information should be "good enough" to accomplish the mission. Furthermore, most people believe that the goal of improving CID systems and architectures is to eliminate fratricide. This misconception has driven the direction of CID system development towards cooperative CID systems.
In an era of reduced defense budgets, there is increased pressure to reuse any available technology or capability to the extent possible. For data fusion applications, this requirement can lead to situations where one would like to fuse the outputs of disparate individual algorithms; ideally, this would be done in the most quantitative way possible. This paper reviews, integrates, and comments on various prior works in the data fusion, remote sensing, and character recognition communities that are helpful to the data fusion algorithm/process designer dealing, in particular, with target identification and classification problems. It is shown that generalized voting and rank-based methods may be useful in these cases; the issue of source reliability is also addressed, and methods for incorporating assigned reliabilities are described.
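As a concrete illustration of the rank-based voting methods surveyed (not an algorithm taken from the paper), the following sketch implements a Borda-count fuser with optional per-source reliability weights; all names and the weighting scheme are invented for illustration.

```python
from collections import defaultdict

def borda_fuse(rankings, reliabilities=None):
    """Fuse classifier outputs given as best-first ranked label lists.

    A label at position p among n candidates earns (n - 1 - p) Borda
    points, optionally scaled by the source's assigned reliability;
    the label with the highest total wins."""
    if reliabilities is None:
        reliabilities = [1.0] * len(rankings)
    scores = defaultdict(float)
    for ranking, w in zip(rankings, reliabilities):
        n = len(ranking)
        for pos, label in enumerate(ranking):
            scores[label] += w * (n - 1 - pos)
    return max(scores, key=scores.get), dict(scores)
```

Down-weighting an unreliable source can change the fused decision even when the raw majority does not, which is the point of incorporating assigned reliabilities.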
Linearly combining multiple adaptive two-class classifiers provides the capability to integrate many individual classifiers, each of which may possess arbitrarily complex decision boundaries, into a simple, functioning classification system. Adaptive two-class classifiers and the linear combination approach are described. In benchmarking studies, this approach has been shown to require roughly two orders of magnitude fewer computational resources than a classical statistical approach; we present comparative resource requirements for performance-equivalent algorithms. The adaptive algorithm used in the benchmarking study is widely known for its pattern recognition capabilities, and it can be applied to range profile data to perform one-dimensional target identification. In this application, two characteristics of the approach are of particular benefit in an airborne environment: robust performance in the face of varying SNR, and robust performance in the face of rough aspect-angle information. We discuss the theory behind the characteristics that provide these benefits. Finally, with respect to the cost of maintaining a fielded system, a linear combination approach to adaptive ID algorithms provides special benefits. Chief among them is the ability to rapidly update a fielded system with new information: the changes required to add new targets to a given system are equivalent to downloading a new database. The nature of this algorithm allows for low-risk, low-cost upgrades, and very rapid turnaround will be possible when only a small percentage of new targets is to be added to the capabilities of a fielded system.
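The underlying classifier is not specified in the abstract. As a generic illustration of building a multi-class system from combined two-class classifiers, the sketch below trains one least-squares linear discriminant per class pair and combines their decisions by voting; it is an assumption-laden stand-in, not the benchmarked algorithm.

```python
import numpy as np

def pairwise_linear_classifiers(X, y):
    """Train one linear two-class discriminant per class pair (least
    squares with a bias term), then classify new points by voting
    across the pairwise decisions."""
    classes = sorted(set(y))
    models = {}
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            mask = (y == a) | (y == b)
            Xa = np.c_[X[mask], np.ones(mask.sum())]      # append bias column
            t = np.where(y[mask] == a, 1.0, -1.0)
            w, *_ = np.linalg.lstsq(Xa, t, rcond=None)
            models[(a, b)] = w

    def predict(x):
        votes = {c: 0 for c in classes}
        xa = np.append(x, 1.0)
        for (a, b), w in models.items():
            votes[a if xa @ w > 0 else b] += 1
        return max(votes, key=votes.get)

    return predict
```

Adding a new target class here only means training its pairwise discriminants and shipping the new weights, which mirrors the "download a new database" upgrade path the abstract describes.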
Experience with large measured and synthetic high-range-resolution (HRR) radar signature databases has shown the need for extensive data screening. Multiple sources of error adversely affect the quality of the signatures and can remain undetected without an intensive quality-control effort. These errors can degrade subsequent data processing, signal processing, and target identification research and development. This paper gives examples of errors that have been discovered in a measured HRR air-target signature database and in a synthetic HRR signature database currently being generated by the U.S. Air Force.
Extensive analyses have been conducted to find ways to improve the identification (ID) performance of a quadratic classifier. The analyses involved applying several different processing techniques to the discriminants of the classifier, with the result being, in some cases, significant increases in the ID performance of the classifier. The results of these analyses provide clear indications of the degree and nature of the performance improvement that can be obtained with these techniques, taken individually and in combination.
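The abstract does not define the classifier's discriminants. For reference, a standard quadratic (Gaussian) classifier computes per-class discriminants g_c(x) = -1/2 log|S_c| - 1/2 (x-m_c)'S_c^{-1}(x-m_c) + log P(c); the sketch below is this textbook form, with a small ridge term added for numerical stability (an illustrative choice, not from the paper).

```python
import numpy as np

def qda_train(X, y):
    """Fit one Gaussian per class: mean, inverse covariance,
    log-determinant, and log prior (with a tiny ridge for stability)."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        m = Xc.mean(axis=0)
        S = np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1])
        params[c] = (m, np.linalg.inv(S), np.linalg.slogdet(S)[1],
                     np.log(len(Xc) / len(X)))
    return params

def qda_predict(x, params):
    """Pick the class with the largest quadratic discriminant."""
    def g(c):
        m, Sinv, logdet, logp = params[c]
        d = x - m
        return -0.5 * logdet - 0.5 * d @ Sinv @ d + logp
    return max(params, key=g)
```

Post-processing of the discriminant values g_c(x), rather than of the raw features, is the kind of intervention the abstract's analyses explore.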
A functional description of a fully adaptive, stationary target indication signal processing approach for the radar detection of stationary targets in ground clutter is given.
Pulse compression waveforms and processing techniques for high-resolution signature formation are typically constrained by hardware limitations and target motion. This paper serves as a summary comparison of three modern pulse compression techniques designed to perform under different hardware limitations: (1) linear frequency modulation (matched filtering), (2) stretch processing, and (3) stepped-frequency waveforms. Trade-offs exist among the three techniques that limit range-window sizes, produce range aliasing, set minimum sampling rates and instantaneous bandwidths, and define range-Doppler ambiguities and distortions. This paper focuses on the mathematical development of the three techniques and relates the results to hardware requirements and range-Doppler ambiguities and distortions.
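As a worked illustration of the first technique, linear-FM pulse compression by matched filtering can be sketched as follows. This is textbook processing rather than anything specific to the paper, and the waveform parameters in the test are arbitrary.

```python
import numpy as np

def lfm_pulse(B, T, fs):
    """Baseband linear-FM chirp of bandwidth B and duration T, sampled
    at fs: phase(t) = pi*(B/T)*t^2 for t in [-T/2, T/2)."""
    n = int(round(T * fs))
    t = (np.arange(n) - n / 2) / fs
    return np.exp(1j * np.pi * (B / T) * t**2)

def pulse_compress(rx, pulse):
    """Matched filtering: correlate the received samples with the chirp
    (convolution with the conjugated, time-reversed replica)."""
    return np.convolve(rx, np.conj(pulse[::-1]), mode='same')
```

For a time-bandwidth product BT, the filter collapses the long chirp into a narrow peak of coherent gain equal to the pulse length in samples, which is the mechanism that trades pulse energy against range resolution.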
A fuzzy logic clustering algorithm that classifies a given image into targets and backgrounds is presented. The algorithm forms clusters and is trained without supervision; the clustering is done on the basis of the statistical properties of the set of inputs. The algorithm features an adaptive mechanism for selecting the number of clusters, so the number of clusters need not be known a priori, as well as an adaptive threshold. The problem of threshold selection is considered and the convergence of the algorithm is shown. An example is given to illustrate the application of the algorithm.
One goal of sensor-fusion methods is the integration of data of various types into a common usable form. Here we seek a uniform framework for the following three types of data: (1) numerical (e.g., x equals 74.1); (2) interval (e.g., x equals [73.9,75.2]); and (3) fuzzy (e.g., x equals tall, where tall is described by a suitable membership function). The problem context of this paper is clustering, which is the problem of separating a set of objects into self-similar groups, but other types of data analysis can be handled similarly. Earlier work on this problem has produced both parametric and nonparametric approaches. The parametric approach is only possible in cases when all the fuzzy data have membership functions coming from a single parametric family of curves, and in that case, the specific parameter values provide numerical data that can easily be used with standard clustering techniques such as the fuzzy c-means algorithm. The more difficult and interesting problem involves the nonparametric case, where there is not a common parametric form for the membership functions. The earlier nonparametric approach produces numerical data for clustering via necessity and possibility values which are derived using a set of 'cognitive landmarks'. The main contribution of this note is in presenting a new, simpler nonparametric approach that derives a common usable form of data directly from the membership functions. The new approach is described and then demonstrated using a specific example.
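The parametric route described above feeds its derived numerical data into standard clustering such as fuzzy c-means. For reference, a minimal FCM sketch (the textbook algorithm, with illustrative defaults; not the note's contribution) is:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Textbook fuzzy c-means: alternate centroid and membership updates
    minimising sum_{i,k} u_ik^m ||x_k - v_i||^2."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                               # memberships sum to 1
    for _ in range(iters):
        W = U**m
        V = (W @ X) / W.sum(axis=1, keepdims=True)   # weighted centroids
        D = np.linalg.norm(X[None] - V[:, None], axis=2) + 1e-12
        U = D**(-2 / (m - 1)) / (D**(-2 / (m - 1))).sum(axis=0)
    return U, V
```

Each data point receives a graded membership in every cluster rather than a hard label, which is what makes FCM a natural consumer for fuzzy-valued inputs once they are reduced to numbers.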
An approach to the long-range automatic detection of vehicles, using multi-sensor image sequences, is presented. The algorithm was tested on a database of six sequences acquired under diverse operational conditions. The vehicles in the sequences can be either moving or stationary, and the sensors themselves can be moving. The presented approach consists of two parts. The first part detects targets in single images using seven texture measurements; the values of some of the textural features at a target position differ from those found in the background. To perform a first classification between target and non-target pixels, linear discriminant analysis is applied to one test image for each type of sensor. Because the features are closely linked to the physical properties of the sensors, the discriminant function also gives good results on the remainder of the database sequences. By applying the discriminant function to the feature space of textural parameters, a new image is created whose local maxima correspond to probable target positions. To reduce the false alarm rate, any available prior knowledge about possible target size and aspect ratio is incorporated using a region-growing procedure around the local maxima. The second part of the algorithm detects moving targets. First, any motion of the sensor itself needs to be detected; this detection is based on a comparison of the spatial co-occurrence matrix within one image and the temporal co-occurrence matrix between successive images. If sensor motion is detected, it is estimated using a multi-resolution Markov random field model, with available prior knowledge about the sensor motion used to simplify the estimation. The motion estimate is used to warp past images onto the current one, and moving targets are detected by thresholding the difference between the original and warped images. Temporal and spatial consistency are used to reduce the false alarm rate.
The problem of detecting a signal in noise, as analyzed in the literature, typically assumes complete statistical knowledge of the received signal. However, in radar, sonar, and other detection problems, the signal is embedded in noise whose characteristics are not completely known and change with time. In such situations, the test statistic must be based on some invariant characteristics of the noise density function rather than on a specific form of that function. In this paper, a general problem of signal detection in a background of unknown Gaussian noise is addressed; such a noise density function approximates the physical noise encountered in different situations. Using the techniques of statistical hypothesis testing, a generalized maximum likelihood ratio (GMLR) test is derived. This test is invariant to intensity changes in the noise background and achieves a fixed probability of false alarm; thus, operating in accordance with the local noise situation, the test is adaptive. It is shown that the test is uniformly most powerful invariant (UMPI) and robust against departures from normality in the following sense: it remains UMPI over a broad class of distributions, and its null distribution under any member of the class is the same as under normality.
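The GMLR test itself is not reproduced in the abstract. A familiar example of a detector invariant to noise intensity is the normalized matched-filter statistic t = |<s, x>|^2 / (||s||^2 ||x||^2): scaling the data by any constant leaves t unchanged, so the noise-only distribution, and hence the false-alarm probability, does not depend on the unknown noise power. The sketch below shows that statistic as an illustration, not necessarily the paper's.

```python
import numpy as np

def glrt_statistic(x, s):
    """Normalised matched-filter statistic
        t = |<s, x>|^2 / (||s||^2 * ||x||^2).
    Invariant to x -> c*x, so the threshold achieving a given
    false-alarm rate is independent of the noise intensity."""
    num = np.abs(np.vdot(s, x)) ** 2
    return num / (np.vdot(s, s).real * np.vdot(x, x).real)
```

Geometrically, t is the squared cosine of the angle between the data and the signal template: near 1 when the signal dominates, near 0 for noise alone in high dimensions.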
Keywords: noise, broad class of distributions, adaptive test, signal detection
A method of visual-infrared sensor fusion for target recognition is described. The fusion system introduced by Huntsberger is discussed in detail. Six types of bimodal neurons are defined, and a three-layer neural network that integrates the two inputs from different sensors is introduced. Experimental results are presented.
Foveal active vision features imaging sensors and processing with graded acuity, coupled with context-sensitive gaze control. The wide field of view of peripheral vision reduces target search time, but its low acuity makes it susceptible to preliminary false alarms when operating in environments with structured clutter. In this paper, we present a foveal active vision technique for multiresolution cueing that detects regions of interest (ROIs) at coarse resolution and subsequently interrogates them at progressively higher resolution until the ROIs are disambiguated. A hierarchical foveal machine vision framework with rectilinear retinotopology is used. A two-stage detector uses multiscale shape matching to identify potential targets and a chain of neural networks to filter out false alarms. This context-sensitive, coarse-to-fine approach minimizes the number of computationally expensive high-acuity interrogations required, while preserving performance. Results from our experiments using second-generation forward-looking infrared imagery are presented.
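The coarse-to-fine cueing loop can be caricatured in a few lines: cue ROIs on a low-resolution version of the image, then spend high-acuity processing only on the cued blocks. The block-mean pyramid and both thresholds below are illustrative stand-ins for the paper's shape matcher and neural-network filters, not the authors' detector.

```python
import numpy as np

def coarse_to_fine_rois(img, factor=4, coarse_thresh=0.5, fine_thresh=0.8):
    """Cue ROIs on block means (coarse acuity), then interrogate only
    the cued blocks at full resolution (illustrative thresholds)."""
    h, w = img.shape
    coarse = img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    rois = []
    for i, j in np.argwhere(coarse > coarse_thresh):
        block = img[i * factor:(i + 1) * factor, j * factor:(j + 1) * factor]
        if block.max() > fine_thresh:              # high-acuity interrogation
            rois.append((int(i) * factor, int(j) * factor))
    return rois
```

Only blocks passing the cheap coarse cue ever pay for the expensive fine check, which is the economy the graded-acuity architecture is built around.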