The study investigated the sensitivity of the instance-based-learning (IBL) driven multi-source information fusion process to the underlying distance metric. An audio-visual system for recognition of spoken French vowels is used as an example for this investigation. Three different distance measures, namely the Euclidean, city block, and chessboard metrics, are employed for this initial foray into metric sensitivity analysis. In this example, the test phase encompasses a broader range of noise environments for the audio signal than the training phase, so the system is exercised in both trained and untrained noise regimes. Under the untrained regime, both interpolation and extrapolation (off-nominal) scenarios are considered. In the former, the signal-to-noise ratio in the test phase lies within the range used in the training phase but not at any of the specific values trained on; in the latter, it lies outside the training range. It is observed that while neither of the single-sensor decision systems is very sensitive to the choice of metric, the fused decision system is significantly more sensitive to this choice. The city block metric offers better performance than the other two for the fused audio-visual system across most of the spectrum of noise environments, except under extreme off-nominal conditions, where the Euclidean metric performs slightly better. The chessboard metric offers the lowest performance across the entire test range. The lack of training in the interpolation scenario has a noticeably strong effect on audio performance under the chessboard metric.
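As a point of reference, the three distance measures named above differ only in how per-coordinate differences are aggregated. The sketch below is illustrative only; the function name and signature are this example's own, not the study's code.

```python
import numpy as np

def distance(a, b, metric="cityblock"):
    """Distance between two feature vectors under the three metrics
    compared in the study (illustrative sketch)."""
    d = np.abs(np.asarray(a, float) - np.asarray(b, float))
    if metric == "euclidean":    # L2: square root of sum of squares
        return float(np.sqrt(np.sum(d ** 2)))
    if metric == "cityblock":    # L1: sum of absolute differences
        return float(np.sum(d))
    if metric == "chessboard":   # L-infinity: largest single difference
        return float(np.max(d))
    raise ValueError(f"unknown metric: {metric}")
```

For the difference vector (3, 4), for example, the three metrics give 5, 7, and 4 respectively, which is why an instance-based classifier's nearest neighbors can change with the metric.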
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
This paper presents methods to boost the classification rate in decision fusion with partially redundant information. This is accomplished by utilizing knowledge of certain classes' known mis-classifications to systematically modify class output. For example, if it is known beforehand that tool A often mis-classifies class 1 as class 2, then it is prudent to integrate that information into the reasoning process when class 1 is indicated by tool B and class 2 is observed by tool A. In particular, this preferred mis-classification information is contained in the asymmetric (cross-correlation) entries of the confusion matrix. An operation we call cross-correlation is performed in which this information is explicitly used to modify class output before the first fused estimate is calculated. We investigate several methods for cross-correlation and discuss the advantages and disadvantages of each. We then apply the concepts introduced to the diagnostic realm, where we aggregate the output of several different diagnostic tools. We show how the proposed approach fits into an information fusion architecture and finally present results motivated by diagnosing on-board faults in aircraft engines.
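One way to realize such a cross-correlation step is to column-normalize a tool's confusion matrix into P(true class | predicted class) and use it to redistribute the tool's raw class scores. This is a minimal sketch of the idea under that assumption; the paper compares several variants, none of which is reproduced here.

```python
import numpy as np

def cross_correlate(scores, confusion):
    """Redistribute a tool's class scores using its confusion matrix.

    confusion[i][j] counts true class i predicted as class j; the
    column-normalized matrix approximates P(true=i | predicted=j).
    Weighting the raw scores by these probabilities shifts evidence
    toward classes the tool is known to confuse (sketch only).
    """
    C = np.asarray(confusion, float)
    col = C / C.sum(axis=0, keepdims=True)  # P(true | predicted)
    return col @ np.asarray(scores, float)
```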
Analysts responsible for supporting time-dominated threat decisions are faced with a growing volume of sensor data. Most efforts to increase discrimination among targets using multiple types of sensors encounter the same problems:
· Sensor data are received in large volumes.
· Sensor data are highly variable.
· Signature features are represented by many dimensions.
· Feature values are inter-correlated, random, or not related to target differences.
· Decision rules for classifying new target data are difficult to define.
This paper describes a new methodology for solving several of these problems: selecting signature features, reducing variability, increasing discrimination accuracy, and developing decision rules for classifying new target signatures. The results from using a combination of exploratory and multi-variate statistical techniques show potential improvements over the traditional Dempster-Shafer approach. This project uses data from operational prototype sensors and vehicles of interest for threat analysis. Acoustic and seismic sensor data came from an unattended ground sensor and three military vehicles. Although the resulting algorithms are specific to the data set, the data screening and fusion methods tested in this project may be useful with other types of sensor and target data.
We investigate bagging of k-NN classifiers under varying set sizes. For certain set sizes, bagging often under-performs due to population bias. We propose a modification to the standard bagging method designed to avoid population bias. The modification leads to substantial performance gains, especially under very small sample sizes. The choice of modification method depends on whether prior knowledge exists. If no prior knowledge exists, then ensuring that all classes are represented in the bootstrap set yields the best results.
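The no-prior-knowledge modification described above (every class guaranteed to appear in each bootstrap set) might be sketched as a stratified bootstrap; the helper below is this example's own, not the authors' code.

```python
import random

def stratified_bootstrap(X, y, rng=random):
    """Bootstrap sample guaranteed to contain every class: draw one
    instance per class first, then fill the remainder by ordinary
    sampling with replacement."""
    n = len(X)
    by_class = {}
    for i, label in enumerate(y):
        by_class.setdefault(label, []).append(i)
    idx = [rng.choice(members) for members in by_class.values()]
    idx += [rng.randrange(n) for _ in range(n - len(idx))]
    return [X[i] for i in idx], [y[i] for i in idx]
```

Each bag drawn this way has the same size as the original sample, but rare classes can no longer vanish from it, which is the failure mode behind the population bias described above.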
Omnidirectional visual sensors have recently been successfully introduced to robot navigation, providing improved localization performance and more stable path-following behavior. As a consequence of the sensor characteristics, occlusion of the entire panoramic visual field becomes very unlikely. The presented work exploits these characteristics, providing a Bayesian framework to gain even partial evidence about the current location by applying decision fusion to the multidirectional visual context. The panoramic image is first partitioned into a fixed number of overlapping unidirectional camera views, i.e., appearance sectors. For each sector image, one then learns a posterior distribution over potential locations within a predefined environment. The ambiguity in a local sector interpretation is then resolved by Bayesian reasoning over the spatial context of the current position, discriminating occlusions that do not fit the appearance model of subsequent sector views. The results from navigation experiments in an office using a robot equipped with an omnidirectional camera demonstrate that the Bayesian reasoning allows highly occlusion-tolerant localization, enabling visual navigation of autonomous robots even in crowded places such as offices, factories and urban environments.
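The per-sector combination step can be illustrated as a Bayesian product of sector posteriors over candidate locations, assuming conditional independence of sectors given the location; the paper's spatial-context reasoning and occlusion discrimination are not shown.

```python
import numpy as np

def fuse_sector_posteriors(posteriors):
    """Fuse per-sector posteriors over candidate locations by
    elementwise product and renormalization (naive-Bayes-style
    combination; a sketch, not the paper's full reasoning)."""
    p = np.prod(np.asarray(posteriors, float), axis=0)
    return p / p.sum()
```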
This paper proposes a Bayesian multi-sensor object localization approach that keeps track of the observability of the sensors in order to maximize the accuracy of the final decision. This is accomplished by adaptively monitoring the mean-square error of the localization system's results. Knowledge of this error and of the distribution of the system's object localization estimates allows the result of each sensor to be scaled and combined in an optimal Bayesian sense. It is shown that under conditions of normality, the Bayesian sensor fusion approach is directly equivalent to a single-layer neural network with a sigmoidal non-linearity. Furthermore, spatial and temporal feedback in the neural networks can be used to compensate for practical difficulties such as the spatial dependencies of adjacent positions. Experimental results using 10 binary microphone arrays yield an order-of-magnitude improvement in localization error for the proposed approach when compared to previous techniques.
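Under the normality assumption stated above, scaling each sensor by its tracked error reduces to the standard precision-weighted (inverse-MSE) combination. The one-dimensional sketch below illustrates that weighting; the sigmoidal-network equivalence derived in the paper is not reproduced here.

```python
def fuse_estimates(estimates, mses):
    """Combine scalar sensor estimates, weighting each by the inverse
    of its tracked mean-square error (minimum-MSE rule for independent
    Gaussian errors; illustrative sketch)."""
    weights = [1.0 / m for m in mses]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
```

A sensor whose tracked MSE grows (poor observability) is automatically down-weighted in the fused decision.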
The collection and management of vast quantities of meteorological data, including satellite-based as well as ground-based measurements, presents great challenges in the optimal usage of this information. To address these issues, the Army Laboratory has developed neural networks for combining multi-sensor meteorological data for Army battlefield weather forecasting models. As a demonstration of this data fusion methodology, multi-sensor data were taken from the Meteorological Measurement Set Profiler (MMSP-POC) system and from satellites with orbits coinciding with the geographical locations of interest. The MMS Profiler-POC comprises a suite of remote sensing instrumentation and surface measuring devices. Neural network techniques were used to retrieve temperature and wind information from a combination of polar-orbiter and/or geostationary satellite observations and ground-based measurements. Back-propagation neural networks were constructed which use satellite radiances, simulated microwave radiometer measurements, and other ground-based measurements as inputs and produce temperature and wind profiles as outputs. The network was trained with rawinsonde measurements used as truth values. The final outcome will be an integrated, merged temperature/wind profile from the surface up to the upper troposphere.
The development of efficient semi-automatic systems for heterogeneous information fusion is currently a great challenge. Efficiency can be expressed as system openness, system evolution capabilities, and system performance. A multi-agent architecture can be designed to respect the first two efficiency constraints. As for the third constraint, performance, the key point is the interaction between each information component of the system. The context of this study is the development of a semi-automatic information fusion system for cartographic feature interpretation. Combining heterogeneous sources of information such as expert rules and strategies, domain models, image processing tools, interpolation techniques, etc., completes the system development task. The information modeling and fusion are performed within evidential theory concepts. The purpose of this article is to propose a learning approach for interaction-oriented multi-agent systems. The optimization of the interaction weights is tackled with the genetic algorithm technique because it provides a solution for the whole set of weights at once. In this paper, the context of the multi-agent system development is presented first. The need for such a system and its parameters is explained. A brief overview of learning techniques leads to genetic algorithms as the choice for the learning of the developed multi-agent system. Two approaches are designed to measure the system's fitness, based on either binary or fuzzy decisions. The conclusion presents suggestions for further research in the area of multi-agent system learning with genetic algorithms.
A number of pixel-level image fusion schemes have been proposed in the past which combine registered input sensor images into a single fused output image. The two general objectives that underpin the operation of these schemes are a) the transfer of all visually important information from the input images into the fused image and b) the minimization of undesirable distortions and artifacts which may be generated in the fused image. Fusion is usually achieved by i) the decomposition of input images into representations of their spectral bands and ii) a selection process which transfers information from the input bands to yield the required representation of a single fused output image. Furthermore, decomposition is often based on multi-resolution pyramidal representations, and the selection process operates on corresponding input image pyramid levels using selection templates which focus on local spectral characteristics. The performance of such a multi-resolution pixel-level image fusion system depends primarily on the actual decomposition and selection algorithms used. Thus, for a given decomposition/selection arrangement, fusion performance is dependent on the pyramid size (i.e., number of levels) and the template size. Pyramid and template sizes, on the other hand, greatly influence the system's computational complexity. This paper is concerned with the performance optimization/characterization of several multi-resolution image fusion schemes in general, and with performance/complexity trade-offs in particular. Performance is measured using a subjectively meaningful, objective fusion metric recently proposed by the authors, which is based on the preservation of image edge information. Fusion systems based on derivatives of the Gaussian low-pass pyramid and the discrete wavelet transform are examined, and their performance versus decomposition/selection parameters is defined and compared. The performance/algorithmic-complexity results presented for these multi-resolution fusion systems clearly highlight their strengths and weaknesses.
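A single-level caricature of the decompose/select scheme described above may help fix ideas: split each image into approximation and detail bands, keep the larger-magnitude detail per pixel, and average the approximations. The 3x3 box blur stands in for the Gaussian pyramid or DWT decompositions of the real systems, and is purely this sketch's simplification.

```python
import numpy as np

def fuse_select_max(a, b):
    """One-level select-max pixel fusion sketch: low-pass
    approximations are averaged, and the high-pass detail with the
    larger magnitude is kept at each pixel."""
    def box_blur(img):
        pad = np.pad(img, 1, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in (0, 1, 2):            # sum the 3x3 neighborhood
            for dx in (0, 1, 2):
                out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / 9.0
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    la, ha = box_blur(a), a - box_blur(a)
    lb, hb = box_blur(b), b - box_blur(b)
    detail = np.where(np.abs(ha) >= np.abs(hb), ha, hb)
    return (la + lb) / 2.0 + detail
```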
Many automatic target recognition, detection, and identification problems suffer from a lack of adequate data resolution, especially among infrared imaging systems. A number of super-resolution reconstruction algorithms have been proposed. The challenge is how to recapture additional high-frequency information from adjacent frames in an image sequence that contain slightly different, but unique, information. In addition, real-world infrared image sequences are noisy, low in contrast, and of low spatial resolution. Since broad-band noise mainly affects the high-frequency information to be recaptured, the challenge is to ensure that these high-frequency data are not smoothed out by the regularization. This paper presents a new wavelet-domain approach for super-resolution image reconstruction of infrared (IR) sequences. Minimizing the regularization cost function in the wavelet domain forms a multi-scale high-resolution estimate. The effects of noise are incorporated into the iterative process in the proposed method. The estimation errors in the high- and low-frequency bands are processed separately to address variable correlations among the observed images and slow convergence. The proposed approach was tested on infrared aerial image sequences provided by the Defense Research Establishment in Valcartier. Experimental results show that a significant increase in spatial resolution can be achieved by the proposed approach while the noise is smoothed out.
Mutual information (MI) has been widely used as a similarity measure for many multi-modality image registration problems. The MI of two registered images is assumed to attain its global maximum. One major problem in implementing this technique is the lack of an efficient yet robust global optimizer. The direct use of existing global optimizers such as simulated annealing (SA) or genetic algorithms (GA) may not be feasible in practice, since they suffer from the following problems: 1) it is unclear when the algorithm should be terminated, and 2) the maximum found may be a local maximum. These problems can be avoided if the maximum found can be identified as the global maximum by means of a test. In this paper, we propose a global maximum testing algorithm for the MI-based registration function. Based on this test, a cooperative search algorithm is proposed to increase the capture range of any local optimizer. Here we define the capture range as the collection of points in the parameter space starting from which a specified local optimizer can reach the global optimum successfully. When used in conjunction with these two algorithms, a global optimizer like GA can be adopted to yield an efficient and robust image registration procedure. Our experiments demonstrate the successful application of our procedure.
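The similarity measure itself is straightforward to estimate from a joint intensity histogram; a registration search would maximize this quantity over transform parameters. A minimal sketch (the bin count is an arbitrary choice of this example):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI of two same-size images, estimated from their joint
    intensity histogram (natural-log units)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of a
    py = p.sum(axis=0, keepdims=True)   # marginal of b
    nz = p > 0                          # avoid log(0) terms
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```

An image compared with itself yields its own entropy, the global maximum over all re-alignments, which is the value the testing algorithm above tries to certify.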
Automatic determination of landmarks is a challenge for image registration, especially in an unfriendly and noisy environment such as the battlefield and with feature inconsistency between multiple sensors. In contour-based multi-sensor image registration, free-form curve matching does not require explicit feature correspondence. However, the approach can fail when the percentage of outliers is too high, and human intervention is sometimes still needed to select good features. We introduce in this paper two new approaches to improve the robustness of feature extraction for automatic image registration. The method is based on the fact that the features must be robust to changes between the two images. The feature extraction proposed in this paper is divided into two steps. First, a method is presented to extract the horizon line by edge tracking. The horizon line is the longest contour in a ground image, but it can be fragmented by noise or clutter. We fill the horizon line gaps by means of an association of regional and contour information that makes the edge tracking robust to noise. Second, a coarse curve matching is proposed in order to reduce the number of outliers drastically. Experimental results are shown for robust and automatic visible/far-infrared battlefield image registration.
In the field of pattern recognition from satellite images, existing road extraction methods have been either too specialized or too time consuming. The challenge has been to develop a general and close-to-real-time road extraction method. This study falls in this perspective and aims at developing a close-to-real-time semi-automatic system able to extract linear planimetric features (including roads). The major concern of this study is to combine the most efficient tools to deal with the road primitive extraction process, in order to handle multi-resolution and multi-type raw images. Hence, this study brought along a new fusion model characterized by the combination of operator input points (in 2D or 3D), fuzzy image filtering, cubic natural splines, and the A* algorithm. First, a cubic natural spline interpolation of the operator points is used to parameterize the A* algorithm's cost function, with the consequence of restricting the search area. Second, the heuristic function of the same algorithm is combined with fuzzy filtering, which proves to be a fast and efficient tool for selecting the most promising primitive points. The combination of the cost function and the heuristic function leads to a limited number of hypothetical paths, hence decreasing the computation time. Moreover, the combination of the A* algorithm and the splines leads to a new way to solve perceptual grouping problems. Results related to the problem of feature discontinuity suggest new research perspectives in relation to noisy areas (urban) as well as noisy data (radar images).
We present an architecture for the fusion of multiple medical image modalities that enhances the original imagery and combines the complementary information of the various modalities. The design principles follow the organization of the color vision system in humans and primates. Mainly, the design of within-modality enhancement and between-modality combination for fusion is based on the neural connectivity of the retina and visual cortex. The architecture is based on a system developed for night vision applications while the first author was at MIT Lincoln Laboratory. Results of fusing various modalities are presented, including: a) fusion of T1-weighted and T2-weighted MRI images, b) fusion of PD, T1-weighted, and T2-weighted MRI images, and c) fusion of SPECT and MRI/CT. The results demonstrate the ability to fuse such disparate imaging modalities with regard to information content and complementarity. They show how both brightness and color contrast are used in the resulting color-fused images to convey information to the user. In addition, we demonstrate the ability to preserve the high spatial resolution of modalities such as MRI even when combined with poor-resolution images such as those from SPECT scans. We conclude by motivating the use of the fusion method to derive more powerful image features to be used in segmentation and pattern recognition.
The benefits and problems of a multi-camera object localization system utilizing Spatial Likelihood Functions (SLF) are explored. This method utilizes the angular extent of objects perceived by different cameras in order to find the region in which they intersect; this region ideally corresponds to the original location of the objects. It is shown that as long as the number of cameras is greater than the number of objects, an efficient camera fusion algorithm utilizing SLFs can successfully localize the objects. In certain situations, especially with more objects than cameras, false objects will appear among the correctly localized objects. Several techniques to identify and remove the false objects are proposed, including a heuristic-based ray tracing approach and other multi-modal techniques. The effectiveness of the camera fusion and false object removal approaches is illustrated in the context of several examples.
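On a discretized ground plane, the SLF intersection step may be sketched as an elementwise product of per-camera likelihood maps, with peaks of the fused map as candidate object locations. The grid representation is this sketch's assumption, and the false-object removal techniques are not shown.

```python
import numpy as np

def fuse_slf(slf_maps):
    """Combine per-camera spatial likelihood maps by elementwise
    product and renormalize; cells where all cameras agree retain
    high likelihood, which is where objects are localized."""
    fused = np.ones_like(np.asarray(slf_maps[0], float))
    for m in slf_maps:
        fused = fused * np.asarray(m, float)
    return fused / fused.sum()
```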
Synthetic aperture radar (SAR), electro-optical (EO) imagery, and moving-target indicators (MTI) are sensors utilized in prime surveillance/reconnaissance systems. Each sensor system can generate substantial volumes of data requiring vast computational resources to support exploitation; the problem is only aggravated in systems which support multiple-sensor evidence fusion components. The dynamic nature of military operational settings often makes it difficult to efficiently apply the computational resources necessary for successful exploitation. Current limited research suggests that dynamic, scalable, heterogeneous computer systems may be an avenue for developing successful exploitation systems of the future. Existing research and development into systems of this type has not explicitly addressed their use for tactical imagery exploitation systems. The Collaborative Heterogeneous Operations Prototype (CHOP), as part of DARPA's Scalable Tactical Imagery eXploitation (STIX) program, conducted a review of existing state-of-the-art commercial and non-commercial middleware and metasystem technology as applicable to STIX's technical objectives (the architectural characterization of a multiple-source exploitation system appropriate for use in a dynamic military operational setting). That review led to the design and development of a heterogeneous metasystem demonstrating state-of-the-art near-real-time exploitation technology for solving multiple-source imagery exploitation problems. The system was evaluated against measures of effectiveness and measures of performance applied to previously built and fielded imagery exploitation systems; its performance was consistent with those systems. CHOP STIX, however, offered automatic dynamic system reconfiguration to maximize system performance in an environment of changing mission requirements, sensor selection, data load, and computational resource availability. These characteristics made CHOP STIX appropriate for use in a wider range of operational settings than existing systems.
UAVs demand more accurate fault accommodation for their mission manager and vehicle control system in order to achieve a reliability level comparable to that of a piloted aircraft. This paper applies multi-classifier fusion techniques to achieve the necessary performance of the fault detection function for the Lockheed Martin Skunk Works (LMSW) UAV Mission Manager. Three different classifiers that meet the design requirements of the UAV's fault detection are employed. The binary decision outputs from the classifiers are then aggregated using three different classifier fusion schemes, namely majority vote, weighted majority vote, and Naive Bayes combination. All three schemes are simple and need no retraining. The fusion schemes (except the majority vote, which gives roughly the average performance of the three classifiers) show classification performance better than or equal to that of the best individual classifier. An unavoidable correlation between the classifiers with binary outputs is observed in this study. We conclude that it is this correlation between the classifiers that prevents the fusion schemes from achieving even better performance.
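The two voting schemes named above are simple enough to state in a few lines. The accuracy-derived weights suggested for the weighted variant are this sketch's assumption, not the paper's exact choice.

```python
def majority_vote(decisions):
    """Fuse binary (0/1) classifier decisions by simple majority."""
    return int(2 * sum(decisions) > len(decisions))

def weighted_majority_vote(decisions, weights):
    """Weighted variant: each classifier votes +w for 1, -w for 0;
    weights could be derived from estimated classifier accuracies."""
    score = sum(w if d else -w for d, w in zip(decisions, weights))
    return int(score > 0)
```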
In this paper, new operators for fusing logical knowledge bases (KBs) are proposed. They are defined in such a way that they can handle KBs that must be interpreted under forms of the closed-world assumption. Such assumptions implicitly augment the KBs with additional information that could not be deduced using the standard logical deductive apparatus. More precisely, we extend recent previous work on the logical fusion of knowledge to handle such KBs. We focus on the model-theoretic definition of fusion operators to show their limits. In particular, the basic logical concept of a model appears too coarse-grained. We solve this problem and propose new operators that cover a whole family of fusion approaches in the presence of variants of the closed-world assumption.
The aim of this paper is to give a homogeneous and simple framework in which to present information fusion concepts. The main concept presented here is the definition of the information element. This concept is then illustrated through the general scheme of pattern recognition systems. Different types of information imperfection are then illustrated. Finally, information fusion concepts and fusion architectures are illustrated.
This paper presents a study of the multi-sensor management problem for multi-target tracking. Collaboration between several sensors observing the same target means that they are able to fuse their data during the information process. This possibility must therefore be taken into account when computing the optimal sensor-target association at each time step. To solve this problem for a real large-scale system, one must consider both the information aspect and the control aspect of the problem. One way to unify these aspects is to use a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm used in our model is that of Grime, which relaxes the usual fully-connected hypothesis. By fully connected, one means that information in a fully-connected system is totally distributed everywhere at the same moment, which is unacceptable for a real large-scale system. We model the distributed assignment decision with a greedy algorithm: each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the fully-connected hypothesis is that the sensors' information sets are not the same at each time step, producing an information asymmetry in the system. The assignment algorithm uses local knowledge of this asymmetry. By testing the reactions and the coherence of our system's local assignment decisions against maneuvering targets, we show that decentralized assignment control remains possible even though the system is not fully connected.
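The greedy local-assignment step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gain matrix, function names, and tie-handling are all assumptions.

```python
def greedy_assign(gain):
    """Greedy sensor-to-target assignment: repeatedly pick the
    remaining (sensor, target) pair with the highest information gain,
    never reusing a sensor or a target.

    gain[s][t] is the (hypothetical) information gain of assigning
    sensor s to target t, as estimated from the sensor's local,
    possibly asymmetric, information set."""
    pairs = sorted(((g, s, t) for s, row in enumerate(gain)
                    for t, g in enumerate(row)), reverse=True)
    used_s, used_t, assign = set(), set(), {}
    for g, s, t in pairs:
        if s not in used_s and t not in used_t:
            assign[s] = t
            used_s.add(s)
            used_t.add(t)
    return assign
```

A greedy pass is not globally optimal, but it only needs each sensor's local estimates, which fits the decentralized, non-fully-connected setting.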
We are developing new simple algorithms to provide automatic methods for fusing data from multiple passive sensors and multiple targets in real time. Initially, we have developed algorithms to fuse data from multiple passive collocated sensors measuring the same quantities (bearing and bearing rate) from multiple targets. MATLAB results with simulated data have been very encouraging. We present results for the probability of correct data association (PA) and the probability of false data association (PFA) as a function of target angular separation, random noise, and the number of data updates used in the sequential probability ratio test (SPRT).
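The sequential probability ratio test used above for data association can be illustrated with a minimal Wald SPRT sketch. The thresholds follow Wald's standard approximations; the function name, the error rates, and the stream of log-likelihood ratios are illustrative assumptions, not the authors' code.

```python
import math

def sprt(llrs, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for data association.

    alpha: tolerated false-association rate (PFA target);
    beta:  tolerated missed-association rate.
    Accumulates log-likelihood ratios from successive data updates
    until one of the two decision thresholds is crossed."""
    upper = math.log((1 - beta) / alpha)   # cross -> accept association
    lower = math.log(beta / (1 - alpha))   # cross -> reject association
    total, n = 0.0, 0
    for llr in llrs:
        n += 1
        total += llr
        if total >= upper:
            return "associate", n
        if total <= lower:
            return "reject", n
    return "undecided", n
```

The number of updates `n` at decision time corresponds to the "number of data updates used in the SPRT" that the abstract reports results against.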
Wavelets have been used successfully for signal compression. A signal can be represented very concisely, and with high fidelity, by a set of wavelet coefficients. This suggests that wavelet coefficients can efficiently represent the contents of a signal and, consequently, could be used as features. Such features can then be used for signal classification. The quality of classification depends on the choice of the features. Fixing the set of features in both the time and frequency domains results in a lack of invariance of the classification method with respect to translations and scalings of signals. In this paper we propose an approach that addresses this problem. We achieve this goal by using the following two techniques. First, our classification method tests whether a specific relation among wavelet coefficients is satisfied by a given signal. Second, our method selects features dynamically, i.e., it searches for features that satisfy the relation. The relations are learned from a database of pre-classified signals. In this paper we provide a description of the relation learning approach and results of testing the approach on a simple scenario. The results of our simulations show that this approach gives higher classification accuracy than a similar approach based on a fixed set of features.
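To make the starting point concrete, here is a minimal Haar decomposition (one common wavelet choice, assumed here purely for illustration) showing how a signal maps to a compact coefficient vector that can serve as a feature vector:

```python
def haar_transform(signal):
    """Full Haar wavelet decomposition of a length-2^k signal.

    Repeatedly splits the signal into pairwise averages (approximation)
    and pairwise half-differences (detail). Returns
    [overall mean, coarsest detail, ..., finest details],
    a concise representation whose coefficients can be used as
    classification features."""
    coeffs, approx = [], list(signal)
    while len(approx) > 1:
        avg = [(approx[i] + approx[i + 1]) / 2 for i in range(0, len(approx), 2)]
        det = [(approx[i] - approx[i + 1]) / 2 for i in range(0, len(approx), 2)]
        coeffs = det + coeffs   # coarser details go in front of finer ones
        approx = avg
    return approx + coeffs
```

The paper's contribution sits on top of such a representation: it learns relations among coefficients rather than fixing which coefficients are used as features.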
In the past several years, many different algorithms have attempted to address the problem of robust multi-source time difference of arrival (TDOA) estimation, which is necessary for sound localization. Different approaches, including generalized cross-correlation, multiple signal classification (MUSIC), and the maximum likelihood (ML) approach, have made different trade-offs between robustness and efficiency. The new approach presented here offers a much more efficient yet robust mechanism for TDOA estimation. It iteratively uses small sound signal segments to compute local cross-correlation-based TDOA estimates. All of the local estimates are combined to form the probability density function of the TDOA. Because, for a certain set of the local signal segments, the power of the secondary sources will be greater than that of the others, the TDOAs corresponding to these sources will be associated with peaks in the TDOA probability density function. In this way, the TDOAs of several different sources, along with their signal strengths, can be estimated. A real-time implementation of the proposed approach is used to show its improved accuracy and robustness. The system was consistently able to correctly localize sound sources with SNRs as low as 3 dB.
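The segment-and-pool idea can be sketched as follows; the segment length, lag search range, and all names are illustrative assumptions, and the empirical histogram stands in for the paper's TDOA probability density function:

```python
def local_tdoa(seg_a, seg_b, max_lag):
    """Lag (in samples) maximizing the cross-correlation of one
    short segment pair."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(seg_a[i] * seg_b[i + lag]
                  for i in range(len(seg_a))
                  if 0 <= i + lag < len(seg_b))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

def tdoa_histogram(sig_a, sig_b, seg_len, max_lag):
    """Pool per-segment TDOA estimates into an empirical distribution.
    Peaks correspond to the TDOAs of the dominant sources; the peak
    heights are a rough proxy for relative signal strength."""
    hist = {}
    for start in range(0, len(sig_a) - seg_len, seg_len):
        lag = local_tdoa(sig_a[start:start + seg_len],
                         sig_b[start:start + seg_len], max_lag)
        hist[lag] = hist.get(lag, 0) + 1
    return hist
```

With several sources present, segments dominated by different sources vote for different lags, so multiple peaks emerge in the histogram.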
Four grassland types, plain desert, saline steppe, hill desert steppe, and mountain meadow, were observed to study their production changes in space and time using a traditional method, PPR, and remote sensing techniques. PPR models were established from observed forage yields, various environmental factors, and satellite information in the four grasslands, and their application to estimating grassland yields is discussed in detail in this paper. The problems of non-linear, non-normal distributions and of correlated relationships among multiple variables in the statistical data were solved by the PPR technique. The precision of grassland yield estimation was therefore greatly improved over that of the traditional multi-variate linear statistical method. Because of the organic combination of remote sensing data and environmental information, the models can not only estimate grassland yields over a large area, but also extend results gained on small areas to a larger extent for monitoring grassland resources and forecasting future yields. Using eight factors observed in the four types of grasslands, the comprehensive yields of the four types were simulated by PPR together with RS, GIS, and GPS technology from 1995 to 1996. Results indicate that the precision of the models in plain desert, saline steppe, hill desert steppe, and mountain meadow reached 81.76%, 88.61%, 83.50%, and 92.35%, respectively. The objective of scientifically estimating yields in different grassland types was thus realized with PPR and RS, GIS, and GPS technology.
This paper addresses the problem of enabling autonomous agents (e.g., robots) to carry out human-oriented tasks using an electronic nose. The nose consists of a combination of passive gas sensors with different selectivities, the outputs of which are fused together with an artificial neural network in order to recognize various human-determined odors. The basic idea is to ground human-provided linguistic descriptions of these odors in the actual sensory perceptions of the nose through a process of supervised learning. By analogy with the human nose, the paper explains a method by which an electronic nose can be used for substance identification. First, the receptors of the nose are exposed to a substance by means of inhalation with an electric pump. Then a chemical reaction takes place in the gas sensors over a period of time, and an artificial neural network processes the resulting sensor patterns. This network was trained to recognize a basic set of pure substances such as vanilla, lavender, and yogurt under controlled laboratory conditions. The complete system was then validated through a series of experiments on various combinations of the basic substances. First, we showed that the nose was able to consistently recognize unseen samples of the same substances on which it had been trained. In addition, we present some first results where the nose was tested on novel combinations of substances on which it had not been trained, by combining the learned descriptions - for example, it could distinguish lavender yogurt as a combination of lavender and yogurt.
Sensor fusion is an important technology that is growing rapidly due to its tremendous application potential. Appropriate fusion technology needs to be developed especially when a system requires redundant sensors. The greater the redundancy in sensors, the greater the computational complexity of controlling the system, and the higher its intelligence level. This research presents a strategy for multiple-sensor fusion based on geometric optimization. An uncertainty model has been developed for each sensor. Using Lagrangian optimization techniques, the individual sensors' uncertainties have been fused to reduce the overall uncertainty and to generate a consensus among the sensors regarding their acceptable values. Using a fission-fusion architecture, the precision level has been further improved. Subsequently, using feedback from the fused sensory information, the net error has been further minimized to any pre-assigned value by developing a fusion technique in the differential domain (FDD). The techniques are illustrated using synthesized data from two types of sensors (an optical encoder and a single-camera vision sensor). The application of the same fusion strategy to improving the correctness of stereo matching using multiple baselines is also discussed.
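The Lagrangian fusion of individual sensor uncertainties can be illustrated by the standard textbook case: minimizing the variance of a convex combination of unbiased readings, subject to the weights summing to one, yields inverse-variance weighting. This is a generic sketch under that assumption, not the paper's specific geometric formulation; all names are invented.

```python
def fuse_inverse_variance(readings):
    """Fuse (value, variance) pairs by minimizing the variance of a
    weighted combination with weights constrained to sum to 1.
    Solving the Lagrangian gives weights proportional to 1/variance;
    the fused variance never exceeds any single sensor's variance."""
    inv = [1.0 / var for _, var in readings]
    total = sum(inv)
    value = sum(v / var for v, var in readings) / total
    return value, 1.0 / total
```

Note how two equally uncertain sensors halve the fused variance, which is the consensus-forming effect the abstract describes.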
The accuracy of aircraft/weapon navigation systems has improved dramatically since the introduction of global positioning systems and terrain-referenced navigation systems into integrated navigation suites. Future improvements, in terms of reliability and accuracy, could arise from the inclusion of navigation systems based on the correlation of known ground features with imagery from a visual band or infrared sensor, often called scene matching and area correlation or scene-referenced navigation. This paper considers the use of multi-platform fusion techniques to improve on the performance of individual scene-referenced navigation systems. Consideration is also given to the potential benefits of multi-platform fusion for scene-referenced object localization algorithms that could be used in association with infrared targeting aids.
A fuzzy model-based multi-sensor data fusion system is presented in this paper. The system is capable of accommodating both non-linear sensors of the same type and different (non-commensurate) sensors, and of giving accurate information about the observed system state by combining readings from them at the feature/decision level. The data fusion system consists of a process model and knowledge-based sensor model units, based on a fuzzy inference system, that predict the future system and sensor states from the previous states and the inputs. The predicted state is used as a reference datum in the sensor validation process, which is conducted through a fuzzy classifier that categorizes each sensor reading as a valid or invalid datum. The data fusion unit combines the valid sensor data to generate the feature/decision output. The corrector unit functions as a filtering unit, providing the final decision on the value of the current state based on the current measurement (fused output) and the predicted state. The results of simulating this system and other data fusion systems have been compared to demonstrate the capability of the system.
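The predict-validate-fuse cycle can be caricatured with a crisp validation gate standing in for the paper's fuzzy classifier; the tolerance band, the simple averaging, and all names are illustrative assumptions only.

```python
def validate_and_fuse(readings, predicted, tolerance):
    """One validate-and-fuse step: readings within the tolerance band
    around the model-predicted state are treated as valid and fused
    (here by a plain average); invalid readings are discarded.
    (A crisp stand-in for the paper's fuzzy validity classifier.)"""
    valid = [r for r in readings if abs(r - predicted) <= tolerance]
    if not valid:
        return predicted          # fall back on the model prediction
    return sum(valid) / len(valid)
```

In the paper's architecture, a corrector unit would then blend this fused output with the predicted state; the fuzzy classifier would also grade validity continuously rather than accept/reject.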
Single images quite often do not bear enough information for precise interpretation, for a variety of reasons. Fusion and adequate integration of multiple images have recently become the state of the art in the pattern recognition field. In the paper presented here, an enhanced multiple-observation schema is discussed, investigating improvements to the baseline fuzzy-probabilistic image fusion methodology. The first innovation consists in considering only a limited but seemingly more effective part of the uncertainty information available at a given time, restricting older uncertainty dependencies and alleviating the computational burden, which is now needed only for a short sequence of samples stored in memory. The second innovation consists essentially in grouping the observations into feature-blind object hypotheses. The experimental setting involves a sequence of independent views obtained by a camera moved around the investigated object.
This paper explains a new approach to change detection and interpretation in the context of forest map updating. In this temporal change analysis we use a data set composed of a map at time T0 and a satellite image at time T1, and we refer to this as a mixed fusion approach. The analysis of remotely sensed data always necessitates the use of approximate reasoning. For this purpose, we use fuzzy logic to evaluate the objects' membership values in the considered classes, and the Dempster-Shafer theory to analyze the confusion between the classes and to find the most evident class to which an object belongs.
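Dempster's rule of combination, which the paper uses to analyze confusion between classes, can be sketched as follows. The class labels and mass assignments are invented for illustration; the sketch assumes the two sources are not totally conflicting.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal elements
    are frozensets of class labels. The mass K assigned to empty
    intersections measures the conflict (confusion) between the two
    sources and is renormalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}, conflict
```

In a map-updating setting, one mass function could come from the T0 map and the other from fuzzy memberships derived from the T1 image; a high conflict K flags objects whose class has plausibly changed.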
This paper concentrates on multi-target tracking (MTT) simulation. The purpose of this paper is to simulate 11 targets in a noisy environment. The sensors used in the simulations are passive. First, we use the interacting multiple model (IMM) algorithm with the probabilistic data association (PDA) algorithm. The PDA is not able to process attribute observations (i.e., observations of features such as the form of wings, radio frequency, etc.). We have therefore applied Bayesian networks to our tracking system, since they are able to process attribute observations. The main gain of using Bayesian networks is that it becomes possible to determine the type of the target. In this paper we briefly recapitulate the most important features of the IMM, the PDA, and Bayesian networks. We also discuss how to establish attribute association probabilities, which can be fused with the association probabilities computed by the PDA. We have executed the simulations 30 times. In this study we show one typical example of tracking with IMM and PDA, as well as tracking with IMM, PDA, and Bayesian networks. We conclude that tracking results with IMM and PDA are quite satisfactory. Tracking with the Bayesian networks produces slightly better results and identifies the targets correctly.
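The PDA association probabilities that the attribute probabilities are fused with can be sketched in a simplified scalar form; the parameter values, names, and the scalar (one-dimensional innovation) simplification are assumptions for illustration, not the paper's implementation.

```python
import math

def pda_weights(innovations, var, p_detect=0.9, clutter_density=0.01):
    """Scalar PDA association probabilities for one target.

    Each validated measurement's Gaussian likelihood (innovation d,
    innovation variance var) competes with the hypothesis beta0 that
    none of the measurements originated from the target."""
    lik = [math.exp(-d * d / (2 * var)) / math.sqrt(2 * math.pi * var)
           for d in innovations]
    scores = [p_detect * l / clutter_density for l in lik]
    beta0 = 1.0 - p_detect          # "all measurements are clutter"
    total = beta0 + sum(scores)
    return beta0 / total, [s / total for s in scores]
```

Fusing attribute information amounts to multiplying each kinematic score by an attribute likelihood before normalizing, which is what allows the Bayesian network to sharpen the association.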
Starting with a randomly distributed sensor array with unknown sensor orientations, array calibration is needed before target localization and tracking can be performed using classical triangulation methods. In this paper, we assume that the sensors are only capable of accurate direction of arrival (DOA) estimation. The calibration problem cannot be completely solved given the DOA estimates alone, since the problem is not only rotationally symmetric but also includes a range ambiguity. Our approach to calibration is based on tracking a single target moving at a constant velocity. In this case, the sensor array can be calibrated from target tracks generated by an extended Kalman filter (EKF) at each sensor. A simple algorithm based on geometrical matching of similar triangles will align the separate tracks and determine the sensor positions and orientations relative to a reference sensor. Computer simulations show that the algorithm performs well even with noisy DOA estimates at sensors.
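The geometrical matching step can be illustrated with a two-point similarity transform that maps a locally estimated track onto the reference sensor's track; this is a simplified stand-in for the paper's similar-triangle algorithm, with invented names, and it ignores DOA noise (which the full algorithm must average out).

```python
import math

def align_track(local_pts, ref_pts):
    """Similarity transform (scale, rotation, translation) fixed by the
    first two points of each track. Applying it to a local track
    resolves the rotation and range ambiguities of DOA-only
    calibration relative to the reference sensor's frame."""
    (ax, ay), (bx, by) = local_pts[0], local_pts[1]
    (cx, cy), (dx, dy) = ref_pts[0], ref_pts[1]
    v_loc, v_ref = (bx - ax, by - ay), (dx - cx, dy - cy)
    scale = math.hypot(*v_ref) / math.hypot(*v_loc)
    theta = math.atan2(v_ref[1], v_ref[0]) - math.atan2(v_loc[1], v_loc[0])
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    def transform(p):
        x, y = p[0] - ax, p[1] - ay
        return (cx + scale * (cos_t * x - sin_t * y),
                cy + scale * (sin_t * x + cos_t * y))
    return transform
```

Applying the inverse of this transform to the origin of the local frame would then yield the sensor's position and orientation in the reference frame.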
We consider the problem of simultaneously locating an observer and a set of environmental landmarks with respect to an inertial coordinate system, when both the observer position and the landmark positions are initially uncertain. For solving this problem, a new state estimator is introduced, which allows the problem to be consistently solved locally.