KEYWORDS: Image fusion, Information visualization, Visualization, Receivers, Information fusion, Principal component analysis, Sensors, Image quality, Signal to noise ratio, Infrared imaging
Fusion of visual information from multiple sources is relevant to security, transportation, and safety applications. Image fusion is particularly useful when combining imagery captured at multiple levels of focus. Different focus levels yield different visual quality in different regions of the imagery, so fusing them can provide much more visual information to analysts. Multi-focus image fusion would benefit users through automation, which requires evaluating the fused images to determine whether the focused regions of each input have been properly combined. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based measures, have been developed to accomplish such comparisons. However, accurately assessing visual quality at scale requires validating these metrics for different types of applications. To do this, human-perception-based validation methods have been developed, particularly those using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods, in order to determine which should be used for multi-focus data. Preliminary results show that the Tsallis entropy and spatial frequency (SF) metrics are consistent with image quality and peak signal-to-noise ratio (PSNR).
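For concreteness, the two reference quantities named above have standard closed forms. The following is a minimal sketch, assuming grayscale images as NumPy arrays; the exact formulations used in the study may differ:

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency (SF): overall activity level of an image,
    computed from row-wise and column-wise gray-level differences."""
    img = np.asarray(img, dtype=np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and test image."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

Note that SF needs no reference image while PSNR does, which is why SF can serve as a no-reference metric while PSNR is used here as a validation baseline.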
Automated image fusion has a wide range of applications across fields such as biomedical diagnostics, night vision, and target recognition. Automation in image fusion is difficult because many types of imagery data can be fused using different multi-resolution transforms. Each transform provides its own coefficients for fusion, creating a large number of possibilities. This paper seeks to understand how automation could be conceived for selecting the multi-resolution transform for different applications, starting in the multi-focus and multi-modal image sub-domains. The study analyzes which transforms are most effective for each sub-domain, identifying one or two that are most effective for image fusion. The transform techniques are compared comprehensively to find a correlation between the fusion input characteristics and the optimal transform. The assessment is completed using no-reference image fusion metrics, including information-theory-based, image-feature-based, and structural-similarity-based methods.
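To illustrate the coefficient-level fusion that these transforms enable, here is a sketch of the widely used max-absolute fusion rule for detail coefficients. This is a generic rule shown for illustration, not necessarily the rule used in the paper:

```python
import numpy as np

def fuse_coefficients(c1, c2):
    """Max-absolute fusion rule: at each position, keep the coefficient
    with the stronger transform response (typical for detail subbands)."""
    c1, c2 = np.asarray(c1), np.asarray(c2)
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)
```

The fused coefficients are then inverse-transformed to produce the fused image; which transform to decompose with (wavelet, pyramid, etc.) is precisely the selection the paper seeks to automate.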
The tracking process captures the state of an object, defined in terms of its dynamic and static properties such as location, speed, color, temperature, and size. The set of properties used for tracking depends largely on the agency doing the tracking: police, for example, need a different set of properties to track people than to track a vehicle, and different again from the air force. The tracking scenario also affects the selection of parameters. Tracking is done by a system referred to in this paper as a "Tracker": a system consisting of a set of input devices, such as sensors, and a set of algorithms that process the data captured by those devices. The tracking process has three distinct steps: (a) object discovery, (b) identification of the discovered object, and (c) object introduction to the input devices. In this paper we focus mainly on object discovery, with a brief discussion of the introduction and identification steps. We develop a formal tracking framework (model) called the "Discover, Identify, and Introduce Model (DIIM)" for building efficient tracking systems. Our approach is heuristic and uses reasoning leading to learning to develop a knowledge base for object discovery. We also develop a tracker for the Air Force system called N-CET.
Road networks and associated traffic flow information have innumerable applications, ranging from highway planning to military intelligence. Despite the importance of these networks, archival databases, which often have update rates on the order of years or even decades, have historically been the main source for obtaining and analyzing road network information. This static view of a potentially changing infrastructure can leave the information incomplete and incorrect. Furthermore, these road databases rarely provide information beyond a simple two-dimensional view of a road, in which a divided highway is represented in the same manner as a rural dirt road. For these reasons, the use of Ground Moving Target Indicator (GMTI) data and tracks to create road networks is explored. Such data can provide not merely a single static snapshot that stands in for the network for years, but a consistently accurate and continually updated picture of the environment. The approach employed for creating a road network from GMTI tracks includes a technique known as Continuous Dynamic Time Warping (CDTW), as well as a general fusion routine.
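CDTW extends classic dynamic time warping by allowing alignment at interpolated points along track segments. The discrete base case, sketched below for 2-D point sequences, illustrates the underlying alignment idea; the continuous refinement used in the paper is not reproduced here:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic discrete DTW distance between two 2-D point sequences.

    Builds the standard cumulative cost table, where each cell extends
    the cheapest of the three admissible predecessor alignments.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

In a road-network context, a low warping distance between two GMTI tracks suggests they traversed the same road segment and can be fused.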
The tracking of objects and phenomena exhibiting nonlinear motion is a topic that has application in many areas ranging
from military surveillance to weather forecasting. Observed nonlinearities can come not only from the nonlinear
dynamic motion of the object, but also from nonlinearities in the measurement model. Many techniques have been
developed that attempt to deal with this issue, including the development of various types of filters, such as the Extended
Kalman Filter (EKF) and the Unscented Kalman Filter (UKF), variants of the Kalman Filter (KF), as well as other filters
such as the Particle Filter (PF). Determining the effectiveness of any of these techniques in nonlinear
scenarios is not straightforward: reliable assessment requires testing against scenarios whose degree of
nonlinearity is known. In this
effort, three techniques were investigated regarding their ability to provide useful measures of nonlinearity for
representative scenarios. These techniques were the Parameter Effects Curvature (PEC), the Normalized Estimation
Error Squared (NEES), and the Normalized Innovation Squared (NIS). Results indicated that the NEES was the most
effective, although it does require truth values in its formulation.
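The NEES and NIS statistics have standard closed forms, sketched below under their textbook definitions (variable names and shapes are illustrative):

```python
import numpy as np

def nees(x_true, x_est, P):
    """Normalized Estimation Error Squared: e' P^-1 e, where e is the
    state estimation error. Requires ground-truth states, as noted above."""
    e = np.asarray(x_true, dtype=float) - np.asarray(x_est, dtype=float)
    return float(e @ np.linalg.solve(P, e))

def nis(z, z_pred, S):
    """Normalized Innovation Squared: nu' S^-1 nu, where nu is the
    measurement innovation. Truth-free, so it can be computed online."""
    nu = np.asarray(z, dtype=float) - np.asarray(z_pred, dtype=float)
    return float(nu @ np.linalg.solve(S, nu))
```

For a consistent filter, both statistics follow chi-square distributions with degrees of freedom equal to the state and measurement dimensions respectively, which is what makes them usable as consistency (and hence nonlinearity) indicators.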
Many algorithms may be applied to solve the target tracking problem, including the Kalman Filter and different types of
nonlinear filters, such as the Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF) and Particle Filter (PF).
This paper describes an intelligent algorithm developed to select the appropriate filtering technique
for the problem and scenario at hand, based upon a sliding window of the Normalized Innovation Squared (NIS).
This technique shows promise for the single target, single radar tracking problem domain. Future work is planned to
expand the use of this technique to multiple targets and multiple sensors.
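One plausible shape for such a selector is sketched below: maintain a sliding window of NIS values and escalate to a more capable nonlinear filter when the windowed average exceeds its chi-square consistency bound. The escalation order, window length, and threshold here are illustrative assumptions, not the paper's actual rule:

```python
from collections import deque

class FilterSelector:
    """Sliding-window NIS monitor (illustrative sketch).

    A persistently high windowed-average NIS suggests the current filter
    is mismatched to the dynamics, so a more capable filter is selected.
    """
    # 95% chi-square bound for a 2-D measurement (dof = 2); assumed here
    NIS_BOUND = 5.99

    def __init__(self, filters=("KF", "EKF", "UKF", "PF"), window=20):
        self.filters = list(filters)
        self.idx = 0
        self.window = deque(maxlen=window)

    @property
    def current(self):
        return self.filters[self.idx]

    def update(self, nis_value):
        """Record one NIS sample; possibly escalate the active filter."""
        self.window.append(nis_value)
        full = len(self.window) == self.window.maxlen
        if full and sum(self.window) / len(self.window) > self.NIS_BOUND:
            if self.idx < len(self.filters) - 1:
                self.idx += 1    # escalate to a more capable filter
            self.window.clear()  # restart monitoring after a switch
        return self.current
```

Because NIS needs no ground truth, this style of monitor can run online against live radar measurements, which is what makes it attractive for the single-target, single-radar domain.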