Image registration in wide area motion imagery (WAMI) is a critical prerequisite for target tracking, image fusion, and situation awareness. The high resolution, extremely low frame rate, and large camera motion in such videos, however, introduce challenging constraints that distinguish the task from
traditional image registration with sensors such as full motion video (FMV). In this study, we propose
a feature-based approach for the registration of wide area surveillance imagery. Specifically, we extract Speeded Up Robust Features (SURF) points for each frame. A kd-tree algorithm is then adopted to match the feature points of each frame against those of the reference frame, and the RANdom SAmple Consensus (RANSAC) algorithm refines the matching results. Finally, the refined matching point pairs are used to estimate the transformation between frames. Experiments are conducted on the Columbus Large Image Format (CLIF) dataset, and the results show that the proposed approach is very efficient for wide area motion imagery registration.
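The pipeline above fits a frame-to-frame transformation from SURF matches refined by RANSAC. As a minimal, self-contained sketch of just the RANSAC refinement step, the following uses a pure-translation motion model and synthetic matches for illustration; the function names and the tolerance/iteration parameters are assumptions, and the actual system would estimate a full homography from SURF keypoints matched via a kd-tree.

```python
import random

def estimate_translation(src, dst):
    """Fit a 2D translation from a single correspondence (minimal sample)."""
    (x1, y1), (x2, y2) = src, dst
    return (x2 - x1, y2 - y1)

def ransac_translation(matches, n_iters=200, tol=2.0, seed=0):
    """RANSAC over putative matches [(src_pt, dst_pt), ...].

    Repeatedly fits a translation to a random minimal sample, keeps the
    model with the largest inlier set, then re-fits on all inliers
    (least squares for a pure translation = the mean offset).
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        src, dst = rng.choice(matches)
        tx, ty = estimate_translation(src, dst)
        inliers = [
            (s, d) for s, d in matches
            if (d[0] - s[0] - tx) ** 2 + (d[1] - s[1] - ty) ** 2 <= tol ** 2
        ]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    n = len(best_inliers)
    tx = sum(d[0] - s[0] for s, d in best_inliers) / n
    ty = sum(d[1] - s[1] for s, d in best_inliers) / n
    return (tx, ty), best_inliers

# Synthetic check: 8 correct matches shifted by (5, -3) plus 2 gross outliers.
good = [((x, y), (x + 5, y - 3))
        for x, y in [(0, 0), (1, 2), (3, 1), (4, 4), (2, 5), (6, 0), (7, 3), (5, 6)]]
bad = [((0, 0), (40, 40)), ((1, 1), (-30, 7))]
(tx, ty), inliers = ransac_translation(good + bad)
print(round(tx), round(ty), len(inliers))  # → 5 -3 8
```

The same sample-score-refit loop applies unchanged when the minimal sample is four correspondences and the model is a homography.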
With the advent of new technology in wide-area motion imagery (WAMI) and full-motion video (FMV), there is a
capability to exploit the imagery in conjunction with other information sources for improving confidence in detection,
tracking, and identification (DTI) of dismounts. Image exploitation, along with other radar and intelligence information
can aid decision support and situation awareness. Many advantages and limitations exist in dismount tracking analysis
using WAMI/FMV; however, through layered management of sensing resources, there are future capabilities to explore
that would increase dismount DTI accuracy, confidence, and timeliness. A layered sensing approach enables command-level
strategic, operational, and tactical analysis of dismounts to combine multiple sensors and databases, to validate DTI
information, as well as to enhance reporting results. In this paper, we discuss WAMI/FMV, compile a list of issues and
challenges of exploiting the data for WAMI, and provide examples from recently reported results. Our aim is to provide a
discussion to ensure that nominated combatants are detected, the sensed information is validated across multiple
perspectives, the reported confidence values achieve positive combatant versus non-combatant detection, and the related
situational awareness attributes including behavior analysis, spatial-temporal relations, and cueing are provided in a timely
and reliable manner to stakeholders.
The apical root regions play an important role in analysis and diagnosis of many oral diseases. Automatic
detection of such regions is consequently the first step toward computer-aided diagnosis of these diseases.
In this paper we propose an automatic method for periapical root region detection using state-of-the-art
machine learning approaches. Specifically, we have adapted the AdaBoost classifier for apical root
detection. One challenge in the task is the lack of training cases, especially diseased ones. To handle this
problem, we augment the training set by including additional root regions close to the annotated ones and
decompose the original images to randomly generate negative samples. Based on these training samples,
the AdaBoost algorithm, in combination with Haar wavelet features, is used to train an apical root
detector. The learned detector usually generates a large number of true and false positives. To reduce the
number of false positives, a confidence score is calculated for each candidate detection and used for
further filtering: we first merge tightly overlapping candidate regions, and then use the confidence
scores from the AdaBoost detector to eliminate the remaining false positives. The proposed method is
evaluated on a dataset containing 39 annotated digitized oral X-Ray images from 21 patients. The
experimental results show that our approach can achieve promising detection performance.
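The merge-then-filter step described above can be sketched as a greedy, NMS-style procedure: overlapping boxes are absorbed into their highest-confidence member, and the survivors are gated by a confidence threshold. The IoU and confidence thresholds below are illustrative assumptions; the paper's exact merging rule may differ.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_and_filter(detections, iou_thresh=0.5, conf_thresh=0.6):
    """Greedy merge of tightly overlapping detections, then confidence gating.

    detections: list of (box, confidence). Boxes overlapping an already-kept,
    higher-confidence box are dropped; kept boxes below conf_thresh are
    eliminated as likely false positives.
    """
    dets = sorted(detections, key=lambda d: -d[1])
    kept = []
    for box, conf in dets:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, conf))
    return [(b, c) for b, c in kept if c >= conf_thresh]

# Two tightly overlapping candidates plus one low-confidence isolate.
dets = [((10, 10, 50, 50), 0.9), ((12, 11, 52, 49), 0.7),
        ((200, 200, 240, 240), 0.4)]
result = merge_and_filter(dets)
print(result)  # → [((10, 10, 50, 50), 0.9)]
```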
Periapical lesions are a common oral disease. While many studies have been devoted to image-based
diagnosis of periapical lesions, they usually require clinicians to perform the task. In this paper we
investigate the automatic solutions toward periapical lesion classification using quantized texture analysis.
Specifically, we adapt the bag-of-visual-words model for periapical root image representation, which
captures the texture information by collecting local patch statistics. Then we investigate several similarity
measure approaches with the K-nearest neighbor (KNN) classifier for the diagnosis task. To evaluate these
classifiers, we collected a digitized oral X-Ray image dataset from 21 patients, resulting in 139 root
images in total. Extensive experimental results demonstrate that the KNN classifier based on the bag-of-words model can achieve very promising performance for periapical lesion classification.
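The representation and classifier above can be sketched as follows: local patch descriptors are quantized against a visual codebook, each image becomes a normalized word histogram, and a query is labeled by a KNN vote under histogram-intersection similarity. The tiny two-word codebook and toy descriptors here are hypothetical stand-ins; in practice the codebook would be learned (e.g. by k-means) from training patches.

```python
from collections import Counter

def quantize(descriptors, codebook):
    """Map each local descriptor to its nearest codeword (squared Euclidean)."""
    def nearest(d):
        return min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(d, codebook[i])))
    return [nearest(d) for d in descriptors]

def bow_histogram(descriptors, codebook):
    """Normalized bag-of-visual-words histogram for one image."""
    counts = Counter(quantize(descriptors, codebook))
    n = len(descriptors)
    return [counts.get(i, 0) / n for i in range(len(codebook))]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1] for normalized histograms: sum of elementwise minima."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def knn_classify(query, train, k=3):
    """train: list of (histogram, label); majority vote over the k most similar."""
    ranked = sorted(train, key=lambda t: -histogram_intersection(query, t[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy example: word 0 ~ healthy texture, word 1 ~ lesion texture.
codebook = [(0.0, 0.0), (10.0, 10.0)]
healthy = bow_histogram([(0, 1), (1, 0), (0, 0), (9, 9)], codebook)
diseased = bow_histogram([(10, 9), (9, 10), (0, 1), (10, 10)], codebook)
train = [(healthy, "healthy"), (diseased, "lesion")]
query = bow_histogram([(0, 0), (1, 1), (0, 1), (9, 10)], codebook)
pred = knn_classify(query, train, k=1)
print(pred)  # → healthy
```

Histogram intersection is one of several similarity measures that could be plugged into the KNN ranking; swapping in chi-squared distance or cosine similarity only changes the `key` used for sorting.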