This paper describes the application of sequence learning to the domain of terrorist group actions. The goal is to make accurate predictions of future events based on learning from past history. The past history of the group is represented as a sequence of events. Well-established sequence learning approaches are used to generate temporal rules from the event sequence. In order to represent all possible events involving a terrorist group's activities, an event taxonomy has been created that organizes the events into a hierarchical structure. The event taxonomy is applied when events are extracted, and the hierarchical form of the taxonomy is especially useful when only scant information is available about an event. The taxonomy can also be used to generate temporal rules at various levels of abstraction. The generated temporal rules are then used to produce predictions that can be compared to actual events for evaluation. The approach was tested on events collected for a four-year period from relevant newspaper articles and other open-source literature. Temporal rules were generated based on the first half of the data, and predictions were generated for the second half of the data. Evaluation yielded a high hit rate and a moderate false-alarm rate.
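A minimal sketch of how a hierarchical event taxonomy might be combined with simple temporal rules of the form "if event A occurs, event B follows within w days", as described above. The event names, taxonomy entries, and rule form are hypothetical illustrations, not the paper's actual rule language.

```python
from dataclasses import dataclass

# Hypothetical hierarchical event taxonomy: each event type maps to its parent,
# so a specific event (e.g. "car_bombing") can also match rules written at a
# more abstract level (e.g. "bombing" or "attack").
TAXONOMY = {
    "car_bombing": "bombing",
    "suicide_bombing": "bombing",
    "bombing": "attack",
    "kidnapping": "attack",
    "attack": None,
}

def abstractions(event_type):
    """Yield the event type and all of its ancestors in the taxonomy."""
    while event_type is not None:
        yield event_type
        event_type = TAXONOMY.get(event_type)

@dataclass
class TemporalRule:
    antecedent: str   # triggering event type (at any taxonomy level)
    consequent: str   # predicted event type
    window: int       # maximum number of days between the two events

def predict(history, rules):
    """Return (day, predicted_event) pairs implied by the rules for a history
    of (day, event_type) tuples."""
    predictions = []
    for day, event in history:
        for rule in rules:
            if rule.antecedent in abstractions(event):
                predictions.append((day + rule.window, rule.consequent))
    return predictions

# A rule learned at the abstract "bombing" level still fires for a specific
# "car_bombing" event in the history.
rules = [TemporalRule(antecedent="bombing", consequent="kidnapping", window=14)]
print(predict([(3, "car_bombing")], rules))   # [(17, 'kidnapping')]
```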
Numerous feature detectors have been defined for detecting military vehicles in natural scenes. These features can be computed for a given image chip containing a known target and used to train a classifier. This classifier can then be used to assign a label to an unlabeled image chip. The performance of the classifier depends on the quality of the set of features used. In this paper, we first describe a set of features commonly used by the Automatic Target Recognition (ATR) community. We then analyze feature performance on a vehicle identification task in laser radar (LADAR) imagery. Our features are computed over both the range and reflectance channels. In addition, we perform feature subset selection using two different methods and compare the results. The goal of this analysis is to determine which subset of features to choose in order to optimize performance in LADAR Autonomous Target Acquisition (ATA).
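One common approach to feature subset selection, as discussed above, is greedy forward selection. The sketch below is an assumed illustration (the toy scoring function stands in for classifier cross-validation accuracy, and the data are synthetic), not the paper's actual selection methods.

```python
import numpy as np

def greedy_forward_selection(X, y, score_fn, max_features):
    """Greedy forward feature-subset selection: repeatedly add the single
    feature that most improves the score of the current subset."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        best_feat, best_score = None, -np.inf
        for f in remaining:
            s = score_fn(X[:, selected + [f]], y)
            if s > best_score:
                best_feat, best_score = f, s
        selected.append(best_feat)
        remaining.remove(best_feat)
    return selected

# Toy scoring function (stand-in for classifier accuracy): correlation of the
# subset's mean with the labels.
def toy_score(X_subset, y):
    return abs(np.corrcoef(X_subset.mean(axis=1), y)[0, 1])

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))
y = (X[:, 2] + 0.1 * rng.normal(size=50) > 0).astype(float)
print(greedy_forward_selection(X, y, toy_score, max_features=2))
```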
One of NASA's goals for the Mars Rover missions of 2003 and 2005 is to have a distributed team of mission scientists. Since these scientists are not experts on rover mobility, we have developed the Rover Obstacle Visualizer and Navigability Expert (ROVANE). ROVANE is a combined obstacle-detection and path-planning software suite that assists in distributed mission planning. ROVANE uses terrain data, in the form of panoramic stereo images captured by the rover, to detect obstacles in the rover's vicinity. These obstacles are combined into a traversability map which is used to provide path planning assistance for mission scientists. A corresponding visual representation is also generated, allowing human operators to easily identify hazardous regions and to understand ROVANE's path selection. Since the terrain data often contains uncertain regions, the ROVANE obstacle detector generates a probability distribution describing the likely cost of a given obstacle or region. ROVANE then allows the user to plan for best-, worst-, and intermediate-case scenarios. ROVANE thus allows non-experts to examine scenarios and plan missions which have a high probability of success. ROVANE is capable of stand-alone operation, but it is designed to work with JPL's Web Interface for Telescience, an Internet-based tool for collaborative command sequence generation.
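A minimal sketch of the scenario-planning idea described above: each terrain cell carries a distribution over traversal cost, and best-, intermediate-, and worst-case cost maps are derived as percentiles of that distribution. The cost samples and percentile choices are assumed for illustration; the actual ROVANE cost model is not specified here.

```python
import numpy as np

# Hypothetical per-cell traversal-cost samples produced by an obstacle detector
# that is uncertain about parts of the terrain (e.g. occluded or noisy regions).
rng = np.random.default_rng(1)
cost_samples = rng.lognormal(mean=0.0, sigma=0.5, size=(4, 4, 100))  # H x W x samples

def scenario_cost_map(cost_samples, percentile):
    """Collapse per-cell cost distributions into a single cost map for a chosen
    scenario: a low percentile gives a best case, a high percentile a worst case."""
    return np.percentile(cost_samples, percentile, axis=2)

best_case = scenario_cost_map(cost_samples, 10)
intermediate = scenario_cost_map(cost_samples, 50)
worst_case = scenario_cost_map(cost_samples, 90)
print(best_case.mean(), intermediate.mean(), worst_case.mean())
```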
The purpose of this work is to provide a model for the average time to detection for observers searching for targets in photo-realistic images of cluttered scenes. The current work proposes to extend previous results of modeling time to detection that used a simple decaying fixation memory. While the aforementioned results were encouraging in showing a strong effect of fixation memory, there were also discrepancies. The main discrepancy was a tendency toward immediate refixation, which the original model did not account for at all. The present paper describes how the original fixation memory model is extended using a shunting neural network. Shunting neural networks are neurally plausible mechanisms for modeling various brain functions. Furthermore, this shunting neural network can then be extended in a simple manner to incorporate effects of spatial relationships, which were completely ignored in the original model. The model described is testable on experimental data, and is being calibrated using both analytical and experimental methods.
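For readers unfamiliar with shunting dynamics, the sketch below shows the standard shunting equation dx/dt = -A x + (B - x) I integrated with Euler steps, used here as a decaying, saturating memory trace for a fixated location. The parameter values and the way the drive is applied are assumptions for illustration, not the paper's calibrated model.

```python
import numpy as np

def shunting_step(x, inputs, A=1.0, B=1.0, dt=0.05):
    """One Euler step of the shunting equation dx/dt = -A*x + (B - x)*I.
    Activity decays passively (rate A) and saturates at B when driven by input."""
    return x + dt * (-A * x + (B - x) * inputs)

# Hypothetical fixation-memory traces for 5 candidate locations: location 2 is
# fixated (driven) for the first 20 steps, then the drive is removed and its
# memory decays, eventually allowing a re-fixation of that location.
x = np.zeros(5)
for t in range(100):
    drive = np.zeros(5)
    if t < 20:
        drive[2] = 5.0
    x = shunting_step(x, drive)
print(np.round(x, 3))   # location 2 retains a partially decayed memory trace
```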
Future command and control (C2) systems must be constructed in such a way that they are extensible both in terms of the kinds of scenarios they can handle and the types of manipulations that they support. This paper presents an open architecture that uses commercial standards and implementations where appropriate. The discussion is framed by our ongoing work with FOX, a course-of-action (COA) planner and generator that uses genetic algorithms together with an abstract wargamer to suggest a small number of possible COAs.
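A minimal genetic-algorithm loop in the spirit of the COA generator described above. The representation (a fixed-length sequence of discrete action choices), the placeholder fitness function, and all parameter values are assumptions; the real FOX system evaluates candidates with an abstract wargamer.

```python
import random

ACTIONS = ["advance", "defend", "flank", "hold"]

def wargame_score(coa):
    # Placeholder fitness; the real system would run an abstract wargame.
    return sum(1.0 for a in coa if a == "flank")

def evolve(pop_size=20, coa_len=6, generations=30, mutation_rate=0.1):
    """Evolve candidate COAs by selection, one-point crossover, and point mutation."""
    pop = [[random.choice(ACTIONS) for _ in range(coa_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=wargame_score, reverse=True)
        parents = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, coa_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:  # point mutation
                child[random.randrange(coa_len)] = random.choice(ACTIONS)
            children.append(child)
        pop = parents + children
    return sorted(pop, key=wargame_score, reverse=True)[:3]   # a few best COAs

print(evolve())
```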
The primary data used in ground-based, global path planning for NASA's Planetary Rovers are stereo images down-linked from the rover and range data derived from those images. The range data are often incomplete: the sensors are inherently noisy and sections of the landscape are blocked. This missing data complicates the path planning process and necessitates the help of human experts. We present the Rover Obstacle Visualizer and Navigability Evaluator (ROVANE), which assists these human experts and allows non-experts to plan missions without expert help. ROVANE generates a hazard map identifying slow, impassable, or dangerous regions with varying degrees of certainty. This map is used to create possible paths, which are assigned variable costs based on possible hazards. A hazard visualization is also produced, allowing the user to visually identify hazards and understand the system's path selection. As target locations are entered by the user, the system finds appropriate paths using a variation of the A* algorithm. A found path can be further modified by the user and output in a format suitable for commanding an actual rover. The system is capable of stand-alone operation, but is designed to be integrated into the Jet Propulsion Laboratory’s Web Interface for Telescience.
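The abstract above mentions path planning over a hazard map with a variation of the A* algorithm. The sketch below is a plain grid A* over per-cell traversal costs, intended only to illustrate how a hazard map drives path selection; the cost values and the heuristic are assumptions, not ROVANE's actual variant.

```python
import heapq

def a_star(cost_map, start, goal):
    """A* over a 2D grid of per-cell traversal costs (e.g. a hazard map);
    returns the lowest-cost path as a list of (row, col) cells."""
    rows, cols = len(cost_map), len(cost_map[0])
    def h(cell):                       # admissible heuristic: Manhattan distance
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    frontier = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + cost_map[nr][nc]
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None

# High-cost cells stand in for hazardous or slow regions.
hazard = [[1, 1, 1],
          [1, 9, 1],
          [1, 1, 1]]
print(a_star(hazard, (0, 0), (2, 2)))   # the path routes around the costly center cell
```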
Using a model of visual search that predicts fixation probabilities for hard-to-see targets in naturalistic images, it is possible to stochastically generate fixation sequences and times to detection for targets in these images. The purpose of the current work is to calibrate some of the parameters of a time-to-detection model. In particular, this work is an attempt to elucidate the parameters of the proposed fixation memory model: the strength and decay parameters. The methods used to perform this calibration consist chiefly of comparing the stochastic model with both experimental data and a theoretical analysis of a simplified scenario. The experimental data have been collected from ten observers performing a visual search experiment. During the experiment, eye fixations were tracked with an ISCAN infrared camera system. The visual search stimuli required fixation on target for detection (i.e., hard-to-detect stimuli). The experiment studied re-fixations of previously fixated targets, i.e., cases in which the fixation memory failed. The theoretical analysis is based on a simplified scenario that parallels the experimental setup, with a fixed number, N, of equally probable objects. It is possible to derive analytical expressions for the re-fixation probability in this case. The results of the analysis can be used in three different ways: (1) to verify the implementation of the stochastic model, (2) to estimate the stochastic parameters of the model (i.e., the number of fixation sequences to generate), and (3) to calibrate the fixation memory parameters by fitting the experimental data.
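A Monte Carlo sketch of the simplified scenario above: N equally probable objects, with each object's selection probability reduced by a decaying memory trace after it is fixated. The inhibition rule and parameter values are assumed simplifications for illustration; they are not the paper's derived analytical expressions.

```python
import random

def refixation_rate(n_objects=10, n_fixations=50, memory_strength=0.9,
                    memory_decay=0.8, trials=2000):
    """Monte Carlo estimate of how often the next fixation lands on an already
    fixated object, with a simple decaying inhibition on each object's memory."""
    refix, total = 0, 0
    for _ in range(trials):
        memory = [0.0] * n_objects
        visited = set()
        for _ in range(n_fixations):
            weights = [1.0 - memory_strength * m for m in memory]
            obj = random.choices(range(n_objects), weights=weights)[0]
            if obj in visited:
                refix += 1
            total += 1
            visited.add(obj)
            memory = [m * memory_decay for m in memory]   # memory decays each step
            memory[obj] = 1.0                             # just-fixated object is inhibited
    return refix / total

# Without inhibition (memory_strength=0) re-fixations become common once most
# objects have been seen; the memory lowers the estimated rate.
print(refixation_rate(memory_strength=0.0), refixation_rate(memory_strength=0.9))
```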
Hard-to-see targets are generally only detected by human observers once they have been fixated. Hence, understanding how the human visual system allocates fixation locations is necessary for predicting target detectability. Visual search experiments were conducted where observers searched for military vehicles in cluttered terrain. Instantaneous eye position measurements were collected using an eye tracker. The resulting data were partitioned into fixations and saccades, and analyzed for correlation with various image properties. The fixation data were used to validate our model for predicting fixation locations. This model generates a saliency map from bottom-up image features, such as local contrast. To account for top-down scene understanding effects, a separate cognitive bias map is generated. The combination of these two maps provides a fixation probability map, from which sequences of fixation points were generated.
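A minimal sketch of the map combination described above: a bottom-up saliency map (here, simple local contrast) is multiplied by a top-down cognitive bias map and normalized into a fixation probability map. The contrast measure, the multiplicative combination, and the synthetic inputs are assumptions for illustration.

```python
import numpy as np

def local_contrast(image, k=3):
    """Very simple bottom-up saliency: local standard deviation of intensity
    in a k x k neighborhood (a stand-in for the paper's contrast features)."""
    h, w = image.shape
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].std()
    return out

def fixation_probability_map(image, bias_map):
    """Combine a bottom-up saliency map with a top-down cognitive bias map and
    normalize so the result can be read as fixation probabilities."""
    combined = local_contrast(image) * bias_map
    total = combined.sum()
    return combined / total if total > 0 else combined

rng = np.random.default_rng(0)
image = rng.random((32, 32))
bias = np.ones((32, 32))   # hypothetical bias map, e.g. weighting rows near a horizon
print(fixation_probability_map(image, bias).sum())   # ~1.0
```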
The purpose of this work is to provide a model for the average time to detection for observers searching for targets in photo-realistic images of cluttered scenes. The proposed model builds on previous work that constructs a fixation probability map (FPM) from the image. This FPM is constructed from bottom-up features, such as local contrast, but also includes top-down cognitive effects, such as the location of the horizon. The FPM is used to generate a set of conspicuous points that are likely to be fixation points, along with initial probabilities of fixation. These points are used to assemble fixation sequences. The order of these fixations is clearly crucial for determining the time to fixation. Recognizing that different observers (unconsciously) choose different orderings of the conspicuous points, the present model performs a Monte Carlo simulation to find the probability of fixating each conspicuous point at each position in the sequence. The three main assumptions of this model are: the observer can only attend to the area of the image being fixated, each fixation has an approximately constant duration, and there is a short term memory for the locations of previous fixation points. This fixation point memory is an essential feature of the model, and the memory decay constant is a parameter of the model. Simulations show that the average time to fixation for a given conspicuous point in the image depends on the distribution of other conspicuous points. This is true even if the initial probability of fixation for a given point is the same across distributions, and only the initial probability of fixation of the other points is distributed differently.
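A Monte Carlo sketch of the kind of simulation described above: fixation sequences are sampled from initial fixation probabilities, previously fixated points are suppressed by a decaying memory, and the mean number of fixations before the target is fixated is estimated. The exponential-decay memory rule, parameter values, and example distributions are assumptions for illustration, not the paper's calibrated model.

```python
import numpy as np

def mean_time_to_fixation(p_init, target_index, decay=0.7, max_fix=50, trials=5000,
                          rng=np.random.default_rng(0)):
    """Estimate the mean number of fixations before a given conspicuous point is
    fixated, with previously fixated points suppressed by a decaying memory."""
    times = []
    for _ in range(trials):
        memory = np.zeros(len(p_init))
        for t in range(1, max_fix + 1):
            weights = p_init * (1.0 - memory)
            weights = weights / weights.sum()
            choice = rng.choice(len(p_init), p=weights)
            if choice == target_index:
                times.append(t)
                break
            memory *= decay            # older fixations are gradually forgotten
            memory[choice] = 1.0       # the point just fixated is fully suppressed
        else:
            times.append(max_fix)      # target not fixated within the budget
    return float(np.mean(times))

# Same initial fixation probability for the target (index 0), but a different
# distribution over the competing points, changes the mean time to fixation.
print(mean_time_to_fixation(np.array([0.2, 0.2, 0.2, 0.2, 0.2]), 0))
print(mean_time_to_fixation(np.array([0.2, 0.7, 0.05, 0.03, 0.02]), 0))
```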
To understand how a human operator performs visual search in complex scenes, it is necessary to take into account top-down cognitive biases in addition to bottom-up visual saliency effects. We constructed a model to elucidate the relationship between saliency and cognitive effects in the domain of visual search for distant targets in photo-realistic images of cluttered scenes. In this domain, detecting targets is difficult and requires high visual acuity. Sufficient acuity is only available near the fixation point, i.e. in the fovea. Hence, the choice of fixation points is the most important determinant of whether targets get detected. We developed a model that predicts the 2D distribution of fixation probabilities directly from an image. Fixation probabilities were computed as a function of local contrast (saliency effect) and proximity to the horizon (cognitive effect: distant targets are more likely to be found close to the horizon). For validation, the model's predictions were compared to ensemble statistics of subjects' actual fixation locations, collected with an eye tracker. The model's predictions correlated well with the observed data. Disabling the horizon-proximity functionality of the model significantly degraded prediction accuracy, demonstrating that cognitive effects must be accounted for when modeling visual search.
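A sketch of the horizon-proximity bias and the validation step described above: rows are weighted by distance from an assumed horizon line, and the predicted map is correlated with an empirical fixation density built from eye-tracker samples. The Gaussian falloff, the synthetic fixation samples, and the correlation measure are assumptions for illustration, not the paper's actual bias function or ensemble statistics.

```python
import numpy as np

def horizon_bias(height, width, horizon_row, sigma=5.0):
    """Top-down cognitive bias: weight rows by proximity to the horizon, since
    distant targets tend to appear near the horizon line (assumed Gaussian falloff)."""
    rows = np.arange(height)
    weights = np.exp(-0.5 * ((rows - horizon_row) / sigma) ** 2)
    return np.tile(weights[:, None], (1, width))

def fixation_density(fixations, height, width):
    """Empirical fixation density from eye-tracker (row, col) fixation samples."""
    density = np.zeros((height, width))
    for r, c in fixations:
        density[r, c] += 1
    return density / density.sum()

# Correlate the predicted map with observed fixation density; replacing the
# horizon term with a uniform bias would be expected to lower this correlation.
predicted = horizon_bias(40, 60, horizon_row=15)
predicted /= predicted.sum()
observed = fixation_density([(14, 10), (16, 30), (15, 45), (20, 5)], 40, 60)
corr = np.corrcoef(predicted.ravel(), observed.ravel())[0, 1]
print(round(corr, 3))
```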