This PDF file contains the front matter associated with SPIE Proceedings Volume 8756 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
This paper describes methods to affordably improve the robustness of distributed fusion systems by
opportunistically leveraging non-traditional data sources. Adaptive methods help find relevant data, create models,
and characterize the model quality. These methods can also measure the conformity of this non-traditional data with fusion system products, including situation modeling and mission impact prediction. Non-traditional data can improve the quantity, quality, availability, timeliness, and diversity of the baseline fusion system sources and can therefore improve prediction and estimation accuracy and robustness at all levels of fusion. Techniques are described that automatically learn to characterize and search non-traditional contextual data, enabling operators to integrate the data with high-level fusion systems and ontologies. These techniques apply an extension of the Data Fusion & Resource Management Dual Node Network (DNN) technical architecture at Level 4. The DNN architecture supports effective assessment and management of the expanded portfolio of data sources, entities of interest, models, and algorithms, including data pattern discovery and context conformity. Affordable model-driven and data-driven data mining methods that discover unknown models from non-traditional and 'big data' sources are used to automatically learn entity behaviors and correlations with fusion products [14, 15]. This paper describes our context assessment software development and a demonstration of context assessment of non-traditional data, compared against an intelligence, surveillance, and reconnaissance (ISR) fusion product based upon an IED POI workflow.
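To make the conformity idea concrete, here is a minimal Python sketch of one plausible conformity measure; the function name, the use of Pearson correlation, and the example series are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def conformity_score(nontraditional_series, fusion_product_series):
    """Hypothetical conformity measure: Pearson correlation between an
    indicator extracted from a non-traditional source (e.g., daily event
    counts) and the fusion system's predicted activity over the same period.
    High agreement supports admitting the source; low agreement flags it."""
    a = np.asarray(nontraditional_series, float)
    b = np.asarray(fusion_product_series, float)
    return float(np.corrcoef(a, b)[0, 1])

social_media_mentions = [3, 5, 9, 14, 11, 6]   # non-traditional indicator
predicted_activity    = [2, 6, 8, 15, 10, 7]   # fusion product estimate
print(round(conformity_score(social_media_mentions, predicted_activity), 2))
```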
Tactical platforms benefit greatly from the fusion of tracks from multiple sources in terms of increased situation awareness. As a necessary precursor to this track fusion, track-to-track association, or correlation, must first be performed. The related measurement-to-track fusion problem has been well studied, with multiple hypothesis tracking and multiple frame assignment methods showing the most success. The track-to-track problem differs in that the measurements themselves are not available; only track state update reports from the measuring sensors are. Multiple hypothesis, multiple frame correlation systems have previously been considered; however, their practical implementation under the constraints imposed by tactical platforms is daunting. The situation is further exacerbated by the inconvenient nature of reports from legacy sensor systems on bandwidth-limited communications networks. In this paper, consideration is given to the special difficulties encountered when attempting the correlation of tracks from legacy sensors on tactical aircraft, including the following: covariance information from reporting sensors is frequently absent or incomplete; system latencies can create temporal uncertainty in data; and computational processing is severely limited by hardware and architecture. Consideration is also given to practical solutions for dealing with these problems in a multiple hypothesis correlator.
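The covariance-absent difficulty can be illustrated with a small gating sketch. This is a generic chi-square gate with a fallback covariance, not the paper's correlator; the default variance and gate threshold are assumed values.

```python
import numpy as np

def association_cost(x_a, P_a, x_b, P_b=None, default_var=2500.0):
    """Squared Mahalanobis distance between two track state estimates.

    Falls back to an assumed diagonal covariance when the reporting
    sensor omits it (common for legacy track reports)."""
    if P_b is None:  # covariance absent or incomplete in the report
        P_b = np.eye(len(x_b)) * default_var
    d = x_a - x_b
    S = P_a + P_b    # combined uncertainty of the two estimates
    return float(d @ np.linalg.solve(S, d))

# Gate candidate pairs: keep hypotheses whose cost is under a chi-square
# threshold (e.g., 9.21 for 2 degrees of freedom at 99%).
x1, P1 = np.array([100.0, 50.0]), np.diag([900.0, 900.0])
x2 = np.array([130.0, 20.0])      # legacy report without covariance
print(association_cost(x1, P1, x2) < 9.21)   # True: pair survives the gate
```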
The capabilities of tactical intelligence, surveillance, and reconnaissance (ISR) payloads are expanding from single
sensor imagers to integrated systems-of-systems architectures. Increasingly, these systems-of-systems include multiple
sensing modalities that can act as force multipliers for the intelligence analyst. Currently, the separate sensing modalities
operate largely independently of one another, providing a selection of operating modes but not an integrated intelligence
product. We describe here a Sensor Management System (SMS) designed to provide a small, compact processing unit
capable of managing multiple collaborative sensor systems on-board an aircraft. Its purpose is to increase sensor
cooperation and collaboration to achieve intelligent data collection and exploitation. The SMS architecture is designed to
be largely sensor and data agnostic and provide flexible networked access for both data providers and data consumers. It
supports pre-planned and ad-hoc missions, with provisions for on-demand tasking and updates from users connected via
data links. Management of sensors and user agents takes place over standard network protocols such that any number
and combination of sensors and user agents, either on the local network or connected via data link, can register with the
SMS at any time during the mission. The SMS provides control over sensor data collection to handle logging and routing
of data products to subscribing user agents. It also supports the addition of algorithmic data processing agents for
feature/target extraction and provides for subsequent cueing from one sensor to another. The SMS architecture was
designed to scale from a small UAV carrying a limited number of payloads to an aircraft carrying a large number of
payloads. The SMS system is STANAG 4575 compliant as a removable memory module (RMM) and can act as a
vehicle specific module (VSM) to provide STANAG 4586 compliance (level-3 interoperability) to a non-compliant
sensor system. The SMS architecture will be described and results from several flight tests and simulations will be
shown.
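The registration and publish/subscribe behavior can be sketched as follows. This toy registry only illustrates the data-agnostic routing idea; the class and method names are hypothetical, and the real SMS runs over network protocols (and STANAG interfaces) not shown here.

```python
from collections import defaultdict

class SensorManagementSystem:
    """Toy registry illustrating sensor/data-agnostic publish/subscribe
    routing. Hypothetical API; the actual SMS protocol is not published."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> consumer callbacks
        self.sensors = {}                      # sensor_id -> metadata

    def register_sensor(self, sensor_id, metadata):
        self.sensors[sensor_id] = metadata     # mid-mission joins allowed

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, product):
        for cb in self.subscribers[topic]:     # route (and optionally log)
            cb(product)

sms = SensorManagementSystem()
sms.register_sensor("eo-1", {"modality": "EO", "stanag4586": False})
sms.subscribe("detections", lambda d: print("cue SAR at", d["loc"]))
sms.publish("detections", {"loc": (34.1, -118.3), "source": "eo-1"})
```

A subscriber here could equally be a logging service or an algorithmic processing agent that, on receiving a detection, issues a cue to another sensor.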
This paper addresses the use of implication rules (with uncertainty) within the Transferable Belief Model (TBM)
where the rules convey knowledge about relationships between two frames of discernment. Technical challenges
include: a) computational scalability of belief propagation, b) logical consistency of the rules, and c) uncertainty
of the rules. This paper presents a simplification of the formalism developed by Ristic and Smets for incorporating
uncertain implication rules into the TBM. By imposing two constraints on the form of implication rules, and
restricting results to singletons of the frame of discernment, we derive a belief function that can be evaluated in
polynomial time.
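For background, a generic (unnormalized) TBM conjunctive combination over a small frame can be written directly with subset masses. This sketch is exponential in the frame size in general; it does not implement the paper's polynomial-time simplification, which additionally constrains the rule form and restricts results to singletons.

```python
from itertools import product

def conjunctive_combine(m1, m2):
    """Unnormalized (TBM) conjunctive rule: mass may flow to the empty set."""
    out = {}
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        out[C] = out.get(C, 0.0) + a * b
    return out

def belief_singleton(m, x):
    """bel({x}): for a singleton, the only non-empty subset is itself."""
    return m.get(frozenset([x]), 0.0)

theta = frozenset(["rain", "dry"])
m_sensor = {frozenset(["rain"]): 0.6, theta: 0.4}   # partial ignorance
m_rule   = {frozenset(["rain"]): 0.3, theta: 0.7}   # uncertain rule effect
m = conjunctive_combine(m_sensor, m_rule)
print(belief_singleton(m, "rain"))                  # 0.72
```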
Condition-based maintenance (CBM) refers to the philosophy of performing maintenance when the need arises, based
upon indicators of deterioration in the condition of the machinery. Traditionally, CBM involves equipping machinery
with electronic sensors that continuously monitor components and collect data for analysis. The addition of the
multisensory capability of human cognitive functions (i.e., sensemaking, problem detection, planning, adaptation,
coordination, naturalistic decision making) to traditional CBM may create a fuller picture of machinery condition.
Cognitive systems engineering techniques provide an opportunity to utilize a dynamic resource—people acting as soft
sensors. The literature is extensive on techniques to fuse data from electronic sensors, but little work exists on fusing
data from humans with that from electronic sensors (i.e., hard/soft fusion). The purpose of my research is to explore,
observe, investigate, analyze, and evaluate the fusion of pilot and maintainer knowledge, experiences, and sensory
perceptions with digital maintenance resources. Hard/soft information fusion has the potential to increase problem
detection capability, improve flight safety, and increase mission readiness.
This proposed project consists of the creation of a methodology based upon the Living Laboratories framework, a research methodology built upon cognitive engineering principles [1]. This study performs a critical assessment of concept, which will support the development of activities to demonstrate hard/soft information fusion in operationally relevant aircraft-maintenance scenarios. It consists of fieldwork and knowledge elicitation to inform a simulation and a prototype.
The problem of multiclass classification is often modeled by breaking it down into a collection of binary classifiers, as
opposed to jointly modeling all classes with a single primary classifier. Various methods can be found in the literature
for decomposing the multiclass problem into a collection of binary classifiers. Typical algorithms that are studied here
include each versus all remaining (EVAR), each versus all individually (EVAI), and output correction coding (OCC).
With each of these methods a classifier fusion based decision rule is formulated utilizing the various binary classifiers to
determine the correct classification of an unknown data point. For example, with EVAR the binary classifier with
maximum output is chosen. For EVAI, the correct class is chosen using a majority voting rule, and with OCC a comparison algorithm based on a minimum Hamming distance metric is used. In this paper, it is demonstrated how these various methods perform utilizing the Bayesian Data Reduction Algorithm (BDRA) as the primary classifier. BDRA is a discrete data classification method that quantizes and reduces the dimensionality of feature data for best classification performance. In this case, BDRA is used not only to train the appropriate binary classifier pairs, but also to train on the discrete classifier outputs to formulate the correct classification decision for unknown data points. In this way, it is demonstrated how to predict which binary classification method (i.e., EVAR, EVAI, or OCC) performs best with BDRA. Experimental results are shown with real data sets taken from the Knowledge Extraction based on Evolutionary Learning (KEEL) and University of California at Irvine (UCI) repositories of classifier databases. In general, and for the data sets considered, it is shown that the best classification method, based on performance with unlabeled test observations, can be predicted from performance on labeled training data. Specifically,
the best method is shown to have the least overall probability of error, and the binary classifiers have the least overall
average quantization complexity.
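The three fusion decision rules are easy to state in code. The sketch below shows generic EVAR, EVAI, and OCC rules operating on binary-classifier outputs; the codebook and scores are made-up examples, and BDRA itself is not implemented here.

```python
import numpy as np

def evar_decide(scores):
    """Each-versus-all-remaining: pick the one-vs-rest classifier with
    maximum output."""
    return int(np.argmax(scores))

def evai_decide(pair_votes, n_classes):
    """Each-versus-all-individually (one-vs-one): majority vote over the
    winners declared by each pairwise classifier."""
    counts = np.bincount(pair_votes, minlength=n_classes)
    return int(np.argmax(counts))

def occ_decide(bit_outputs, codebook):
    """Output correction coding: nearest codeword by Hamming distance."""
    dists = (codebook != bit_outputs).sum(axis=1)
    return int(np.argmin(dists))

# Hypothetical 4-class example with 7-bit codewords.
codebook = np.array([[0, 0, 0, 0, 0, 0, 0],
                     [0, 1, 1, 1, 1, 0, 0],
                     [1, 0, 1, 1, 0, 1, 0],
                     [1, 1, 0, 1, 0, 0, 1]])
print(evar_decide([0.2, 0.9, 0.1, 0.3]))                 # -> 1
print(evai_decide(np.array([1, 1, 2, 1, 3, 2]), 4))      # -> 1
print(occ_decide(np.array([0, 1, 1, 1, 0, 0, 0]), codebook))  # -> 1
```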
We present a novel and innovative information fusion and visualization framework for multi-source intelligence
(multiINT) data using Spatial Voting (SV) and Data Modeling. We describe how different sources of information can be
converted into numerical form for further processing downstream, followed by a short description of how this
information can be fused using the SV grid. As an illustrative example, we show the modeling of cyberspace as cyber
layers for the purpose of tracking cyber personas. Finally, we describe a path ahead for creating interactive agile networks
through defender customized Cyber-cubes for network configuration and attack visualization.
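As a rough illustration of the grid idea, the sketch below accumulates weighted votes from heterogeneous sources onto a common 2D grid; the kernel-free voting, bounds, and weights are simplifying assumptions rather than the actual SV algorithm.

```python
import numpy as np

def spatial_vote(points, grid_size=32, bounds=((0, 1), (0, 1))):
    """Accumulate evidence from heterogeneous sources on a common 2D grid.
    Minimal grid-voting sketch; the actual SV kernel is not shown here."""
    grid = np.zeros((grid_size, grid_size))
    (x0, x1), (y0, y1) = bounds
    for x, y, weight in points:  # each source converted to (x, y, confidence)
        i = int((x - x0) / (x1 - x0) * (grid_size - 1))
        j = int((y - y0) / (y1 - y0) * (grid_size - 1))
        grid[j, i] += weight
    return grid

votes = [(0.210, 0.40, 1.0),   # e.g., geolocated cyber event
         (0.215, 0.41, 0.5),   # corroborating report, lower confidence
         (0.800, 0.10, 1.0)]
grid = spatial_vote(votes)
print(np.unravel_index(grid.argmax(), grid.shape))   # densest cell: (12, 6)
```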
The capability to track individuals across CCTV cameras is important for surveillance applications in large areas such as train stations, airports, and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. The system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields of view and have different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently: misses are reduced by 37%, a significant improvement.
We present a novel feature selection, fusion, and visualization utility using Spatial Voting (SV). This SV feature
optimization utility is designed to be an off-line stand-alone utility to help an investigator find useful feature pairs for
cluster analysis and lineage identification. The analysis enables the analyst to vary parameters manually and explore the combination that best yields visually distinct or significant groupings or spreading of data points, depending on the planned downstream use of the analysis. Several criteria are available to the user to determine the best SV grid size and feature pair, including minimizing zeros, minimizing covariance, balanced minimum
covariance, or the maximization of one of eight different scoring metrics: Containment, Rand Index, Purity, Precision,
Recall, F-Score, Normalized Mutual Information (NMI), and Adjusted Rand Index (ARI). The tool that is described in
this work facilitates this analysis and makes it simple, efficient, and interactive if the analyst so desires.
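A stripped-down version of the feature-pair search might look like the following, using two of the listed metrics (ARI and NMI) from scikit-learn; the clustering choice (k-means) and the scoring loop are assumptions for illustration, not the utility's implementation.

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def score_feature_pairs(X, labels, n_clusters):
    """Rank feature pairs by how well clustering on just that pair recovers
    known lineage labels, scored by ARI and NMI."""
    results = []
    for i, j in combinations(range(X.shape[1]), 2):
        pred = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X[:, [i, j]])
        results.append(((i, j),
                        adjusted_rand_score(labels, pred),
                        normalized_mutual_info_score(labels, pred)))
    return sorted(results, key=lambda r: -r[1])   # best ARI first

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
X[:30, 0] += 4.0                       # feature 0 separates the two groups
labels = np.array([0] * 30 + [1] * 30)
for pair, ari, nmi in score_feature_pairs(X, labels, 2)[:3]:
    print(pair, round(ari, 2), round(nmi, 2))
```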
For many applications, the combination of multiple data sources is required to reach a conclusion. Limited
communications and processing capabilities dictate transmitting, processing and reviewing information more likely to
affect a decision before less relevant data. Autonomous craft must therefore prioritize data for transmission onboard. Previous work demonstrated the ability to identify and prioritize discrepancies between collected and pre-existing image, topographic, and gravitational data. This paper focuses on combining single-source analyses of these datasets to produce a fused analysis for predicting the decision-impact value of the component data. This approach is evaluated for
persistent surveillance and asteroid assessment applications.
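The onboard prioritization step reduces, at its simplest, to a priority queue ordered by estimated decision impact. The sketch below illustrates that idea; the class name and impact scores are hypothetical.

```python
import heapq

class TransmissionQueue:
    """Send data products in order of estimated decision impact, so the most
    decision-relevant discrepancies go out over the limited link first."""
    def __init__(self):
        self._heap, self._n = [], 0

    def add(self, impact_score, product):
        # negate the score: heapq is a min-heap; the counter breaks ties FIFO
        heapq.heappush(self._heap, (-impact_score, self._n, product))
        self._n += 1

    def next_to_send(self):
        return heapq.heappop(self._heap)[2]

q = TransmissionQueue()
q.add(0.2, "routine image tile")
q.add(0.9, "gravity anomaly vs. prior model")   # large discrepancy
q.add(0.6, "topography mismatch")
print(q.next_to_send())   # -> gravity anomaly vs. prior model
```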
Image fusion involves merging two or more images in such a way as to retain the most desirable characteristics of each.
There are various image fusion methods and they can be classified into three main categories: i) Spatial domain, ii)
Transform domain, and iii) Statistical domain. We focus on the transform domain in this paper as spatial domain
methods are primitive and statistical domain methods suffer from a significant increase of computational complexity. In
the field of image fusion, performance analysis is important since the evaluation result gives valuable information which
can be utilized in various applications such as military operations, medical imaging, and remote sensing. In this paper, we
analyze and compare the performance of fusion methods based on four different transforms: i) wavelet transform, ii)
curvelet transform, iii) contourlet transform and iv) nonsubsampled contourlet transform. Fusion framework and scheme
are explained in detail, and two different sets of images are used in our experiments. Furthermore, various performance
evaluation metrics are adopted to quantitatively analyze the fusion results. The comparison results show that the
nonsubsampled contourlet transform method performs better than the other three. During the experiments, we also found that a decomposition level of 3 gave the best fusion performance, and levels beyond 3 did not significantly affect the results.
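A minimal wavelet-domain fusion scheme of the kind compared here can be written with PyWavelets: average the coarse approximation band and keep the larger-magnitude detail coefficients. The fusion rule and wavelet choice are common defaults assumed for illustration, not necessarily the exact scheme used in the experiments.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
    """Transform-domain fusion: average the approximation bands and keep the
    larger-magnitude detail coefficients (a common, simple fusion rule).
    Level 3 follows the finding that deeper levels add little."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                # approximation band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
    return pywt.waverec2(fused, wavelet)

a = np.random.rand(64, 64)       # stand-ins for the two source images
b = np.random.rand(64, 64)
print(wavelet_fuse(a, b).shape)  # (64, 64)
```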
The airborne LiDAR system, usually integrated with an optical camera, is an efficient way of acquiring 3D geographic information and is widely applied in building digital surface models (DSMs). However, when airborne LiDAR is used in urban areas with many tall buildings, the characteristic points of buildings are seldom measured, and the measured points are frequently too sparse to create precise building models. In this paper, an approach to refining urban-area DSMs by fusing airborne LiDAR point cloud data with optical imagery is put forward. First, the geometric relationship between each airborne LiDAR point and the corresponding pixel in the image taken synchronously by the optical camera is analyzed. The relative position and attitude parameters between the laser rangefinder and the camera are determined in the process of alignment and calibration. Second, the building roof edges in the optical image are extracted by edge detection. By tracing the roof edges, the contours of building roofs are acquired in vector format, and the characteristic points of buildings are further extracted. Third, all the LiDAR points on the roof of a specific building are separated from the point cloud by judging the geometric relation between the LiDAR points and the building outline, represented as a polygon, according to their plane coordinates. Finally, the DSM refinement for buildings is implemented: each pixel representing the building roof is assigned the height of the nearest LiDAR point inside the polygon. Ortho-photo maps and virtual building models of urban areas with higher quality can be produced from the refined DSM and optical images.
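The final assignment step (separate roof returns by point-in-polygon, then give each roof pixel the nearest inside point's height) can be sketched as follows; the toy outline and coordinates are illustrative.

```python
import numpy as np
from matplotlib.path import Path
from scipy.spatial import cKDTree

def refine_roof_heights(pixels_xy, roof_polygon, lidar_xyz):
    """Assign each roof pixel the height of the nearest LiDAR return that
    falls inside the roof outline (a polygon traced from the optical image)."""
    outline = Path(roof_polygon)
    inside = outline.contains_points(lidar_xyz[:, :2])   # roof returns only
    roof_pts = lidar_xyz[inside]
    tree = cKDTree(roof_pts[:, :2])                      # plane coordinates
    _, idx = tree.query(pixels_xy)
    return roof_pts[idx, 2]                              # nearest heights

roof = [(0, 0), (10, 0), (10, 6), (0, 6)]                # toy roof outline
lidar = np.array([[2, 2, 30.1], [8, 4, 30.3], [20, 20, 5.0]])  # last: ground
pixels = np.array([[1.0, 1.0], [9.0, 5.0]])
print(refine_roof_heights(pixels, roof, lidar))          # [30.1 30.3]
```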
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision
system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation,
the system builds a 3D model of a small region, combining information about distance, shape, texture and motion.
These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the
computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire
environment.
At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion
information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is
used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall
and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is
retrieved from the library of objects and placed into the virtual world. The difference between the input from the real
camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main
differences between them. This is then used to select the next points to focus on.
This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It
is also task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the
environment need to be examined.
The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud
Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot
planning. The hardware is a custom-built pan-tilt stereo color camera.
We describe experiments using both static and moving objects.
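A simple stand-in for the real-versus-virtual comparison is a thresholded difference of locally Gaussian-smoothed frames; the kernel size and threshold below are assumed values, not the paper's parameters.

```python
import cv2
import numpy as np

def error_mask(real_frame, virtual_frame, ksize=15, thresh=25):
    """Compare real and virtual camera views after local Gaussian smoothing;
    large residuals mark regions where the 3D model disagrees with reality
    and should be fixated next (a simple stand-in for the paper's method)."""
    a = cv2.GaussianBlur(real_frame, (ksize, ksize), 0).astype(np.int16)
    b = cv2.GaussianBlur(virtual_frame, (ksize, ksize), 0).astype(np.int16)
    diff = np.abs(a - b).astype(np.uint8)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

real = np.zeros((120, 160), np.uint8)
virt = real.copy()
virt[40:60, 70:90] = 200            # model contains an object reality lacks
m = error_mask(real, virt)
ys, xs = np.nonzero(m)
print(xs.mean(), ys.mean())         # next fixation target near (80, 50)
```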
In the future, we expect that UAV-platoon-based military/civilian missions will require persistent airborne network support for the mission's command, control, and communication needs. Highly dynamic mobile wireless sensor
networks operating in a large region present unique challenges in end-to-end communication for sensor data sharing and
data fusion, particularly caused by the time varying connectivity of high-velocity nodes combined with the unreliability
of the wireless communication channel. To establish an airborne communication network, a UAV must maintain a
link(s) with other UAV(s) and/or base stations. A link between two UAVs is deemed to be established when the linked
UAVs are in line of sight as well as within the transmission range of each other. Ideally, all the UAVs as well as the
ground stations involved in command, control and communication operations must be fully connected. However, the
continuous motion of UAVs poses a challenge to ensure full connectivity of the network. In this paper we explore the
dynamic topological network configuration control under mission-related constraints in order to maintain connectivity
among sensors enabling data sharing.
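Checking full connectivity under a range-plus-line-of-sight link model reduces to a graph search. In the sketch below, the `blocked` predicate stands in for an unspecified line-of-sight test, and the positions and range are illustrative.

```python
import numpy as np
from collections import deque

def fully_connected(positions, comm_range, blocked=lambda i, j: False):
    """Full-connectivity check: a link exists when two nodes are within
    transmission range and not blocked (line-of-sight stand-in)."""
    n = len(positions)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if (np.linalg.norm(positions[i] - positions[j]) <= comm_range
                    and not blocked(i, j)):
                adj[i].append(j)
                adj[j].append(i)
    seen, queue = {0}, deque([0])
    while queue:                      # BFS from node 0
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n

uavs = np.array([[0., 0.], [4., 0.], [8., 1.], [30., 0.]])
print(fully_connected(uavs, comm_range=5.0))   # False: node 3 is isolated
```

A topology controller would run a check like this each planning cycle and command a repositioning maneuver whenever connectivity is about to break.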
We address the problem of fusing laser ranging data from multiple mobile robots that are surveying an area as part of a robot
search and rescue or area surveillance mission. We are specifically interested in the case where members of the robot team are
working in close proximity to each other. The advantage of this teamwork is that it greatly speeds up the surveying process; the area
can be quickly covered even when the robots use a random motion exploration approach. However, the disadvantage of the close
proximity is that it is possible, and even likely, that the laser ranging data from one robot include many depth readings caused by
another robot. We refer to this as mutual interference.
Using a team of two Pioneer 3-AT robots with tilted SICK LMS-200 laser sensors, we evaluate several techniques for fusing
the laser ranging information so as to eliminate the mutual interference. There is an extensive literature on the mapping and
localization aspect of this problem. Recent work on mapping has begun to address dynamic or transient objects. Our problem differs
from the dynamic map problem in that we look at one kind of transient map feature, other robots, and we know that we wish to
completely eliminate the feature.
We present and evaluate three different approaches to the map fusion problem: a robot-centric approach, based on
estimating team member locations; a map-centric approach, based on inspecting local regions of the map; and a combination of both. We show results from several experiments with a two-robot team operating in a confined indoor environment.
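The robot-centric approach can be sketched as a filter that projects beam endpoints into the world frame and discards returns near a teammate's estimated position; the rejection radius and poses below are illustrative, not the experimental configuration.

```python
import numpy as np

def filter_teammate_returns(ranges, angles, own_pose, mate_xy, radius=0.5):
    """Robot-centric interference rejection: drop laser returns that land
    within `radius` meters of a team member's estimated position."""
    x0, y0, heading = own_pose
    # project each beam endpoint into the shared world frame
    ex = x0 + ranges * np.cos(heading + angles)
    ey = y0 + ranges * np.sin(heading + angles)
    hit_mate = np.hypot(ex - mate_xy[0], ey - mate_xy[1]) < radius
    return np.where(hit_mate, np.nan, ranges)   # NaN = exclude from the map

angles = np.deg2rad(np.array([-10.0, 0.0, 10.0]))
ranges = np.array([4.0, 2.0, 4.0])
own = (0.0, 0.0, 0.0)                 # at the origin, facing +x
clean = filter_teammate_returns(ranges, angles, own, mate_xy=(2.0, 0.0))
print(clean)                          # [ 4. nan  4.]: center beam hit the mate
```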
This paper describes the ongoing development of a robotic control architecture inspired by computational
cognitive architectures from the discipline of cognitive psychology. The Symbolic and Sub-Symbolic Robotics
Intelligence Control System (SS-RICS) combines symbolic and sub-symbolic representations of knowledge into a
unified control architecture. The new architecture leverages previous work in cognitive architectures, specifically
the development of the Adaptive Character of Thought-Rational (ACT-R) and Soar. This paper details current work
on learning from episodes or events. The use of episodic memory as a learning mechanism has, until recently, been
largely ignored by computational cognitive architectures. This paper details work on metric level episodic memory
streams and methods for translating episodes into abstract schemas. The presentation will include research on
learning through novelty and self-generated feedback mechanisms for autonomous systems.
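One minimal reading of episode-to-schema translation: store ordered (percept, action) events per episode and keep the action sequence shared by episodes with a common outcome. This is a deliberately simple abstraction rule assumed for illustration, not SS-RICS's mechanism.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    """One metric-level episode: what was sensed, done, and the outcome."""
    events: list           # ordered (percept, action) pairs
    outcome: str

def abstract_schema(episodes):
    """Collapse episodes sharing an outcome into a schema: the ordered
    actions common to all of them (a deliberately simple abstraction)."""
    common = None
    for ep in episodes:
        actions = [a for _, a in ep.events]
        common = actions if common is None else \
            [a for a, b in zip(common, actions) if a == b]
    return common

eps = [Episode([("door", "approach"), ("handle", "push"),
                ("hall", "enter")], "through"),
       Episode([("door", "approach"), ("handle", "push"),
                ("room", "enter")], "through")]
print(abstract_schema(eps))   # ['approach', 'push', 'enter']
```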
Robot navigation already has many relatively efficient solutions: reactive control, simultaneous localization and
mapping (SLAM), Rapidly-Exploring Random Trees (RRTs), etc. But many primates possess an additional inherent
spatial reasoning capability: mental rotation. Our research addresses the question of what role, if any, mental rotations
can play in enhancing existing robot navigational capabilities. To answer this question we explore the use of optical flow
as a basis for extracting abstract representations of the world, comparing these representations with a goal state of similar
format and then iteratively providing a control signal to a robot to allow it to move in a direction consistent with
achieving that goal state. We study a range of transformation methods to implement the mental rotation component of
the architecture, including correlation and matching based on cognitive studies. We also include a discussion of how
mental rotations may play a key role in understanding spatial advice giving, particularly from other members of the
species, whether in map-based format, gestures, or other means of communication. Results to date are presented on our
robotic platform.
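The correlation-based transformation search can be illustrated directly: rotate the current abstract representation through candidate angles and keep the one best matching the goal. The step size, interpolation, and toy bar image are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import rotate

def best_rotation(current, goal, step=10):
    """Mental-rotation sketch: rotate the current abstract view and keep the
    angle that best correlates with the goal representation. The returned
    angle can drive a turn command toward the goal state."""
    best_angle, best_score = 0, -np.inf
    for angle in range(0, 360, step):
        cand = rotate(current, angle, reshape=False, order=1)
        score = np.corrcoef(cand.ravel(), goal.ravel())[0, 1]
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle, best_score

goal = np.zeros((32, 32))
goal[10:22, 14:18] = 1.0                              # vertical bar
current = rotate(goal, 90, reshape=False, order=1)    # seen sideways
print(best_rotation(current, goal))   # ~90 or ~270 re-aligns this symmetric bar
```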
In a rapid serial visual presentation (RSVP), images are shown at an extremely rapid pace. Yet, the images can still be parsed by the visual system to some extent. In fact, the detection of specific targets in a stream of pictures triggers a characteristic electroencephalography (EEG) response that can be recognized by a brain-computer interface (BCI) and exploited for automatic target detection. Research funded by DARPA's Neurotechnology for Intelligence Analysts program has achieved speed-ups in sifting through satellite images when adopting this approach. This paper extends the use of BCI technology from individual analysts to collaborative BCIs. We show that the integration of information in EEGs collected from multiple operators results in performance improvements compared to the single-operator case.
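One simple way to integrate information across operators, assumed here for illustration rather than taken from the paper, is a weighted average of per-operator single-trial classifier scores:

```python
import numpy as np

def fuse_operator_scores(scores, weights=None):
    """Collaborative BCI sketch: combine per-operator target-detection scores
    (e.g., classifier outputs on single-trial EEG) by weighted averaging.
    Weights might reflect each operator's validation accuracy."""
    scores = np.asarray(scores, dtype=float)   # shape: (n_operators, n_images)
    if weights is None:
        weights = np.ones(scores.shape[0])
    w = np.asarray(weights, float) / np.sum(weights)
    return w @ scores                          # fused score per image

# Three operators viewing the same RSVP stream of five images:
scores = [[0.2, 0.8, 0.4, 0.1, 0.7],
          [0.3, 0.6, 0.5, 0.2, 0.9],
          [0.1, 0.9, 0.3, 0.2, 0.6]]
fused = fuse_operator_scores(scores, weights=[1.0, 0.8, 1.2])
print(np.argsort(fused)[::-1][:2])   # top-2 candidate targets -> [1 4]
```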
We describe our latest work in understanding spatial localization in open arenas based on rat studies and corresponding
modeling with simulated and physical robots. The studies and experiments focus on goal-oriented navigation where both
rats and robots exploit distal cues to localize and find a goal in an open environment. The task involves training of both
rats and robots to find the shortest path to the goal from multiple starting points in the environment. The spatial cognition
model is based on the neurophysiology of the rat hippocampus, extending previous work by analyzing the granularity
of localization in relation to a varying number and position of landmarks. The robot integrates internal and external
information to create a topological map of the environment and to generate shortest routes to the goal through path
integration. One of the critical challenges for the robot is to analyze the similarity of positions and distinguish among
different locations using visual cues and previous paths followed to reach the current position. We describe the robotics
architecture used to develop, simulate and experiment with physical robots.
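Once a topological map exists, shortest routes to the goal follow from standard graph search; the sketch below uses Dijkstra on a toy map with path-integrated edge costs (node names and weights are illustrative).

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over a topological map: nodes are distinct places, edge
    weights are path-integrated travel costs between them."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy map: start locations A/B, landmark-defined junctions, goal G.
graph = {"A": [("J1", 2.0), ("J2", 5.0)], "B": [("J2", 1.0)],
         "J1": [("G", 4.0)], "J2": [("G", 2.0)]}
print(shortest_route(graph, "A", "G"))   # ['A', 'J1', 'G'] (cost 6 < 7)
```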
The National Institute of Standards and Technology (NIST) has been researching human-robot-vehicle collaborative
environments for automated guided vehicles (AGVs) and manned forklifts. The safety of AGVs and manned vehicles with automated functions (e.g., forklifts that slow/stop automatically in hazardous situations) is the focus of the American National Standards Institute/Industrial Truck Standards Development Foundation (ANSI/ITSDF) B56.5 safety standard. Recently, the NIST Mobile Autonomous Vehicle Obstacle Detection/Avoidance (MAVODA) Project began researching test methods to detect humans or other obstacles entering the vehicle's path. Such intrusions pose potential safety hazards in manufacturing facilities, where both line-of-sight and non-line-of-sight conditions are prevalent. The test methods
described in this paper address both of these conditions. These methods will provide the B56.5 committee with the
measurement science basis for sensing systems - both non-contact and contact - that may be used in manufacturing
facilities.
Information Fusion Systems and Evaluation Measures
Most intelligence analysts currently use Information Products (IP) from multiple sources with very different
characteristics to perform a variety of intelligence tasks. In order to maximize the analysts’ efficacy (and ultimately
provide intelligent automation), it is important to understand how and what each IP within the set of IPs contributes
to the accuracy and validity of the analytic result. This paper describes initial research toward the development of a
scale, analogous to the National Imagery Interpretability Scale (NIIRS), which will measure the knowledge contribution
of each of the multi-source IPs, as well as the extent to which the IP set as a whole meets the end-user's intelligence need, which is actionable knowledge. This scale, the Knowledge-NIIRS (KnIIRS), when completed, will support the measurement of the quality and quantity of information gained through multi-source IP fusion and enable the development of smart (automated) tools for analysts using the next generation of PED workstations.
The results of this initial study indicate that analysts are capable of making judgments that reflect the
“value” of fused information, and that the judgments they make vary along at least two dimensions. Furthermore,
there are substantial and significant differences among analysts in how they make these judgments that must be
considered for further scale development. We suggest that the KnIIRS objectives and its derived understandings
offer important and critical insights to enable automation that will achieve the goal to deliver actionable knowledge.
This paper presents a solution for information integration and sharing architecture, which is able to receive data
simultaneously from multiple different sensor networks. Creating a Common Operational Picture (COP) object along
with the base map of the building plays a key role in the research. The object is combined with desired map sources and
then shared with the mobile devices worn by soldiers in the field. The sensor networks we used focus on location
techniques indoors, and a simple set of symbols is created to present the information, as an addition to NATO APP6B
symbols.
A core element in this research is the MUSAS (Mobile Urban Situational Awareness System), a demonstration
environment that implements central functionalities. Information integration of the system is handled by the Internet
Connection Engine (Ice) middleware, as well as the server, which hosts COP information and maps. The entire system is
closed, such that it does not need any external service, and the information transfer with the mobile devices is organized
by a tactical 5 GHz WLAN solution. The demonstration environment is implemented using only commercial off-the-shelf (COTS) products.
We present a field experiment in which the system was able to integrate and share real-time information from a blue force tracking system, a received signal strength indicator (RSSI) based intrusion detection system, and a robot using simultaneous localization and mapping (SLAM), with all inputs based on real activities. The event was held in an urban-warfare training area.
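The shape of a COP update shared to the mobile devices might resemble the following serialization sketch; the field names and JSON encoding are assumptions for illustration and do not reflect the MUSAS/Ice interface definitions.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class COPObject:
    """Minimal Common Operational Picture entry: a tracked entity with a
    symbol code, position, and source (blue force, RSSI intrusion, SLAM
    robot). Field names are illustrative, not the MUSAS schema."""
    entity_id: str
    symbol: str          # simplified indoor symbol or an APP6B code
    x: float
    y: float
    floor: int
    source: str

def serialize_update(objects):
    """Encode a COP update for distribution to subscribed mobile devices."""
    return json.dumps({"type": "cop_update",
                       "entities": [asdict(o) for o in objects]})

update = serialize_update([
    COPObject("bf-07", "friendly_dismount", 12.4, 3.1, 2, "blue_force"),
    COPObject("intr-1", "unknown_intruder", 5.0, 8.8, 1, "rssi_ids"),
])
print(update[:80], "...")
```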