Most intelligence analysts currently use Information Products (IP) from multiple sources with very different
characteristics to perform a variety of intelligence tasks. In order to maximize the analysts’ efficacy (and ultimately
provide intelligent automation), it is important to understand how and what each IP within the set of IPs contributes
to the accuracy and validity of the analytic result. This paper describes initial research toward the development of a
scale, analogous to the National Imagery Interpretability Rating Scale (NIIRS), which will measure the knowledge contribution
of each of the multi-source IPs, as well as the extent to which the IP set as a whole meets the end-user's
intelligence need: actionable knowledge. This scale, the Knowledge-NIIRS (KnIIRS), when
completed, will support the measurement of the quality and quantity of information gained through multi-source IP
fusion and enable the development of smart (automated) tools for analysts using the next generation of Processing, Exploitation, and Dissemination (PED) workstations.
The results of this initial study indicate that analysts are capable of making judgments that reflect the
“value” of fused information, and that the judgments they make vary along at least two dimensions. Furthermore,
there are substantial and significant differences among analysts in how they make these judgments that must be
considered for further scale development. We suggest that the KnIIRS objectives, and the understanding derived from
them, offer critical insights for enabling automation that delivers actionable knowledge.
A multi-modal (hyperspectral, multispectral, and LIDAR) imaging data collection campaign was conducted in Avon, NY, just south of Rochester, New York, on September 20, 2012 by the Rochester Institute of Technology (RIT) in conjunction with SpecTIR, LLC, the Air Force Research Lab (AFRL), the Naval Research Lab (NRL), United Technologies Aerospace Systems (UTAS), and MITRE. The campaign was a follow-on to the 2010 SpecTIR Hyperspectral Airborne Rochester Experiment (SHARE). Data was collected in support of the eleven simultaneous experiments described here. The airborne imagery was collected over four different sites with hyperspectral, multispectral, and LIDAR sensors. The sites for data collection included Avon, NY; Conesus Lake; Hemlock Lake and forest; and a nearby quarry. Experiments included topics such as target unmixing, subpixel detection, material identification, impacts of illumination on materials, forest health, and in-water target detection. An extensive ground truthing effort was conducted in addition to the collection of the airborne imagery. The ultimate goal of the data collection campaign is to provide the remote sensing community with a shareable resource to support future research. This paper details the experiments conducted and the data collected during this campaign.
Performing persistent surveillance of large populations of targets is increasingly important in both the defence and security domains. In response to this, Wide Area Motion Imagery (WAMI) sensors with wide fields of view (FoVs) are growing in popularity. Such WAMI sensors simultaneously provide high spatial and temporal resolutions, giving extreme pixel counts over large geographical areas. The ensuing data rates are such that either very high-bandwidth data links are required (e.g. for human interpretation) or close-to-sensor automation is required to down-select salient information. For the latter case, we use an iterative quad-tree optical-flow algorithm to efficiently estimate the parameters of a perspective deformation of the background. We then use a robust estimator to simultaneously detect foreground pixels and infer the parameters of each background pixel in the current image. The resulting detections are referenced to the coordinates of the first frame and passed to a multi-target tracker. The multi-target tracker uses a Kalman filter per target and a Global Nearest Neighbour approach to multi-target data association, thereby including statistical models for missed detections and false alarms. We use spatial data structures to ensure that the tracker can scale to analysing thousands of targets. We demonstrate that real-time processing (on modest hardware) is feasible on an unclassified WAMI infra-red dataset consisting of 4096 by 4096 pixels at 1 Hz, simulating data taken from a wide-FoV sensor on a UAV. With low latency, and despite intermittent obscuration and false alarms, we demonstrate persistent tracking of all but one (low-contrast) vehicular target, with no false tracks.
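To make the data-association step concrete, the sketch below pairs Kalman-filter track predictions with detections using the Global Nearest Neighbour approach the abstract names. It is a minimal illustration, not the authors' implementation; the gate value and the use of SciPy's assignment solver are assumptions.

```python
# Minimal sketch of Global Nearest Neighbour (GNN) data association between
# predicted track positions and detections, one step of a Kalman-filter
# multi-target tracker. Gate value is an illustrative assumption.
import numpy as np
from scipy.optimize import linear_sum_assignment

GATE = 9.21  # chi-square 99% gate for 2-D measurements (assumed value)

def gnn_associate(predicted, innovation_covs, detections):
    """Return (track, detection) index pairs chosen by GNN.

    predicted:        (T, 2) predicted track positions from each Kalman filter
    innovation_covs:  (T, 2, 2) innovation covariances from each Kalman filter
    detections:       (D, 2) detected positions in the current frame
    """
    T, D = len(predicted), len(detections)
    cost = np.full((T, D), 1e6)                    # large cost = forbidden pairing
    for t in range(T):
        s_inv = np.linalg.inv(innovation_covs[t])
        for d in range(D):
            v = detections[d] - predicted[t]       # innovation vector
            d2 = float(v @ s_inv @ v)              # squared Mahalanobis distance
            if d2 < GATE:                          # gating rejects unlikely pairs
                cost[t, d] = d2
    rows, cols = linear_sum_assignment(cost)       # globally optimal assignment
    return [(t, d) for t, d in zip(rows, cols) if cost[t, d] < GATE]
```

For the thousands of targets mentioned above, the dense cost matrix would be replaced by candidate pairs drawn from a spatial index (e.g. a k-d tree), which is one way to realise the spatial data structures the abstract refers to.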
A significant challenge in the adoption of today's digital imaging standards is making a clear connection between them
and today's vernacular digital imaging vocabulary. Commonly used terms like resolution, dynamic range, delta E, white
balance, exposure, or depth of focus are mistakenly considered measurements in their own right and are frequently
depicted as a disconnected shopping list of individual metrics with little common foundation. In fact, many of these are
simple summary measures derived from more fundamental imaging science/engineering metrics, adopted in existing
standard protocols.
Four important underlying imaging performance metrics are: Spatial Frequency Response (SFR), Opto-Electronic
Conversion Function (OECF), Noise Power Spectrum (NPS), and Spatial Distortion. We propose an imaging
performance taxonomy. With a primary focus on image capture performance, our objective is to indicate connections
between related imaging characteristics and provide context for the array of commonly used terms. Starting with the
concepts of Signal and Noise, the above imaging performance metrics are related to several simple measures that are
compatible with testing for design verification, manufacturing quality assurance, and technology selection evaluation.
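As a concrete example of one of these underlying metrics, the following sketch estimates a two-dimensional Noise Power Spectrum from a nominally uniform test patch. Normalization conventions differ between standards, so the discrete form used here is an assumption rather than a specific ISO protocol.

```python
# Illustrative NPS estimate from a nominally uniform patch (assumed
# normalization convention, not a specific standard's protocol).
import numpy as np

def noise_power_spectrum(patch, pixel_pitch=1.0):
    """2-D NPS of a uniform patch; pixel_pitch in mm gives units^2 * mm^2."""
    patch = patch.astype(float)
    patch = patch - patch.mean()               # remove the mean (signal) level
    ny, nx = patch.shape
    spectrum = np.abs(np.fft.fft2(patch)) ** 2
    nps = spectrum * (pixel_pitch ** 2) / (nx * ny)
    return np.fft.fftshift(nps)                # put zero frequency at the center

# Sanity check: white Gaussian noise of variance 4 gives a flat NPS whose
# mean (at unit pixel pitch) equals that variance.
rng = np.random.default_rng(0)
nps = noise_power_spectrum(rng.normal(0.0, 2.0, (256, 256)))
print(nps.mean())  # ~= 4.0
```

A one-dimensional NPS, as often reported, can then be obtained by radially averaging this 2-D estimate.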
The process of extracting information from hyperspectral imagery datasets provided by newer sensor systems can be enhanced through a combination of unique spectral processing algorithms. The first technique we describe is a method for selecting the relevant bands within a hyperspectral dataset; this set of optimized bands provides the greatest potential for discriminating specified materials of interest. The second process, subpixel spectral identification, uses the results from the subset of hyperspectral bands to further refine and distinguish between specific materials of interest, improving classification accuracy and diminishing false alarms. Comparison results produced using the full hyperspectral bandset, a six-band selection chosen based on thematic-mapper band centers, and the optimized bandset are presented for a test scene using HYDICE hyperspectral imagery.
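The abstract does not state the band-selection criterion, so the sketch below is only a hypothetical illustration of the idea: it ranks bands by the Fisher ratio between target and background training spectra and keeps the top n. The function name and the synthetic 210-band example (matching HYDICE's band count) are assumptions for illustration.

```python
# Hypothetical band-subset selection by per-band Fisher ratio; not the
# paper's algorithm, just an illustration of optimized band selection.
import numpy as np

def select_bands(target, background, n_bands):
    """target: (Nt, B) target spectra; background: (Nb, B); returns band indices."""
    mu_t, mu_b = target.mean(0), background.mean(0)
    var_t, var_b = target.var(0), background.var(0)
    fisher = (mu_t - mu_b) ** 2 / (var_t + var_b + 1e-12)  # per-band separability
    return np.argsort(fisher)[::-1][:n_bands]              # indices of best bands

# Synthetic example with 210 bands (the HYDICE band count): the target
# differs from the background only in a narrow absorption region.
rng = np.random.default_rng(1)
bg = rng.normal(0.3, 0.05, (500, 210))
tgt = bg[:50].copy()
tgt[:, 60:70] += 0.2
print(select_bands(tgt, bg[50:], 6))  # indices fall in the 60..69 region
```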
Grayscale accuracy and consistency, along with spatial frequency performance at high and low modulation levels, were characterized for two types of 14 by 17-inch medical laser printers: conventional silver-film printers and a dry-process, pigment-transfer printer. A total of 180 images of various test targets were created on a silver-film system operating in a clinical environment over a 30-day period. A similar number of films were produced on both a pigment-transfer imager and a silver-film printer under controlled laboratory conditions. Every care was taken to ensure that all systems were operating within the manufacturer's specifications. Evaluations consisted of objective densitometric measurements from uniform fields, grayscale step tablets, and spatial frequency gratings. Results comparing the tone and spatial performance of these systems are presented.
Itek Optical Systems has developed a hybrid multispectral imagery simulation capability based on physical images of a terrain board and computer modeling of radiation propagation. This process produces multispectral imagery within the 0.4 µm to 2.5 µm wavelength region that reflects the complex interactions among the ground scene reflectance, atmospheric radiance and attenuation, system acquisition conditions, and sensor performance characteristics. This process is used to evaluate performance of multispectral sensor designs by comparing output products representative of each design. Imagery produced by this process is also well suited for automatic processing algorithms because various imaging parameters are easily and independently altered in a controlled manner. These parameters may include spectral band placement, bandwidth, number of bands, image spatial resolution, sensor noise, feature location, and mixed pixel composition. The resulting simulation imagery can be otherwise identical in scene content, allowing direct analysis of algorithm performance as a function of specific input scene and acquisition conditions.
This paper describes and compares two different methods for combining multisensor images into single integrated pictures for visual data analysis and data exploration. In the specific case considered here, the original images are thermal (IR) and visible. The first method preserves contrast in the thermal image and modulates local contrast by the structure of the high-frequency information in the visible image. This method produces a conventional gray-scale picture. The second method encodes the intensity at each pixel position in each image as the length of a line-segment, or 'limb,' of a stick-figure icon at the corresponding position in the output picture. This method produces an 'iconographic' picture. Although these two approaches differ significantly, they both satisfy the goal of incorporating the unique features of the thermal and visible images in a single integrated picture. We discuss the strengths and weaknesses of each method, and we suggest ways in which each might be improved and extended.
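A minimal sketch of the first method, assuming co-registered floating-point images and illustrative values for the smoothing scale and modulation gain (neither is specified above):

```python
# Sketch of thermal-base fusion: keep the IR image as the base and modulate
# its local contrast with the high-frequency content of the visible image.
# sigma and gain are assumed, illustrative parameters.
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_ir_visible(ir, visible, sigma=3.0, gain=0.5):
    """ir, visible: co-registered float arrays in [0, 1]; returns a fused image."""
    high_freq = visible - gaussian_filter(visible, sigma)  # visible detail only
    fused = ir * (1.0 + gain * high_freq)                  # modulate IR local contrast
    return np.clip(fused, 0.0, 1.0)                        # keep a valid gray scale
```

The second, iconographic method does not reduce to a few lines in the same way, since it renders a stick-figure glyph per pixel position rather than a gray-scale value.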