A key component in the emerging localization and mapping paradigm is an appearance-based place recognition
algorithm that detects when a place has been revisited. This algorithm can run in the background at a low
frame rate and be used to signal a global geometric mapping algorithm when a loop is detected. An optimization
technique can then be used to correct the map by 'closing the loop'. This allows an autonomous unmanned ground
vehicle to improve localization and map accuracy and successfully navigate large environments. Image-based
place recognition techniques, however, often lack robustness to changes in sensor orientation and to varying lighting conditions. Additionally,
the quality of range estimates from monocular or stereo imagery can decrease the loop closure accuracy. Here,
we present a lidar-based place recognition system that is robust to these challenges. This probabilistic framework
learns a generative model of place appearance and determines whether a new observation comes from a new or
previously seen place. Highly descriptive features called the Variable Dimensional Local Shape Descriptors are
extracted from lidar range data to encode environment features. The range data processing has been implemented
on a graphics processing unit to optimize performance. The system runs in real-time on a military research
vehicle equipped with a highly accurate, 360 degree field of view lidar and can detect loops regardless of the
sensor orientation. Promising experimental results are presented for both rural and urban scenes in large outdoor
environments.
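The new-versus-revisited decision described above can be illustrated with a toy generative model. This is a minimal sketch, not the paper's actual framework: the multinomial place model over a feature vocabulary and the uniform "new place" likelihood are assumptions made purely for illustration.

```python
import numpy as np

def observation_likelihood(obs, place, eps=1e-3):
    # Multinomial likelihood of an observed feature histogram under a
    # smoothed per-place feature distribution (toy place model).
    p = (place + eps) / (place + eps).sum()
    return np.prod(p ** obs)

def classify(obs, places, p_new=1e-2):
    # Posterior over the known places plus a 'new place' hypothesis,
    # which uses a uniform likelihood over the feature vocabulary.
    v = len(obs)
    scores = [observation_likelihood(obs, pl) for pl in places]
    scores.append(p_new * (1.0 / v) ** obs.sum())
    post = np.array(scores) / np.sum(scores)
    # Index len(places) means 'this is a new place'.
    return int(np.argmax(post)), post
```

A revisit to a mapped place yields a high posterior on that place's model, while an observation unlike any stored model pushes the mass onto the new-place hypothesis.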
KEYWORDS: Video, Video surveillance, Video acceleration, 3D modeling, Video processing, Surveillance, Reconnaissance, Geographic information systems, Calibration, Visualization
Airborne surveillance and reconnaissance are essential for many military missions. Such capabilities are critical for troop
protection, situational awareness, mission planning and others, such as post-operation analysis / damage assessment.
Motion imagery gathered from both manned and unmanned platforms provides surveillance and reconnaissance
information that can be used for pre- and post-operation analysis, but these sensors can gather large amounts of video
data. It is extremely labour-intensive for operators to analyse hours of collected data without the aid of automated tools.
At MDA Systems Ltd. (MDA), we have previously developed a suite of automated video exploitation tools that can
process airborne video, including mosaicking, change detection and 3D reconstruction, within a GIS framework. The
mosaicking tool produces a geo-referenced 2D map from the sequence of video frames. The change detection tool
identifies differences between two repeat-pass videos taken of the same terrain. The 3D reconstruction tool creates
calibrated geo-referenced photo-realistic 3D models.
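As a rough illustration of the change-detection idea only (not MDA's actual tool, which must also handle registration, parallax and illumination effects), a pixel-wise difference between two co-registered frames can flag changed regions:

```python
import numpy as np

def detect_changes(img_a, img_b, threshold=30):
    # Toy illustration: given two co-registered single-channel frames,
    # threshold the per-pixel absolute difference to flag changes.
    diff = np.abs(img_a.astype(np.int32) - img_b.astype(np.int32))
    return diff > threshold
```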
The key objectives of the on-going project are to improve the robustness, accuracy and speed of these tools, and make
them more user-friendly to operational users. Robustness and accuracy are essential to provide actionable intelligence,
surveillance and reconnaissance information. Speed is important to reduce operator time on data analysis. We are
porting some processor-intensive algorithms to run on a Graphics Processing Unit (GPU) in order to improve
throughput. Many aspects of video processing are highly parallel and well-suited for optimization on GPUs, which are
now commonly available on computers.
Moreover, we are extending the tools to handle video data from various airborne platforms and developing the interface
to the Coalition Shared Database (CSD). The CSD server enables the dissemination and storage of data from different
sensors among NATO countries. The CSD interface allows operational users to search and retrieve relevant video data
for exploitation.
In this paper, we propose and illustrate a methodology for classifying the change detection results generated from repeat-pass
polarimetric RADARSAT-2 images and segmenting only the changes of interest to a given user while suppressing
all other changes. The detected changes are first classified based on generated supervised ground-cover classification of
the polarimetric SAR images between which changes were detected. In the absence of reliable ground truth needed for
generating supervised classification training sets, we rely on the use of periodically acquired high-resolution, multispectral
optical imagery in order to classify the manually selected training sets before computing their classes' statistics
from the SAR images. The classified detected changes can then be segmented to isolate the changes of interest, as
specified by the user and suppress all other changes. The proposed polarimetric change detection, classification and
segmentation method overcomes some of the challenges encountered when visualizing and interpreting typical raw
change results. Often these non-classified change detection results tend to be too crowded, as they show all the changes
including those of interest to the user as well as other non-relevant changes. Also, some of the changes are difficult to
interpret, especially those attributed to a mixture of backscatter mechanisms. We shall illustrate how to generate,
classify and segment polarimetric change detection results from two SAR images over a selected region of interest.
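The classify-then-segment step described above can be sketched as a simple masking operation. This is an illustrative sketch only; the integer class labels and the function name are assumptions, not the paper's implementation.

```python
import numpy as np

def segment_changes_of_interest(change_mask, class_map, classes_of_interest):
    # Given a boolean change mask and a per-pixel ground-cover class map,
    # keep only those changes whose class the user flagged as of interest.
    interest = np.isin(class_map, list(classes_of_interest))
    return change_mask & interest
```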
Tracking the progress and impact of large scale projects in areas of active conflict is challenging. In early 2010, the
Canadian International Development Agency (CIDA) broke ground on an ambitious project to rehabilitate a network of
just under 600 km of canals that supply water from the Arghandab River throughout southern Kandahar Province,
thereby restoring a reliable and secure water supply and stimulating a once vibrant agricultural region. Monitoring the
region for signs of renewal is difficult due to the large areal extent of the irrigated land and safety concerns. With the
support of the Canadian Space Agency, polarimetric change detection techniques are applied to space-borne SAR data to
safely monitor the area through a time-series of RADARSAT-2 images acquired during the rehabilitation ground work
and subsequent growing seasons. Change detection maps delineating surface cover improvement will aid CIDA in
demonstrating the positive value of Canada's investment in renovating Afghanistan's irrigation system to improve water
distribution. This paper examines the use of value-added SAR imaging products to provide short- and long-term
monitoring suitable for assessing the impact and benefit of large scale projects and discusses the challenges of
integrating remote sensing products into a non-expert user community.
Space-borne Synthetic Aperture Radar (SAR) sensors, such as RADARSAT-1 and -2, enable a multitude of defense and
security applications owing to their unique capabilities of cloud penetration, day/night imaging and multi-polarization
imaging. As a result, advanced SAR image time series exploitation techniques such as Interferometric SAR (InSAR) and
Radargrammetry are now routinely used in applications such as underground tunnel monitoring, infrastructure
monitoring and DEM generation. Imaging geometry, as determined by the satellite orbit and imaged terrain, plays a
critical role in the success of such techniques.
This paper describes the architecture and the current status of development of a geometry-based search engine that
allows the search and visualization of archived and future RADARSAT-1 and -2 images appropriate for a variety of
advanced SAR techniques and applications. Key features of the search engine's scalable architecture include (a)
Interactive GIS-based visualization of the search results; (b) A client-server architecture for online access that produces
up-to-date searches of the archive images and that can, in future, be extended to acquisition planning; (c) A technique-specific
search mode, wherein an expert user explicitly sets search parameters to find appropriate images for advanced
SAR techniques such as InSAR and Radargrammetry; (d) A future application-specific search mode, wherein all search
parameters implicitly default to preset values according to the application of choice, such as tunnel monitoring, DEM
generation and deformation mapping; (e) Accurate baseline calculations for InSAR searches and optimum beam
configuration for Radargrammetric searches; and (f) Simulated quick-look images and technique-specific sensitivity maps in
the future.
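The InSAR baseline calculation mentioned above can be sketched geometrically: the perpendicular baseline is the component of the inter-orbit separation orthogonal to the master look direction, and it governs InSAR height sensitivity. This is a simplified point-geometry sketch; a real implementation works from precise orbit state vectors and per-pixel geometry.

```python
import numpy as np

def perpendicular_baseline(pos_master, pos_slave, target):
    # Baseline vector between the two acquisition positions.
    baseline = np.asarray(pos_slave, float) - np.asarray(pos_master, float)
    # Unit look vector from the master position to the imaged target.
    look = np.asarray(target, float) - np.asarray(pos_master, float)
    look /= np.linalg.norm(look)
    # Remove the along-look (parallel) component; what remains is
    # the perpendicular baseline magnitude.
    parallel = baseline.dot(look) * look
    return float(np.linalg.norm(baseline - parallel))
```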
KEYWORDS: 3D modeling, 3D acquisition, 3D image processing, Data modeling, Clouds, Databases, Video, Detection and tracking algorithms, RGB color model, LIDAR
3D imagery has a well-known potential for improving situational awareness and battlespace visualization by
providing enhanced knowledge of uncooperative targets. This potential arises from the numerous advantages
that 3D imagery has to offer over traditional 2D imagery, thereby increasing the accuracy of automatic target
detection (ATD) and recognition (ATR). Despite advancements in both 3D sensing and 3D data exploitation,
3D imagery has yet to demonstrate a true operational gain, partly due to the processing burden of the massive
data loads generated by modern sensors. In this context, this paper describes the current status of a workbench
designed for the study of 3D ATD/ATR. Among the project goals is the comparative assessment of algorithms
and 3D sensing technologies given various scenarios. The workbench comprises three components: a
database, a toolbox, and a simulation environment. The database stores, manages, and edits input data of
various types such as point clouds, video, still imagery frames, CAD models and metadata. The toolbox features
data processing modules, including range data manipulation, surface mesh generation, texture mapping, and
a shape-from-motion module to extract a 3D target representation from video frames or from a sequence of
still imagery. The simulation environment includes synthetic point cloud generation, 3D ATD/ATR algorithm
prototyping environment and performance metrics for comparative assessment. In this paper, the workbench
components are described and preliminary results are presented. Ladar, video and still imagery datasets collected
during airborne trials are also detailed.
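The synthetic point cloud generation mentioned for the simulation environment can be illustrated with a toy example. This is purely illustrative: a real generator would model the sensor's scan geometry, range-dependent noise and occlusion, none of which appear here.

```python
import numpy as np

def synthetic_sphere_cloud(n, radius=1.0, noise=0.01, seed=0):
    # Sample n points uniformly on a sphere of the given radius and
    # perturb them with Gaussian noise, mimicking a noisy ladar return
    # from a simple spherical target.
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return radius * v + rng.normal(scale=noise, size=(n, 3))
```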
Non-invasive estimation of regional cardiac function is important for assessment of myocardial contractility.
The use of MR tagging technique enables acquisition of intra-myocardial tissue motion by placing a spatially
modulated pattern of magnetization whose deformation with the myocardium over the cardiac cycle can be
imaged. Quantitative computation of parameters such as wall thickening, shearing, rotation, torsion and strain
within the myocardium is traditionally achieved by processing the tag-marked MR image frames to 1) segment
the tag lines and 2) detect the correspondence between points across the time-indexed frames. In this paper,
we describe our approach to solving this problem using the Large Deformation Diffeomorphic Metric Mapping
(LDDMM) algorithm in which tag-line segmentation and motion reconstruction occur simultaneously. Our
method differs from earlier non-rigid registration based cardiac motion estimation methods in that
our matching cost incorporates image intensity overlap via the L2 norm and the estimated transformations are
diffeomorphic. We also present a novel method of generating synthetic tag line images with known ground truth
and motion characteristics that closely follow those in the original data; these can be used for validation of
motion estimation algorithms. Initial validation shows that our method is able to accurately segment tag-lines
and estimate a dense 3D motion field describing the motion of the myocardium in both the left and the right
ventricle.
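In standard LDDMM notation (a sketch of the usual form of the functional, not necessarily the paper's exact cost), the matching energy combines a smoothness penalty on a time-varying velocity field $v_t$ with the L2 intensity mismatch mentioned above:

```latex
E(v) = \int_0^1 \lVert v_t \rVert_V^2 \, dt
     + \frac{1}{\sigma^2}
       \left\lVert I_0 \circ \varphi_1^{-1} - I_1 \right\rVert_{L^2}^2,
\qquad
\dot{\varphi}_t = v_t(\varphi_t), \quad \varphi_0 = \mathrm{id},
```

where $\varphi_1$ is the endpoint of the flow of $v_t$, so the minimizing transformation is diffeomorphic by construction.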