This paper shows the benefits that data fusion and related techniques bring to urban Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR) systems, demonstrated through the practical application of multi-dimensional fusion research in the United Kingdom. The paper highlights work done in the areas of super-resolution, joint fusion, multi-resolution target detection and identification, and task-based image and video fusion assessment. Work done to date has produced practical, pertinent research products with direct applicability to the problems posed.
Presented in this paper is a detailed novel approach to tracking multiple moving targets from multiple moving platforms, fusing the individual estimates within platform-centric nodes via covariance intersection. The approach deconstructs the target model into a nonlinear element and a Kalman filter that models the position and velocity vectors of the targets, avoiding the increased complexity of Extended Kalman Filters. The model state noise covariance is restructured by considering the source of the noise within the simplified imposed model, and the measurement noise covariance is estimated by a single-coefficient optimized moving average filter. The filter coefficient is determined optimally by minimizing the variance of the Frobenius norm of the current estimated measurement covariance matrix, via a fuzzy logic feedback structure.
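The covariance intersection step named in the abstract is a standard rule for fusing estimates whose cross-correlation is unknown. The sketch below is a minimal illustration of that general rule, not the authors' implementation; the trace-minimizing choice of the weight `omega` is one common criterion, and the function names are assumptions:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega):
    """Fuse two estimates with unknown cross-correlation via covariance intersection."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(omega * P1i + (1 - omega) * P2i)
    x = P @ (omega * P1i @ x1 + (1 - omega) * P2i @ x2)
    return x, P

def fuse(x1, P1, x2, P2):
    """Pick omega by grid search, minimizing the trace of the fused covariance."""
    omegas = np.linspace(0.01, 0.99, 99)
    best = min(omegas,
               key=lambda w: np.trace(covariance_intersection(x1, P1, x2, P2, w)[1]))
    return covariance_intersection(x1, P1, x2, P2, best)
```

Because the rule interpolates between the two information matrices, the fused covariance remains consistent no matter how the platform-centric estimates are correlated, which is what makes it attractive for distributed multi-platform tracking.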
This paper presents an algorithm for aligning 2D video to 3D point clouds. The paper is a vignette of on-going research in the area of 3D urban environment modelling, whose aim is to produce accurate, fast and useable 3D maps of the dynamic urban environment. The paper presents the development of the algorithm, followed by the processing and implementation procedure used to produce a realistic 3D model of an urban environment from the 3D point cloud and RGB video collected by the system. To allow further discussion, the paper concludes with the results of draping 2D video frames onto a solid surface developed from the 3D point clouds.
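Draping video onto 3D geometry ultimately rests on projecting 3D points into the camera frame so each point can pick up a pixel colour. The sketch below shows that projection step under a pinhole camera model with known intrinsics `K` and pose `(R, t)`; the function names and the nearest-pixel lookup are illustrative assumptions, not the paper's alignment algorithm:

```python
import numpy as np

def project_points(points, K, R, t):
    """Project Nx3 world points into the image plane with intrinsics K and pose (R, t)."""
    cam = (R @ points.T + t.reshape(3, 1)).T      # world frame -> camera frame
    in_front = cam[:, 2] > 0                      # only points in front of the camera
    uvw = (K @ cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                 # perspective divide
    return uv, in_front

def drape_colors(points, image, K, R, t):
    """Assign each 3D point the RGB of the nearest pixel it projects to."""
    uv, valid = project_points(points, K, R, t)
    h, w = image.shape[:2]
    px = np.round(uv).astype(int)
    inside = valid & (px[:, 0] >= 0) & (px[:, 0] < w) \
                   & (px[:, 1] >= 0) & (px[:, 1] < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[inside] = image[px[inside, 1], px[inside, 0]]
    return colors, inside
```

In practice the hard part the paper addresses is estimating the alignment itself; once `(R, t)` is known per frame, the colour transfer above is straightforward.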
This paper presents a fast and robust approach to surface creation and feature extraction. The methodology iteratively segments point clouds until a set bound is reached; this paper concentrates on developing planar surfaces. To achieve this goal, vegetation is filtered out and planar surfaces are created using Delaunay triangulation. The surface creation process operates on point clouds segmented according to the fluctuation of surface normals within the segmented cubes. Results produced using this technique show the effect of imposing geometric constraints on the reconstruction to generate realistic surfaces.
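The segment-until-planar strategy can be sketched as below: recursively split the cloud (here into octants) until each segment's RMS plane residual falls under a bound, then Delaunay-triangulate each planar segment in its own plane coordinates. This is an assumed reconstruction of the general technique only; the octant split, flatness measure and bound are illustrative, and the vegetation filtering step is omitted:

```python
import numpy as np
from scipy.spatial import Delaunay

def plane_fit(points):
    """Least-squares plane via SVD; returns centroid, unit normal, RMS residual."""
    c = points.mean(axis=0)
    _, s, vt = np.linalg.svd(points - c)
    return c, vt[-1], s[-1] / np.sqrt(len(points))

def _octants(points, mid):
    """Yield the eight boolean masks splitting the cloud about its centroid."""
    signs = points >= mid
    for code in range(8):
        bits = np.array([(code >> i) & 1 for i in range(3)], dtype=bool)
        yield (signs == bits).all(axis=1)

def segment_planar(points, bound=0.05, min_pts=10, depth=0, max_depth=6):
    """Recursively subdivide until each segment is planar (residual <= bound)."""
    if len(points) < min_pts:
        return []
    _, _, flatness = plane_fit(points)
    if flatness <= bound or depth >= max_depth:
        return [points]
    segments = []
    for mask in _octants(points, points.mean(axis=0)):
        if mask.any():
            segments += segment_planar(points[mask], bound, min_pts,
                                       depth + 1, max_depth)
    return segments

def triangulate_segment(points):
    """Delaunay-triangulate a planar segment in its own in-plane 2D basis."""
    c, normal, _ = plane_fit(points)
    u = np.cross(normal, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:                  # normal parallel to x-axis
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    uv = (points - c) @ np.stack([u, v], axis=1)  # project onto the plane
    return Delaunay(uv).simplices
```

Triangulating in plane coordinates rather than in 3D is what enforces the planarity constraint the abstract refers to: each segment is reconstructed as a flat facet rather than a noisy 3D mesh.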