We present a method adapted from medical sensor data analysis, namely the independent component analysis used for electroencephalography data, and apply it to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as the median times patients spend before being admitted as an inpatient, before being sent home, and before being seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming the set of performance measures collected at a site into a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of conventional principal component analysis to show that the independent components are more suitable for understanding the data set through visualizations.
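As a minimal illustration of the decomposition being contrasted here, the sketch below applies PCA and FastICA to a hospitals-by-measures matrix; the file name and array shape are placeholders, not the actual data set.

```python
# Hypothetical sketch: contrasting ICA and PCA decompositions of hospital
# throughput data (assumed 3,086 hospitals x 5 measures). The array
# `throughput` and its source file are placeholders, not the actual data.
import numpy as np
from sklearn.decomposition import FastICA, PCA

throughput = np.loadtxt("ed_throughput.csv", delimiter=",")  # shape (3086, 5), assumed

# PCA: orthogonal components ordered by explained variance.
pca = PCA(n_components=5)
pca_sources = pca.fit_transform(throughput)

# ICA: statistically independent sources; each row of the unmixing matrix
# acts as a "spatial filter" applied across the five measures.
ica = FastICA(n_components=5, random_state=0)
ica_sources = ica.fit_transform(throughput)
unmixing_filters = ica.components_  # one filter per independent component

print(pca.explained_variance_ratio_)
print(unmixing_filters)
```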
Automatic object detection and tracking have been widely applied in video surveillance systems for homeland security and in data fusion for remote sensing and airborne imagery. Typical applications include human motion analysis, vehicle detection, and architectural building detection. Here we conduct object detection and tracking under planar constraints for objects of interest. Planar surfaces abound in man-made environments. They provide much useful information for image understanding and can therefore be adopted to improve the performance of object detection and tracking. Experiments on real data show that object detection and tracking can be successfully implemented by incorporating the planar information of the objects of interest.
Unmanned Aircraft Systems (UAS) have been used in many military and civil applications, particularly surveillance. One of
the best ways to use the capacity of a UAS imaging system is to construct a mosaic or panorama of the recorded video. This paper presents a novel algorithm for the construction of super-resolution mosaics. The algorithm is based on the Conjugate Gradient (CG) method. A Geman-McClure prior is used together with four different cliques to deal with the ill-conditioned inverse problem and to preserve edges. We present results with synthetic and real UAS surveillance data, showing a great improvement in visual resolution. For synthetic images we obtained a PSNR of 47.0 dB, and for real UAS frames a significant increase in visible detail, in only ten iterations.
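The sketch below is a hedged illustration of this kind of optimization: a data-fidelity term plus a Geman-McClure prior over four neighborhood cliques, minimized with a conjugate-gradient solver. The degradation operator (plain downsampling by averaging) and all parameter values are simplifying assumptions, not the paper's full imaging model.

```python
import numpy as np
from scipy.optimize import minimize

def geman_mcclure(d, sigma=0.05):
    # Robust edge-preserving penalty: rho(d) = d^2 / (d^2 + sigma^2).
    return d**2 / (d**2 + sigma**2)

def degrade(hr, factor=2):
    # Placeholder degradation: block-average downsampling.
    h, w = hr.shape
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def cost(x, low_res, shape, lam=0.1):
    hr = x.reshape(shape)
    fidelity = np.sum((degrade(hr) - low_res)**2)
    # Four cliques: horizontal, vertical, and the two diagonal neighbors.
    prior = sum(np.sum(geman_mcclure(hr - np.roll(np.roll(hr, dy, 0), dx, 1)))
                for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)])
    return fidelity + lam * prior

low_res = np.random.rand(16, 16)                 # placeholder low-resolution frame
shape = (32, 32)
x0 = np.kron(low_res, np.ones((2, 2))).ravel()   # initial guess: replicated upsample
result = minimize(cost, x0, args=(low_res, shape), method="CG",
                  options={"maxiter": 10})       # the paper reports ~10 iterations
hr_estimate = result.x.reshape(shape)
```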
Unmanned Aircraft Systems (UAS) have been widely applied to military reconnaissance and surveillance by
exploiting the information collected from the digital imaging payload. However, the analysis of UAS videos is
frequently limited by motion blur; the frame-to-frame movement induced by aircraft roll, wind gusts, and less than ideal
atmospheric conditions; and the noise inherent in the image sensors. Therefore, super-resolution mosaicking of
low-resolution UAS surveillance video frames becomes an important task for UAS video processing and a prerequisite for
further effective image understanding.
Here we develop a novel super-resolution framework that does not require the construction of sparse
matrices. The method applies image operators in the spatial domain and adopts an iterated back-projection method to
construct super-resolution mosaics from UAS surveillance video frames. The Steepest Descent method, the Conjugate
Gradient method, and the Levenberg-Marquardt algorithm are used to numerically solve the nonlinear optimization problem
in the super-resolution mosaic model. A quantitative comparison of the computation time and visual performance of the
super-resolution results from the three numerical methods is performed. The Levenberg-Marquardt algorithm provides a
numerical solution to the least-squares curve fitting that avoids the time-consuming computation of the inverse of the
pseudo-Hessian matrix required by a regular singular value decomposition (SVD). The Levenberg-Marquardt method, interpolating
between the Gauss-Newton algorithm (GNA) and the method of gradient descent, is efficient, robust, and easy to
implement. The results obtained in our simulations show a great improvement in the resolution of the low-resolution
mosaic, with up to 47.54 dB for synthetic images, and a considerable visual improvement in sharpness and detail for
real UAS surveillance frames. Convergence is generally reached in no more than ten iterations.
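The following is a minimal sketch of the spatial-domain iterated back-projection step, assuming a simple blur-and-downsample degradation model on registered, single-channel floating-point frames; the registration and mosaicking stages described above are omitted, so this is an illustration rather than the paper's full framework.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def iterated_back_projection(low_res_frames, factor=2, n_iter=10, step=1.0):
    # low_res_frames: list of registered grayscale float arrays (assumed).
    hr = zoom(low_res_frames[0], factor, order=3)   # initialize from the first frame
    for _ in range(n_iter):                         # convergence typically within ~10 iterations
        for lr in low_res_frames:
            # Simulate the low-resolution observation from the current estimate.
            simulated = zoom(gaussian_filter(hr, sigma=1.0), 1.0 / factor, order=3)
            error = lr - simulated
            # Back-project the residual onto the high-resolution grid.
            hr += step * zoom(error, factor, order=3)
    return hr
```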
We describe the epipolar constraint that specifies the geometry of stereo vision. We consider 3D structure
reconstruction from multiple views through the new perspective of basing the reconstruction on directly estimated
planar homographies instead of on techniques that rely on matched point pairs. Planar homography parameters
can more accurately extract scene planar surfaces and directly solve for the 3D structure and camera motion parameters.
The new method has the advantage that it integrates a larger amount of information, because the homography parameters
are estimated directly from the intensities rather than from an abstracted descriptor of the neighborhood. Because it does
not rely on a transformation of an entire image region, the method is efficient.
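For reference, the standard forms of the two relations mentioned here, in conventional multi-view geometry notation (not taken verbatim from the chapter), are:

```latex
% Epipolar constraint between corresponding homogeneous points x and x'
% with fundamental matrix F, and the plane-induced homography for a plane
% with unit normal n at depth d, relating views with rotation R,
% translation t, and intrinsics K, K':
\[
  \mathbf{x}'^{\top} F\,\mathbf{x} = 0, \qquad
  \mathbf{x}' \sim H\,\mathbf{x}, \quad
  H = K'\!\left(R + \frac{\mathbf{t}\,\mathbf{n}^{\top}}{d}\right)K^{-1}.
\]
```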
Automatic object detection and tracking have been widely applied in video surveillance systems for homeland security
and in data fusion for remote sensing and airborne imagery. Typical applications include human motion analysis and
vehicle detection. Here we implement object detection and tracking using shape graphs of the objects of interest,
integrating local contextual information (corner/point features, etc.) of the objects. On the top layer, shapes/sketches
provide a discrimination measure that describes the global status of the objects of interest. This kind of information is very
useful for improving object tracking performance under occlusion. The shape can be modeled as a graph or hypergraph
through its local geometric features. On the bottom layer, local geometric features are used to capture local properties of
objects and to perform correspondence estimation for the high-level shapes. The local features provide a way to overcome
inaccurate object segmentation and extraction. The experiments were conducted on human face tracking and on vehicle
detection and tracking.
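A very rough sketch of the two-layer idea follows: corner points (bottom layer) become the nodes of a shape graph (top layer), and node correspondences between two frames are estimated by assignment on a simple descriptor cost. The descriptor used here (normalized coordinates plus mean distance to the nearest neighbors) is an illustrative assumption, not the paper's exact feature.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def shape_descriptor(corners, k=3):
    # corners: (N, 2) array of corner coordinates for one object.
    centered = corners - corners.mean(axis=0)
    centered /= (np.linalg.norm(centered, axis=1).max() + 1e-9)
    dists = cdist(centered, centered)
    np.fill_diagonal(dists, np.inf)
    mean_edge = np.sort(dists, axis=1)[:, :k].mean(axis=1, keepdims=True)
    return np.hstack([centered, mean_edge])     # one descriptor per graph node

def match_shapes(corners_a, corners_b):
    # Correspondence estimation between the two shape graphs.
    cost = cdist(shape_descriptor(corners_a), shape_descriptor(corners_b))
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
```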
Intelligent vehicles have many applications in the military, aerospace, and other industries, including land-mine
detection for the military, patient transportation in hospitals, and many other domains that often require automation
to reduce risks to human operators. One of the important tasks of intelligent vehicles is navigation, whose goal is
to extract and determine the appropriate path to a destination based on perceived environmental
information. The objective of our work is to develop a simple and effective method to detect and extract road lanes
and boundaries. We propose a solution that incorporates the planar information of road surfaces. We first detect all
possible edges in the captured images. The straight lanes and boundaries are extracted as straight lines, which
generate a vanishing point. The straight lines are described with the Hough transform. A cluster analysis in Hough
space is used to detect the vanishing point on the road. We then search for lines passing through the vanishing point
from 180 degrees to 270 degrees and from 0 degrees to negative 90 degrees. The first two strong lines are
extracted as the road boundaries.
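A hedged sketch of this pipeline with OpenCV is given below: Canny edges, Hough line extraction, a crude vanishing-point estimate from pairwise line intersections (standing in for the cluster analysis in Hough space), and selection of the strongest lines through that point as lane boundaries. All thresholds are assumed values.

```python
import cv2
import numpy as np
from itertools import combinations

def detect_lane_boundaries(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=100)
    if lines is None:
        return None, []
    lines = lines[:, 0, :]                        # (rho, theta) pairs, strongest first

    # Approximate the vanishing point as the mean intersection of line pairs.
    points = []
    for (r1, t1), (r2, t2) in combinations(lines[:20], 2):
        A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
        if abs(np.linalg.det(A)) < 1e-6:
            continue
        points.append(np.linalg.solve(A, np.array([r1, r2])))
    vp = np.mean(points, axis=0) if points else None

    # Keep the first two strong lines passing close to the vanishing point.
    boundaries = []
    if vp is not None:
        for rho, theta in lines:
            if abs(vp[0] * np.cos(theta) + vp[1] * np.sin(theta) - rho) < 10:
                boundaries.append((rho, theta))
            if len(boundaries) == 2:
                break
    return vp, boundaries
```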
Planar surfaces are important characteristics of man-made environments.
Planes have many practical applications in computer vision and computer graphics,
including camera calibration and interactive modeling. Here, we develop a piecewise plane
detection method for image pairs taken in urban environments. All
potential planes are detected based on planar homographies estimated with the
Levenberg-Marquardt algorithm. In order to extract the whole planes, the
normalized cut method is used to segment the original images. We select the
segmented regions that best fit the features satisfying the planar
homographies as the whole planes. We illustrate the algorithm's performance on
gray-scale and color image pairs.
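The sketch below illustrates the homography stage only: features are matched between the two views and a planar homography is fitted, with OpenCV's RANSAC-based estimator standing in for the Levenberg-Marquardt refinement described above; the inlier matches indicate one candidate plane. The normalized-cut segmentation and region-selection steps are omitted.

```python
import cv2
import numpy as np

def candidate_plane_homography(img1, img2):
    # Detect and match local features between the image pair.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Fit one planar homography; inliers are consistent with a single plane.
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    inliers = pts1[mask.ravel() == 1]
    return H, inliers
```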
Remote sensing is widely applied to provide information about areas with
limited ground access, with applications such as assessing the destruction from
natural disasters and planning relief and recovery operations. However, the
collection of aerial digital images is constrained by bad weather, atmospheric
conditions, and unstable cameras or camcorders. Therefore, how to recover
information from low-quality remote sensing images and how to enhance
image quality become very important for many visual understanding tasks, such
as feature detection, object segmentation, and object recognition. The quality of
remote sensing imagery can be improved through meaningful combination of the
employed images, captured from different sensors or under different conditions,
through information fusion. Here we particularly address information fusion for
remote sensing images using multi-resolution analysis of the employed image
sequences. Image fusion recovers complete information by integrating
multiple images captured from the same scene. Through image fusion, a new image
with higher resolution, or one more perceptually useful for humans and machines, is created from a
time series of low-quality images based on image registration between the different
video frames.
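As a minimal sketch of multi-resolution fusion for two registered frames, the example below uses a wavelet decomposition (PyWavelets): approximation coefficients are averaged and detail coefficients are chosen by maximum absolute value. This is a generic illustration under assumed fusion rules, not the specific scheme of the chapter.

```python
import numpy as np
import pywt

def fuse_multiresolution(img_a, img_b, wavelet="db2", level=3):
    # img_a, img_b: registered grayscale images of the same scene (assumed).
    coeffs_a = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    coeffs_b = pywt.wavedec2(img_b.astype(float), wavelet, level=level)

    fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]          # average the coarse approximation
    for da, db in zip(coeffs_a[1:], coeffs_b[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))     # keep the stronger detail
    return pywt.waverec2(fused, wavelet)
```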
Remote sensing is widely used to assess the destruction from natural disasters and to plan relief
and recovery operations. How to automatically extract useful features and segment interesting objects from
digital images, including remote sensing imagery, becomes a critical task for image understanding.
Unfortunately, the collection of aerial digital images is constrained by bad weather, hazy
atmospheric conditions, and unstable cameras or camcorders. As a result, remote sensing imagery often appears low-contrast,
blurred, and dark. Here, we introduce a new method that integrates image local
statistics and natural image characteristics to enhance remote sensing imagery. The method applies
adaptive histogram equalization to each distinct region of the input image and then redistributes the lightness
values of the image. The natural characteristics of the image are used to adjust the restored contrast. The
experiments on real data show the effectiveness of the algorithm.
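A short sketch of the enhancement idea follows, using OpenCV's contrast-limited adaptive histogram equalization (CLAHE), which equalizes each image tile and redistributes lightness; the clip limit here is an assumed parameter standing in for the natural-characteristics adjustment described above.

```python
import cv2

def enhance_remote_sensing(image_bgr, clip_limit=2.0, tile=(8, 8)):
    # Work on the lightness channel so colors are preserved.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    l_eq = clahe.apply(l)                       # adaptive equalization per region
    return cv2.cvtColor(cv2.merge([l_eq, a, b]), cv2.COLOR_LAB2BGR)
```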
Shadows and shadings are typical natural phenomena, which can often be found in images and videos acquired under
strong directional lighting, such as those taken outdoors on a sunny day. Unfortunately, shadows can cause many
difficulties in image processing and vision-related tasks, such as image segmentation and object recognition. Therefore,
shadow removal is needed for improving the performance of these image understanding tasks. We present a new shadow
removal algorithm for real textured color images. The algorithm is based on the statistical property of textures in images.
Experimental results on real-world data demonstrate the algorithm.
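As a loose, hedged illustration of statistics-based shadow correction (not the paper's specific texture model): given a shadow mask assumed to come from a separate detection step, the shadowed pixels of each channel are rescaled so that their mean and standard deviation match the lit pixels of the same textured surface.

```python
import numpy as np

def relight_shadow(image, shadow_mask):
    # image: H x W x 3 uint8; shadow_mask: H x W boolean (assumed given).
    out = image.astype(float).copy()
    for c in range(image.shape[2]):
        lit = out[~shadow_mask, c]
        shaded = out[shadow_mask, c]
        # Match the first- and second-order statistics of the lit texture.
        scaled = (shaded - shaded.mean()) / (shaded.std() + 1e-6)
        out[shadow_mask, c] = scaled * lit.std() + lit.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```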
Remote sensing is widely used to assess the destruction from natural disasters and to plan
relief and recovery operations. How to automatically extract useful features and segment
interesting objects from digital images, including remote sensing imagery, becomes a
critical task for image understanding. Unfortunately, current research on automated
feature extraction largely ignores contextual information. As a result, the fidelity of the
attributes populated for the features and objects of interest cannot be guaranteed.
In this paper, we present an exploration of meaningful object extraction that integrates
reflecting surfaces. Detection of specular reflecting surfaces can be useful in target
identification and can then be applied to environmental monitoring, disaster prediction
and analysis, the military, and counter-terrorism. Our method is based on a statistical model
that captures the statistical properties of specular reflecting surfaces. The reflecting
surfaces are then detected through cluster analysis.
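The sketch below illustrates only the clustering stage: pixels are described by simple statistics that tend to separate specular highlights (high intensity, low saturation, low local variance) and grouped with k-means, with the brightest cluster taken as the specular candidate. The features are illustrative assumptions, not the paper's statistical model.

```python
import cv2
import numpy as np

def detect_specular_regions(image_bgr, k=3):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    sat, val = hsv[..., 1], hsv[..., 2]
    # Local intensity variance from a 5x5 box filter.
    local_var = cv2.blur(val**2, (5, 5)) - cv2.blur(val, (5, 5))**2

    features = np.stack([val, 255 - sat, -local_var], axis=-1).reshape(-1, 3)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(features, k, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    specular_cluster = int(np.argmax(centers[:, 0]))      # brightest centroid
    return labels.reshape(val.shape) == specular_cluster  # boolean mask
```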
The concept of super-resolution image reconstruction is to recover a high-resolution
image from a series of low-resolution images via between-frame subpixel image
registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then
apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System
(UAS). Small UAS aircraft generally have a wingspan of less than four meters, so that these vehicles
and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This
algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is
first built from the original video data by image registration and bi-cubic interpolation between a
fixed reference frame and every additional frame. It is well known that the median filter is robust to
outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can
restore a refined super-resolved image. The primary advantage is that this is a non-iterative algorithm,
unlike traditional approaches based on computationally intensive iterative algorithms. Experimental
results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very
efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution
algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both
strong efficiency and robustness, as well as good visual performance. This is particularly useful for
the application of super-resolution to UAS surveillance video, where real-time processing is highly
desired.
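A hedged sketch of the coarse-to-fine idea follows: each frame is registered to a fixed reference, upsampled with bi-cubic interpolation onto the fine grid, and the refined estimate is the pixel-wise median across the coarsely super-resolved stack. The registration step here uses OpenCV's ECC translation alignment as a simple stand-in for the paper's registration, and frames are assumed to be single-channel float32 images.

```python
import cv2
import numpy as np

def coarse_to_fine_sr(frames, factor=2):
    # frames: list of registered-comparable grayscale float32 frames (assumed).
    ref = frames[0]
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    stack = []
    for frame in frames:
        # Estimate a translation warp between the reference and this frame.
        warp = np.eye(2, 3, dtype=np.float32)
        _, warp = cv2.findTransformECC(ref, frame, warp, cv2.MOTION_TRANSLATION,
                                       criteria, None, 5)
        aligned = cv2.warpAffine(frame, warp, (ref.shape[1], ref.shape[0]),
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        # Bi-cubic interpolation onto the high-resolution grid.
        stack.append(cv2.resize(aligned, None, fx=factor, fy=factor,
                                interpolation=cv2.INTER_CUBIC))
    # Pixel-wise median is robust to outliers and requires no iteration.
    return np.median(np.stack(stack, axis=0), axis=0)
```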
In traditional super-resolution methods, researchers generally assume that accurate
subpixel image registration parameters are given a priori. In reality, accurate image registration on
a subpixel grid is the single most critically important step for the accuracy of super-resolution
image reconstruction. In this paper, we introduce affine invariant features to improve subpixel
image registration, which considerably reduces the number of mismatched points and hence makes
traditional image registration more efficient and more accurate for super-resolution video
enhancement. Affine invariant features are invariant to affine transformations, including scale,
rotation, and translation. They are extracted from the second moment matrix through the
integration and differentiation covariance matrices. The experimental results show that affine
invariant interest points are more robust to perspective distortion and present more accurate
matching than traditional Harris/SIFT corners. In our experiments, all matching affine invariant
interest points are found correctly. In addition, for the same super-resolution problem, we can use
much fewer affine invariant points than Harris/SIFT corners to obtain good super-resolution
results.
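As a minimal sketch of interest-point selection from the second moment matrix with separate differentiation and integration scales (the multi-scale Harris measure underlying affine-invariant detectors), the example below selects the strongest responses; the full affine adaptation of the point neighborhoods is omitted, and the scale and threshold values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def second_moment_interest_points(image, sigma_d=1.0, sigma_i=2.0,
                                  k=0.04, n_points=200):
    img = image.astype(float)
    # Image derivatives at the differentiation scale.
    ix = gaussian_filter(img, sigma_d, order=(0, 1))
    iy = gaussian_filter(img, sigma_d, order=(1, 0))
    # Second moment matrix entries smoothed at the integration scale.
    sxx = gaussian_filter(ix * ix, sigma_i)
    syy = gaussian_filter(iy * iy, sigma_i)
    sxy = gaussian_filter(ix * iy, sigma_i)

    det = sxx * syy - sxy**2
    trace = sxx + syy
    cornerness = det - k * trace**2
    idx = np.argsort(cornerness, axis=None)[-n_points:]   # strongest responses
    return np.column_stack(np.unravel_index(idx, img.shape))
```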
In traditional super-resolution methods, researchers generally assume that accurate subpixel image registration parameters are given a priori. In reality, accurate image registration on a subpixel grid is the single most critically important step for the accuracy of super-resolution image reconstruction. In this paper, we introduce affine invariant features to improve subpixel image registration, which considerably reduces the number of mismatched points and hence makes traditional image registration more efficient and more accurate for super-resolution video enhancement. Affine invariant interest points include those corners that are invariant to affine transformations, including scale, rotation, and translation. They are extracted from the second moment matrix through the integration and differentiation covariance matrices. Our tests are based on two sets of real video captured by a small Unmanned Aircraft System (UAS) aircraft, which is highly susceptible to vibration from even light winds. The experimental results from real UAS surveillance video show that affine invariant interest points are more robust to perspective distortion and present more accurate matching than traditional Harris/SIFT corners. In our experiments on real video, all matching affine invariant interest points are found correctly. In addition, for the same super-resolution problem, we can use many fewer affine invariant points than Harris/SIFT corners to obtain good super-resolution results.