The measurement of cloud motion is very useful in weather forecasting and natural disaster management. This paper focuses on accurately estimating cloud motion from a sequence of satellite images. Because cloud motion is non-rigid and involves non-linear events, simple motion models are inadequate and new algorithms are needed. We present a new method for cloud motion measurement based on image matching. We use the Iterative Multigrid Image Deformation (IMID) technique to measure cloud movement at sub-pixel accuracy, and for the alignment of image sub-regions differing in translation, rotation angle, and uniform scale factor, we replace discrete Cartesian cross-correlation with phase correlation based on the Fourier-Mellin Transform (FMT), which is invariant to translation, rotation, and scaling. FMT-based phase correlation can directly estimate the rotation angle and scale factor between satellite images. For cloud regions with large rotation angles or scale factors, our method yields more accurate motion estimates than traditional approaches that search for deformation parameters using Cartesian cross-correlation. In addition, the iterative multigrid framework improves the precision of motion measurement by refining the size of the cloud regions. To validate the performance of our algorithm, we apply known geometric transformations, including translation, rotation, and scaling, to a cloudy satellite image to simulate a sequence of satellite images, and use our method to measure the velocity fields of the clouds. We also apply our algorithm to a sequence of real satellite images. Our results show that the IMID technique with FMT significantly decreases displacement error compared to traditional correlation methods, especially in regions with large velocity gradients or high rates of rotation.
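The alignment step rests on phase correlation: the paper applies it to log-polar resampled Fourier magnitudes to read off rotation and scale. As a minimal NumPy sketch of the underlying primitive, applied here to plain translation (the function name is ours, not the paper's):

```python
import numpy as np

def phase_correlation(a, b):
    """Recover the integer (row, col) shift that maps image a onto b,
    i.e. b == np.roll(a, shift, axis=(0, 1))."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12           # keep only the phase
    corr = np.fft.ifft2(cross).real          # impulse at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks past the midpoint wrap around to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

In the full FMT pipeline, the same peak search is run on log-polar resampled magnitude spectra, where the peak coordinates correspond to rotation angle and log scale rather than translation.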
Because it can optimize 3D points and viewing parameters jointly and simultaneously, Sparse Bundle Adjustment (SBA) is an essential procedure, usually applied as the last step of Structure from Motion (SfM). Recent development of SBA has tended toward combining numerical methods with matrix compression techniques for greater efficiency and lower memory consumption, and toward incorporating prior information into SBA for higher accuracy. In this paper, a new hard-constrained SBA method for multi-camera systems is presented. The method takes prior information about the 3D model or the multi-camera rig into account as a hard constraint, and solves the resulting system by combining the Lagrange multiplier method with the Schur complement and block-matrix computation. The contribution of this work is a solution that integrates constraints with multi-camera SBA, which is needed in SfM and photogrammetry. Another noticeable result is that the block-matrix implementation is considerably faster than the implementation without it, while accuracy is maintained.
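The Schur complement step mentioned above can be sketched in a few lines: for normal equations with camera block U, point block V, and coupling block W, the large point block is eliminated first. A toy dense version follows (real SBA exploits V's block-diagonal sparsity so the inverse is cheap; variable names are illustrative):

```python
import numpy as np

def schur_solve(U, W, V, e_c, e_p):
    """Solve [[U, W], [W.T, V]] @ [x_c; x_p] = [e_c; e_p] by eliminating
    the point block V via its Schur complement, as in sparse bundle
    adjustment (x_c: camera update, x_p: point update)."""
    Vinv = np.linalg.inv(V)                  # block-diagonal in real SBA
    S = U - W @ Vinv @ W.T                   # Schur complement of V
    x_c = np.linalg.solve(S, e_c - W @ Vinv @ e_p)   # reduced camera system
    x_p = Vinv @ (e_p - W.T @ x_c)                   # back-substitute points
    return x_c, x_p
```

The payoff is that the reduced system S is only as large as the number of camera parameters, while the (much larger) point block is handled by cheap per-point inversions.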
The technique of star image simulation is widely used to test star identification algorithms and the performance of star sensors on the ground. A novel INS-data-based approach to ship-borne star map simulation is put forward in this paper. The simulation procedure consists of three steps. Firstly, the exact speed and position of the ship in the Conventional Inertial System (CIS) are calculated from the INS data, and the ship attitude matrix is obtained. Secondly, considering the azimuth and elevation angles of the star sensor, the accurate positions of the selected guide stars on the image plane of the star sensor are derived by constructing a pinhole model. Thirdly, the gray values of the simulated star image pixels are evaluated according to the 2D Gaussian distribution law. In order to simulate the star image precisely and realistically, image smear has also been considered. Based on the proposed star image simulation approach, the effects of image smear on star sensor recognition capability have been analyzed for different exposure times.
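The third step, spreading a star's energy over the pixel grid with a 2D Gaussian, can be sketched as follows (the parameter names and the energy normalization are our assumptions, not taken from the paper):

```python
import numpy as np

def render_star(shape, center, flux, sigma=0.8):
    """Gray values of a simulated star spot: the star's total flux is
    spread over the pixel grid by a 2D Gaussian point-spread function."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = center
    g = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()          # normalize so the spot's total energy equals flux
    return flux * g
```

Image smear during a finite exposure could then be approximated by summing such spots along the star's motion path within the exposure time.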
A simple and flexible method for non-overlapping camera rig calibration, covering both camera calibration and relative pose calibration, is presented. The proposed algorithm solves for the camera parameters and relative poses simultaneously using nonlinear optimization. Firstly, the intrinsic and extrinsic parameters of each camera in the rig are estimated individually. Then, a linear solution derived from the hand-eye calibration scheme is used to compute an initial estimate of the relative poses inside the camera rig. Finally, a combined non-linear refinement of all parameters is performed, optimizing the intrinsic parameters, the extrinsic parameters, and the relative poses of the coupled cameras at the same time. We develop and test this approach, which is designed among other purposes for deformation measurement using the calibrated rig. Compared with performing camera calibration and hand-eye calibration separately, our joint calibration is more convenient in practical applications. Experimental results show that the algorithm is feasible and effective.
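The linear hand-eye initialization can be illustrated for the rotation part of the classic AX = XB formulation: each motion pair contributes nine linear equations in the entries of X, and the stacked system is solved by SVD. This is a sketch under the assumption of noise-free relative rotations, not necessarily the paper's exact formulation:

```python
import numpy as np

def rot(axis, angle):
    """Rodrigues formula: rotation matrix from an axis-angle pair."""
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def handeye_rotation(RAs, RBs):
    """Solve R_A @ X = X @ R_B for rotation X from >= 2 motion pairs.
    Row-major vec identity: vec(A X B) = (A kron B.T) vec(X)."""
    rows = [np.kron(RA, np.eye(3)) - np.kron(np.eye(3), RB.T)
            for RA, RB in zip(RAs, RBs)]
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    X = Vt[-1].reshape(3, 3)          # null-space vector, arbitrary scale
    if np.linalg.det(X) < 0:          # fix the sign ambiguity
        X = -X
    U, _, Vt2 = np.linalg.svd(X)      # project onto SO(3)
    return U @ Vt2
```

At least two motions with non-parallel rotation axes are needed for a unique solution; with noisy data the same SVD gives the least-squares estimate, which the paper then refines nonlinearly.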
Car detection from unmanned aerial vehicle (UAV) images has become an important research field. However, robust and efficient car detection remains challenging because of variations in car appearance and complicated backgrounds. We present an online cascaded boosting framework with histogram of oriented gradients (HOG) features for car detection from UAV images. First, the HOG of the whole sliding window is computed to find the dominant gradient direction, which is used to estimate the car's orientation. The sliding window is then rotated according to the estimated orientation, and the HOG features in the rotated window are computed efficiently using the proposed four kinds of integral histograms. Second, to improve the performance of the weak classifiers, a new distance metric is employed instead of the Euclidean distance. Third, we propose an efficient online cascaded boosting scheme for car detection by combining online boosting with a soft cascade. Additionally, to address the problem of imbalanced training samples, more positive samples are extracted from the rotated images, and for postprocessing, a confidence map is used to combine multiple detections and eliminate isolated false negatives. A set of experiments on real images shows the applicability and high efficiency of the proposed car detection method.
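The integral-histogram idea, one summed-area table per orientation bin so that any window's orientation histogram costs four lookups per bin, can be sketched as follows (the paper's four rotated variants are omitted; this shows only the axis-aligned case):

```python
import numpy as np

def integral_histograms(orientation_bins, n_bins):
    """Build one integral image (summed-area table) per orientation bin
    from a map of quantized gradient orientations."""
    h, w = orientation_bins.shape
    tables = np.zeros((n_bins, h + 1, w + 1))
    for b in range(n_bins):
        tables[b, 1:, 1:] = np.cumsum(np.cumsum(orientation_bins == b, 0), 1)
    return tables

def window_histogram(tables, top, left, height, width):
    """Orientation histogram of a window in O(n_bins): four lookups per bin."""
    t, l = top, left
    return (tables[:, t + height, l + width] - tables[:, t, l + width]
            - tables[:, t + height, l] + tables[:, t, l])
```

This makes the cost of a window histogram independent of window size, which is what makes dense sliding-window HOG evaluation tractable.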
Our primary interest is real-time pose estimation of one-dimensional objects. In this paper, a method is proposed to estimate the pose, that is, the position and attitude parameters, of a one-dimensional object under general motion using a single camera. Centroid movement is necessarily continuous and orderly in time, which means that over a short period it at least approximately follows some motion law. Therefore, the centroid trajectory in the camera frame can be described as a combination of temporal polynomials. Two endpoints on the one-dimensional object, A and B, are projected onto the corresponding image plane at each time instant. Using the relationship between A, B, and the centroid C, we obtain a linear equation system in the coefficients of the temporal polynomials, given that the camera is calibrated and the image coordinates of A and B are known. Then, when the object moves continuously in time within the view of a stationary camera, the positions of the endpoints can be located and the attitude estimated from the two endpoints; the position of any other point aligned on the object can also be solved. Scene information is not needed by the proposed method. If the distance between the endpoints is unknown, a scale factor remains between the object's real positions and the estimated results. To improve the algorithm's accuracy and robustness, we derive a pair of linear and optimal algorithms. Simulation and experiment results show that the method is valid and robust with respect to various Gaussian noise levels. This work contributes to making self-calibration algorithms using one-dimensional objects practical. Furthermore, the method can also be used to estimate the pose and shape parameters of parallelogram, prism, or cylinder objects.
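The temporal-polynomial model of the centroid trajectory reduces to a linear least-squares problem in the polynomial coefficients. A sketch of that core step follows; the paper's full system additionally couples in the image projections of endpoints A and B, which we omit here for brevity:

```python
import numpy as np

def fit_centroid_trajectory(times, centroids, degree=2):
    """Least-squares polynomial fit of each centroid coordinate over time.
    Returns coefficients of shape (degree + 1, n_coords), highest power first."""
    V = np.vander(times, degree + 1)          # Vandermonde design matrix
    coeffs, *_ = np.linalg.lstsq(V, np.asarray(centroids), rcond=None)
    return coeffs

def eval_trajectory(coeffs, t):
    """Evaluate the fitted trajectory at time(s) t."""
    return np.vander(np.atleast_1d(t), coeffs.shape[0]) @ coeffs
```

Because the model is linear in the coefficients, the fit remains a single linear solve however many time samples are observed, which is what makes the real-time requirement plausible.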
The automatic detection of visually salient information from abundant video imagery is crucial, as it plays an important role in surveillance and reconnaissance tasks for Unmanned Aerial Vehicles (UAVs). A real-time approach for detecting salient objects on roads, e.g. stationary and moving vehicles or people, is proposed, based on region segmentation and saliency detection within related domains. Traditional methods typically depend on additional scene information and auxiliary thermal or IR sensing for secondary confirmation. In contrast, the proposed approach detects objects of interest directly from video imagery captured by an optical camera fixed on a small UAV platform. To validate the approach, 25 Hz video data from our low-speed small UAV were tested. The results demonstrate that the proposed approach performs well in isolated rural environments.
KEYWORDS: Sensors, Detection and tracking algorithms, Optical tracking, Video surveillance, Motion models, Particle filters, Electroluminescence, Statistical modeling, Video acceleration, Video
An improved online long-term visual tracking algorithm, named adaptive and accelerated TLD (AA-TLD), based on the Tracking-Learning-Detection (TLD) framework, is introduced in this paper. The improvement focuses on two aspects. The first is adaptation: the algorithm no longer depends on pre-defined scanning grids, instead generating the scale space online. The second is efficiency: algorithm-level acceleration is achieved through scale prediction, which employs an auto-regressive moving average (ARMA) model to learn the object motion and narrow the detector's search range, and through a fixed number of positive and negative samples that ensures a constant retrieval time; hardware acceleration is achieved with CPU and GPU parallel technology. In addition, some details of TLD are redesigned for better performance: results are integrated using a weight that includes both the normalized correlation coefficient and the scale size, and the distance metric thresholds are adjusted online. A contrastive experiment on success rate, center location error, and execution time is carried out on partial TLD datasets and Shenzhou IX return capsule image sequences, showing a performance and efficiency upgrade over the state-of-the-art TLD. The algorithm can be used in video surveillance to meet real-time tracking needs.
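The ARMA-based scale prediction reduces, in its autoregressive part, to a small least-squares fit over recent scale values. A minimal sketch of that idea (the paper's exact model orders and the moving-average term are not reproduced here):

```python
import numpy as np

def ar_predict(series, order=2):
    """Fit an AR(order) model to a scale sequence by least squares and
    predict the next value, narrowing the detector's search range."""
    s = np.asarray(series, float)
    # each row holds `order` consecutive samples; target is the next sample
    X = np.column_stack([s[i:len(s) - order + i] for i in range(order)])
    y = s[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(s[-order:] @ coef)   # one-step-ahead prediction
```

On a smoothly drifting scale sequence, the predicted value lets the detector scan only a narrow band of scales instead of the whole scale space.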
Camera calibration is one of the most basic and important processes in the optical measurement field. Generally, the objective of camera calibration is to estimate the internal and external parameters of the cameras, while the orientation error of the optical axis is not included. The orientation error of the optical axis is an important factor that seriously affects measurement precision in high-precision applications, especially in long-range aerospace measurement, where the object distance is much greater than the focal length and orientation errors are therefore magnified thousands of times. In order to eliminate the influence of the orientation error of the camera's optical axis, the imaging model of the camera is analyzed and established in this paper, and a calibration method is introduced. Firstly, we analyze the causes of optical axis error and its influence. Then, we build the model of the optical axis orientation error and the imaging model of the camera based on their physical meaning. Furthermore, we derive a bundle adjustment algorithm that computes the internal and external camera parameters and the absolute orientation of the camera's optical axis simultaneously at high precision. In numerical simulation, we solve for the camera parameters using the bundle adjustment optimization algorithm and then correct the image points with the calibration results according to the optical axis error model. The simulation results show that our calibration model is reliable, effective, and precise.
The high portability of small Unmanned Aerial Vehicles (UAVs) gives them an important role in surveillance and reconnaissance tasks, so military and civilian demand for UAVs is constantly growing. Recently, we have developed a real-time video exploitation system for our small UAV, which is mainly used in forest patrol tasks. Our system consists of six key modules: image contrast enhancement, video stabilization, mosaicing, salient target indication, moving target indication, and display of the footprint and flight path on a map. Extensive testing of the system has been carried out, and the results show that it performs well.
KEYWORDS: 3D image processing, 3D metrology, Projection systems, Stereoscopic cameras, Cameras, Photogrammetry, 3D image reconstruction, Imaging systems, 3D modeling, 3D imaging standards
Fast and reliable three-dimensional (3-D) measurement of large stack yards is an important job in bulk load-and-unload operations and logistics management. Traditional noncontacting methods, such as LiDAR and photogrammetry, struggle with the complex and irregular shapes, uniform texture, and weak reflectivity of stack yards. In this paper, we propose a videogrammetry and projected-contour scanning method. The surface of a stack yard can be scanned easily by a laser-line projector, and its 3-D shape can be reconstructed automatically by stereo cameras. There are two main technical contributions: 1. corresponding-point matching in stereo imagery based on image gradient and the epipolar line; and 2. single projected-contour extraction under the constraints of homography and RANSAC (random sample consensus). The proposed method has been tested in 3-D reconstruction experiments on sand tables in indoor and outdoor conditions, which showed that about five contours were reconstructed per second on average, and the moving-distance error of a standard slab was less than 0.4 mm in the worst direction of the videogrammetric system. In conclusion, the proposed method is effective for measuring the 3-D shape of stack yards in a fast, reliable, and accurate way.
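The epipolar-line constraint used in the corresponding-point matching step can be sketched as candidate pruning: a point in one image maps to a line in the other, and only candidates near that line are kept for the gradient comparison. The fundamental matrix and tolerance below are illustrative assumptions:

```python
import numpy as np

def epipolar_distance(F, x, x2):
    """Distance (pixels) of point x2 in image 2 from the epipolar line
    F @ x of point x in image 1; points are homogeneous [u, v, 1]."""
    l = F @ x                         # line coefficients (a, b, c)
    return abs(l @ x2) / np.hypot(l[0], l[1])

def match_on_epipolar(F, x, candidates, tol=1.5):
    """Keep only candidates within tol pixels of the epipolar line --
    the pruning step before gradient-based matching."""
    return [c for c in candidates if epipolar_distance(F, x, c) <= tol]
```

For a rectified stereo pair, the epipolar lines are simply the image rows, so the search collapses to a 1-D scan along the matching row.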
KEYWORDS: 3D metrology, 3D modeling, 3D image processing, Projection systems, Cameras, 3D image reconstruction, Stereoscopic cameras, Calibration, Photogrammetry, Reflectivity
Fast and accurate 3D measurement of large stack yards is an important job in bulk load-and-unload operations and logistics management. A stack yard has special characteristics: a complex and irregular shape, uniform surface texture, and low material reflectivity, so its 3D measurement is difficult to realize with traditional non-contact methods such as LiDAR (Light Detection and Ranging) and photogrammetry. The light-section method works well for measuring small bulk flows but is not yet suitable for large-scale stack yards. In this paper, an improved method based on stereo cameras and a laser-line projector is proposed. The theoretical model comprises three key steps: matching corresponding points of the contour edge in stereo imagery based on gradient and epipolar-line constraints; calculating the 3D point set of the projected-contour edge from the stereo imagery with least-squares adjustment and forward intersection; and reconstructing the projected 3D contour using RANSAC (RANdom SAmple Consensus) and contour spatial features derived from the 3D point set of a single contour edge. In this way, the stack-yard surface can be scanned easily by the laser-line projector, and the 3D shape of a given region can be reconstructed automatically by stereo cameras at an observing position. Experiments prove that the proposed method is effective for stack-yard 3D measurement in a fast, automatic, reliable, and accurate way.
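The forward-intersection step can be sketched with the standard linear (DLT) triangulation from two projection matrices; this is a generic formulation, not necessarily the paper's exact least-squares adjustment:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear forward intersection: least-squares 3D point from two
    image observations x1, x2 (in normalized pixel coordinates) and
    3x4 projection matrices P1, P2, solved by SVD."""
    A = np.vstack([x1[0] * P1[2] - P1[0],    # each observation gives
                   x1[1] * P1[2] - P1[1],    # two linear equations in
                   x2[0] * P2[2] - P2[0],    # the homogeneous point X
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                               # null vector of A
    return X[:3] / X[3]                      # dehomogenize
```

Applied to every matched pair along a laser contour, this lifts the 2D contour edge to the 3D point set that the RANSAC contour-fitting step then cleans up.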
KEYWORDS: Detection and tracking algorithms, Image processing, Distortion, Target recognition, Image quality, Algorithm development, 3D acquisition, 3D image processing, 3D vision, Signal to noise ratio
Target tracking based on correlation in image sequences often fails because of magnitude or shape distortion and occlusion. In this paper, a robust and highly accurate matching algorithm called Matching Based on Valid Invariant Feature Parts (MBVF) is proposed, which combines invariant feature description of the target image, feature matching, and recognition of valid features. The feature descriptor is a vector containing the gradient magnitude and orientation entries belonging to the divided parts of the target area. The feature is robust to image rotation, distortion, added noise, change in 3D viewpoint, and change in illumination. The first step of the algorithm is to build the invariant feature descriptor of the target area in the reference image. In the second step, a coarse position of the target is calculated using the traditional prediction-and-correlation method, and the invariant feature descriptors of all the candidate points of the tracked target in the current image are built. Next, by comparing the invariant features of the reference target and the tracked target, the valid feature parts are recognized. Finally, a similarity function is calculated over the valid feature parts in both images, which gives the final fine position of the target in the tracked image. Experimental results show that MBVF can handle target tracking and positioning problems in image sequence processing and stereo image analysis automatically and accurately.
An algorithm for recognizing multiple facula (light-spot) targets, based on edge and region search over the full field of a frame, is presented. Firstly, the image is segmented by binarization and the burrs around the targets are removed by morphological processing. Then the edge and region of every facula target are found and numbered in turn. Experimental results on simulated and real images validate the presented algorithm.
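The region search after binarization amounts to connected-component labeling, numbering each bright region in turn. A minimal sketch, assuming 4-connectivity (the paper does not state its connectivity choice):

```python
import numpy as np
from collections import deque

def label_regions(binary):
    """Number each connected bright region (4-connectivity) of a
    binarized image in turn; returns the label map and region count."""
    h, w = binary.shape
    labels = np.zeros((h, w), int)
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                count += 1                      # new facula target found
                q = deque([(i, j)])
                labels[i, j] = count
                while q:                        # breadth-first region fill
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count
```

Each labeled region's boundary pixels then give the target's edge, and per-region statistics (centroid, area) follow directly from the label map.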
Moving target tracking is a basic task in the processing of high-speed photography. Despite its wide application, correlation tracking cannot adapt to the rotation and zoom of targets and accumulates tracking error. The Least-Squares Image Matching (LSIM) method used in photogrammetry is introduced for moving target tracking, and a Weighted Least-Squares Image Matching (WLSIM) based tracking algorithm is proposed. The WLSIM-based algorithm sets weights for the least-squares image matching according to the target's shape, so that matching error produced by background pixels in the tracking window is avoided. Experimental results demonstrate the robustness, efficiency, and accuracy of the proposed algorithm.
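The weighting idea behind WLSIM, suppressing the influence of background pixels inside the tracking window, can be illustrated with a generic weighted least-squares solve. The actual matching model linearizes image intensities over geometric parameters; here we show only how shape-mask weights enter the normal equations (names are illustrative):

```python
import numpy as np

def weighted_lsq(A, b, w):
    """Weighted least-squares estimate: solve (A.T W A) x = A.T W b,
    where w holds per-pixel weights from the target's shape mask.
    Zero-weight (background) rows cannot bias the solution."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

Setting the weight of background pixels to zero makes the estimate identical to fitting on the target pixels alone, which is exactly the effect the WLSIM tracker relies on.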
Speckle fringe patterns in ESPI are full of high-spatial-frequency, high-contrast speckle noise that defeats conventional processing methods. Filtering with contoured windows has been proven to be an efficient approach to filtering out the speckle noise while preserving the fringe patterns obtained by subtracting two original speckle patterns. Furthermore, with contoured windows, the contoured correlation fringe pattern (CCFP) method proposed by the authors can derive high-quality ESPI fringe patterns, with speckle-free, smooth, normalized, and consistent fringes, from two original speckle patterns. The CCFP method can also extract the phase field with single-step phase shifting. Determining the contoured windows is a key step in the CCFP method. The contoured windows used to be determined by fringe orientations only, a process that generates accumulated errors. In this paper, two new algorithms are proposed that determine the contoured windows according to the fringe intensity slope and the distance ratio to neighboring skeletons, with the help of the local fringe direction. These new techniques determine contoured windows more precisely and more robustly, with no accumulated errors. Some applications of the new contoured windows are also presented.
An image sequence analysis system for analyzing object movement from film rolls or tapes has been developed in our lab. The system hardware consists of a film winding apparatus, a CCD camera with an image capture card, and a PC. The main features of the software include correlation tracking, several pattern-recognition tracking methods, trajectory estimation, and lens distortion calibration.
KEYWORDS: Distortion, Data analysis, High speed photography, Image processing, Cameras, Image analysis, Data processing, Calibration, Motion analysis, Digital image processing