A method based on compressed sampling and matrix decomposition is proposed in this article to separate the foreground and background of sparse infrared images. A new combined sensing matrix is used to obtain measurements of the original infrared images; it yields the thumbnail and the compressed sampled values simultaneously. Rank estimation based on image information entropy, together with sparse recovery, is used to reconstruct the foreground and background in the compressed domain. Experiments were carried out on real infrared images. Compared with MaxMedian, TopHat and CLSDM, the signal-to-noise-ratio gain and background suppression factor are significantly improved by our method.
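The abstract does not give the decomposition details; as a rough illustration of the general idea only (a low-rank background plus a sparse foreground residual, with a hypothetical estimated rank and a hypothetical 3-sigma foreground threshold, not the paper's actual algorithm), a truncated-SVD sketch might look like:

```python
import numpy as np

def separate(D, rank):
    """Split image matrix D into a low-rank background estimate and a
    sparse foreground residual (illustrative only, not the paper's method)."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    background = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r fit
    residual = D - background
    # Keep only large residuals as foreground (hypothetical 3-sigma rule)
    foreground = np.where(np.abs(residual) > 3 * residual.std(), residual, 0.0)
    return background, foreground
```

In the paper the rank is estimated from image information entropy; here it is simply passed in as a parameter.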
Optical remote sensing is widely used in disaster relief and military affairs. However, its detection capability is limited by the contrast between target and background. Polarization imaging differs from traditional intensity-based detection: it can effectively detect and identify low-contrast targets with distinct polarization signatures, but it suffers from shortcomings such as large system volume, complex system design and low light efficiency. Therefore, a polarization detection method based on a dynamic vision sensor (DVS) is proposed in this paper, and its feasibility is studied and analyzed. A simple experimental system based on a DVS and a rotating polarizer is built, and both indoor and outdoor experiments are carried out. The results show that our method can effectively detect targets with different degrees of polarization (DoP) in the scene, and has the advantages of high sensitivity, intuitive detection and small physical size. It holds potential for applications in remote-sensing-based man-made target detection.
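The abstract does not spell out how the degree of polarization is computed; with a rotating polarizer, one standard approach (assumed here, not taken from the paper) samples the intensity at 0°, 45°, 90° and 135° and forms the linear Stokes parameters:

```python
import numpy as np

def dolp_from_four_angles(i0, i45, i90, i135):
    """Degree of linear polarization and angle of polarization
    from intensities behind a polarizer at four angles."""
    s0 = i0 + i90             # total intensity
    s1 = i0 - i90             # 0/90 degree preference
    s2 = i45 - i135           # 45/135 degree preference
    dolp = np.sqrt(s1**2 + s2**2) / s0
    aop = 0.5 * np.arctan2(s2, s1)   # angle of polarization (radians)
    return dolp, aop

def malus(i_total, p, phi, theta):
    """Malus-type model: intensity behind a polarizer at angle theta for
    partially linearly polarized light (DoLP p, polarization angle phi)."""
    return 0.5 * i_total * (1.0 + p * np.cos(2.0 * (theta - phi)))
```

For an ideal polarizer this four-angle scheme recovers the DoLP exactly; a DVS-based system would instead observe the intensity modulation as brightness-change events while the polarizer rotates.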
KEYWORDS: Particle filters, Target detection, Detection and tracking algorithms, Signal to noise ratio, Electronic filtering, Image processing, Surveillance systems
Particle filtering is a key technique for moving-target detection and tracking in remote surveillance and air defense systems. Moving targets can be tracked by a particle filter without registration. However, the standard particle filter is not suited to high-precision tracking of small, dim moving targets that occupy only a few pixels, have a low signal-to-noise ratio (SNR) and often flicker. To solve this problem, an improved algorithm is proposed for detecting and tracking small, dim moving targets. In the new algorithm, the prediction step of the particle filter is improved by a linear regression method, making it applicable to image sequences in which the moving targets gradually become smaller and dimmer. Small, dim targets can be detected and tracked directly at low SNR and without registration. The trajectory of the moving target is learned automatically from its past states and used to generate the importance density function, which serves as the prior probability in the particle filter for sampling and updating particles. By continuously learning and updating the trajectory, the tracking accuracy is improved. Experimental results show that tracking accuracy is greatly improved and that small, dim moving targets can be detected and tracked without registration.
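As a toy 1-D sketch of the idea (a linear regression fitted to the recent trajectory centres the importance density from which particles are drawn; all noise levels and particle counts are illustrative assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)

def track(observations, n_particles=500, history=5):
    """Particle filter whose proposal is centred on a linear-regression
    prediction of the trajectory (1-D sketch of the paper's idea)."""
    estimates = []
    for t, z in enumerate(observations):
        if len(estimates) < 2:
            pred = z                             # not enough history yet
        else:
            past = estimates[-history:]
            ts = np.arange(t - len(past), t)
            a, b = np.polyfit(ts, past, 1)       # learn the trajectory
            pred = a * t + b                     # importance density centre
        particles = pred + rng.normal(0.0, 2.0, n_particles)
        w = np.exp(-0.5 * ((particles - z) / 1.0) ** 2)   # Gaussian likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)   # resample
        estimates.append(particles[idx].mean())
    return estimates
```

Because the proposal follows the learned trajectory rather than a random walk, particles stay concentrated near the target even when individual observations are weak or missing, which is the mechanism the abstract describes.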
Accurate onboard-camera pose estimation is one of the challenges of satellite systems, and improving remote sensing camera pose accuracy matters for various applications, including autonomous navigation, 3D reconstruction and continuous city modeling. Leading companies, for example Vricon in the USA, can create 3D products of very high spatial accuracy, reaching 3m@SE90 (3 meters error at SE90, the abbreviation for Spherical Error 90%). Aiming at the problem of pose estimation accuracy, this paper presents a new method that works from captured images together with reference 3D products. Distinguished from existing methods, ours employs the 3D model to calibrate the pose of the remote sensing camera. Firstly, the high-precision 3D digital surface model is projected onto image space using a virtual calibrated camera. Then, the camera motion parameters at the neighboring moment are estimated from the information in adjacent frames. This process consists of three steps: i) feature extraction; ii) similarity measurement and feature matching; iii) camera pose estimation and verification. Finally, the camera pose of the captured image can be determined. Experimental results were compared with the initial exterior orientation parameters used for the perspective transformation of the captured images. Furthermore, the proposed method is tested by a hardware experiment that simulates remote sensors and the platform. Results show that acceptable camera pose accuracy can be achieved with the proposed approach.
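The verification step in pipelines of this kind typically relies on reprojection: given a candidate pose, project the 3D surface model points into the image and compare against the matched features. A minimal pinhole sketch (with hypothetical intrinsics and test geometry, not the paper's hardware) might be:

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of Nx3 world points X to pixel coordinates,
    given intrinsics K, rotation R and translation t."""
    Xc = X @ R.T + t              # world frame -> camera frame
    uv = Xc @ K.T                 # apply intrinsic matrix
    return uv[:, :2] / uv[:, 2:3] # perspective divide

def reprojection_error(K, R, t, X, x_obs):
    """Mean pixel distance between projected model points and observed features."""
    return np.linalg.norm(project(K, R, t, X) - x_obs, axis=1).mean()
```

A candidate pose is accepted when this error falls below a threshold; a wrong pose shows up as a large mean pixel residual.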
High-resolution (HR) remote sensing images are characterized by rich and detailed ground object information, but their more complex ground object structures make interference information harder to process. How to obtain more accurate and higher-quality ground object information from such images has long been a focus of researchers at home and abroad. GF-4, the world's first high-spatial-resolution remote sensing satellite in geostationary orbit, provides remote sensing data with high temporal resolution, large swath width and 50 m pixel resolution using area-array imaging technology. However, GF-4 imagery is medium-to-low-resolution (LR) data with relatively vague ground object details and weak relationships between objects, which limits the acquisition of ground object information to some extent. Therefore, in this paper, we analyze the influence of various factors in the imaging process and construct an image degradation model according to the characteristics of GF-4 satellite images. We adopt a super-resolution (SR) method based on mixed sparse representations (MSR) to double the spatial resolution of GF-4 images, which not only enriches the detailed information of the image but also improves image quality. For the SR results of GF-4 imagery, we use the Maximum Likelihood Classification (MLC) method to perform an image classification test and verify the results. The experimental area is Yantai City, Shandong Province, China; LANDSAT 8 OLI data are used as training samples to calculate the overall accuracy and Kappa coefficient after classification. The results show that the overall accuracy of the super-resolved data is 40% higher than that of the source GF-4 image data, and the improvement is even more pronounced when the spectral characteristics of the ground objects differ markedly. The Kappa coefficient increased by 0.4, the extracted outlines are more complete, and the classification details are more refined.
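The overall accuracy and Kappa coefficient used above follow the standard confusion-matrix definitions (Cohen's kappa compares observed agreement against chance agreement); a minimal implementation, not tied to the paper's data, is:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                     # observed (overall) accuracy
    pe = (cm.sum(0) @ cm.sum(1)) / n**2       # chance agreement
    return po, (po - pe) / (1.0 - pe)         # kappa
```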
Stripe noise is a key factor that degrades the imaging quality of satellite multi/hyperspectral remote sensing images and seriously affects their interpretation and information extraction. Complex surface textures mixed with stripe noise in high-resolution multi-spectral satellite imagery are extremely difficult to remove. This paper analyzes the Markov random field prior model and combines it with the Huber function to propose a universal, fast and effective Huber-Markov destriping method. Based on the statistical characteristics of image gray-level variation and on the distribution of, and relationship between, each pixel and its neighborhood pixels, the co-occurrence matrix reflecting the image's gray-level contrast is linked to the threshold T of the Huber function, which is determined automatically and iteratively during the noise removal process; the method can thus remove image noise while effectively preserving edges and details. To address the time complexity introduced by the pixel spatial information in the Huber-Markov random field algorithm, a GPU adaptive partitioning technique is adopted to accelerate it. Experimental results show that the Huber-Markov destriping method removes stripe noise effectively while preserving the texture details of the image, and can be applied to a variety of noisy images. Meanwhile, the GPU-based adaptive partitioning technology greatly improves the computational efficiency of processing massive remote sensing images and lays a foundation for the application of Chinese remote sensing satellite imagery.
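The Huber function referred to above is quadratic below the threshold T and linear beyond it, which is why it smooths small fluctuations while penalizing large edges only mildly (and so preserves them). In code:

```python
import numpy as np

def huber(x, T):
    """Huber penalty: quadratic for |x| <= T, linear beyond T."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= T,
                    0.5 * x**2,              # smooth small differences
                    T * np.abs(x) - 0.5 * T**2)  # gentle on large edges
```

In the destriping method, the pixel differences fed to this penalty come from the Markov random field neighborhoods, and T is re-estimated each iteration from the co-occurrence statistics.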
Large data volume is one of the most prominent features of satellite-based remote sensing systems, and it burdens both data processing and transmission. The theory of compressive sensing (CS) has been around for almost a decade, and extensive experiments show that CS performs well in data compression and recovery, so we apply CS theory to remote sensing image acquisition. In classical CS, the sensing matrix must strictly satisfy the Restricted Isometry Property (RIP) for all sparse signals, which limits the practical application of CS to image compression. For remote sensing images, however, we know some inherent characteristics such as non-negativity and smoothness. The goal of this paper is therefore to present a novel measurement matrix that does not rely on the RIP. The new sensing matrix consists of two parts: a standard Nyquist sampling matrix for thumbnails and a conventional CS sampling matrix. Since most sun-synchronous satellites orbit the Earth in about 90 minutes and the revisit cycle is also short, many previously captured remote sensing images of the same place are available in advance. This drives us to reconstruct remote sensing images through a deep learning approach from the measurements of the new framework. We therefore propose a novel deep convolutional neural network (CNN) architecture that takes the undersampled measurements as input and outputs an intermediate reconstruction image. Although training the network takes a long time, the training step needs to be done only once, which makes the approach attractive for a host of sparse recovery problems.
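One way to realise such a two-part sensing matrix (a hypothetical construction consistent with the description, not necessarily the authors' exact design) is to stack block-averaging rows, which produce the thumbnail directly, on top of random Gaussian CS rows:

```python
import numpy as np

rng = np.random.default_rng(0)

def combined_sensing_matrix(n, block, m_cs):
    """Stack a Nyquist-style block-averaging operator (thumbnail rows)
    on top of a random Gaussian CS operator, for a length-n signal."""
    nb = n // block
    A_thumb = np.zeros((nb, n))
    for i in range(nb):
        A_thumb[i, i*block:(i+1)*block] = 1.0 / block   # block mean
    A_cs = rng.normal(0.0, 1.0 / np.sqrt(m_cs), (m_cs, n))
    return np.vstack([A_thumb, A_cs])
```

The first `nb` entries of the measurement vector are then a usable thumbnail without any reconstruction, while the remaining entries feed the CS (or learned CNN) recovery stage.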
In this paper, the application of super resolution (SR, restoring a high-spatial-resolution image from a series of low-resolution images of the same scene) techniques to remote sensing images from GaoFen (GF)-4, the most advanced geostationary-orbit Earth-observing satellite in China, is investigated and tested. SR has been a hot research area for decades, but one barrier to applying SR in the remote sensing community is the time slot between acquisitions of the low-resolution (LR) images: in general, the longer the time slot, the less reliable the reconstruction. GF-4 has the unique advantage of capturing a sequence of LR images of the same region within minutes, i.e. it works as a staring camera from the point of view of SR. This is the first experiment applying super resolution to a sequence of LR images captured by GF-4 within a short time period. We use Maximum a Posteriori (MAP) estimation to solve the ill-conditioned SR problem, with both the wavelet transform and the curvelet transform used to set up a sparse prior for remote sensing images. By combining several GF-4 images of the Beijing and Dunhuang regions, our method improves spatial resolution both visually and numerically. Experimental tests show that many details that cannot be observed in the captured LR images become visible in the super-resolved high-resolution (HR) images; Google Earth imagery can be referenced to aid the evaluation. Moreover, our tests also show that the higher the temporal resolution, the better the HR images can be resolved. The study illustrates that applying SR to geostationary-orbit Earth observation data is feasible and worthwhile, and it holds potential for all other geostationary-orbit Earth-observing systems.
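A toy 1-D shift-and-add illustration (not the paper's MAP method) shows why several sub-pixel-shifted LR frames carry HR information: two frames downsampled by a factor of 2 with shifts 0 and 1 interleave back into the full HR signal. Real SR must additionally estimate the shifts, handle blur and noise, and regularise, which is where the MAP formulation with a wavelet/curvelet sparse prior comes in.

```python
import numpy as np

def shift_and_add(frames, factor):
    """Interleave LR frames with known integer sub-pixel shifts:
    frame k is assumed to hold HR samples k, k+factor, k+2*factor, ..."""
    n = len(frames[0]) * factor
    hr = np.empty(n)
    for k, f in enumerate(frames):
        hr[k::factor] = f
    return hr
```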
Implementing nano-satellite (NanoSat) based optical Earth observation missions is more difficult and challenging than with conventional satellites because of the limits on volume, weight and power consumption. In general, an image compression unit is a necessary onboard module for saving data transmission bandwidth and storage space, since it removes redundant information from the captured images. In this paper, a new image acquisition framework is proposed for NanoSat-based optical Earth observation applications. The entire image acquisition and compression process can be integrated into the photodetector array chip, so that the chip's output data are already compressed. An extra image compression unit is therefore no longer needed, and the power, volume and weight consumed by a conventional onboard image compression unit can largely be saved. The advantages of the proposed framework are: image acquisition and image compression are combined into a single step; it can easily be built in a CMOS architecture; a quick view can be provided without reconstruction; and, at a given compression ratio, the reconstructed image quality is much better than that of comparable CS-based methods. The framework holds promise for wide use in the future.