To overcome the high manufacturing cost and the volume and weight limitations of large-aperture optics, one solution is to arrange multiple sub-aperture optical systems according to certain spatial rules. The superposed image of the sub-aperture beams on the focal plane is equivalent to that of a large-aperture optical system. However, because the pupil distribution of a sparse-aperture optical system is discretized, the signal-to-noise ratio of the image is reduced, the modulation transfer function decreases at mid-band spatial frequencies, and the optical system errors increase. To address the poor imaging quality of sparse-aperture optical systems, this article proposes a restoration algorithm based on an improved Wiener filter and optimization over adjacent frames, which yields restored video images of higher definition. The algorithm comprises four parts: analysis of the image degradation process, establishment of an image restoration model, evaluation of the restored image's definition, and optimization over adjacent frame images. First, combining the effects of atmospheric transmission and the array structure on image degradation, we construct an image degradation model and calculate the corresponding degradation function. Then, a restoration model based on the Wiener filter is established and improved. Moreover, a no-reference definition evaluation factor is built to measure the quality of the restored image. Finally, a mapping between the adaptive constant K in the Wiener filter and the definition evaluation factor is constructed to continually optimize restoration quality. In high-altitude reconnaissance, remote sensing imaging, and other fields, cameras are required to have very high resolution, so the algorithm in this article has considerable research value.
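As a rough illustration of frequency-domain Wiener restoration with an adaptive constant K, the numpy sketch below restores a degraded image and selects K by maximizing a no-reference sharpness factor. The Gaussian PSF, the gradient-energy definition factor, and the grid search over K are our own simplifications for illustration, not the paper's exact degradation model or its adjacent-frame optimization.

```python
import numpy as np

def wiener_restore(g, h, K):
    """Wiener deconvolution: g is the degraded image, h the centered PSF
    (same shape as g), K the adaptive noise-to-signal constant."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(np.fft.ifftshift(h))  # move PSF center to the origin
    W = np.conj(H) / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(W * G))

def sharpness(img):
    """Simple no-reference definition factor: mean squared gradient
    (a Tenengrad-style measure, standing in for the paper's factor)."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return np.mean(gx ** 2) + np.mean(gy ** 2)

def restore_adaptive(g, h, K_grid):
    """Pick the K that maximizes the definition factor -- a one-frame
    stand-in for the paper's K-vs-definition mapping refined over
    adjacent frames."""
    scored = [(sharpness(wiener_restore(g, h, K)), K) for K in K_grid]
    best_K = max(scored)[1]
    return wiener_restore(g, h, best_K), best_K
```

In practice K trades deblurring against noise amplification: small K sharpens aggressively (appropriate at high SNR), large K suppresses noise at the cost of residual blur, which is why tying K to a measured definition factor is useful.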
Image enhancement based on curvature filtering and gradient transforms can effectively suppress noise and enhance image edges. However, it is difficult to run in real time because of its large computational load. To address this problem, a GPU-based parallel implementation is proposed in this paper. First, matching the characteristics of the algorithm, a numerical implementation based on central differences is proposed. Then a domain decomposition scheme is used in the parallel Gaussian curvature filter to remove the dependence between neighboring pixels and guarantee convergence. Finally, we drive the multiprocessor warp occupancy to 100% by optimizing the thread grid and register usage. Experimental results demonstrate that our parallel method runs 200-300 times faster than the CPU serial method, processing 4096×4096 images in real time, which indicates great application potential.
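The domain-decomposition idea can be illustrated on the CPU with numpy. The sketch below uses a simplified minimal-projection update in the style of curvature filters (not the paper's exact Gaussian curvature scheme), and splits the image into four 2×2-interleaved subsets: pixels within a subset are not neighbors of each other, so each subset can be updated fully in parallel, which is what the GPU implementation exploits.

```python
import numpy as np

def cf_step_subset(u, mask):
    """One minimal-projection update applied only to the masked subset.
    Candidates are distances to midpoints of opposite neighbor pairs
    (axis-aligned and diagonal), a simplified curvature-filter step."""
    up, dn = np.roll(u, 1, 0), np.roll(u, -1, 0)
    lf, rt = np.roll(u, 1, 1), np.roll(u, -1, 1)
    ul, dr = np.roll(up, 1, 1), np.roll(dn, -1, 1)
    ur, dl = np.roll(up, -1, 1), np.roll(dn, 1, 1)
    cands = np.stack([(up + dn) / 2 - u, (lf + rt) / 2 - u,
                      (ul + dr) / 2 - u, (ur + dl) / 2 - u])
    idx = np.argmin(np.abs(cands), axis=0)        # smallest change wins
    d = np.take_along_axis(cands, idx[None], axis=0)[0]
    u[mask] += d[mask]

def curvature_filter(u, iters=10):
    """2x2 domain decomposition: the four interleaved subsets contain no
    intra-subset neighbors, so each sweep is embarrassingly parallel."""
    u = u.astype(float).copy()
    H, W = u.shape
    yy, xx = np.mgrid[:H, :W]
    for _ in range(iters):
        for py in (0, 1):
            for px in (0, 1):
                cf_step_subset(u, (yy % 2 == py) & (xx % 2 == px))
    return u
```

On a GPU, each of the four sweeps maps naturally onto a kernel launch whose threads touch disjoint pixels, which is why the decomposition both removes the neighbor dependence and keeps the iteration convergent.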
The traditional histogram equalization method often reduces the number of gray levels and loses details. In this paper, an efficient and self-adaptive image enhancement algorithm based on the Canny operator and histogram equalization is proposed. The Canny operator is used to extract detail information that is then preserved in the enhanced image, thereby overcoming the shortcomings of histogram equalization. Experimental results on infrared images show that our method preserves more image details, improves image contrast, and suppresses noise effectively, indicating better performance for infrared image enhancement.
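A minimal numpy sketch of the combination: equalize globally, then blend the original values back at detected edges so detail survives the tone remapping. The gradient-threshold edge map is a stand-in for the full Canny detector (no non-maximum suppression or hysteresis), and the blending weight `alpha` is our own illustrative parameter.

```python
import numpy as np

def hist_equalize(img):
    """Standard histogram equalization for an 8-bit image via the CDF."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf_min = cdf[cdf > 0].min()
    lut = np.round(255 * (cdf - cdf_min) / (cdf[-1] - cdf_min))
    return lut.clip(0, 255).astype(np.uint8)[img]

def edge_map(img, thresh=30):
    """Gradient-magnitude stand-in for the Canny operator."""
    g = img.astype(float)
    gx = np.abs(np.gradient(g, axis=1))
    gy = np.abs(np.gradient(g, axis=0))
    return (gx + gy) > thresh

def enhance(img, alpha=0.5):
    """Equalize, then restore detail at edges by blending the original."""
    eq = hist_equalize(img).astype(float)
    edges = edge_map(img)
    out = eq.copy()
    out[edges] = alpha * img[edges] + (1 - alpha) * eq[edges]
    return np.clip(out, 0, 255).astype(np.uint8)
```

The point of the edge-guided blend is that equalization's lookup table is many-to-one, so neighboring gray levels can collapse; reinjecting original values where the Canny map fires keeps those local differences visible.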
Pixel-level image fusion, which is widely used in remote sensing, medical imaging, surveillance, and other fields, directly combines the original information in the source images. As a pixel-level method, multi-focus image fusion combines partially focused images into a single fully focused image, which is expected to be more informative for human or machine perception. To this end, an algorithm for multi-focus image fusion using a spatial frequency (SF) measure and the discrete wavelet transform (DWT) is proposed. In this work, the source images are decomposed into low-frequency and high-frequency components using the DWT. The spatial frequency of the low-frequency components is then calculated and used to identify the focused regions, followed by morphological and median filtering, yielding the fused low-frequency component. The high-frequency components are fused using a traditional rule. Finally, the fused image is obtained by applying the inverse DWT. The proposed algorithm is compared with several existing fusion algorithms both qualitatively and quantitatively. Experimental results demonstrate that our method is competitive with, or even outperforms, the methods in comparison.
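The pipeline can be sketched end-to-end in numpy. The sketch uses a single-level Haar DWT, a circular-shift box filter as the SF window, and a max-absolute-coefficient rule for the high-frequency bands; it omits the morphological and median filtering of the decision map that the paper applies, so it is an assumption-laden simplification rather than the exact method.

```python
import numpy as np

def haar_fwd(x):
    """One-level 2D Haar DWT: rows then columns."""
    L = (x[:, 0::2] + x[:, 1::2]) / 2
    H = (x[:, 0::2] - x[:, 1::2]) / 2
    LL = (L[0::2] + L[1::2]) / 2; LH = (L[0::2] - L[1::2]) / 2
    HL = (H[0::2] + H[1::2]) / 2; HH = (H[0::2] - H[1::2]) / 2
    return LL, LH, HL, HH

def haar_inv(LL, LH, HL, HH):
    """Exact inverse of haar_fwd."""
    L = np.empty((LL.shape[0] * 2, LL.shape[1])); Hb = np.empty_like(L)
    L[0::2], L[1::2] = LL + LH, LL - LH
    Hb[0::2], Hb[1::2] = HL + HH, HL - HH
    x = np.empty((L.shape[0], L.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = L + Hb, L - Hb
    return x

def box(x, r=2):
    """Separable box filter via circular shifts (wraps at borders)."""
    for ax in (0, 1):
        x = sum(np.roll(x, s, ax) for s in range(-r, r + 1)) / (2 * r + 1)
    return x

def sf_map(x):
    """Spatial frequency: local mean of squared row/column differences."""
    rf = (x - np.roll(x, 1, 1)) ** 2
    cf = (x - np.roll(x, 1, 0)) ** 2
    return np.sqrt(box(rf) + box(cf))

def fuse(a, b):
    A, B = haar_fwd(a), haar_fwd(b)
    # Focused-region decision on the low-frequency band via SF
    mask = sf_map(A[0]) >= sf_map(B[0])
    LL = np.where(mask, A[0], B[0])
    # High-frequency bands: conventional max-absolute-coefficient rule
    hi = [np.where(np.abs(ca) >= np.abs(cb), ca, cb)
          for ca, cb in zip(A[1:], B[1:])]
    return haar_inv(LL, *hi)
```

In the full method, morphological and median filtering of the SF decision map suppresses isolated misclassified pixels near the focus boundary, which is the main failure mode of the raw per-pixel decision above.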