KEYWORDS: 3D displays, Eye, Visualization, Cones, Image processing, 3D image processing, Calibration, Colorimetry, Information technology, Color vision
As an important feature of images, color can be used to achieve binocular vision. However, different colors may contribute differently. In this experiment, we designed a stimulus in which luminance was incongruent and color could be manipulated. Color variations were based on an opponent color space, in which seventeen color points distributed along the red-green and blue-yellow directions were selected. The stimulus consisted of an array of asymmetric patches uniformly distributed in a volume of constant size. Subjects were required to indicate the number of patches perceived in depth on the 3D display. Our results demonstrate that the number of patches perceived in depth was influenced by color information, and indicate that different colors contribute differently to binocular matching.
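The abstract does not give the exact coordinates of the seventeen color points, so the following is only a minimal sketch of one plausible layout in an opponent color space: a neutral point plus equal contrast steps along the red-green and blue-yellow axes. The axis scaling and step count are illustrative assumptions.

import numpy as np

def opponent_color_points(steps=4, max_contrast=1.0):
    """Return 17 (RG, BY) coordinates: neutral + 2*steps points per opponent axis (assumed layout)."""
    levels = np.linspace(-max_contrast, max_contrast, 2 * steps + 1)
    levels = levels[levels != 0]                      # 8 non-zero contrasts per axis
    rg_axis = [(c, 0.0) for c in levels]              # red-green direction
    by_axis = [(0.0, c) for c in levels]              # blue-yellow direction
    return [(0.0, 0.0)] + rg_axis + by_axis           # 1 + 8 + 8 = 17 points

print(len(opponent_color_points()))  # 17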
Visible and near-infrared spectral reflectances of surface vegetation are basic data for applications in remote sensing classification, multispectral imaging, and color reproduction. Leaves are the objects of this study. First, the 400-700 nm visible and 700-1000 nm near-infrared spectral reflectance data of 12 kinds of trees, such as camphor, ginkgo, and peach, are measured with visible and near-infrared portable hyperspectral cameras, and the spectral reflectance data are denoised using the Minimum Noise Fraction (MNF) transform. Second, Principal Component Analysis (PCA) is applied to the spectral reflectance in the visible and near-infrared bands. Finally, correlation analysis is performed on the spectral reflectance in the visible and near-infrared bands. The data and results obtained provide a theoretical basis for the subsequent establishment of a spectral reflectance database of surface vegetation for spectroscopy and multispectral imaging.
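As a minimal sketch of the PCA and correlation steps described above, the code below assumes the MNF-denoised reflectance spectra are already available as a samples-by-bands matrix; the array names, random placeholder data, and choice of three components are illustrative assumptions, not taken from the paper.

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 12 leaf samples, reflectance at 1 nm steps from 400-1000 nm
wavelengths = np.arange(400, 1001)                   # nm
reflectance = np.random.rand(12, wavelengths.size)   # placeholder for measured, MNF-denoised data

# Split into visible (400-700 nm) and near-infrared (700-1000 nm) bands
vis = reflectance[:, wavelengths <= 700]
nir = reflectance[:, wavelengths >= 700]

# PCA on each band range: a few components typically capture most spectral variance
pca_vis = PCA(n_components=3).fit(vis)
pca_nir = PCA(n_components=3).fit(nir)
print("VIS explained variance ratio:", pca_vis.explained_variance_ratio_)
print("NIR explained variance ratio:", pca_nir.explained_variance_ratio_)

# Correlation analysis between mean visible and near-infrared reflectance per sample
corr = np.corrcoef(vis.mean(axis=1), nir.mean(axis=1))[0, 1]
print("VIS/NIR mean-reflectance correlation:", corr)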
Image registration has long been a hot topic in the image research field, and mutual information registration has become a commonly used method because of its high precision and good robustness. Unfortunately, it runs into a problem for infrared and visible image registration. The visible-light band usually provides rich background detail, while the infrared image can locate an object (heat source) at a higher temperature but often cannot capture the background. The large difference in background information between the two images not only degrades the accuracy of the registration algorithm but also adds considerable computation. In this paper, fuzzy c-means clustering is used to separate the foreground from the background, which reduces the interference of background information in registration, based on the observation that the infrared and visible images are highly consistent in the target area but differ greatly in the background area. Then, the mutual information of the foreground regions marked by the clustering algorithm is calculated as the similarity measure to achieve registration. Finally, the algorithm is tested on infrared and visible images acquired in practice. The results show that the two images are registered accurately and verify the effectiveness of the method.
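The following is a minimal sketch, under stated assumptions, of the foreground-restricted mutual-information measure described above: grayscale IR and visible images are taken as NumPy arrays, the two-cluster fuzzy c-means call uses scikit-fuzzy, and the 0.5 membership threshold and histogram bin count are illustrative choices rather than the authors' settings.

import numpy as np
import skfuzzy as fuzz

def foreground_mask(img, m=2.0):
    """Separate foreground (hot target) from background with two-cluster fuzzy c-means."""
    data = img.reshape(1, -1).astype(float)                 # (features, samples) layout for cmeans
    cntr, u, *_ = fuzz.cmeans(data, c=2, m=m, error=1e-4, maxiter=100)
    fg_cluster = int(np.argmax(cntr[:, 0]))                 # brighter cluster assumed to be the target
    return u[fg_cluster].reshape(img.shape) > 0.5

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information between two 1-D intensity samples."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def foreground_mi(ir_img, vis_img):
    """Similarity measure: mutual information restricted to the clustered foreground region."""
    mask = foreground_mask(ir_img)
    return mutual_information(ir_img[mask].ravel(), vis_img[mask].ravel())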
KEYWORDS: Image fusion, Near infrared, RGB color model, Denoising, Detection and tracking algorithms, Visible radiation, Image analysis, Image processing, Color imaging, Algorithm development
In a low-light scene, capturing color images requires a high-gain or long-exposure setting to avoid a visible flash. However, such settings lead to color images with severe noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared (NIR) flash image. One novel method estimates the luminance component and the chroma component of the improved color image from different image sources [1]: the luminance component is estimated mainly from the NIR image via spectral estimation, and the chroma component is estimated from the noisy color image by denoising. However, estimating the luminance component is challenging. That method needs to generate learning data pairs, and its processing pipeline is complex, which makes practical application difficult. To reduce the complexity of luminance estimation, an improved luminance estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image with coefficients based on the mean value and standard deviation of both images. Experimental results show that, compared with the novel method, the proposed method achieves the same fusion quality in terms of color fidelity and texture, while the algorithm is simpler and more practical.
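The abstract does not give the exact weighting formula, so the code below is only a minimal sketch of a statistics-based luminance fusion consistent with the description: the NIR mean is matched to the denoised-color luminance mean, each source is weighted by its standard deviation, and the fused luminance is recombined with the denoised chroma. The specific normalization and 8-bit assumption are illustrative, not the authors' method.

import numpy as np
import cv2

def estimate_luminance(nir, denoised_bgr):
    """Fuse a single-channel NIR image with the luminance of a denoised 8-bit BGR image (assumed weighting)."""
    ycrcb = cv2.cvtColor(denoised_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y = ycrcb[:, :, 0]
    nir = nir.astype(np.float32)

    # Match the NIR global mean to the visible luminance mean before weighting
    nir_adj = nir * (y.mean() / max(nir.mean(), 1e-6))

    # Weight each source by its standard deviation (a rough contrast/texture cue)
    w_nir, w_y = nir_adj.std(), y.std()
    fused_y = (w_nir * nir_adj + w_y * y) / (w_nir + w_y + 1e-6)

    # Recombine the fused luminance with the chroma of the denoised color image
    ycrcb[:, :, 0] = np.clip(fused_y, 0, 255)
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR)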