Color fusion schemes for low-light CCD and infrared images of different properties
30 August 2002
Although gray-level fused images can optimally integrate the modalities of a low-light CCD and an infrared imager, operators cannot tell from which modality individual details originate. The fact that human eyes can discriminate far more color categories than gray levels has therefore been exploited to assign a distinct color to each sensor modality. However, a color-fused image with an unnatural appearance fatigues operators quickly. Our approach builds on the MIT scheme and aims to achieve a natural appearance in the color-fused image. The MIT scheme derives its basis from biological models of color vision and uses a feed-forward center-surround shunting neural network to enhance and fuse low-light and infrared images. We propose a linear fusion architecture, as well as a composite architecture that combines the enhancement stage of the MIT scheme with the linear fusion architecture. Furthermore, enhancement and combination methods are specified for low-light and infrared images of different properties.
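To make the idea of a linear fusion architecture concrete, the sketch below shows one common way such a scheme can be realized: each sensor modality is mapped linearly into a separate RGB channel, so that details remain attributable to their source sensor. The channel assignments and the weight `w` here are illustrative assumptions for a generic linear color-fusion rule, not the specific coefficients of the paper's architecture.

```python
import numpy as np

def linear_color_fusion(lowlight, infrared, w=0.5):
    """Illustrative linear color-fusion rule (hypothetical coefficients).

    Maps the infrared image to the red channel, the low-light image to
    the green channel, and a weighted difference of the two to the blue
    channel, so each modality receives a distinct color in the output.
    """
    ll = lowlight.astype(np.float64)
    ir = infrared.astype(np.float64)
    r = ir                                   # infrared -> red channel
    g = ll                                   # low-light -> green channel
    b = np.clip(ll - w * ir, 0.0, 255.0)     # difference -> blue channel
    return np.stack([r, g, b], axis=-1).astype(np.uint8)
```

Because every output channel is a fixed linear combination of the two inputs, an operator can infer the originating modality from hue: strongly red regions are infrared-dominant, greenish regions are low-light-dominant.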
Lingxue Wang, Weiqi Jin, Zhiyun Gao, Guangrong Liu, "Color fusion schemes for low-light CCD and infrared images of different properties," Proc. SPIE 4925, Electronic Imaging and Multimedia Technology III, (30 August 2002); https://doi.org/10.1117/12.481629