Significant radiometric differences and weak grayscale correlations exist between optical and SAR images. As a result, there are severe spectral and spatial distortions in the fused images. We propose a fusion method of optical and SAR remote sensing images that couples the gain injection method and the guided filter. The proposed method is based on the fusion framework of generalized intensity-hue-saturation non-subsampled contourlet transform, and the gain injection is used for the low-frequency coefficient fusion to reduce the spectral distortion. Then, the divergence is used as the activity measure operator to calculate the initial weight template for the high-frequency coefficients. The guided filter is used to optimize the edge details of the initial weight template. The fused high-frequency coefficients are obtained by weighted average. Through comparison experiments with existing fusion methods, the results show that the proposed method has the best quality of fusion and the proposed method has the best performance. |
1.IntroductionSAR has strong penetrating power and can generate remote sensing images without being restricted by weather and time, and the structural features of the images are apparent. However, the lack of spectral information and severe noise make SAR image interpretation difficult. Optical images are rich in texture and spectral information, but the imaging conditions are volatile and easily obscured by clouds az.1 The pixel-level fusion of optical and SAR images can integrate both advantages and obtain complementary information, which is of great significance to overcoming the limitations of single-source remote sensing images and improving the interpretation capability of images. The fused images of SAR and optical images have been widely used in many fields to improve the interpretation of remote sensing images, such as land cover classification,2,3 sea ice identification,4,5 biomass estimation,6 change detection,7,8 flood monitoring,9,10 and urban feature extraction and classification.11,12 Pixel-level fusion methods of optical and SAR images can be divided into four categories: component substitution (CS) fusion methods, multi-scale decomposition (MSD) fusion methods, hybrid methods based on CS and MSD, and model-based methods. The most used method is the hybrid method, which integrates the advantages of both CS and MSD methods. Compared to single methods, hybrid methods can reduce spatial structure and spectral distortion in the fused image and are more suitable for optical and SAR image fusion.13 Hong et al. proposed a method based on the intensity-hue-saturation (IHS) and wavelet transform. This method uses global statistical features as the active measure and then achieves fusion by directly replacing sub-bands. However, this method ignores the specificity of individual image elements and may introduce a large amount of noise, leading to significant spectral distortions in the fusion results.14 Subsequently, Han et al. performs IHS transform and à trous wavelet transforms on the images to be fused and then uses local statistical parameters as activity measures to calculate the pixel-wise fusion weight. The fusion weight estimated by this method fully considers the unique characteristic of a single pixel and the influence of neighboring pixels on the central pixel. The resulting fused image retains more spatial structure and spectral information.15 To further reduce the spatial distortion in the fused images, Anandhi and Valli16 calculated the fusion weights based on non-subsampled contourlet transform (NSCT) with minimum likelihood ratio, local gradient, and maximum edge intensity as the active measure operators, which can retain more edge and contour features in the fused image. Kulkarni et al. used a hybrid method of the principal component analysis (PCA) and discrete wavelet transform (DWT) transform as the base fusion framework, calculated the fusion weights using the image element local energy as the active measure operator, and performed a weighted average fusion of the components further to reduce the spectral distortion in the fused images.17 Zhou et al. used an adaptive IHS fusion method based on phase coherence feature preservation to fuse SAR and optical remote sensing images, and more spectral and spatial structure information was retained in the results.18 Although many scholars have used the hybrid method as the basic framework and continuously introduced better-performing activity measure operators and improved the fusion weight calculation method for multi-scale components, there are still two problems:
Given the problems of existing fusion methods, this paper proposed a coupled gain injection and guided filtering method for optical and SAR image fusion. The proposed method uses generalized intensity-hue-saturation non-subsampled contourlet transform (GIHS-NSCT) as the basic fusion framework. First, GIHS extracts the luminance component I of the optical image. NSCT then decomposes the I and SAR images into multi-scale and multi-directional. Next, the low-frequency coefficients of I and SAR are fused using the gain injection method. The gain injection method is used by solving the unique features of the low-frequency coefficients of the SAR image and injecting the unique features into the low-frequency coefficients of I as gain. Fusing only the unique features of the low-frequency coefficients of SAR images can effectively reduce the spectral distortion. 2.Fundamental Theories and MethodsThis section introduces some fundamental theories involved in the proposed fusion method, including the GIHS ensemble method, the NSCT method, and the guided filter. 2.1.GIHS Fusion MethodGIHS extends the classical IHS method of the CS class fusion method. Compared with IHS, it can acquire the luminance components of images with more than three channels. It does not require forward and inverse transformation of the image color space, which is computationally tiny and improves the fusion efficiency.19 Therefore, GIHS is widely used in image fusion,20,21 and we extend it to optical and SAR image fusion. The fused image calculation process of the GIHS method is as follows: where , and SAR represent the fused image, optical image, and SAR image, respectively. is the brightness component of the optical image and is the number of bands. and represent the corresponding weights of each band of the optical image, respectively.2.2.NSCT MethodNSCT is an image MSD method proposed by Da Cunha et al.22 It consists of the non-subsampling pyramid filter bank (NSPFB) and the non-subsampling directional filter bank (NSDFB). NSPFB can perform MSD of images. Its non-down sampling decomposition can reduce the distortion of image elements caused by up-sampling and down-sampling processes and has translation invariance. NSDFB is a multi-directional filter bank that decomposes the image into multiple directions and preserves multi-directional detail features. The NSCT method that coupled NSPFB and NSDFB has the advantages of multi-scale, multi-directional, and non-down sampling.23 So NSCT is widely used in image fusion.24,25 The schematic diagram of NSCT MSD is shown in Fig. 1. 2.3.Guided FilterThe guided filter is an edge-preserving filter based on a local linear model. It works by correcting the noisy image with reference to the guiding image and has the properties of noise reduction and edge retention. Therefore, guided filters are widely used to optimize fusion weight maps in image fusion.26,27 The guiding image is the key to determining the filtering effect, which can be the same or different from the input image but must be given in advance. The guided filter is implemented by a sliding calculation of the local window. For a square sliding window of size , the linear relationship between the guiding image and the output image can be expressed as where is the linearity factor of the sliding window . The linear coefficients are significant, and solving for them is a least-squares optimization process. Optimization aims to solve a set of such that the difference between the input image window and the output image window is minimized. Based on the above, the optimization objective function can be defined as follows: where is the regularization parameter and , and are calculated as where and are the average values and variances values of and the window . is the mean value of the window in .3.Proposed Fusion MethodIn this section, we elaborate on the implementation process of the proposed fusion method, including the basic framework, the rules for low-frequency coefficients, and high-frequency coefficients fusion. 3.1.Basic FrameworkWe use the hybrid method of GIHS-NSCT to fuse optical and SAR images. The overall methodological framework of the algorithm is shown in Fig. 2. In Fig. 2, the input optical and SAR images have been registered using the method proposed by Yan et al.28 and can be directly used for pixel-level fusion. GIHS-NSCT first acquires the luminance image of the optical image with GIHS. Then the luminance and SAR images are multi-scale fused based on NSCT to obtain the fused luminance image. Finally, the original optical and the new luminance image are fused using GIHS. The key to determining the quality of fusion is the feature maps and fusion weight maps corresponding to the low-frequency and high-frequency coefficients. The quality of the feature map depends on the feature extraction method and the feature measurement used. The key to the fusion weight map lies in the fusion weight calculation method and the activity measurement operator. The main steps of the GIHS-NSCT fusion method are as follows:
3.2.Rule for Low-frequency CoefficientsThe low-frequency sub-band approximates the image, which contains the main contour features. The low-frequency sub-bands are also crucial for determining the fused image’s spectral distortion. Therefore, considering the significant nonlinear radiometric differences between optical and SAR images, we use the feature gain method for fusion when calculating the fused low-frequency coefficients. The fusion is weighted only at specific features of the low-frequency sub-band of the SAR image. The weights of the SAR image elements at non-specific features are all 0, while the weights of the optical image elements are set to 1. This fusion method can effectively reduce the spectral distortion in the fused image caused by the nonlinear radiation difference.29 The fusion process of the low-frequency sub-band is shown in Fig. 3. The images used in Fig. 3 are rendered for easy observation. In the fusion process, the common features of the low-frequency sub-bands of SAR and I are firstly calculated according to the Eq. (7): Since the low-frequency sub-bands are the approximation of the image features, we take the low-frequency sub-bands of I and SAR directly as the feature maps, which is to let . Thus, the common feature of the low-frequency sub-bands of I and SAR images is calculated as follows: Based on Eq. (8), the peculiar features of the LF sub-band of the SAR image are given as follows: The method of the fused low-frequency sub-band calculated based on the feature gain injection method is given as where is the sub-band of fused low-frequency, is the injection coefficient of the unique features of the low-frequency sub-band of SAR, calculated as where denotes the entropy of the corresponding image.3.3.Rule for High-frequency CoefficientsThe high-frequency sub-bands are multi-directional detailed images of the original image, rich in details and textures. Meanwhile, the high-frequency coefficients are also crucial in determining the degree of spatial distortion of the fused image. Therefore, when fusing high-frequency, we introduce the image divergence, which is sensitive to the points near the texture edge, as the feature activity metric to accurately extract and describe the point features of the high-frequency sub-bands. The divergence of a point in the image precisely describes its degree of clustering in the gradient field. The larger divergence value indicates a greater divergence of the point in the gradient field and a higher probability of the point being a feature point on the edge of the texture.30 Therefore, using divergence as the active measure in high-frequency fusion can accurately describe the feature saliency of all image elements and thus acquire complete feature maps. The NSCT method allows us to obtain detailed images of the source in multiple directions at multiple scales. Each detailed image can be considered as a single-channel image in a two-dimensional (2D) cartesian coordinate space, and the divergence of the image is calculated in the gradient field. For a 2D field , the gradient at is calculated as For a 2D vector field, the divergence of at is formulated as Based on the Eqs. (12) and (13), the divergences of an image are given as Since SAR images are seriously polluted by noise, there is still some noise in the speckle-filtered SAR images, which may reduce the fusion quality. Unfortunately, according to the calculation principle of divergence, the image divergence is the second-order image gradient, and the gradient operator is not robust to the noise in the image. Therefore, the divergence is used as the activity measure for the fusion of high-frequency sub-bands to calculate the fusion weights, which is challenging to overcome the influence of noise on fusion quality. To address the above problems, we utilized the guided filter in the fusion process of high-frequency sub-bands, optimized the weight maps obtained from the divergence calculation, and used the strong correlation between pixels to improve the fusion quality of detail images.31The fusion process of high-frequency sub-bands is shown in Fig. 4. First, acquire the high-frequency sub-bands {, } of SAR and I separately, and then calculate feature maps {, } based on divergence. Second, initial weight maps {, } are determined with the maximum divergence rule. Third, we use {,} as guiding images to optimize initial weight maps {, } and acquire the optimized weight maps {, }. Finally, {, } are used to fuse the detailed images of SAR and I by a weighted average method. 4.ExperimentThis section introduces the datasets used in the experiments, the indicators for objective evaluation of the fusion results, and the comparative analysis of the experimental results. All experiments were performed using MATLAB2020a on a computer with NVIDIA Quadro P4000 GPU and Intel Xeon W-2102 CPU. 4.1.DatasetsWe arranged three sets of experiments, and the datasets used in the experiments consisted of three groups of optical and SAR images. The datasets used in the experiments contain three groups of optical and SAR images. In experiment 1, the main scene of the data is farmland, which includes a scene of airborne SAR images and a scene of Google Optical images with sub-meter resolution. In experiment 2, the main scene of the data is the city, which includes a scene of the GaoFen-3 SAR image and a scene of the GaoFen-1 multispectral image with meter-level resolution. In experiment 3, the main scenes of the data are mountains and lakes, including one scene of Sentinel-1 SAR image and one scene of Landsat8 image, with 10-m level resolution. Through three sets of experiments, the proposed algorithm’s effectiveness is verified from multi-source, multi-scale, and multi-scene perspectives. It is worth stating that the test images used in the experiment were completely pre-processed. The SAR images are processed in the SARscape toolbox in ENVI, which includes import, multilooking, speckle filtering, geocoding, and radiometric calibration. The specific parameters of the experimental data set are shown in Table 1. Table 1Data information of the experiment.
4.2.Evaluation MetricsTo evaluate the performance of the fusion methods, root mean square error (RMSE),32 Erreur relative globale adimensionnelle de synthèse (ERGAS),33 universal image quality index (UIQI),34 spectral angle mapper (SAM),35 and quality with no reference (QNR)36 are used to evaluate the quality of fusion results quantitatively. Among them, SAM measures the degree of spectral distortion of the fusion result, and the smaller the value, the smaller the spectral distortion. UIQI also called the index, measures the fused image’s correlation, luminance, and contrast distortion. Its value ranges from [], and a higher value indicates higher image quality. RMSE measures the global spectral distortion, and the smaller the value, the smaller the global spectral distortion. ERGAS can reflect the overall image quality, and the smaller the ERGAS value of the fused image, the higher the fusion quality. QNR is a comprehensive evaluation index that contains two parts: spectral distortion and spatial distortion. The smaller the value of and , the smaller the spectral and spatial distortion, while the larger the value of QNR, the higher the quality. 4.3.Experimental Results and ComparisonThe comparison methods used in the experiments include the IHS37 and PCA38 methods that belong to the CS class, the NSCT-PC39 method that belongs to the MSD class, and the IHS-wavelet14 and the NSCT-mean40 method in the hybrid method that couples CS and MSD. 4.3.1.Subjective evaluationThe fusion results of experiment 1 are shown in Fig. 5. The IHS and PCA methods of the CS class can inject the spatial structure information of the SAR image into the optical image more completely. Still, simultaneously, they also cause severe spectral distortion. In contrast, the hybrid methods can effectively reduce the spectral distortion while retaining more spatial information. The IHS-wavelet and the NSCT-mean methods show different degrees of global brightness reduction than the original optical image. NSCT-PC and the proposed method have similar results in spectral retention, while the fused image obtained by the proposed method has more distinct features; therefore, the proposed method has the best fusion performance in experiment 1. The fusion results of experiment 2 are shown in Fig. 6. The IHS and PCA methods of the CS class can thoroughly remove the clouds when fusing SAR and optical remote sensing images affected by cloud occlusion. Although the direct component replacement method does not need to take into account the information of the optical images and can thoroughly remove the occluded clouds, it severely distorts the spectral information in the fusion results. The principle of the CS method determines this. The direct component replacement method does not need to consider the information of the optical image. It uses the SAR image replacement directly, which can remove the occluded clouds, but it also severely distorts the spectral information in the fusion result. The fusion results of the hybrid methods inject the spatial information of the SAR image into the part occluded by the cloud in the optical image. Such methods cannot thoroughly remove the obscured clouds but retain more spectral information and effectively inject spatial information. Among them, the spectral preservation of the IHS-Wavelet and NSCT-mean methods are relatively low. The spectral protection of the NSCT-PC method and the proposed method achieve similar results. Nonetheless, since the features of NSCT-PC injection are less evident than those of the proposed method, the fused image of the proposed method is of the highest quality in experiment 2 in a comprehensive view. The fusion results of experiment 3 are shown in Fig. 7. The main scenes of the experimental data are mountains and lakes. By fusing Landsat8 and Sentinel-1 SAR images, the distinct features in the SAR images are injected into the optical images, making fusion images rich with structural and spectral information. In terms of structural feature integrity, the PCA and NSCT-PC methods of injection do not achieve the expected results of the experiment. IHS, IHS-Wavelet, NSCT-mean, and the proposed method are all capable of injecting intact, stereoscopic structural features from SAR images into optical images. Among them, the fusion results of NSCT-mean and the proposed method have similar results, but the mountainous features in the fused image of the NSCT-mean are not as evident as those of the proposed method. Thus, the proposed fusion method performs the best in experiment 3. 4.3.2.Objective evaluationTables 2Table 3–4 show the evaluation results of the fusion results for the three groups of experiments. As seen from the three tables, compared with the IHS and PCA methods of the CS class, the hybrid method can retain more spectral information of the optical images while injecting the spatial information of SAR images into the optical images completely and clearly. Therefore, the hybrid method is more suitable for fusing SAR and optical remote sensing images. SAM, RMSE, ERGAS, , and can measure the spectral distortion of the fused images. From the index results, the spectral distortion of IHS and PCA methods of the CS class is the most severe, while the spectral distortion of the proposed method is the smallest. The evaluation results of show that among the hybrid methods, IHS-wavelet and NSCT-mean have similar spatial information retention. In contrast, the proposed method has the highest spatial information retention. In terms of the comprehensive quality of the fusion results, the evaluation of the QNR showed that the hybrid methods have higher comprehensive qualities of the fused images than the methods of the CS and MSD classes. Table 2Results of the quantitative evaluation of the methods in experiment 1.
Table 3Results of the quantitative evaluation of the methods in experiment 2.
Table 4Results of the quantitative evaluation of the methods in experiment 3.
5.ConclusionsWe have made improvements in two aspects to solve the problems that fused images of SAR and optical remote sensing images often have significant spectral and spatial distortion.
A limitation of the proposed method in this paper is that it is time-consuming. Therefore, reducing the time consumption of the fusion process will be one direction of our next research in the future. Our experiments found that the time consumption is mainly concentrated in NSCT MSD and reconstruction. Therefore, we will try some fast MSD methods, such as fast NSCT,41 fast finite Shearlet transform,42 etc. In addition, we plan to improve the fusion quality by using some active metric operators that are robust to noise and nonlinear radiometric differences, such as phase congruency features. AcknowledgmentsThis work was supported by the National Natural Science Foundation of China (Grant Nos. 41761082 and 41861055), the National Key Research and Development Program of China (Grant No. 2017YFB0504201). No potential conflict of interest was reported by the authors. ReferencesC. Pohl and J. L. Van Genderen,
“Review article multisensor image fusion in remote sensing: concepts, methods and applications,”
Int. J. Remote Sens., 19
(5), 823
–854 https://doi.org/10.1080/014311698215748 IJSEDK 0143-1161
(1998).
Google Scholar
D. Amarsaikhan et al.,
“Fusing high-resolution SAR and optical imagery for improved urban land cover study and classification,”
Int. J. Image Data Fusion, 1
(1), 83
–97 https://doi.org/10.1080/19479830903562041
(2010).
Google Scholar
D. Luo et al.,
“Fusion of high spatial resolution optical and polarimetric SAR images for urban land cover classification,”
in Third Int. Workshop on Earth Observ. and Remote Sens. Appl. (EORSA),
362
–365
(2014). https://doi.org/10.1109/EORSA.2014.6927913 Google Scholar
M. Liu et al.,
“PCA-based sea-ice image fusion of optical data by HIS transform and SAR data by wavelet transform,”
Acta Oceanol. Sin., 34
(3), 59
–67 https://doi.org/10.1007/s13131-015-0634-7 AOSIEE
(2015).
Google Scholar
S. Sandven,
“Sea Ice monitoring in the European Arctic Seas using a multi-sensor approach,”
Remote Sensing of the European Seas, 487
–498 Springer(
(2008). Google Scholar
M. E. J. Cutler et al.,
“Estimating tropical forest biomass with a combination of SAR image texture and Landsat TM data: an assessment of predictions between regions,”
ISPRS J. Photogramm. Remote Sens., 70 66
–77 https://doi.org/10.1016/j.isprsjprs.2012.03.011 IRSEE9 0924-2716
(2012).
Google Scholar
Y. Zeng et al.,
“Image fusion for land cover change detection,”
Int. J. Image Data Fusion, 1
(2), 193
–215 https://doi.org/10.1080/19479831003802832
(2010).
Google Scholar
M. Gong, Z. Zhou and J. Ma,
“Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering,”
IEEE Trans. Image Process., 21
(4), 2141
–2151 https://doi.org/10.1109/TIP.2011.2170702 IIPRE4 1057-7149
(2012).
Google Scholar
J. Avendano et al.,
“Flood monitoring and change detection based on unsupervised image segmentation and fusion in multitemporal SAR imagery,”
in 12th Int. Conf. Electr. Eng., Comput. Sci. and Autom. Control,
1
–6
(2015). https://doi.org/10.1109/ICEEE.2015.7357982 Google Scholar
A. D’Addabbo et al.,
“SAR/optical data fusion for flood detection,”
in IEEE Int. Geosci. and Remote Sens. Symp.,
7631
–7634
(2016). https://doi.org/10.1109/IGARSS.2016.7730990 Google Scholar
T. Riedel, C. Thiel and C. Schmullius,
“Fusion of optical and SAR satellite data for improved land cover mapping in agricultural areas,”
in Proc. Envisat Symp.,
(2007). Google Scholar
A. Salentinig and P. Gamba,
“Combining SAR-based and multispectral-based extractions to map urban areas at multiple spatial resolutions,”
IEEE Geosci. Remote Sens. Mag., 3
(3), 100
–112 https://doi.org/10.1109/MGRS.2015.2430874
(2015).
Google Scholar
S. C. Kulkarni and P. P. Rege,
“Pixel level fusion techniques for SAR and optical images: a review,”
Inf. Fusion, 59 13
–29 https://doi.org/10.1016/j.inffus.2020.01.003
(2020).
Google Scholar
G. Hong, Y. Zhang and B. Mercer,
“A wavelet and IHS integration method to fuse high resolution SAR with moderate resolution multispectral images,”
Photogramm. Eng. Remote Sens., 75
(10), 1213
–1223 https://doi.org/10.14358/PERS.75.10.1213
(2009).
Google Scholar
N. Han, J. Hu and W. Zhang,
“Multi-spectral and SAR images fusion via Mallat and À trous wavelet transform,”
in 18th Int. Conf. Geoinf.,
1
–4
(2010). https://doi.org/10.1109/GEOINFORMATICS.2010.5567653 Google Scholar
D. Anandhi and S. Valli,
“An algorithm for multi-sensor image fusion using maximum a posteriori and nonsubsampled contourlet transform,”
Comput. Electr. Eng., 65 139
–152 https://doi.org/10.1016/j.compeleceng.2017.04.002 CPEEBQ 0045-7906
(2018).
Google Scholar
S. C. Kulkarni, P. P. Rege and O. Parishwad,
“Hybrid fusion approach for synthetic aperture radar and multispectral imagery for improvement in land use land cover classification,”
J. Appl. Remote Sens., 13
(3), 034516 https://doi.org/10.1117/1.JRS.13.034516
(2019).
Google Scholar
Z. Shunjie et al.,
“Fusion algorithm of SAR and visible images for feature recognition,”
J. Hefei Univ. Technol., 41
(7), 900
–907 https://doi.org/10.3969/j.issn.1003-5060.2018.07.008
(2018).
Google Scholar
J. Zhang et al.,
“Cloud detection in high-resolution remote sensing images using multi-features of ground objects,”
J. Geovisual. Sp. Anal., 3
(2), 1
–9 https://doi.org/10.1007/s41651-019-0037-y
(2019).
Google Scholar
X. Zhou et al.,
“A GIHS-based spectral preservation fusion method for remote sensing images using edge restored spectral modulation,”
ISPRS J. Photogramm. Remote Sens., 88 16
–27 https://doi.org/10.1016/j.isprsjprs.2013.11.011 IRSEE9 0924-2716
(2014).
Google Scholar
M. Chikr El-Mezouar et al.,
“An IHS-based fusion for color distortion reduction and vegetation enhancement in IKONOS imagery,”
IEEE Trans. Geosci. Remote Sens., 49
(5), 1590
–1602 https://doi.org/10.1109/TGRS.2010.2087029 IGRSD2 0196-2892
(2011).
Google Scholar
A. L. Da Cunha, J. Zhou and M. N. Do,
“The nonsubsampled contourlet transform: theory, design, and applications,”
IEEE Trans. Image Process., 15
(10), 3089
–3101 https://doi.org/10.1109/TIP.2006.877507 IIPRE4 1057-7149
(2006).
Google Scholar
Z. Wang et al.,
“Infrared and visible image fusion via hybrid decomposition of NSCT and morphological sequential toggle operator,”
Optik, 201 163497 https://doi.org/10.1016/j.ijleo.2019.163497 OTIKAJ 0030-4026
(2020).
Google Scholar
C. Zhao, Y. Guo and Y. Wang,
“A fast fusion scheme for infrared and visible light images in NSCT domain,”
Infrared Phys. Technol., 72 266
–275 https://doi.org/10.1016/j.infrared.2015.07.026 IPTEEY 1350-4495
(2015).
Google Scholar
Z. Zhu et al.,
“A phase congruency and local Laplacian energy based multi-modality medical image fusion method in NSCT domain,”
IEEE Access, 7 20811
–20824 https://doi.org/10.1109/ACCESS.2019.2898111
(2019).
Google Scholar
Y. Yang et al.,
“Remote sensing image fusion based on adaptive IHS and multiscale guided filter,”
IEEE Access, 4 4573
–4582 https://doi.org/10.1109/ACCESS.2016.2599403
(2016).
Google Scholar
Q. Jiahui, L. Yunsong and D. Wenqian,
“Guided filter and principal component analysis hybrid method for hyperspectral pansharpening,”
J. Appl. Remote Sens., 12
(1), 1
–18 https://doi.org/10.1117/1.JRS.12.015003
(2018).
Google Scholar
H. Yan et al.,
“HR optical and SAR image registration using uniform optimized feature and extend phase congruency,”
Int. J. Remote Sens., 43
(1), 52
–74 https://doi.org/10.1080/01431161.2021.1999527 IJSEDK 0143-1161
(2022).
Google Scholar
X. J. Chong and C. Xuejiao,
“Comparative analysis of different fusion rules for SAR and multispectral image fusion based on NSCT and IHS transform,”
in Int. Conf. Comput. and Computational Sci.,
271
–274
(2015). https://doi.org/10.1109/ICCACS.2015.7361364 Google Scholar
Z. Sheng et al.,
“Divergence-based multifocuses image fusion,”
J.-Huazhong Univ. Sci. Technol. Nat. Sci. Ed., 35
(4), 7 https://doi.org/10.13245/j.hust.2007.04.003
(2007).
Google Scholar
Q. Li et al.,
“Pansharpening multispectral remote-sensing images with guided filter for monitoring impact of human behavior on environment,”
Concurr. Comput. Pract. Exp., 33
(4), e5074 https://doi.org/10.1002/cpe.5074 CCPEBO 1532-0626
(2018).
Google Scholar
L. He et al.,
“HyperPNN: hyperspectral pansharpening via spectrally predictive convolutional neural networks,”
IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens., 12
(8), 3092
–3100 https://doi.org/10.1109/JSTARS.2019.2917584
(2019).
Google Scholar
L. He et al.,
“Pansharpening via detail injection based convolutional neural networks,”
IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens., 12
(4), 1188
–1204 https://doi.org/10.1109/JSTARS.2019.2898574
(2019).
Google Scholar
D. Li et al.,
“A universal hypercomplex color image quality index,”
in IEEE Int. Instrum. and Meas. Technol. Conf. Proc.,
985
–990
(2012). https://doi.org/10.1109/I2MTC.2012.6229639 Google Scholar
L. Sui et al.,
“Fusion of hyperspectral and multispectral images based on a Bayesian nonparametric approach,”
IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens., 12
(4), 1205
–1218 https://doi.org/10.1109/JSTARS.2019.2902847
(2019).
Google Scholar
C. Han et al.,
“A remote sensing image fusion method based on the analysis sparse model,”
IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens., 9
(1), 439
–453 https://doi.org/10.1109/JSTARS.2015.2507859
(2016).
Google Scholar
J. R. Harris, R. Murray and T. Hirose,
“Ihs transform for the integration of radar imagery with geophysical data,”
923
–926
(1989). Google Scholar
İ. Kösesoy et al.,
“A comparative analysis of image fusion methods,”
in 20th Signal Process. and Commun. Appl. Conf.,
1
–4
(2012). https://doi.org/10.1109/SIU.2012.6204511 Google Scholar
G. Bhatnagar, Q. J. Wu and Z. Liu,
“Directive contrast based multimodal medical image fusion in NSCT domain,”
IEEE Trans. Multimedia, 15
(5), 1014
–1024 https://doi.org/10.1109/TMM.2013.2244870
(2013).
Google Scholar
Y. Wei, Z. Yong and Y. Zheng,
“Fusion of GF-3 SAR and optical images based on the nonsubsampled contourlet transform,”
Acta Opt. Sin., 38
(11), 1110002 https://doi.org/10.3788/AOS201838.1110002 GUXUDC 0253-2239
(2018).
Google Scholar
D. Wang et al.,
“Optimization of the oil drilling monitoring system based on the multisensor image fusion algorithm,”
J. Sens., 2021 5229073 https://doi.org/10.1155/2021/5229073
(2021).
Google Scholar
L. Tan and X. Yu,
“Medical image fusion based on fast finite Shearlet transform and sparse representation,”
Comput. Math. Methods Med., 2019 1
–14 https://doi.org/10.1155/2019/3503267
(2019).
Google Scholar
BiographyYukai Fu is currently working toward his MS degree in surveying and mapping from Lanzhou Jiaotong University, Lanzhou, China. His research focuses on remote sensing image processing and analysis. Shuwen Yang received his BS degree from the China University of Geosciences, Wuhan, China, in 1999, and his MS and PhD degrees from the School of Earth Sciences, China University of Geosciences, China, in 2004 and 2011, respectively. Since 2004, he has been with Lanzhou Jiaotong University, where he is currently a professor in the Faculty of Geomatics. His research focuses on image processing and pattern recognition. Heng Yan is currently working toward his MS degree in surveying and mapping from Lanzhou Jiaotong University, Lanzhou, China. His research focuses on image processing and pattern recognition. Qing Xue is currently working toward his MS degree in surveying and mapping from Lanzhou Jiaotong University, Lanzhou, China. His research focuses on remote sensing image processing and analysis. |