Hyperspectral and light detection and ranging (LiDAR) imaging instruments capture ground-object information from different perspectives, providing spectral-spatial and elevation descriptions, respectively. Their complementary sensing capabilities can improve land-cover identification in multimodal data fusion tasks. However, their heterogeneous distributions often impede fusion and joint classification, leading to misclassification. To address this challenge, we propose a deep multiscale cross-modal attention (DMSCA) network for hyperspectral and LiDAR data fusion and joint classification. Compared with existing methods, our primary motivation is to explore the intrinsic connection between these two remote sensing modalities and enhance their shared attributes through several cross-modal attention mechanisms. The extracted modality features are cross-modally integrated and exchanged, improving the overall consistency of the fused representations. Specifically, these cross-modal attention mechanisms emphasize locally significant regions based on detailed hyperspectral and LiDAR geographical descriptions. The spatial-wise attention mechanism measures the contributions of neighboring samples to classification performance. The spectral-wise attention mechanism highlights significant hyperspectral channels according to channel correlation. The elevation-wise attention mechanism links the hyperspectral-oriented attention mechanisms to detailed LiDAR elevations for information fusion. Based on these mechanisms, an adaptive fusion and joint classification framework is constructed to balance multimodal information. Experiments on three widely used datasets demonstrate the effectiveness of DMSCA, which outperforms state-of-the-art techniques both qualitatively and quantitatively.
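To make the three attention branches concrete, the following is a minimal PyTorch sketch. All tensor shapes, channel counts, module names, and the specific gating choices are illustrative assumptions for a 7x7 patch classifier, not the authors' published DMSCA implementation.

import torch
import torch.nn as nn

class SpectralAttention(nn.Module):
    """Squeeze-and-excitation style weighting of hyperspectral channels."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # global average pool -> (B, C)
        return x * w[:, :, None, None]          # re-weight spectral channels

class SpatialAttention(nn.Module):
    """Weights neighboring pixels by their relevance to the center sample."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                       # x: (B, C, H, W)
        avg = x.mean(dim=1, keepdim=True)       # channel-average map
        mx, _ = x.max(dim=1, keepdim=True)      # channel-max map
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                            # re-weight spatial positions

class ElevationCrossAttention(nn.Module):
    """Lets LiDAR elevation features gate the attended hyperspectral features."""
    def __init__(self, hsi_ch, lidar_ch):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(lidar_ch, hsi_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, hsi_feat, lidar_feat):
        return hsi_feat * self.gate(lidar_feat)  # cross-modal gating

# Toy forward pass: 7x7 neighborhoods, 64 hyperspectral bands, 1 LiDAR DSM band.
hsi = torch.randn(8, 64, 7, 7)
lidar = torch.randn(8, 1, 7, 7)
hsi = SpatialAttention()(SpectralAttention(64)(hsi))
fused = ElevationCrossAttention(64, 1)(hsi, lidar)
print(fused.shape)  # torch.Size([8, 64, 7, 7])

In this sketch the spectral and spatial branches refine the hyperspectral stream independently, while the elevation branch injects LiDAR information as a multiplicative gate; a full network would stack such blocks at multiple scales before classification.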
To overcome the inefficiency of incremental learning for hyperspectral remote sensing images, we propose a binary detection theory-sequential minimal optimization (BDT-SMO) nonclass-incremental learning algorithm based on hull vectors and Karush-Kuhn-Tucker (KKT) conditions, called HK-BDT-SMO. This method improves the accuracy and efficiency of BDT-SMO nonclass-incremental learning for fused hyperspectral images. However, HK-BDT-SMO cannot effectively handle class-incremental learning problems, in which newly added sample sets introduce additional classes. Therefore, an improved version of HK-BDT-SMO based on the hypersphere support vector machine, called HSP-BDT-SMO, is proposed. HSP-BDT-SMO substantially improves the accuracy, scalability, and stability of HK-BDT-SMO in class-incremental learning. Finally, HK-BDT-SMO and HSP-BDT-SMO are applied to land-use classification with fused hyperspectral images, and their classification results are compared with those of other incremental learning algorithms to verify their performance. In nonclass-incremental learning, HSP-BDT-SMO and HK-BDT-SMO achieve approximately the same accuracy, higher than the other algorithms, and the former learns fastest; in class-incremental learning, HSP-BDT-SMO offers better accuracy and more consistent stability than the others, with the second-highest learning speed after HK-BDT-SMO. Therefore, HK-BDT-SMO and HSP-BDT-SMO are effective algorithms suited to nonclass- and class-incremental learning for fused hyperspectral images, respectively.
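The generic support-vector idea underlying such KKT-based incremental updates can be sketched briefly: retain the old model's support vectors as a summary of the previous decision boundary, add only the new samples that violate the current KKT conditions, and retrain on that reduced set. The sketch below illustrates this for a binary SVM with scikit-learn; the function name, the margin-based KKT test, and the toy data are simplifying assumptions, not the paper's exact hull-vector procedure.

import numpy as np
from sklearn.svm import SVC

def incremental_svm_update(model, X_old, y_old, X_new, y_new, tol=1.0):
    # Retain the old support vectors; they summarize the previous boundary.
    sv_idx = model.support_
    X_keep, y_keep = X_old[sv_idx], y_old[sv_idx]

    # A new sample approximately violates the KKT conditions when its
    # functional margin y * f(x) falls below 1 (here, below tol).
    margins = y_new * model.decision_function(X_new)
    violators = margins < tol
    X_train = np.vstack([X_keep, X_new[violators]])
    y_train = np.concatenate([y_keep, y_new[violators]])

    # Retrain on the reduced set instead of the full accumulated data.
    return SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# Toy binary example with one synthetic batch update.
rng = np.random.default_rng(0)
X0 = rng.normal(size=(200, 5))
y0 = np.sign(X0[:, 0] + 0.1 * rng.normal(size=200))
model = SVC(kernel="rbf", gamma="scale").fit(X0, y0)
X1 = rng.normal(size=(100, 5))
y1 = np.sign(X1[:, 0] + 0.1 * rng.normal(size=100))
model = incremental_svm_update(model, X0, y0, X1, y1)
print(model.n_support_)

Because only support vectors and KKT violators are kept, each update trains on far fewer samples than full retraining, which is the efficiency gain the abstract refers to; handling newly introduced classes, as HSP-BDT-SMO does, requires additional machinery beyond this binary sketch.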