In recent years, bidirectional encoder representations from transformers (BERT) models have achieved superior performance on hyperspectral images (HSIs). BERT can capture long-range correlations between HSI elements, but it exploits the local spatial and spectral band information of an HSI insufficiently. We propose a spatially augmented guided sequence BERT network for HSI classification, referred to as SAS-BERT, which makes more effective use of the spatial and spectral information of an HSI by improving the BERT model. First, a spatial augmentation learning module is added in the preprocessing stage to obtain more salient spatial features before the data enter the network and to better guide the spatial sequence. Then, a spectral correlation module is used to represent the spectral band features of the HSI and to establish a correlation with the spatial locations of the image, yielding better classification performance. Experimental results on three datasets show that the proposed method achieves better classification performance than other state-of-the-art methods.
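The abstract describes a pipeline in which local spatial features are enhanced before the patch is serialized and passed to a transformer-style encoder. The following is a minimal sketch of that idea, not the authors' code: a small convolutional spatial-augmentation stage feeds a flattened pixel sequence to a transformer encoder, and the center-pixel token is classified. All module names, layer sizes, and hyperparameters are illustrative assumptions.

```python
# Sketch of a spatially augmented sequence encoder for HSI patch classification.
# Assumed design, loosely following the abstract; not the published SAS-BERT code.
import torch
import torch.nn as nn

class SpatialAugmentation(nn.Module):
    """Hypothetical spatial-augmentation stage: a small conv block that
    strengthens local spatial context before the sequence is built."""
    def __init__(self, bands, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(bands, hidden, kernel_size=3, padding=1),
            nn.BatchNorm2d(hidden),
            nn.ReLU(),
        )

    def forward(self, x):            # x: (B, bands, H, W)
        return self.conv(x)          # (B, hidden, H, W)

class SequenceHSIClassifier(nn.Module):
    def __init__(self, bands, n_classes, hidden=64, heads=4, layers=2):
        super().__init__()
        self.augment = SpatialAugmentation(bands, hidden)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, patch):                            # patch: (B, bands, H, W)
        feat = self.augment(patch)                       # spatially augmented features
        seq = feat.flatten(2).transpose(1, 2)            # (B, H*W, hidden) pixel sequence
        enc = self.encoder(seq)                          # long-range interactions
        center = enc[:, enc.shape[1] // 2]               # token of the labeled center pixel
        return self.head(center)

# Example: classify 9x9 patches from a 200-band image into 16 classes.
model = SequenceHSIClassifier(bands=200, n_classes=16)
logits = model(torch.randn(8, 200, 9, 9))                # shape (8, 16)
```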
Convolutional neural networks (CNNs) have shown excellent performance for hyperspectral image (HSI) classification owing to their local connectivity and weight sharing. Nevertheless, as network architectures have been studied in greater depth, manual empirical design alone can no longer meet the needs of current scenarios. In addition, existing CNN-based frameworks are heavily affected by the redundant three-dimensional cubes they take as input, which leads to inefficient description of HSIs. We propose an image-based neural architecture automatic search framework (I-NAS) as an alternative to hand-designed CNNs. First, to alleviate the redundant spectral–spatial distribution, I-NAS feeds a full image into the framework using a label-masking scheme. Second, an end-to-end cell-based search space is adopted to enrich the feature representation. Then, the optimal cells are determined with a gradient descent search algorithm. Finally, the well-trained CNN architecture is constructed automatically by stacking the optimal cells. Experimental results on two real HSI datasets indicate that our proposal provides competitive classification performance.
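A cell-based search space optimized by gradient descent is commonly realized by relaxing each edge of the cell into a softmax-weighted mixture of candidate operations, in the style of DARTS. The sketch below illustrates that mechanism only; the candidate operations, parameter names, and channel sizes are assumptions and are not taken from the I-NAS paper.

```python
# Sketch of a gradient-searchable mixed edge (DARTS-style); illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def candidate_ops(channels):
    # Assumed candidate set for one edge of the cell.
    return nn.ModuleList([
        nn.Conv2d(channels, channels, 3, padding=1),   # 3x3 conv
        nn.Conv2d(channels, channels, 5, padding=2),   # 5x5 conv
        nn.MaxPool2d(3, stride=1, padding=1),          # pooling
        nn.Identity(),                                  # skip connection
    ])

class MixedEdge(nn.Module):
    """One searchable edge: output is a softmax-weighted sum over candidate ops,
    so the architecture parameters alpha can be trained by gradient descent."""
    def __init__(self, channels):
        super().__init__()
        self.ops = candidate_ops(channels)
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))  # architecture params

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# After search, the operation with the largest alpha on each edge is kept to form
# the final cell, and the chosen cells are stacked to build the CNN.
edge = MixedEdge(channels=32)
out = edge(torch.randn(2, 32, 16, 16))   # (2, 32, 16, 16)
```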
Preprocessing is a major area of interest in the field of hyperspectral endmember extraction, because it can provide a small set of high-quality candidates for fast endmember extraction without sacrificing endmember accuracy. We propose a superpixel-guided preprocessing (SGPP) algorithm to accelerate endmember extraction based on spatial compactness and spectral purity analysis. The proposed SGPP first transforms a hyperspectral image into low-dimensional data using principal component analysis. SGPP then applies a superpixel method, which typically has linear complexity, to segment the first three components into a set of superpixels. Next, SGPP transforms the low-dimensional superpixels into noise-reduced superpixels and calculates their spatial compactness and spectral purity based on Tukey's test and data convexity. SGPP finally retains a few high-quality pixels from each superpixel with high spatial compactness and spectral purity indices for subsequent endmember identification. Experiments on synthetic and real hyperspectral datasets, evaluated with spectral angle distance, root-mean-square error, and speedup, indicate that SGPP is superior to current state-of-the-art preprocessing techniques.
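The flow described above (PCA, superpixel segmentation of the first three components, per-superpixel scoring, candidate retention) can be sketched as follows. This is not the published SGPP code: the per-superpixel score here is a crude distance-from-mean stand-in for the paper's Tukey-test and convexity analysis, and all function names and parameters are assumptions.

```python
# Sketch of a superpixel-guided candidate-selection preprocessing step.
import numpy as np
from sklearn.decomposition import PCA
from skimage.segmentation import slic

def sgpp_candidates(cube, n_segments=200, keep_per_segment=5):
    """cube: (H, W, bands) hyperspectral image; returns flat indices of candidate pixels."""
    h, w, bands = cube.shape
    flat = cube.reshape(-1, bands).astype(np.float64)

    # 1) Dimensionality reduction; the first three components drive segmentation.
    pcs = PCA(n_components=3).fit_transform(flat).reshape(h, w, 3)

    # 2) Superpixel segmentation (SLIC, roughly linear in the number of pixels).
    labels = slic(pcs, n_segments=n_segments, compactness=10.0,
                  start_label=0, channel_axis=-1)

    # 3) Simplified purity proxy: distance of each pixel from its superpixel mean
    #    (extreme pixels are more likely to lie near the vertices of the data simplex).
    candidates = []
    for lab in np.unique(labels):
        idx = np.flatnonzero(labels.ravel() == lab)
        spectra = flat[idx]
        dist = np.linalg.norm(spectra - spectra.mean(axis=0), axis=1)
        candidates.extend(idx[np.argsort(dist)[-keep_per_segment:]])
    return np.array(candidates)

# Example on a random cube; real use would pass an HSI such as Cuprite.
cand = sgpp_candidates(np.random.rand(64, 64, 100))
```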