Paper
Blended learning for hyperspectral data
14 May 2019
Ilya Kavalerov, Wojciech Czaja
Abstract
We explore the spectral-spatial representation capabilities of convolutional neural networks for the classification of hyperspectral images. We examine several types of neural networks, including a novel technique that blends the Fourier scattering transform with a convolutional neural network. This method is naturally suited to the representation of hyperspectral data because it decomposes signals into multi-frequency bands, removing small perturbations such as noise, while retaining a neural network's capacity to learn a hierarchical representation. We test our proposed method on the standard Pavia University hyperspectral dataset and demonstrate a new training-set sampling strategy that reveals the inherent spatial bias present in some purely neural network methods. The results indicate that our form of blended learning is more effective at representing spectral data and less prone to overfitting the artificial spatial bias in hyperspectral data.
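The decomposition the abstract describes, splitting a spectrum into multi-frequency bands and taking a modulus to suppress small perturbations, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, band count, and numpy-only filtering scheme below are assumptions chosen for clarity. The resulting per-band features are the kind of stable representation that could then be fed to a convolutional network.

```python
import numpy as np

def fourier_band_features(signal, n_bands=4):
    """Split a 1-D spectrum into frequency bands and return the
    modulus of each band-limited component, a scattering-style
    feature that is robust to small perturbations such as noise."""
    spectrum = np.fft.rfft(signal)
    edges = np.linspace(0, len(spectrum), n_bands + 1).astype(int)
    features = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.zeros_like(spectrum)
        band[lo:hi] = spectrum[lo:hi]   # keep only this frequency band
        # modulus discards phase, stabilizing the feature to small shifts
        features.append(np.abs(np.fft.irfft(band, n=len(signal))))
    return np.stack(features)           # shape: (n_bands, len(signal))

# toy hyperspectral pixel with 64 spectral bands
pixel = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.1
feats = fourier_band_features(pixel, n_bands=4)
print(feats.shape)  # (4, 64)
```

In a blended model, a stack of such band features for every pixel would replace or augment the raw spectrum as the CNN's input channels.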
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ilya Kavalerov and Wojciech Czaja "Blended learning for hyperspectral data", Proc. SPIE 10986, Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXV, 109861B (14 May 2019); https://doi.org/10.1117/12.2519977
KEYWORDS: Scattering, Neural networks, Convolutional neural networks, Image classification, Hyperspectral imaging