This paper presents further progress in our research on multisensor image fusion based on generative adversarial models [1-2] and its application to practical tasks such as the visual representation of fused images acquired in different spectral ranges (e.g., TV and IR) and change detection on images acquired under different conditions (e.g., season-varying images). A neural network architecture based on the pix2pix model is presented that can solve both tasks mentioned above. A technique for generating training and test datasets, including a data augmentation process, is described. The results are demonstrated on real-world images.
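For context on the pix2pix model underlying the approach above: its generator is trained with a combined objective, an adversarial term plus an L1 reconstruction term between the generated and target images. The sketch below illustrates that standard objective on dummy data; the function name, array values, and lambda weight are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def pix2pix_generator_loss(d_fake, fake_img, target_img, lam=100.0):
    """Standard pix2pix generator objective: a non-saturating adversarial
    term (push discriminator scores for generated patches toward 'real')
    plus a lambda-weighted L1 distance to the target image."""
    eps = 1e-12
    adv = -np.mean(np.log(d_fake + eps))          # fool the discriminator
    l1 = np.mean(np.abs(fake_img - target_img))   # pixel-wise reconstruction
    return adv + lam * l1

# Toy example: two discriminator patch scores and 2x2 single-channel images.
d_fake = np.array([0.6, 0.7])              # D's scores for generated patches
fake = np.array([[0.2, 0.4], [0.6, 0.8]])  # generated image
real = np.array([[0.2, 0.4], [0.6, 0.8]])  # target image (identical here)

loss = pix2pix_generator_loss(d_fake, fake, real)
# With identical images the L1 term vanishes, leaving only the adversarial term.
```

In the fusion and change-detection settings described above, `fake_img` would be the network's fused or season-translated output and `target_img` the reference image for that input pair.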
The paper proposes a semantic segmentation algorithm based on Convolutional Neural Networks (CNNs) for the problem of presenting multispectral sensor-derived images in Enhanced Vision Systems (EVS). A CNN architecture based on residual SqueezeNet with deconvolutional layers is presented. To create an in-domain training dataset for the CNN, a semi-automatic scenario using a photogrammetric technique is described. Experimental results are shown for problem-oriented images obtained by the TV and IR sensors of an EVS prototype in a series of flight experiments.
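The deconvolutional (transposed-convolution) layers mentioned above serve to upsample the coarse feature maps produced by the SqueezeNet encoder back toward input resolution, so that a class label can be assigned per pixel. Below is a minimal single-channel sketch of a stride-2 transposed convolution; the kernel values and sizes are illustrative assumptions, not the paper's actual layer configuration.

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Stride-s transposed convolution (a.k.a. deconvolution): each input
    pixel 'stamps' a scaled copy of the kernel onto the output grid,
    upsampling an H x W map to (stride*(H-1)+kH) x (stride*(W-1)+kW)."""
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros((stride * (h - 1) + kh, stride * (w - 1) + kw))
    for i in range(h):
        for j in range(w):
            out[i*stride:i*stride+kh, j*stride:j*stride+kw] += x[i, j] * kernel
    return out

# Upsample a 2x2 feature map to 4x4 with a uniform 2x2 kernel.
feat = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
kern = np.full((2, 2), 0.25)
up = transposed_conv2d(feat, kern, stride=2)
```

In a segmentation decoder, a stack of such layers (with learned kernels) restores spatial resolution before the final per-pixel classification.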