Paper
16 May 2024
SplitNet: enhancing edge information for remote sensing image segmentation
Qiguang Chen
Proceedings Volume 13166, International Conference on Remote Sensing Technology and Survey Mapping (RSTSM 2024); 1316603 (2024) https://doi.org/10.1117/12.3029133
Event: International Conference on Remote Sensing Technology and Survey Mapping (RSTSM 2024), 2024, Changchun, China
Abstract
Semantic segmentation of remote sensing images is a critical computer vision task, yet the characteristics of the images themselves are often overlooked. Because segmentation targets in satellite remote sensing images closely resemble the background, conventional deep networks tend to lose the boundary features and contextual information that are pivotal for accurate segmentation. To address this issue, I enhance the decoupled network architecture proposed in prior work. The improved network, named SplitNet, extracts edge feature information from a shallow network and global features from a deep network applied to downsampled images. I introduce a novel feature map fusion method that integrates edge, body, and global features, sharpening the network's focus on the edge locations of the segmentation target. My experiments demonstrate that SplitNet achieves strong results on the DeepGlobe land classification dataset.
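The abstract describes combining three feature maps (edge, body, and global) but does not specify the fusion operator. A minimal sketch of the general idea, assuming the three maps are already aligned to a common resolution and combined by a simple weighted sum (a hypothetical simplification with numpy; the paper's actual fusion method may differ), might look like:

```python
import numpy as np

def fuse_features(edge, body, global_feat, weights=(1.0, 1.0, 1.0)):
    """Hypothetical fusion of edge, body, and global feature maps.

    All inputs are (C, H, W) arrays assumed to be aligned to the same
    spatial resolution. A weighted elementwise sum is one plausible
    fusion choice; the abstract does not give the exact operator.
    """
    w_e, w_b, w_g = weights
    return w_e * edge + w_b * body + w_g * global_feat

# Toy example: three 8-channel 4x4 feature maps.
edge = np.ones((8, 4, 4))          # shallow-network edge features
body = np.ones((8, 4, 4)) * 2.0    # body features
glob = np.ones((8, 4, 4)) * 3.0    # global features from downsampled input
fused = fuse_features(edge, body, glob, weights=(0.5, 0.25, 0.25))
print(fused.shape)            # (8, 4, 4)
print(float(fused[0, 0, 0]))  # 0.5*1 + 0.25*2 + 0.25*3 = 1.75
```

In a real network the scalar weights would typically be replaced by learned per-channel parameters or a 1×1 convolution over the concatenated maps, so the network can decide how strongly edge information should influence each location.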
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Qiguang Chen "SplitNet: enhancing edge information for remote sensing image segmentation", Proc. SPIE 13166, International Conference on Remote Sensing Technology and Survey Mapping (RSTSM 2024), 1316603 (16 May 2024); https://doi.org/10.1117/12.3029133
KEYWORDS: Image segmentation, Remote sensing, Semantics, Feature extraction, Feature fusion, Roads, Classification systems