Hybrid connection network for semantic segmentation
9 August 2018
In recent years, deep convolutional neural networks such as ResNet and DenseNet, which add shortcut connections between layers, have proven more accurate and easier to train. Although deep convolutional neural networks show their strength in many computer vision tasks, obtaining precise per-pixel predictions for the semantic image segmentation task remains a challenge. In this paper, we propose a hybrid connection network architecture for semantic segmentation, consisting of an encoder network that extracts feature maps at different scales and a decoder network that recovers the extracted feature maps to the resolution of the input image. The architecture includes several skip connection paths between the encoder and decoder, which help fuse localization information with global information. We show that our architecture can be trained quickly end-to-end without pre-training on an additional dataset and achieves competitive results on semantic segmentation benchmark datasets such as PASCAL VOC 2012.
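The abstract describes an encoder-decoder design in which skip connection paths carry encoder feature maps into the decoder. The following sketch, written in PyTorch, illustrates that general pattern only; the layer counts, channel widths, and the class name HybridConnectionNet are illustrative assumptions, not the authors' published architecture.

# A minimal encoder-decoder sketch with skip connections, loosely
# illustrating the kind of architecture the abstract describes.
# NOT the authors' exact network: depths, channel widths, and the
# class name HybridConnectionNet are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions, each followed by batch norm and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class HybridConnectionNet(nn.Module):
    def __init__(self, num_classes=21):  # 21 classes for PASCAL VOC 2012
        super().__init__()
        # Encoder: extracts feature maps at progressively lower resolutions.
        self.enc1 = conv_block(3, 64)
        self.enc2 = conv_block(64, 128)
        self.enc3 = conv_block(128, 256)
        self.pool = nn.MaxPool2d(2)
        # Decoder: recovers spatial resolution; skip connections from the
        # encoder fuse fine localization detail with global context.
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = conv_block(256, 128)   # 128 (upsampled) + 128 (skip)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)    # 64 (upsampled) + 64 (skip)
        self.head = nn.Conv2d(64, num_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                  # full resolution
        e2 = self.enc2(self.pool(e1))      # 1/2 resolution
        e3 = self.enc3(self.pool(e2))      # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)               # logits at input resolution

if __name__ == "__main__":
    net = HybridConnectionNet(num_classes=21)
    logits = net(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 21, 256, 256])

Concatenating upsampled decoder features with the corresponding encoder features and then convolving is one common way to realize such skip paths; the paper's hybrid connections may combine features differently.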
Xiao Liang, Sei-ichiro Kamata, "Hybrid connection network for semantic segmentation," Proc. SPIE 10806, Tenth International Conference on Digital Image Processing (ICDIP 2018), 108066P (9 August 2018); https://doi.org/10.1117/12.2502963