Image segmentation for blind lanes based on improved SegNet model
Yongquan Xia, Yiqing Li, Qianqian Ye, Jianhua Dong
Abstract

The position of blind lanes must be determined correctly for blind people to travel safely. To address the low accuracy and slow speed of traditional blind lane image segmentation algorithms, a semantic segmentation method based on SegNet and MobileNetV3 is proposed. The main idea is to replace the encoder of the original SegNet model with the feature extraction network of MobileNetV3 and to remove the pooling layer. Blind lane images were collected through online searches and on-site photography, manually annotated with the LabelMe tool, and used to train the model on the TensorFlow deep learning framework. The experimental results show that the improved model achieves high segmentation accuracy and fast recognition. The pixel accuracy of blind lane segmentation is 98.21%, the mean intersection over union is 96.29%, and the average time to process a 416 × 416 image is 0.057 s, which meets the real-time requirements of a blind guidance system.
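As a rough illustration of the architecture described in the abstract, the sketch below assembles a SegNet-style encoder–decoder in TensorFlow/Keras whose encoder is a MobileNetV3 feature extractor. The function name, the MobileNetV3Small variant, the decoder filter sizes, and the two-class (blind lane vs. background) output are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV3Small

def build_mobilenetv3_segnet(input_shape=(416, 416, 3), num_classes=2):
    # Encoder: MobileNetV3 feature extractor in place of SegNet's VGG-style encoder.
    backbone = MobileNetV3Small(input_shape=input_shape,
                                include_top=False,
                                weights="imagenet")
    x = backbone.output  # 13x13 feature map for a 416x416 input (stride 32)

    # Decoder: five upsampling + convolution blocks restore the 416x416 resolution
    # (filter sizes here are assumed for illustration).
    for filters in (256, 128, 64, 32, 16):
        x = layers.UpSampling2D(size=2)(x)
        x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)

    # Per-pixel classification: blind lane vs. background.
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return Model(inputs=backbone.input, outputs=outputs)

model = build_mobilenetv3_segnet()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

With 416 × 416 inputs the backbone's stride-32 output is 13 × 13, so five 2× upsampling steps recover the original resolution; the LabelMe annotations would be rasterized to per-pixel class masks for training.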

© 2023 SPIE and IS&T
Yongquan Xia, Yiqing Li, Qianqian Ye, and Jianhua Dong "Image segmentation for blind lanes based on improved SegNet model," Journal of Electronic Imaging 32(1), 013038 (17 February 2023). https://doi.org/10.1117/1.JEI.32.1.013038
Received: 15 October 2022; Accepted: 6 February 2023; Published: 17 February 2023
KEYWORDS
Image segmentation
Semantics
Education and training
Convolution
Feature extraction
Deep learning
Image processing