A multimodal vision sensor for autonomous driving
7 October 2019
Abstract
This paper describes a multimodal vision sensor that integrates three types of cameras: a stereo camera, a polarization camera, and a panoramic camera. Each sensor provides a specific dimension of information: the stereo camera measures per-pixel depth, the polarization camera obtains the degree of polarization, and the panoramic camera captures a 360° view of the surroundings. Data fusion and advanced environment perception can be built on this combination of sensors. Designed especially for autonomous driving, the vision sensor ships with a robust semantic segmentation network. In addition, we demonstrate how cross-modal enhancement can be achieved by registering the color image with the polarization image, and give an example of water hazard detection. To prove the multimodal vision sensor's compatibility with different devices, a brief runtime performance analysis is carried out.
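The degree of polarization that the abstract attributes to the polarization camera is conventionally recovered from intensities measured behind polarizers at 0°, 45°, 90°, and 135° via the linear Stokes parameters. A minimal sketch of that standard computation (the function name and epsilon guard are illustrative, not the authors' code):

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """Per-pixel degree of linear polarization (DoLP) from four
    intensity images taken behind polarizers at 0/45/90/135 degrees."""
    # Linear Stokes parameters
    s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
    s1 = np.asarray(i0, dtype=np.float64) - i90   # horizontal vs. vertical
    s2 = np.asarray(i45, dtype=np.float64) - i135 # +45 vs. -45 diagonal
    # DoLP in [0, 1]; guard against division by zero in dark pixels
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)
```

Fully linearly polarized light at 0° (e.g. i0 = 1, i90 = 0, i45 = i135 = 0.5) yields a DoLP of 1, while unpolarized light (all four intensities equal) yields 0, which is the property the sensor exploits for cues such as water hazard detection.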
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Dongming Sun, Xiao Huang, and Kailun Yang "A multimodal vision sensor for autonomous driving", Proc. SPIE 11166, Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies III, 111660L (7 October 2019); https://doi.org/10.1117/12.2535552
CITATIONS
Cited by 2 scholarly publications.
KEYWORDS: Sensors, Cameras, Stereoscopic cameras, Image segmentation, RGB color model, 3D modeling, LIDAR
