Paper
3 June 2014
Temporally consistent segmentation of point clouds
Jason L. Owens, Philip R. Osteen, Kostas Daniilidis
Abstract
We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly generated RGB-D datasets.
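The temporal-consistency step described above can be sketched in code. The fragment below is a minimal illustration, not the paper's actual pipeline: it assumes the new frame has already been aligned into the model coordinate frame, uses a simple nearest-neighbor search in place of the voxel connectivity structure, and collapses all unmatched points into a single new segment where the paper would re-run its supervoxel over-segmentation. All function and parameter names (`propagate_labels`, `radius`) are illustrative.

```python
import numpy as np

def propagate_labels(model_pts, model_labels, new_pts, radius=0.05):
    """Carry existing segment labels forward to an aligned incoming frame.

    Each incoming point inherits the label of the nearest previously
    labeled model point within `radius` (meters); points with no nearby
    labeled neighbor are grouped into one new segment, standing in for a
    fresh over-segmentation of the unexplained region.
    """
    model_pts = np.asarray(model_pts, dtype=float)
    new_pts = np.asarray(new_pts, dtype=float)
    model_labels = np.asarray(model_labels, dtype=int)

    next_label = model_labels.max() + 1 if model_labels.size else 0
    new_labels = np.full(len(new_pts), -1, dtype=int)

    if model_pts.size:
        for i, p in enumerate(new_pts):
            # Brute-force nearest neighbor; a k-d tree would be used in practice.
            d2 = np.sum((model_pts - p) ** 2, axis=1)
            j = np.argmin(d2)
            if d2[j] <= radius ** 2:
                new_labels[i] = model_labels[j]

    # Unmatched points seed new segmentation (here: one new label).
    unlabeled = new_labels < 0
    if unlabeled.any():
        new_labels[unlabeled] = next_label
    return new_labels
```

For example, with two labeled model points at the origin and at x = 1, an incoming frame containing points near each of them plus one distant point would keep the two existing labels and assign the distant point a fresh label.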
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jason L. Owens, Philip R. Osteen, and Kostas Daniilidis "Temporally consistent segmentation of point clouds", Proc. SPIE 9084, Unmanned Systems Technology XVI, 90840H (3 June 2014); https://doi.org/10.1117/12.2050666
CITATIONS
Cited by 1 scholarly publication and 1 patent.
KEYWORDS
Clouds, Image segmentation, Cameras, Detection and tracking algorithms, RGB color model, Motion models, Motion estimation
