Real-time viewpoint image synthesis using strips of multi-camera images
17 March 2015
Proceedings Volume 9391, Stereoscopic Displays and Applications XXVI; 939109 (2015)
Event: SPIE/IS&T Electronic Imaging, 2015, San Francisco, California, United States
We present a real-time viewpoint image generation method. Video communication with a high sense of reality is needed to make natural connections between users at different places, and one of the key technologies for achieving it is image generation matched to each individual user's viewpoint. However, viewpoint image generation usually requires advanced image processing that is too computationally heavy for real-time, low-latency use. In this paper we propose a real-time viewpoint image generation method that simply blends multiple camera images taken at equal horizontal intervals, with convergence obtained from approximate information about an object's depth. An image blended from the nearest camera images is visually perceived as an intermediate viewpoint image due to the visual effect of depth-fused 3D (DFD). If the viewpoint is not on the line of the camera array, a viewpoint image can still be generated by region splitting. We built a prototype viewpoint image generation system and achieved real-time full-frame operation for stereo HD video. Users can see their individual viewpoint images for both left-and-right and back-and-forth movement relative to the screen. Our algorithm is very simple and promising as a means of achieving video communication with a high sense of reality.
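The core idea described in the abstract, blending the two camera images nearest to the desired viewpoint with distance-proportional weights so that the result is perceived as an intermediate view via the DFD effect, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, the assumption of unit camera spacing along one axis, and the plain linear cross-fade are ours.

```python
import numpy as np

def interpolate_viewpoint(cam_images, cam_spacing, viewpoint_x):
    """Blend the two camera images nearest to a horizontal viewpoint.

    cam_images: list of HxWx3 float arrays from cameras placed at equal
    horizontal intervals (camera i sits at x = i * cam_spacing).
    viewpoint_x: desired horizontal viewpoint position on the camera line.

    Weights vary linearly with the viewpoint's distance to each of the
    two nearest cameras; the blend is perceived as an intermediate
    viewpoint image through the depth-fused 3D (DFD) effect.
    """
    # Fractional camera index of the desired viewpoint
    t = viewpoint_x / cam_spacing
    # Index of the left neighbour, clamped so i+1 stays in range
    i = int(np.clip(np.floor(t), 0, len(cam_images) - 2))
    w = t - i  # 0.0 at camera i, 1.0 at camera i+1
    return (1.0 - w) * cam_images[i] + w * cam_images[i + 1]
```

Because the blend is a single weighted sum per pixel, it runs in real time without any disparity estimation, which is what makes the approach attractive for low-latency video communication.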
© (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Munekazu Date, Hideaki Takada, and Akira Kojima, "Real-time viewpoint image synthesis using strips of multi-camera images", Proc. SPIE 9391, Stereoscopic Displays and Applications XXVI, 939109 (17 March 2015).
