Stereoscopy has recently gained considerable popularity, and technologies for viewing stereoscopic images and movies are spreading in theaters and homes, becoming affordable even for home users. However, there are some golden rules that users should follow to better enjoy stereoscopic images; first of all, the viewing conditions should not differ too much from the ideal ones assumed during the production process.
To let the user perceive stereo depth instead of a flat image, two different views of the same scene are presented to the subject, one seen only by the left eye and the other only by the right; the visual system merges the two images into a virtual three-dimensional scene, giving the user the perception of depth.
The two images presented to the user are created, either by image synthesis or by more traditional techniques, following the rules of perspective. These rules require some boundary conditions to be made explicit, such as eye separation, field of view, parallax distance, and viewer position and orientation.
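As an illustration of how these parameters interact (a minimal sketch under the standard symmetric-frustum stereo model, not a formula taken from this paper; the function name and parameter choices are ours):

```python
def screen_parallax(eye_separation, screen_distance, point_depth):
    """Horizontal on-screen parallax for a scene point at `point_depth`
    from the viewer, with the display plane at `screen_distance`
    (all quantities in the same units, e.g. meters).

    Positive parallax: the point appears behind the screen;
    zero: on the screen plane; negative: in front of it.
    """
    return eye_separation * (point_depth - screen_distance) / point_depth

# A point on the screen plane has zero parallax; as the point recedes
# to infinity, the parallax approaches the eye separation itself.
print(screen_parallax(0.065, 2.0, 2.0))  # 0.0
print(screen_parallax(0.065, 2.0, 4.0))  # 0.0325
```

A viewer who sits closer or farther than the assumed `screen_distance`, or off-axis, fuses these fixed on-screen parallaxes under a different geometry, which is precisely why the reconstructed virtual scene can become distorted.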
In this paper we study how deviations of the viewer position and orientation from the ideal ones, specified as parameters during image creation, affect the correctness of the reconstruction of the three-dimensional virtual scene.
The interest in the production of stereoscopic content is growing rapidly. Stereo material can be produced using different solutions, from high-end devices to suitably coupled standard digital cameras. In the latter case, color correction in stereoscopic images is complex, due to possibly different Color Filter Arrays or settings in the two acquisition devices: users must often tune each camera separately, and this can lead to visible color differences within the stereo pair. The color correction methods usually considered in the post-processing stage of stereoscopic production are mainly based on global transformations between the two views, but this approach cannot fully recover significant gamut limitations in each image caused by color distortions. In this paper we evaluate the application of perceptually-based spatial color computational models, based on or inspired by Retinex theory, to pre-filter the stereo pairs. Spatial color algorithms apply an unsupervised local color correction to each pixel, based on a simulation of color perception mechanisms, and have been shown to effectively reduce color casts and adjust local contrasts in images. We filtered different stereoscopic streams with visible color differences between right and left frames, using a GPU version of the Random Spray Retinex (RSR) algorithm, which applies an unsupervised color correction in a few seconds, and the Automatic Color Equalization (ACE) algorithm, which considers both White Patch and Gray World equalization mechanisms. We analyse the effect of the computational models both by visual assessment and by considering the changes in the image gamuts before and after the filtering.
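To clarify the two equalization mechanisms that ACE combines, the following is a simplified global sketch (it is not the RSR or ACE algorithm itself, which operate locally and per pixel; function names and the toy image are ours):

```python
def white_patch_gains(pixels):
    """Per-channel gain mapping each channel's maximum to white (255):
    a global simplification of the White Patch mechanism."""
    maxima = [max(p[c] for p in pixels) for c in range(3)]
    return [255.0 / m if m > 0 else 1.0 for m in maxima]

def gray_world_gains(pixels):
    """Per-channel gain mapping each channel's mean to mid-gray (128):
    a global simplification of the Gray World mechanism."""
    means = [sum(p[c] for p in pixels) / len(pixels) for c in range(3)]
    return [128.0 / m if m > 0 else 1.0 for m in means]

def apply_gains(pixels, gains):
    """Scale each RGB pixel by the per-channel gains, clipping at 255."""
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]

# A bluish-tinted patch: both mechanisms boost red/green relative to blue,
# reducing the color cast.
img = [(100, 100, 160), (50, 50, 110), (200, 200, 255)]
corrected = apply_gains(img, white_patch_gains(img))
```

When the left and right frames carry different casts of this kind, applying such a perception-inspired correction independently to each view tends to bring their gamuts closer together, which is the effect the paper measures.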