When we perform a visual analysis of a photograph of a cosmic object, contrast plays a fundamental role. A linear distribution of the observable values is not necessarily the best possible for the Human Visual System (HVS). In fact, the HVS has a non-linear response, and exploits contrast locally, with different stretching for areas of different lightness. As a consequence, according to the observation task, local contrast can be adjusted to make the detection of relevant information easier. The proposed approach is based on Spatial Color Algorithms (SCAs) that mimic HVS behavior. These algorithms compute each pixel value by a spatial comparison with all (or a subset of) the other pixels of the image. The comparison can be implemented as a weighted difference or as a ratio product over a given sampling of the neighboring region. A final mapping exploits the full available dynamic range. In the case of color images, SCAs process the three chromatic channels separately, producing an effect of color normalization without introducing channel cross-correlation. We present very promising results on amateur photographs of deep-sky objects. The results are presented both for a qualitative, subjective visual evaluation and for a quantitative evaluation through image quality measures, in particular to quantify the effect of the algorithms on noise. Moreover, our results help to better characterize contrast measures.
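To make the pixel-wise comparison concrete, here is a minimal sketch in the spirit of spray-based SCAs: each pixel is rescaled against the maximum of a random spatial sample, channel by channel, followed by a global stretch. This is an illustrative toy under our own simplifying assumptions, not the exact algorithm evaluated here; real SCAs use more elaborate sampling schemes and comparison operators.

```python
import numpy as np

def sca_filter(img, n_samples=50, seed=0):
    """Toy spatial color filter: img is a float HxWx3 image in [0, 1].
    Each pixel is compared (here: divided) against a random spatial
    sample of other pixels, independently per chromatic channel."""
    rng = np.random.default_rng(seed)
    h, w, c = img.shape
    flat = img.reshape(-1, c)
    n = h * w
    # One random set of comparison pixels per output pixel.
    # (O(n * n_samples) memory: fine for a sketch, not for production.)
    idx = rng.integers(0, n, size=(n, n_samples))
    out = np.empty_like(flat)
    for ch in range(c):
        ref = flat[idx, ch].max(axis=1) + 1e-6  # local white estimate (reset)
        out[:, ch] = np.minimum(flat[:, ch] / ref, 1.0)
    # Final mapping: stretch to exploit the full available dynamic range.
    out = (out - out.min()) / (out.max() - out.min() + 1e-6)
    return out.reshape(h, w, c)
```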
Brain-Computer Interfaces (BCIs) provide users with communication and control capabilities by analyzing their brain activity. One technique for implementing BCIs, recently used also in Virtual Reality (VR) environments, is based on the detection of Steady State Visual Evoked Potentials (SSVEPs). Exploiting the SSVEP response, a BCI can be implemented by showing targets flickering at different frequencies and detecting which one the observer is gazing at by analyzing his or her electroencephalographic (EEG) signals. In this work, we evaluate the use of stereoscopic displays for the presentation of SSVEP-eliciting stimuli, comparing the effectiveness of monoscopic and stereoscopic stimuli. Moreover, we propose a novel method to elicit SSVEP responses that exploits the capability of stereoscopic displays to present dichoptic stimuli. We have created an experimental scene to present flickering stimuli on an active stereoscopic display, obtaining reliable control of the targets' frequency independently for the two stereo views. Using an EEG acquisition device, we analyzed the SSVEP responses of a group of subjects. From the preliminary results, we obtained evidence that stereoscopic displays are valid devices for the presentation of SSVEP stimuli. Moreover, the use of different flickering frequencies for the two views of a single stimulus proved to elicit non-linear interactions between the stimulation frequencies, clearly visible in the EEG signal. This suggests interesting applications for SSVEP-based BCIs in VR environments, able to overcome some limitations imposed by the refresh rate of standard displays, as well as the use of commodity stereoscopic displays to implement binocular rivalry experiments.
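As a hedged illustration of the frequency-detection step described above (the paper does not specify its analysis pipeline at this level of detail), a classic SSVEP classifier simply picks the stimulation frequency whose spectral power, including a harmonic, dominates the EEG:

```python
import numpy as np

def detect_ssvep(eeg, fs, target_freqs, tol=0.2):
    """eeg: 1-D signal from an occipital channel; fs: sampling rate in Hz.
    Returns the flicker frequency with the highest spectral power."""
    windowed = eeg * np.hanning(eeg.size)          # reduce spectral leakage
    power = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)

    def band_power(f):
        return power[(freqs > f - tol) & (freqs < f + tol)].sum()

    # Score each candidate by fundamental + second harmonic power.
    scores = [band_power(f) + band_power(2 * f) for f in target_freqs]
    return target_freqs[int(np.argmax(scores))]
```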
The research project intends to demonstrate how EEG detection through a BCI device can improve the analysis and interpretation of colour-driven cognitive processes through the combined approach of cognitive science and information technology methods. To this end, we first designed an experiment comparing the results of the traditional (qualitative and quantitative) cognitive analysis approach with the analysis of evoked potentials in the EEG signal. In our case, the sensory stimulus is colour, while the cognitive task consists of remembering words appearing on the screen, shown with different combinations of foreground (word) and background colours.
In this work we analysed data collected from a sample of students involved in a learning process during which they received visual stimuli based on colour variation. The stimuli concerned both the background of the text to be learned and the colour of the characters. The experiment produced some interesting results concerning the use of primary (RGB) and complementary (CMY) colours.
We present a methodology to calculate the color appearance of advertising billboards set in indoor and outdoor environments, printed on different types of paper support, and viewed under different illuminations. The aim is to simulate the visual appearance of an image printed on a specific support, observed in a certain context, and illuminated by a specific light source. Knowing the visual rendering of an image in different conditions in advance can avoid problems related to its visualization. The proposed method applies a sequence of transformations to convert a four-channel (CMYK) image into a spectral one, taking the paper support into account; it then simulates the chosen illumination, and finally computes an estimate of the appearance.
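The core of such a pipeline, once a per-pixel spectral reflectance has been obtained (the CMYK-to-spectral conversion and the paper model are omitted here), reduces to integrating reflectance times illuminant against the color-matching functions. A minimal sketch, assuming all curves are sampled on the same wavelength grid:

```python
import numpy as np

def spectral_to_xyz(reflectance, illuminant, cmf):
    """reflectance: (..., n_bands) per-pixel spectra; illuminant: (n_bands,)
    spectral power distribution; cmf: (n_bands, 3) color-matching functions."""
    radiance = reflectance * illuminant      # light leaving the printed support
    xyz = radiance @ cmf                     # integrate against x, y, z curves
    k = 100.0 / (illuminant @ cmf[:, 1])     # normalize: perfect white has Y=100
    return k * xyz
```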
Stereoscopic visualization in cinematography and Virtual Reality (VR) creates an illusion of depth by means of two bidimensional images corresponding to different views of a scene. This perceptual trick is used to enhance the emotional response and the observers' sense of presence and immersion. An interesting question is whether and how it is possible to measure and analyze the level of emotional involvement and attention of the observers during a stereoscopic visualization of a movie or a virtual environment. These research aims represent a challenge, due to the large number of sensory, physiological and cognitive stimuli involved. In this paper we begin this research by analyzing possible differences in the brain activity of subjects during the viewing of monoscopic and stereoscopic contents. To this aim, we performed some preliminary experiments collecting electroencephalographic (EEG) data from a group of users through a Brain-Computer Interface (BCI) during the viewing of stereoscopic and monoscopic short movies in a VR immersive installation.
The relationship between color and lightness appearance and the perception of depth has long been studied in the fields of perceptual psychology and psychophysiology. It has been found that depth perception affects the final object color and lightness appearance. In the stereoscopy research field, many studies have addressed human physiological effects, considering e.g. geometry, motion sickness, etc., but little has been done concerning lightness and color information. The goal of this paper is to carry out some preliminary experiments in Virtual Reality in order to determine the effects of depth perception on object color and lightness appearance. We have created a virtual test scene with a simple 3D simultaneous contrast configuration, in three different versions, each with different choices of relative positions and apparent sizes of the objects. We have collected the perceptual responses of several users after the observation of the test scene in the Virtual Theater of the University of Milan, a VR immersive installation characterized by a semi-cylindrical screen that covers 120° of horizontal field of view from an observation distance of 3.5 m. We describe the experimental setup and procedure, and we discuss the obtained results.
The interest in the production of stereoscopic content is growing rapidly. Stereo material can be produced using different solutions, from high-end devices to suitably coupled standard digital cameras. In the latter case, color correction of stereoscopic images is complex, due to possibly different Color Filter Arrays or settings in the two acquisition devices: users must often tune each camera separately, and this can lead to visible color differences within the stereo pair. The color correction methods usually considered in the post-processing stage of stereoscopic production are mainly based on global transformations between the two views, but this approach cannot completely recover significant gamut limitations in each image caused by color distortions. In this paper we evaluate the application of perceptually based spatial color computational models, based on or inspired by Retinex theory, to pre-filter the stereo pairs. Spatial color algorithms apply an unsupervised local color correction to each pixel, based on a simulation of color perception mechanisms, and have been proven to effectively reduce color casts and adjust local contrast in images. We filtered different stereoscopic streams with visible color differences between right and left frames, using a GPU version of the Random Spray Retinex (RSR) algorithm, which applies an unsupervised color correction in a few seconds, and the Automatic Color Equalization (ACE) algorithm, which considers both White Patch and Gray World equalization mechanisms. We analyse the effect of the computational models both by visual assessment and by considering the changes in the image gamuts before and after the filtering.
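The gamut comparison can be quantified in several ways; as one hedged example (the paper does not prescribe a specific implementation), the occupied cells of a coarsely quantized RGB cube can be counted for each view and compared before and after filtering:

```python
import numpy as np

def gamut_cells(img, bins=32):
    """img: uint8 HxWx3 frame. Returns the set of occupied RGB cube cells."""
    q = img.reshape(-1, 3).astype(np.int64) // (256 // bins)
    return set(q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2])

def gamut_overlap(left, right, bins=32):
    """Jaccard similarity between the gamuts of the two views."""
    gl, gr = gamut_cells(left, bins), gamut_cells(right, bins)
    return len(gl & gr) / max(len(gl | gr), 1)
```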
The study of humans' perceptual experiences in Virtual Environments (VEs) plays an essential role in the Virtual Reality (VR) research field. In particular, in recent years several studies have addressed the question of whether depth and distance are perceived in VEs as they are in Real Environments (REs), and possibly which conditions cause incorrect estimation by observers. This problem is highly relevant when VR is used as a supporting tool in fields where correct perception of space and distance is vital, e.g. the training of personnel in dangerous environments. Many theories have been suggested regarding the combination of and relation between different depth cues; unfortunately, no conclusive answer has been proposed. However, a common conclusion across all the experiments is that observers underestimate long distances in VEs. Although the causes of this phenomenon are still uncertain, it is reasonable to speculate that something must differ in the way distance and depth are extracted and processed between the RE and the VE. Moreover, it is worth noting that very few works have considered VR installations with large projection screens covering a large field of view (FOV). In this paper, we investigate depth perception in the Virtual Theater of the University of Milan, a VR installation characterized by a large semi-cylindrical screen that covers 120° of horizontal FOV. Given its characteristics, the Virtual Theater represents an interesting and previously unexplored testing ground for psychophysical experiments on depth perception in VEs. We present some preliminary perceptual matching experiments regarding the effect of shadows and reflections on the estimation of distances in VEs, and we discuss the obtained results.
In recent years, significant efforts have been devoted to the development of advanced technological solutions for immersive visualization of Virtual Reality (VR) scenarios, with particular attention to stereoscopic image formation. Among the various solutions proposed, INFITEC™ technology is particularly interesting, because it allows the reproduction of a more accurate chromatic range than anaglyph- or polarization-based approaches. Recently, this technology was adopted in the Virtual Theater of the University of Milan, an immersive VR installation used for research purposes in the fields of human-machine interaction and photorealistic, perception-based visualization of virtual scenarios. In this paper, we present a first set of measurements aimed at an accurate chromatic, colorimetric and photometric characterization of this visualization system. The acquired data are analyzed in order to evaluate the effective inter-calibration among the four devices and to obtain an accurate description of the actual effect of the INFITEC™ technology. This analysis will be the basis for the future integration of visual perception and color appearance principles into the visualization pipeline, and for the development of robust computational models and instruments for correct color management in the visualization of immersive virtual environments.
KEYWORDS: Sensors, Cones, Retina, Color vision, Multispectral imaging, Visual system, Human vision and color perception, Resolution enhancement technologies, Machine vision, Human subjects
In principle, an artificial retina should mimic the spectral sensitivities of the real retina as closely as possible. For technological reasons, building such an artificial device can lead to spectral approximations of the real sensitivities. To understand whether possible discrepancies can determine big differences in the final perception, the whole visual system should be taken into consideration, not only the difference in the retinal input signal. This paper aims to investigate how retinal sensitivity differences affect the final perception. However, answering this question is a very complex problem involving the whole visual system, which we do not attempt to address exhaustively in this paper. We only want to investigate the relationship between the spatial aspects of color perception and the spectral differences among cone sensitivities. Moreover, inter-individual differences in cone spatial distribution have been observed among human subjects, without any corresponding significant difference in the final color sensation. It is likely that a spatial compensation, performed by human observers, strongly decreases this subjectivity in the color signal. We aim to address whether a similar principle should be considered in artificial vision. In this paper we analyze the interdifference among integrated values obtained using different organic-based artificial sensors with different spectral sensitivities. Experiments show a significant decrease of the effect of spectral sensitivity differences between sensors when a spatial color correction is applied.
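A minimal sketch of the comparison performed here, under our own simplifying assumptions (the actual sensitivity curves and spatial algorithm are not reproduced): the same spectra are integrated with two sensor sets, a gray-world-style spatial normalization is applied, and the residual inter-sensor difference is measured.

```python
import numpy as np

def sensor_responses(spectra, sensitivities):
    """spectra: (n_pixels, n_bands); sensitivities: (n_bands, 3) curves."""
    return spectra @ sensitivities

def spatial_correction(responses):
    """Normalize each channel by its spatial mean (gray-world assumption),
    a stand-in for the full spatial color correction used in the paper."""
    return responses / (responses.mean(axis=0, keepdims=True) + 1e-9)

def interdifference(resp_a, resp_b):
    """Mean absolute difference between two sensors' integrated values."""
    return float(np.abs(resp_a - resp_b).mean())
```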
KEYWORDS: Electronic filtering, RGB color model, Visualization, Color vision, Information visualization, Image processing, Multispectral imaging, High dynamic range imaging, Colorimetry, Cones
We explore how an RGB representation can be computed from a spectral description of real images containing many different colors. To pass from a spectral distribution to tristimulus values, several color-matching functions (CMFs) have been proposed, derived from experimental setups with simplified visual conditions that consider colors pointwise and independently of any visual context. A high interdifference is observed in cone spatial distribution among human subjects, without any corresponding significant difference in the final color sensation. It is likely that human observers perform a spatial compensation that strongly decreases the subjectivity in color perception, and we ask whether a similar principle should be considered in digital imaging. We investigate the interdifference among some CMFs when they are used to compute color information in complex visual conditions, where multiple, spatially distributed colors are present. This is relevant in the synthetic generation of images of scenes under different illuminants, computed as spectral energy distributions at 5-nm intervals, to be converted into RGB for monitor display. The analysis of the interdifference among tristimulus colors obtained using different CMFs shows a significant decrease of the interdifference when a contextual color correction, based on Von Kries or Retinex methods, is applied.
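As an illustration of the Von Kries branch of that contextual correction, the simplest diagonal form scales each channel by an adapting value estimated from the image itself. This is a sketch under the gray-world assumption; a full Von Kries transform operates in a cone-response space with an explicit reference white, which is omitted here.

```python
import numpy as np

def von_kries_like(tristimulus):
    """tristimulus: (H, W, 3) image of XYZ or RGB values.
    Diagonal scaling by the spatial mean as the adapting stimulus."""
    adapt = tristimulus.reshape(-1, 3).mean(axis=0)
    return tristimulus / (adapt + 1e-9)
```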
KEYWORDS: Electronic filtering, High dynamic range imaging, Information visualization, RGB color model, Multispectral imaging, Visualization, Associative arrays, Human vision and color perception, Color vision, Colorimetry
In the real world no color exists; only spectral light distributions interact to form the final color sensation. This paper presents preliminary experiments whose purpose is to test the robustness of a spatial color computation with respect to changes in the acquisition of spectral information. The basic idea is that the human visual system has evolved into a robust system for acquiring visual information, in this case color, adapting to varying illumination conditions to guarantee color constancy. The presented experiments test changes in the output of a Retinex-derived tone mapping operator while varying illuminants and color-matching function curves. Synthetic high dynamic range multispectral images have been computed by a photometric ray tracer using different illuminants. Then, using standard and modified color-matching functions, a set of high dynamic range RGB images has been created. This set has been converted to standard RGB images using both a linear tone mapping algorithm with no spatial color computation and a Retinex-based one that performs a spatial color normalization. A discussion of the results is presented.
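The linear, non-spatial baseline used in such tests can be as simple as the following sketch (a global exposure-style scaling plus clipping; the percentile white point is our assumption). The Retinex-based operator replaces this single global scale with a local, per-pixel spatial computation.

```python
import numpy as np

def linear_tmo(hdr, percentile=99.0):
    """hdr: float HxWx3 radiance map. Maps it to a displayable [0, 1] range
    by a single global scale, with no spatial color computation."""
    scale = np.percentile(hdr, percentile)   # robust global white point
    return np.clip(hdr / (scale + 1e-9), 0.0, 1.0)
```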
In this paper we present a tone mapping operator (TMO) for High Dynamic Range images, inspired by the adaptive mechanisms of the human visual system. The proposed TMO is able to perform color constancy without a priori information about the scene; this is a consequence of its HVS inspiration. In our opinion, color constancy is very useful in a TMO, since we assume that it is preferable to look at an image that reproduces the color sensation rather than one that follows classic photographic reproduction. Our proposal starts from an analysis of the Retinex and ACE algorithms. We then extend ACE to HDR images, introducing novel features: two non-linear controls, the first of which allows the model to find a good trade-off between visibility and color distribution by modifying the local operator at each pixel-to-pixel comparison, while the second modifies the interaction between pixels by estimating the local contrast. Solutions towards unsupervised parameter tuning are proposed.
In this paper we present tests and results of an automatic color fading restoration process for digitized movies. The proposed color correction method is based on the ACE model, an unsupervised color equalization algorithm based on a perceptual approach and inspired by some mechanisms of the human visual system. This perceptual approach is local and robust, and does not need any user region selection or any other user supervision. However, the model has a small number of parameters that have to be set once before the filtering. The tests presented in this paper aim to study these parameters and determine their effect on the final result.
KEYWORDS: Light sources and illumination, Image processing, RGB color model, Visualization, Light sources, Bidirectional reflectance transmission function, Ray tracing, Monte Carlo methods, Image compression, Digital imaging
There is a class of non-linear filtering algorithms for digital color enhancement characterized by a data-driven local effect and a high computational cost. In this paper we propose a new method, called LLL (Local Linear LUT), to speed up these filters without losing their local effect. LUT-based methods are usually global, while our approach applies the principles of LUT transformation locally. The main idea of the proposed method is to apply the algorithm to a small sub-sampled version of the original image and to employ a modified Look-Up Table technique to maintain the local filtering effect of the original algorithm. In this way three functions, one for each chromatic channel, are created for each pixel of the original image. These functions filter the original full-size image in a very short time. We have tested LLL with two of these filters, a Brownian Retinex implementation (BR) and ACE (Automatic Color Equalization), whose computational cost is very high. The proposed method increases the speed of color filtering algorithms by reducing the number of pixels involved in the computation through sub-sampling of the original image. Results, comparisons and conclusions are presented.
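A minimal sketch of the LLL idea as described above, with nearest-neighbour upscaling and a pure per-pixel gain (rather than a general per-pixel function) as simplifying assumptions of ours:

```python
import numpy as np

def lll_speedup(img, expensive_filter, factor=8):
    """img: float HxWx3 in [0, 1]; expensive_filter: callable applied only
    to the small image. Returns the full-size image filtered via local gains."""
    small = img[::factor, ::factor]              # cheap sub-sampled copy
    filtered = expensive_filter(small)           # costly algorithm, small input
    gain = filtered / (small + 1e-6)             # per-pixel, per-channel mapping
    # Upscale the mapping to full resolution and apply it to the original.
    gain_full = np.repeat(np.repeat(gain, factor, axis=0), factor, axis=1)
    gain_full = gain_full[:img.shape[0], :img.shape[1]]
    return np.clip(img * gain_full, 0.0, 1.0)
```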
KEYWORDS: Colorimetry, Image processing, Image enhancement, Visual system, Lanthanum, Information technology, Visualization, Visual process modeling, Information visualization, Data corrections
Cinematographic archives represent an important part of our collective memory. We present in this paper some advances in automating the color fading restoration process, especially with regard to the automatic color correction technique. The proposed color correction method is based on the ACE model, an unsupervised color equalization algorithm based on a perceptual approach and inspired by some adaptation mechanisms of the human visual system, in particular lightness constancy and color constancy. A perceptual approach has some advantages: mainly its robustness and its local filtering properties, which lead to more effective results. The resulting technique is not just an application of ACE to movie images, but an extension of ACE principles to meet the requirements of the digital film restoration field. The presented preliminary results are satisfying and promising.
Various image databases have been developed so far to test color constancy algorithms. Each differs in its image characteristics, according to the features to be tested. In this paper we present a new image database, created at the University of Milan. Since a database cannot contain all possible types of images, some choices are necessary to limit the number of images, and these choices should be as neutral as possible. The first image detail that we have addressed is the background: which background is the most suitable for a color constancy test database? This choice can be affected by the goal of the color correction algorithms. In developing this DB we tried to consider a large number of possible approaches, considering color constancy in a broad sense. Images under standard illuminants are presented together with images under particular non-standard light sources. In particular, we collected two groups of lamps: one with weak and one with strong color casts. Another interesting feature is the presence of shadows, which allows testing the local effects of color correction algorithms. The proposed DB can be used to test algorithms that recover the corresponding colors under standard reference illuminants or, alternatively, assuming a visual appearance approach, to test algorithms for their capability to minimize color variations across the different illuminants, thus performing a perceptual color constancy. This second approach is used to present preliminary tests. The IDB will be made available on the web.
KEYWORDS: Image processing, Visual process modeling, Light sources and illumination, RGB color model, Ray tracing, Visualization, Scattering, Computer simulations, Monte Carlo methods, Visual system
In the Photorealistic Image Synthesis process the spectral content of the synthetic scene is carefully reproduced, and the final output contains the exact spectral intensity light field of the perceived scene. This is the first important step toward the goal of producing a synthetic image that is indistinguishable from the actual one, but the real scene and its synthetic reproduction should be studied under the same conditions in order to make a correct comparison and evaluate the degree of photorealism. To simplify this goal, a synthetic observer could be employed to compensate for differences in the viewing conditions, since a real observer cannot enter a synthetic world. Various solutions have been proposed to this end. Most of them are based on perceptual measures of the Human Visual System (HVS) under controlled conditions rather than on HVS behaviour under real conditions, e.g., observing an ordinary image rather than a controlled black-and-white striped pattern. Another problem in synthetic image generation is the visualization phase, or tone reproduction, whose purpose is to display the final result of the simulation model on a monitor screen or on printed paper. The tone reproduction problem consists of finding the best solution to compress the extended dynamic range of the computed light field into the limited range of the displayable colors. We propose a working hypothesis to solve the appearance and tone reproduction problems in synthetic image generation by integrating the Retinex model into the photorealistic image synthesis context, thus including a model of the human visual system in the synthesis process.
KEYWORDS: Image filtering, Colorimetry, Visualization, Visual process modeling, Digital imaging, Algorithm development, RGB color model, Databases, Cameras, Visual system
Color equalization algorithms exhibit a variety of behaviors described by two differing types of models: Gray World and White Patch. These two models are considered alternatives to each other in color correction methods, and they correspond to two human visual adaptation mechanisms: Lightness Constancy and Color Constancy. The Gray World approach is typical of Lightness Constancy adaptation because it centers the histogram dynamics, working like the exposure control of a camera. The White Patch approach, on the other hand, is typical of Color Constancy adaptation, searching for the lightest patch to use as a white reference, similarly to what the human visual system does. The Retinex algorithm basically belongs to the White Patch family due to its reset mechanism. Searching for a way to merge these two approaches, we have developed a new chromatic correction algorithm, called Automatic Color Equalization (ACE), which is able to perform Color Constancy even though it is based on the Gray World approach. It maintains the main Retinex idea that the color sensation derives from the comparison of spectral lightness values across the image. We tested different performance measures on ACE, Retinex and other equalization algorithms, and the results of this comparison are presented.
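To contrast the two assumptions ACE merges, here are the two mechanisms in their simplest global form (ACE itself performs a local, spatial combination; the mid-gray target of 0.5 is our assumption):

```python
import numpy as np

def gray_world(img):
    """Recenter each channel on its mean, like an exposure control."""
    return np.clip(img * (0.5 / (img.mean(axis=(0, 1)) + 1e-9)), 0.0, 1.0)

def white_patch(img):
    """Rescale each channel so its lightest value becomes the white reference."""
    return np.clip(img / (img.max(axis=(0, 1)) + 1e-9), 0.0, 1.0)
```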
In the field of cultural heritage restoration, experts are extremely interested in the analysis of the large amounts of data describing the condition and history of ancient monuments. In this paper we describe a method, and its implementation, for providing high-quality photorealistic image synthesis of ancient buildings over the Internet through VRML and Java technology. A network-based Java application manages geometric 3D VRML models of an ancient building, providing an interface to add information and to compute high-quality photorealistic snapshots of the entire model or any of its parts. The low-quality VRML real-time rendering is complemented by a slower but more accurate rendering computed on a radiometric basis. The input data for this advanced rendering are taken from the geometric VRML model. We have also implemented some extensions to provide spectral data, including measurements of light and materials obtained experimentally. The interface to access the ancient building database is the descriptive VRML model itself. The Java application enhances the interaction with the model to provide and manage high-quality images, allowing visual qualitative evaluation of restoration hypotheses through a tool that previews the appearance of the resulting image under assigned lighting conditions.
We have examined the performance of various color-based retrieval strategies when coupled with a pre-filtering Retinex algorithm to see whether, and to what degree, Retinex improved the effectiveness of the retrieval, regardless of the strategy adopted. The retrieval strategies implemented included color and spatial-chromatic histogram matching, color coherence vector matching, and the weighted sum of the absolute differences between the first three moments of each color channel. The experimental results are reported and discussed.
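As one hedged example of the moment-based strategy mentioned above, the descriptor and distance can be sketched as follows (uniform weights are our assumption; the paper's exact weighting is not reported here):

```python
import numpy as np

def color_moments(img):
    """img: float HxWx3. Returns a (3, 3) array: three moments per channel
    (mean, standard deviation, cube root of the third central moment)."""
    x = img.reshape(-1, 3)
    mu = x.mean(axis=0)
    sigma = x.std(axis=0)
    skew = np.cbrt(((x - mu) ** 3).mean(axis=0))
    return np.stack([mu, sigma, skew])

def moments_distance(img_a, img_b, weights=None):
    """Weighted sum of absolute differences between the moment descriptors."""
    d = np.abs(color_moments(img_a) - color_moments(img_b))
    w = np.ones_like(d) if weights is None else weights
    return float((w * d).sum())
```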
Photorealistic Image Synthesis is a relevant research and application field in computer graphics, whose aim is to produce synthetic images that are indistinguishable from real ones. Photorealism is based upon accurate computational models of light-material interaction, which allow us to compute the spectral intensity light field of a geometrically described scene. The fundamental methods are ray tracing and radiosity. While radiosity allows us to compute the diffuse component of the emitted and reflected light, applying ray tracing in a two-pass solution lets us also cope with the non-diffuse properties of the model surfaces. Both methods can be implemented to generate an accurate photometric distribution of light in the simulated environment. A still open problem is the visualization phase, whose purpose is to display the final result of the simulated model on a monitor screen or on printed paper. The tone reproduction problem consists of finding the best solution to compress the extended dynamic range of the computed light field into the limited range of the displayable colors. Recently some scholars have addressed this problem by considering the perception stage of image formation, thus including a model of the human visual system in the visualization process. In this paper we present a working hypothesis to solve the tone reproduction problem of synthetic image generation, integrating the Retinex perception model into the photorealistic image synthesis context.
Visual retrieval by content in an Image DataBase (IDB) is still an open problem. So far, various methods with different semantic levels have been developed, for Internet search or off-line IDBs, but few of them take into account the user's perceptual point of view. Two features primarily used for visual retrieval in IDBs are shape and color. We focus our attention on color, from the perspective of color appearance. The Human Visual System (HVS) has adaptation mechanisms that cause the user to perceive the relative chromaticity of an area, rather than its absolute color. In addition, due to the acquisition process, color distortions are added to heterogeneous IDBs. Pictures of real objects must be digitized, and the acquisition process is composed of various stages and means, each one introducing unwanted color shifts. Moreover, color quantization and the device gamut can introduce additional distortion of the original color information. The overall result is a digital image that can significantly differ in color from the real object. For the user the image may still be easily recognizable, but the colors can vary widely from image to image, or for the same image acquired through different processes. For this reason, the user's perceptual point of view must be incorporated into the management of color. The idea presented in this paper adds a pre-filtering algorithm that simulates the HVS and discounts the acquisition color distortion in the query image as well as in each image in the IDB. Moreover, we suggest using, for image retrieval, a more perceptually linear chromatic distance in the color comparison.
Understanding chromatic adaptation is a necessary step toward solving the color constancy problem for a variety of application purposes. Retinex theory explains chromatic adaptation, as well as other color illusions, on visual perception principles. Based on this theory, we have derived an algorithm to solve the color constancy problem and to simulate chromatic adaptation. The evaluation of the result depends on the kind of application considered. Since our purpose is to contribute to the problem of color rendering on computer displays for photorealistic image synthesis, we have devised a specific test approach. A virtual 'Mondrian' patchwork has been created by applying a rendering algorithm with a photorealistic light model to generate images under different light sources. The trichromatic values of the computer-generated patches are the input data for the Retinex algorithm, which computes new color-corrected patches. The Euclidean distance in CIELAB space between the original and Retinex-corrected trichromatic values has been calculated, showing that the Retinex computational model is well suited to solving the color constancy problem without any information on the illuminant spectral distribution.
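The evaluation step reduces to a CIELAB Euclidean distance (ΔE*ab). A minimal sketch using the standard CIE conversion, assuming a D65 white point (the paper does not state which reference white was used):

```python
import numpy as np

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):  # D65 white (assumed)
    """Standard CIE XYZ -> CIELAB conversion."""
    t = np.asarray(xyz, dtype=float) / np.asarray(white)
    d = 6.0 / 29.0
    f = np.where(t > d ** 3, np.cbrt(t), t / (3 * d * d) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e(xyz1, xyz2):
    """Euclidean distance in CIELAB between two tristimulus arrays."""
    return np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2), axis=-1)
```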