Emerging electronic display technologies for cinema and television such as LED, OLED, laser and quantum dot are permitting greatly enhanced color gamuts via increasingly narrow-band primary emission spectra. A recent standard adopted for Ultra High Definition television, ITU-R Rec. 2020, promotes RGB primary chromaticities coincident with the spectral locus. As displays trend towards larger gamuts in the traditional 3-primary design, variability in human color sensing is exacerbated. Metameric matches to aim stimuli for one particular observer may yield a notable color mismatch for others, even if all observers are members of a color-normal population. Multiprimary design paradigms may hold value for simultaneously enhancing color gamut and reducing observer metamerism. By carefully selecting primary spectra in systems employing more than 3 emission channels, intentional metameric performance can be controlled. At Rochester Institute of Technology, a prototype multiprimary display has been simulated to minimize observer metamerism and observer variability according to custom indices derived from emerging models for human color vision. The constructed display is further being implemented in observer experiments to validate practical performance and confirm these vision and metamerism models.
Goniospectrophotometers and custom laboratory setups are used to perform BRDF measurements of materials.
Those measurements can be used to improve the realism of previews of 3D objects before they are printed, and for the
accurate representation of real objects in synthetic images. Unfortunately, the expense of these devices
and the time required to measure each sample limit their use.
This paper presents a cost-effective, fast, and scalable solution for capturing material appearance. The technique
is based on splitting the material information to be captured into two main attributes: color and gloss. A
spectrophotometer is used to capture the color of a material, and the raw data of a linear sensor used in a
DOI-Gloss-Haze meter is used to obtain BRDF measurements, thus capturing the gloss appearance. Those
measurements can later be used to approximate the parameters of analytical BRDF models.
The technique is evaluated by comparing its results with high-accuracy measurements from a goniospectrophotometer,
and with the approximations obtained from those high-accuracy measurements. Good agreement with the
goniospectrophotometer was obtained, except for a small underestimation of the specular-lobe peak
for high-gloss materials and an inability to capture the full width of the broad specular lobes of
low-gloss materials.
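The final step described above, approximating the parameters of an analytical BRDF model from the measurements, can be sketched as follows. This is a hypothetical minimal example, assuming a single Gaussian specular lobe G(θ) = A·exp(−θ²/2σ²) rather than the specific analytical models used in the paper; the fit is done by linear regression in log space.

```python
import math

def fit_gaussian_lobe(angles_deg, intensities):
    """Fit A * exp(-theta^2 / (2 * sigma^2)) to sparse specular-lobe
    measurements by linear regression in log space:
    ln(I) = ln(A) - theta^2 / (2 * sigma^2).
    Returns (amplitude A, lobe width sigma in degrees)."""
    xs = [a * a for a in angles_deg]          # theta^2
    ys = [math.log(i) for i in intensities]   # ln(intensity)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    amplitude = math.exp(intercept)
    sigma = math.sqrt(-1.0 / (2.0 * slope))   # slope is negative
    return amplitude, sigma
```

A broad, low-gloss lobe (large σ) sampled over a narrow angular range is exactly where such a fit becomes ill-conditioned, which is consistent with the limitation reported above.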
In general, it can be stated that, unfortunately, in most countries the number of students interested in traditional scientific disciplines (e.g. physics, chemistry, biology, mathematics) for their future professional careers has decreased considerably in recent years. It is likely that among the reasons for this trend is that many students find these disciplines particularly difficult, complex, abstract, and even boring, while they consider applied sciences (e.g. engineering) much more attractive options. Here we aim to attract people of very different ages to traditional scientific disciplines, and to promote scientific knowledge, using a set of colour questions related to everyday experiences. From our answers to these questions we hope that people can understand and learn science in a rigorous, relaxed, and amusing way, and hopefully they will be inspired to continue exploring on their own. Examples of such colour questions can be found at the free website http://whyiscolor.org by Mark D. Fairchild. For wider dissemination, most contents of this website have recently been translated into Spanish by the authors and published in the book entitled “La tienda de las curiosidades sobre el color” (Editorial University of Granada, Spain, ISBN: 9788433853820). Colour is certainly multidisciplinary, and while it can be said that it is mainly a perception, optics is a key discipline for understanding colour stimuli and phenomena. The classical first approach to colour science as the result of the interaction of light, objects, and the human visual system will also be reviewed.
Proc. SPIE 9015, Color Imaging XIX: Displaying, Processing, Hardcopy, and Applications
KEYWORDS: Visual process modeling, Visualization, Colorimetry, High dynamic range imaging, Associative arrays, Space operations, Optimization (mathematics), Computer graphics, Time multiplexed optical shutter, Image quality standards
In this paper, we present a novel approach of tone mapping as gamut mapping in a high-dynamic-range (HDR) color space. High- and low-dynamic-range (LDR) images as well as device gamut boundaries can simultaneously be represented within such a color space. This enables a unified transformation of the HDR image into the gamut of an output device (in this paper called HDR gamut mapping). An additional aim of this paper is to investigate the suitability of a specific HDR color space to serve as a working color space for the proposed HDR gamut mapping. For the HDR gamut mapping, we use a recent approach that iteratively minimizes an image-difference metric subject to in-gamut images. A psychophysical experiment on an HDR display shows that the standard reproduction workflow of two subsequent transformations – tone mapping and then gamut mapping – may be improved by HDR gamut mapping.
Traditional color spaces have been widely used in a variety of applications including digital color imaging, color image
quality, and color management. These spaces, however, were designed for the domain of color stimuli typically
encountered with reflecting objects and image displays of such objects. This means the domain of stimuli with
luminance levels from slightly above zero to that of a perfect diffuse white (or display white point). This limits the
applicability of such spaces to color problems in HDR imaging. This is caused by their hard intercepts at zero
luminance/lightness and by their uncertain applicability for colors brighter than diffuse white. To address HDR
applications, two new color spaces were recently proposed, hdr-CIELAB and hdr-IPT. They are based on replacing the
power-function nonlinearities in CIELAB and IPT with more physiologically plausible hyperbolic functions optimized
to most closely simulate the original color spaces in the diffuse reflecting color domain. This paper presents the
formulation of the new models, evaluations using Munsell data in comparison with CIELAB, IPT, and CIECAM02, two
sets of lightness-scaling data above diffuse white, and various possible formulations of hdr-CIELAB and hdr-IPT to
predict the visual results.
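The core substitution described above, replacing a power-function nonlinearity with a more physiologically plausible hyperbolic (Michaelis-Menten) function, can be sketched for the lightness dimension. The constants below are illustrative assumptions, not the published optimized values; the point is the qualitative behavior: no hard intercept at zero luminance and graceful, bounded handling of stimuli brighter than diffuse white.

```python
import math

# Illustrative constants (assumed, not the optimized published values).
L_MAX = 247.0    # asymptotic maximum lightness
EPSILON = 0.58   # exponent controlling the toe and shoulder
SIGMA = 2.0      # semi-saturation constant, with Y = 1 at diffuse white

def lightness_hdr(Y):
    """Hyperbolic (Michaelis-Menten) lightness: rises smoothly from zero
    and approaches L_MAX asymptotically, so luminances above diffuse
    white (Y > 1) still map to finite, well-behaved lightness values."""
    return L_MAX * Y ** EPSILON / (Y ** EPSILON + SIGMA ** EPSILON)

def lightness_cielab(Y):
    """CIELAB L* for comparison (Y relative to the white point); note the
    linear toe near zero and unbounded growth above diffuse white."""
    f = Y ** (1 / 3) if Y > (6 / 29) ** 3 else Y * (29 / 6) ** 2 / 3 + 4 / 29
    return 116 * f - 16
```

With these constants, diffuse white (Y = 1) maps to roughly L = 99, close to the CIELAB value of 100 in the reflecting-color domain, which is the design goal the abstract describes.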
This paper describes a proof-of-concept implementation that uses a high dynamic range CMOS video camera to integrate daylight harvesting and occupancy sensing functionalities. It has been demonstrated that the proposed concept not only circumvents several drawbacks of conventional lighting control sensors, but also offers functionalities that are not currently achievable by these sensors. The prototype involves three algorithms, daylight estimation, occupancy detection and lighting control. The calibrated system directly estimates luminance from digital images of the occupied room for use in the daylight estimation algorithm. A novel occupancy detection algorithm involving color processing in YCC space has been developed. Our lighting control algorithm is based on the least squares technique. Results of a daylong pilot test show that the system i) can meet different target light-level requirements for different task areas within the field-of-view of the sensor, ii) is unaffected by direct sunlight or a direct view of a light source, iii) detects very small movements within the room, and iv) allows real-time energy monitoring and performance analysis. A discussion of the drawbacks of the current prototype is included along with the technological challenges that will be addressed in the next phase of our research.
Hyperspectral image data can provide very fine spectral resolution with more than 200 bands, yet displaying such
rich information on a tristimulus monitor poses a challenge for visualization techniques. This study developed a
visualization technique that takes advantage of both the consistent natural appearance of a true color image and the
feature separation of a PCA image based on a biologically inspired visual attention model. The key part is to extract the
informative regions in the scene. The model takes into account human contrast sensitivity functions and generates a
topographic saliency map for both images. This is accomplished using a set of linear "center-surround" operations
simulating visual receptive fields as the difference between fine and coarse scales. A difference map between the
saliency map of the true color image and that of the PCA image is derived and used as a mask on the true color image to
select a small number of interesting locations where the PCA image has more salient features than available in the
visible bands. The resulting representations preserve hue for vegetation, water, road etc., while the selected attentional
locations may be analyzed by more advanced algorithms.
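The center-surround operation described above can be sketched in one dimension. This is a minimal illustration, assuming simple box blurs as stand-ins for the fine and coarse Gaussian-pyramid scales of the actual attention model.

```python
def box_blur(signal, radius):
    """Simple box blur; a stand-in for a Gaussian pyramid level."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def center_surround(signal, fine_r=1, coarse_r=4):
    """Saliency as |fine - coarse|: the across-scale difference that
    approximates a center-surround receptive field. Flat regions give
    zero response; local features (edges, blobs) give large responses."""
    fine = box_blur(signal, fine_r)
    coarse = box_blur(signal, coarse_r)
    return [abs(f - c) for f, c in zip(fine, coarse)]
```

In the full model this operation is applied per channel and per scale, and the resulting maps are combined into the topographic saliency map; the difference of two such maps (true color vs. PCA) then serves as the selection mask.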
Can images from professional digital SLR cameras be made equivalent in color using simple colorimetric
characterization? Two cameras were characterized, these characterizations were implemented on a variety of images, and
the results were evaluated both colorimetrically and psychophysically. A Nikon D2x and a Canon 5D were used. The
colorimetric analyses indicated that accurate reproductions were obtained. The median CIELAB color differences
between the measured ColorChecker SG and the reproduced image were 4.0 and 6.1 for the Canon (chart and spectral
respectively) and 5.9 and 6.9 for the Nikon. The median differences between cameras were 2.8 and 3.4 for the chart and
spectral characterizations, near the expected threshold for reliable image difference perception. Eight scenes were
evaluated psychophysically in three forced-choice experiments in which a reference image from one of the cameras was
shown to observers in comparison with a pair of images, one from each camera. The three experiments were (1) a
comparison of the two cameras with the chart-based characterizations, (2) a comparison with the spectral
characterizations, and (3) a comparison of chart vs. spectral characterization within and across cameras. The results for
the three experiments are 64%, 64%, and 55% correct respectively. Careful and simple colorimetric characterization of
digital SLR cameras can result in visually equivalent color reproduction.
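The colorimetric analysis above is reported as median CIELAB color differences. A minimal sketch of that computation, assuming the CIE 1976 ΔE*ab formula and a D65 white point (the paper may have used a different variant or viewing white):

```python
def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):
    """CIE XYZ -> CIELAB; default white is D65 (assumed here)."""
    def f(t):
        # cube root with the standard linear toe for small ratios
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((X, Y, Z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e_ab(lab1, lab2):
    """CIE 1976 Euclidean color difference between two L*a*b* triples."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5
```

Applied per patch of the ColorChecker SG, the median of these differences yields the summary statistics quoted in the abstract.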
The spectrum locus of a CIE chromaticity diagram defines the boundary within which all physically realizable color
stimuli must fall. While that is a physical and mathematical reality that cannot be violated, it is possible to create colors
that appear as if they were produced by physically impossible stimuli. This can be accomplished through careful control
of the viewing conditions and states of adaptation. This paper highlights the importance of considering color appearance
issues in the design of displays and specification of color gamuts and illustrates how the perceived color gamut can be
manipulated significantly through the relationship between white-point and primary luminance levels without changing
the chromaticity gamut of a display system. Using a color appearance model, such as CIECAM02, display color gamuts
can be specified in perceptual terms such as lightness, chroma, brightness, and colorfulness rather than in strictly
physical terms of the stimuli that create these perceptions. Examination of these perceptual gamuts, and their
relationships to the viewing conditions, allows demonstration of the possibility of producing display gamuts that appear
to reach beyond the locus of pure spectral colors when compared with typical display setups.
A psychophysical experiment was performed examining the effect of luminance and chromatic noise on perceived image quality. The noise was generated in a recently developed isoluminant opponent space. Five spatial-frequency octave bands centered at 2, 4, 8, 16, and 32 cycles per degree (cpd) of visual angle were generated for each of the luminance, red-green, and blue-yellow channels. Two levels of contrast at each band were examined. Overall there were 30 noise-added images plus one "original" image. Four different image scenes were used in a paired-comparison experiment. Observers were asked to select the image that appeared to be of higher quality.
The paired comparison data were used to generate interval scales of image quality using Thurstone's Law of Comparative Judgments. These interval scales provide insight into the effect of noise on perceived image quality. Averaged across the scenes, the original noise-free image was determined to be of highest quality. While this result is not surprising on its own, examining several of the individual scenes shows that adding low-contrast blue-yellow isoluminant noise does not statistically decrease image quality and can result in a slight increase in quality.
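The scaling step above can be sketched as follows under Thurstone's Case V assumptions (equal discriminal dispersions, zero correlation). The win matrix here is hypothetical; the inverse normal CDF converts preference proportions to z-scores, which are averaged into an interval scale.

```python
from statistics import NormalDist

def thurstone_case_v(wins):
    """Interval scale from a paired-comparison win-count matrix.
    wins[i][j] = number of times stimulus i was preferred over j.
    Assumes every pair of stimuli was compared at least once."""
    n = len(wins)
    nd = NormalDist()
    scale = []
    for i in range(n):
        zs = []
        for j in range(n):
            if i == j:
                continue
            p = wins[i][j] / (wins[i][j] + wins[j][i])
            p = min(max(p, 0.01), 0.99)   # guard unanimous preferences
            zs.append(nd.inv_cdf(p))      # proportion -> z-score
        scale.append(sum(zs) / len(zs))   # mean z = scale value
    return scale
```

Scale values are in standard-deviation units of the discriminal process, so only differences between them are meaningful, which is exactly the interval-scale property the analysis relies on.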
Multi-channel imaging is finding more and more applications because of its advantages in color reproduction and in spectral representation, which avoids metamerism problems. Illuminant estimation for multi-channel images has not been widely studied because most illuminant estimation methods are applied to trichromatic images. In this paper, common illuminant estimation methods such as gray world and maximum RGB are extended to multi-channel images. Five methods are evaluated for multi-channel images: gray world, maximum RGB, the Maloney-Wandell method, modified illuminant detection in linear space, and reflectance-constraint illuminant detection. The methods are evaluated in terms of illuminant-detection efficiency through estimating the illuminant correlated color temperature. Among them, reflectance-constraint illuminant detection is the most efficient. In addition, the former three methods, previously used only for illuminant estimation in three-channel images, are applied to illuminant spectral recovery. Recovery performance is evaluated by comparing the recovered spectral distributions with the original ones. The Maloney-Wandell method improves markedly when the number of channels increases from three to four, and it gives the best spectral recovery of the three tested methods when the channel number is more than three.
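The first two estimators generalize to any number of channels in a straightforward way, as the extension described above suggests. This is a minimal sketch with hypothetical pixel data; a real implementation would also mask out clipped and very dark pixels.

```python
def gray_world(pixels):
    """Illuminant estimate as the per-channel mean, assuming the average
    scene reflectance is achromatic; works for any channel count."""
    n, k = len(pixels), len(pixels[0])
    return [sum(p[c] for p in pixels) / n for c in range(k)]

def max_channel(pixels):
    """Maximum-RGB generalized to k channels: per-channel maximum,
    assuming a near-perfect (white) reflector appears in the scene."""
    return [max(p[c] for p in pixels) for c in range(len(pixels[0]))]
```

Both estimators return a k-vector proportional to the illuminant's channel responses; the correlated color temperature can then be estimated from that vector.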
Two psychophysical experiments were performed scaling overall image quality of black-and-white electrophotographic (EP) images. Six different printers were used to generate the images. There were six different scenes included in the experiment, representing photographs, business graphics, and test-targets. The two experiments were split into a paired-comparison experiment examining overall image quality, and a triad experiment judging overall similarity and dissimilarity of the printed images. The paired-comparison experiment was analyzed using Thurstone's Law, to generate an interval scale of quality, and with dual scaling, to determine the independent dimensions used for categorical scaling. The triad experiment was analyzed using multidimensional scaling to generate a psychological stimulus space. The psychophysical results indicated that the image quality was judged mainly along one dimension and that the relationships among the images can be described with a single dimension in most cases. Regression of various physical measurements of the images to the paired comparison results showed that a small number of physical attributes of the images could be correlated with the psychophysical scale of image quality. However, global image difference metrics did not correlate well with image quality.
Eye movement behavior was investigated for image-quality and chromatic adaptation tasks. The first experiment examined the differences between paired comparison, rank order, and graphical rating tasks, and the second experiment examined the strategies adopted when subjects were asked to select or adjust achromatic regions in images. Results indicate that subjects spent about 4 seconds looking at images in the rank order task, 1.8 seconds per image in the paired comparison task, and 3.5 seconds per image in the graphical rating task. Fixation density maps from the three tasks correlated highly in four of the five images. Eye movements gravitated toward faces and semantic features, and introspective report was not always consistent with fixation density peaks. In adjusting a gray square in an image to appear achromatic, observers spent 95% of their time looking only at the patch. When subjects looked around (less than 5% of the time), they did so early. Foveations were directed to semantic features, not achromatic regions, indicating that people do not seek out near-neutral regions to verify that their patch appears achromatic relative to the scene. Observers also do not scan the image in order to adapt to the average chromaticity of the image. In selecting the most achromatic region in an image, viewers spent 60% of the time scanning the scene. Unlike the achromatic adjustment task, foveations were directed to near-neutral regions, showing behavior similar to a visual search task.
One goal of image quality modeling is to predict human judgments of quality between image pairs, without needing knowledge of the image origins. This concept can be thought of as device-independent image quality modeling. The first step towards this goal is the creation of a model capable of predicting perceived magnitude differences between image pairs. A modular color image difference framework has recently been introduced with this goal in mind. This framework extends traditional CIE color difference formulae to include modules of spatial vision and adaptation, sharpness detection, contrast detection, and spatial localization. The output of the image difference framework is an error map, which corresponds to spatially localized color differences. This paper reviews the modular framework, and introduces several new techniques for reducing the multi-dimensional error map into a single metric. In addition to predicting overall image differences, the strength of the modular framework is its ability to predict the distinct mechanisms that cause the differences. These mechanisms can be thought of as attributes of image appearance. We examine the individual mechanisms of image appearance, such as local contrast, and compare them with overall perceived differences. Through this process, it is possible to determine the perceptual weights of multi-dimensional image differences. This represents the first stage in the development of an image appearance model designed for image difference and image quality modeling.
Traditional color appearance modeling has recently matured to the point that available, internationally-recommended models such as CIECAM02 are capable of making a wide range of predictions to within the observer variability in color matching and color scaling of stimuli in somewhat simplified viewing conditions. It is proposed that the next significant advances in the field of color appearance modeling will not come from evolutionary revisions of these models. Instead, a more revolutionary approach will be required to make appearance predictions for more complex stimuli in a wider array of viewing conditions. Such an approach can be considered image appearance modeling since it extends the concepts of color appearance modeling to stimuli and viewing environments that are spatially and temporally at the level of complexity of real natural and man-made scenes. This paper reviews the concepts of image appearance modeling, presents iCAM as one example of such a model, and provides a number of examples of the use of iCAM in still and moving image reproduction.
For a long time, the only constraints placed on surface spectral reflectances have been that they lie in the range 0 to 1 and that they be smooth and of low frequency. These constraints prove too loose in practical use, typically for illuminant estimation with spectral recovery. Linear models and PCA decomposition have made it possible to reconstruct spectral reflectances effectively with a small number of parameters. On this basis, a new constraint on surface spectral reflectances is proposed to better limit and describe their characteristics. It is defined as a two-dimensional histogram of the coefficients of real-world spectral reflectances. The variables in the two dimensions are ratios of the PCA parameters, which describe the "saturation" property of reflectances. There are differences between applying a gamut and applying a histogram in illuminant estimation: the histogram is preferred when the color space is composed of relative values. On this basis, the original color-by-correlation method is modified to perform better, especially on real images. The proposed constraint is applied to illuminant detection with spectral recovery: the recovered surface reflectances are examined against the constraint, and the scene illuminant is detected through probability comparison. The proposed method performs well compared with others, on both synthetic and real images.
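The PCA reconstruction underlying the proposed constraint can be sketched as follows. This assumes an orthonormal basis and mean spectrum are already available; the constraint itself (the 2D histogram of coefficient ratios) would be built from the coefficients this function returns.

```python
def reconstruct(spectrum, mean, basis):
    """Project a sampled spectrum onto a precomputed, orthonormal PCA
    basis and rebuild it from the resulting small set of coefficients.
    Returns (coefficients, reconstructed spectrum)."""
    centered = [s - m for s, m in zip(spectrum, mean)]
    coeffs = [sum(c * b for c, b in zip(centered, vec)) for vec in basis]
    recon = list(mean)
    for a, vec in zip(coeffs, basis):
        recon = [r + a * v for r, v in zip(recon, vec)]
    return coeffs, recon
```

A candidate reflectance whose coefficient ratios fall in a low-probability bin of the histogram would be penalized, which is how the constraint discriminates between candidate illuminants.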
The use of colorimetry within industry has grown extensively in the last few decades. Central to many of today's instruments is the CIE system of colorimetry, established in 1931. Many have questioned the validity of the assumptions made by Wright and Guild, some suggesting that the 1931 color matching functions are not the best representation of the human visual system's cone responses. A computational analysis was performed to evaluate the CIE 1931 color matching functions against other responsivity functions using metameric data. An optimization was then performed to derive a new set of color matching functions using spectral data of visually matched metameric pairs.
This paper describes a new procedure for capturing spectral images of human portraiture. The imaging system was calibrated directly on real human subjects and can provide accurate spectral images of human faces, including facial skin as well as the lips, eyes, and hair, across various ethnicities. The facial spectral reflectances obtained were analyzed by principal component analysis (PCA). Based on the PCA results, spectral images using both three and six wide-band spectral samplings were estimated. The reconstructed spectral images, rendered for display with an sRGB display model, were evaluated. The results demonstrate that this new spectral imaging procedure is successful and that three basis functions are accurate enough to estimate the spectral reflectance of human faces. The derived spectral images can be applied to color-imaging system design and analysis.
In meetings just prior to the 1997 AIC Congress in Kyoto, CIE TC1-37, chaired by M. Fairchild, established the CIE 1997 Interim Colour Appearance Model (Simple Version), known as CIECAM97s. CIECAM97s was formally published in 1998 in CIE Publication 131. CIE TC1-37 was dissolved shortly after publication of CIECAM97s, at which time a reportership, R1-24, held by M. Fairchild, was established to monitor ongoing developments in color appearance modeling and notify CIE Division 1 if it became necessary to form a new TC to consider revision or replacement of CIECAM97s. In the four years between AIC Congresses, there has been much activity, both by individual researchers and within the CIE, aimed at furthering our understanding of color appearance models and deriving improved models for consideration. The aim of this paper is to summarize these activities, report on the current status of CIE efforts on color appearance models, and suggest what the future might hold for CIE color appearance models.
The perceptual amplification of color, otherwise known as the Helmholtz-Kohlrausch effect, has been experimentally characterized for a common CRT computer monitor and the highly saturated, pure colors Red (255,0,0), Green (0,255,0), Blue (0,0,255), Cyan (0,255,255), Magenta (255,0,255), and Yellow (255,255,0). Twenty-four human observers directly adjusted the brightness of the original colors on a CRT monitor to match the brightness perception of corresponding equal-luminance gray images. Experiments were conducted in complete darkness to eliminate the desaturating effect of flare light. Results are presented in both the monitor R, G, B color space and the 1976 CIELAB color space.
When we view a softcopy image on a CRT display, the typical CRT white-point color temperature is 9300 K while standard ambient light is 5000 K. In this case, the black on the CRT screen is lightened, and its chromaticity is far from the achromatic axis because of the reflection of ambient light off the CRT display. Likewise, when hardcopy is viewed in the same environment as the CRT monitor, the chromaticity of the printer black is far from the achromatic axis. If such a printer were the source device and the CRT monitor the destination device, dark printer colors could not be reproduced on the destination device because, for dark colors, the gamut of the destination is smaller than the gamut of the source. Thus, lightness compensation is needed to reproduce dark colors on the destination device. Three methods were considered: (1) a simple lightness-compression method; (2) a complete black-point adaptation method, which maps to the CRT black; and (3) an incomplete black-point adaptation method, a compromise between methods (1) and (2). Visual experiments were performed to investigate these methods. The results indicated that the appropriate black-point adaptation ratio lies between the softcopy and hardcopy black points.
In color gamut mapping of pictorial images, the lightness rendition of the mapped images plays a major role in the quality of the final image. For color gamut mapping tasks, where the goal is to produce a match to the original scene, it is important to maintain the perceived lightness contrast of the original image. Typical lightness remapping functions such as linear compression, soft compression, and hard clipping reduce the lightness contrast of the input image. Sigmoidal remapping functions were utilized to overcome the natural loss in perceived lightness contrast that results when an image from a full dynamic range device is scaled into the limited dynamic range of a destination device. These functions were tuned to the particular lightness characteristics of the images used and the selected dynamic ranges. The sigmoidal remapping functions were selected based on an empirical contrast enhancement model that was developed for the result of a psychophysical adjustment experiment. The results of this study showed that it was possible to maintain the perceived lightness contrast of the images by using sigmoidal contrast enhancement functions to selectively rescale images from a source device with a full dynamic range into a destination device with a limited dynamic range.
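The sigmoidal remapping idea can be sketched as a normalized logistic function on lightness. The constants below are illustrative assumptions, not the empirically tuned values from the paper's contrast-enhancement model.

```python
import math

def sigmoid_remap(L, midpoint=50.0, slope=0.08, out_min=5.0, out_max=95.0):
    """Map lightness L in [0, 100] through a logistic curve rescaled so
    that 0 -> out_min and 100 -> out_max. Mid-tone contrast is boosted
    while highlights and shadows are compressed into the destination
    device's limited dynamic range. Constants are illustrative only."""
    def sig(x):
        return 1.0 / (1.0 + math.exp(-slope * (x - midpoint)))
    s0, s100 = sig(0.0), sig(100.0)
    t = (sig(L) - s0) / (s100 - s0)       # normalize to [0, 1]
    return out_min + t * (out_max - out_min)
```

In the study, the midpoint and slope would be tuned to each image's lightness statistics and the selected dynamic ranges, as the abstract describes.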
Investigation of the tradeoffs between the number of quantization levels and the spatial addressability of printed color images was performed. Error diffusion in CMYK color space was used to quantize the images. Quantized images were printed on a single color printer simulating different spatial addressabilities. A psychophysical experiment was conducted to evaluate the perceived image quality (IQ) of the prints. Conclusions on the tradeoffs were drawn from the subsequent statistical analysis. The tradeoffs were found to be scene dependent, with pictorial scenes able to sustain a greater reduction in addressability than graphics without a decrease in perceived IQ. The results demonstrated that, to match the perceived IQ of the best possible combination for the given system (8 bits per color at 300 dpi) at normal viewing distance, pictorial scenes could be printed at 5 bits per color (bpc) per pixel and 100 dots per inch (dpi), and graphics at 3 bpc and 300 dpi. If a single bpc-dpi combination were to be named as the optimum, it would be 3 bpc at 300 dpi.
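The quantization step described above can be sketched for a single channel with Floyd-Steinberg error diffusion. The paper diffuses error in CMYK space; one channel suffices here to show the quantization-levels tradeoff.

```python
def error_diffuse(img, levels):
    """Floyd-Steinberg error diffusion of one channel (values in [0, 1])
    to `levels` quantization levels. Each pixel is quantized to the
    nearest level and the residual error is pushed to unvisited
    neighbors with the 7/16, 3/16, 5/16, 1/16 weights."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    step = 1.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = out[y][x]
            q = min(max(round(old / step), 0), levels - 1)  # clamp
            new = q * step
            out[y][x] = new
            err = old - new
            if x + 1 < w:
                out[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1][x - 1] += err * 3 / 16
                out[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1][x + 1] += err * 1 / 16
    return out
```

Because the error is conserved, local mean tone is approximately preserved even at very few levels; what changes with bpc and dpi is the visibility of the resulting texture, which is what the psychophysical experiment measured.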
A colorimetrically calibrated CRT display was used to measure constant perceptual hue surfaces in color space. Three hundred six points over fifteen equally spaced hue angles (every 24 degrees) in CIELAB color space were sampled. An average of 20 lightness-chroma combinations per reference hue plane was sampled. Thirty observers performed the matching task three times each. Intra-observer variation was used to weight mean observer hue matches for each of 306 colors. Analysis of perceived hue uniformity was performed in CIELAB and CIECAM97s color spaces. Other constant hue experimental results are analyzed and compared to data obtained here.
The color-image quality of color overhead transparencies depends on properties of the imaging system used to create the transparency and on the illuminating and viewing conditions of the transparency, such as the projector's spectral power distribution, projector distance from the screen, projector luminance, ambient lighting, screen gonio-spectral reflectance factor, and viewing distance and geometry. Because they alter the visual fields and/or the luminances of those fields, some of these illuminating and viewing conditions can be taken into account using color-appearance models. A visual experiment was performed to determine whether color-appearance correlates of visual perception could be used to predict color-image quality for this imaging modality. The Hunt 1991 color-appearance model was used to define correlates of hue, brightness, colorfulness, lightness, and chroma for both pictorial and business-graphic scenes viewed under several combinations of ambient illuminance and projector luminance. Gamut volume was defined based on either absolute attributes (hue, brightness, and colorfulness) or relative attributes (hue, lightness, and chroma). Seventeen observers performed a preference experiment generating interval scales of color-image quality. It was found that gamut volume defined using correlates of hue, brightness, and colorfulness predicted color-image quality well. Of these correlates, colorfulness was the most important factor.
In order to systematically evaluate different gamut mapping algorithms, we simulated gamut mapping on a CRT using simple rendered images of colored spheres floating in front of a gray background. Using CIELAB as our device-independent color space, cut-off values for lightness and chroma, based on the statistics of the images, were chosen to reduce the gamuts of the test images. The gamut mapping algorithms consisted of combinations of clipping and linearly mapping the original gamut in piecewise segments. Complete color-space compression in RGB and CIELAB was also used. Each of the colored originals (R, G, B, C, M, Y, and Skin) was mapped separately in lightness and chroma. In addition, each algorithm was implemented with saturation (C*/L*) either allowed to vary or held constant. Using a paired-comparison paradigm, pairs of test images with reduced color gamuts were presented to twenty subjects along with the original image. For each pair, the subjects chose the test image that better reproduced the original. Rank orders and interval scales of algorithm performance with confidence limits were then derived. Certain algorithms were found to perform best consistently across image colors. For chroma mapping, clipping all out-of-gamut colors while keeping lightness constant was the most preferred method. For lightness mapping at the top of the gamut, a particular piecewise mapping technique with saturation held constant was preferred. For lightness mapping at the bottom of the gamut, the results indicated the type of algorithm that might be best with chroma held constant. The choice of device-independent color space may also influence the choice of gamut mapping algorithm.
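The preferred chroma-mapping strategy, clipping out-of-gamut colors toward the neutral axis at constant lightness and hue, can be sketched as follows. The gamut-boundary function is assumed to be given (e.g., derived from a device model); it is a hypothetical helper, not part of the paper.

```python
def clip_chroma(L, C, h, gamut_max_chroma):
    """Clip an LCh color to the device gamut at constant lightness and
    hue: chroma is reduced just enough to reach the gamut boundary,
    and in-gamut colors pass through unchanged.
    `gamut_max_chroma(L, h)` returns the boundary chroma (assumed given)."""
    return L, min(C, gamut_max_chroma(L, h)), h
```

Because only C changes, this is the "clipping while keeping lightness constant" variant that observers preferred in the chroma-mapping comparison.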
Accurate color reproduction of images presented on a computer-controlled CRT display as projected 35-mm transparencies is a complicated procedure requiring the characterization and control of several imaging processes and the application of appropriate color appearance modeling to account for the changes in viewing conditions. We review a process for image recorder characterization, projection system characterization, and testing of color appearance models for this application. Accurate image recorder characterization was achieved through a combination of empirical modeling of the exposure and processing system and a physical model of photographic film. The projection system characterization included specification of the spectral properties of the light source, the reflectance properties of the viewing screen, and the effects of light exposure and temperature on the photographic transparencies. Color appearance models were used to predict the changes in image color appearance due to changes in media, white point, luminance, and surround. The RLAB model proved to work best in this application.
The Joint Photographic Experts Group's image compression algorithm has been shown to provide a very efficient and powerful method of compressing images. However, there is little substantive information about which color space should be used when implementing the JPEG algorithm. Currently, the JPEG algorithm accepts any three-component color space. The objective of this research was to determine whether the choice of color space significantly affects image compression. The RGB, XYZ, YIQ, CIELAB, CIELUV, and CIELAB LCh color spaces were examined and compared. Both numerical measures and psychophysical techniques were used to assess the results. The final results indicate that the device space, RGB, is the worst color space in which to compress images. In comparison, the nonlinear transforms of the device space, CIELAB and CIELUV, are the best color spaces in which to compress images. The XYZ, YIQ, and CIELAB LCh color spaces resulted in intermediate levels of compression.
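The kind of transform involved can be sketched as follows: converting an sRGB pixel to CIELAB before handing the data to the compression stage. The matrix and constants are the standard sRGB/D65 values; this is an illustration, not the study's implementation:

```python
import numpy as np

# Standard sRGB -> XYZ matrix (D65) and the D65 reference white
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.95047, 1.00000, 1.08883])

def rgb_to_lab(r, g, b):
    """Convert one sRGB pixel (components in [0, 1]) to CIELAB -- the
    kind of nonlinear, perceptually more uniform transform the study
    found superior to raw RGB as a space for compression."""
    rgb = np.array([r, g, b], dtype=float)
    # undo the sRGB transfer function to get linear light
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = M @ lin
    t = xyz / WHITE
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t),
                 t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b_ = 200 * (f[1] - f[2])
    return L, a, b_

print(rgb_to_lab(1.0, 1.0, 1.0))  # reference white -> roughly (100, 0, 0)
```

Because quantization error in CIELAB is closer to perceptually uniform than in RGB, the same bit budget produces fewer visible artifacts.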
A new color space, RLAB, for cross-media color reproduction has been developed. This space is a modification of the CIE 1976 L*a*b* (CIELAB) color space which is widely used in a variety of industries. The RLAB modification of the CIELAB space allows for more accurate predictions of changes in color appearance due to chromatic adaptation, prediction of differences due to the types of media, and adjustment for changes in the relative luminance of the stimulus surround. In addition, the RLAB space can be used for the accurate calculation of color differences through a modification of the CIELAB color difference equation. These enhancements allow useful application of the CIELAB color space to problems of device-independent color imaging. This paper describes the formulation of the RLAB color space and its implementation. It is based on the chromatic adaptation model previously published by Fairchild and the Bartleson and Breneman corrections for image surround. In addition, psychophysical results comparing use of the RLAB color space to other techniques and color appearance models are presented.
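The surround adjustment can be illustrated with a lightness correlate whose power-function exponent depends on the image surround, in the style of RLAB. The exponents below (1/2.3 average, 1/2.9 dim, 1/3.5 dark) follow my reading of the RLAB formulation and should be treated as assumptions to be checked against the original paper:

```python
def rlab_lightness(y_ref, surround="average"):
    """Lightness correlate as a power function of adapted relative
    luminance, with a surround-dependent exponent in the style of
    RLAB.  The exponent values are assumptions recalled from the RLAB
    formulation, not quoted from this paper; verify before use."""
    sigma = {"average": 1 / 2.3, "dim": 1 / 2.9, "dark": 1 / 3.5}[surround]
    return 100.0 * y_ref ** sigma

# A mid-gray (Y_ref = 0.2) is assigned a higher lightness as the
# surround darkens, compensating for reduced perceived contrast.
for s in ("average", "dim", "dark"):
    print(s, round(rlab_lightness(0.2, s), 1))
```

The key design point is that a single formula with one surround-dependent parameter replaces ad hoc per-condition corrections.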
The human visual system has evolved with a sophisticated set of mechanisms to produce stable perceptions of object colors across changes in illumination. This phenomenon is typically referred to as chromatic adaptation or color constancy. When viewing scenes or hard-copy reproductions, it is generally assumed that one adapts almost completely to the color and luminance of the prevailing light source. This is likely not the case when soft-copy image displays are viewed. Differences in the degree of chromatic adaptation to hard-copy and soft-copy displays point to two types of chromatic-adaptation mechanisms: sensory and cognitive. Sensory mechanisms are those that act automatically in response to the stimulus, such as retinal gain control. Cognitive mechanisms are those that rely on observers' knowledge of scene content. A series of experiments that measured the spatial, temporal, and chromatic properties of chromatic-adaptation mechanisms is reviewed, and a mathematical model for predicting these chromatic adaptation effects is briefly described along with some practical recommendations, based on psychophysical experiments, on how to approach these problems in typical cross-media color reproduction situations.
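The sensory component can be illustrated with a von Kries-style transform: independent gain control on cone-like channels, analogous to the retinal gain control mentioned above. The Hunt-Pointer-Estevez matrix used here is one common choice of cone space; this sketch is a generic von Kries adaptation, not the specific model reviewed in the paper, and cognitive mechanisms are not modeled:

```python
import numpy as np

# Hunt-Pointer-Estevez XYZ -> LMS (cone-like) matrix
M_HPE = np.array([[ 0.38971, 0.68898, -0.07868],
                  [-0.22981, 1.18340,  0.04641],
                  [ 0.00000, 0.00000,  1.00000]])

def von_kries_adapt(xyz, white_src, white_dst):
    """Von Kries chromatic adaptation: scale each cone-like channel by
    the ratio of the destination white to the source white."""
    lms = M_HPE @ np.asarray(xyz, dtype=float)
    gains = (M_HPE @ np.asarray(white_dst, dtype=float)) \
          / (M_HPE @ np.asarray(white_src, dtype=float))
    return np.linalg.inv(M_HPE) @ (gains * lms)

# Sanity check: the source white maps exactly onto the destination white.
d65 = np.array([0.9505, 1.0000, 1.0891])   # D65 white point
a   = np.array([1.0985, 1.0000, 0.3558])   # Illuminant A white point
print(von_kries_adapt(d65, d65, a))        # -> approximately [1.0985, 1.0, 0.3558]
```

A model allowing *incomplete* adaptation, as observed for soft-copy displays, would blend these gains toward unity rather than applying them fully.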
With the development of high-definition television (HDTV) systems came the 16:9 (width to height) viewing image aspect ratio, compared to the National Television System Committee (NTSC) standard ratio of 4:3. This variation in width-to-height aspect ratio has led to the question of which ratio is preferred by the viewing public. A paired-comparison preference-judgment experiment is described that was designed to determine whether significant differences exist in image preference between the two aspect ratios. Observers were asked to choose a preferred image from a set of two (NTSC versus HDTV) of various image sizes over 84 separate trials. Three separate image types were used in the study: a portrait, a landscape, and a still life. The results indicate that image quality perception is a function of image aspect ratio. The HDTV image was preferred for all three image types.
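Interval scales of preference like those used in these paired-comparison studies are commonly derived with Thurstone's Case V law of comparative judgment. The sketch below, with purely illustrative data (not the experiments' actual results), z-transforms the choice proportions and averages them per stimulus:

```python
from statistics import NormalDist

def thurstone_case_v(wins, trials):
    """Thurstone Case V scaling of a paired-comparison experiment.
    wins[i][j] = number of times stimulus i was preferred over j;
    `trials` = presentations per pair.  Returns zero-centered
    interval-scale values (the mean z-score for each stimulus)."""
    nd = NormalDist()
    n = len(wins)
    scale = []
    for i in range(n):
        zs = []
        for j in range(n):
            if i == j:
                continue
            # clamp proportions away from 0 and 1 so inv_cdf stays finite
            p = min(max(wins[i][j] / trials, 1.0 / (2 * trials)),
                    1.0 - 1.0 / (2 * trials))
            zs.append(nd.inv_cdf(p))
        scale.append(sum(zs) / len(zs))
    mean = sum(scale) / n
    return [s - mean for s in scale]

# Toy data: 3 images, 20 trials per pair (numbers are illustrative only)
wins = [[0, 14, 18],
        [6,  0, 12],
        [2,  8,  0]]
print(thurstone_case_v(wins, 20))  # image 0 scales highest, image 2 lowest
```

Confidence limits around these scale values, as reported in the studies above, are typically obtained from the standard error of the averaged z-scores or by bootstrap resampling of the trials.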