A display’s color subpixel geometry provides an intriguing opportunity for improving the readability of text. TrueType fonts can be positioned with subpixel precision. With such a constraint in mind, how should font characteristics be designed? On the other hand, display manufacturers work hard to address the color display’s dilemma: smaller pixel pitch and larger display diagonals strongly increase the total number of pixels. Consequently, the cost of column and row drivers as well as power consumption increase. Perceptual color subpixel rendering using color component subsampling may save about 1/3 of the color subpixels (and reduce power dissipation). Based on simulations of several different subpixel matrix layouts, this talk will elaborate on the following questions: Up to what level are display device constraints compatible with software-specific ideas of rendering text? How much color contrast will remain? How is the preferred viewing distance best taken into account for text readability? How much does visual acuity vary at 20/20 vision? Can simplified models of human visual color perception be easily applied to text rendering on displays? How linear is human visual contrast perception around the band limit of a display’s spatial resolution? How colorful does the rendered text appear on the screen? How much does the viewing angle influence the performance of subpixel layouts and color subpixel rendering?
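As a toy illustration of how color component subsampling can drop one of three subpixels per pixel, the sketch below maps a full RGB image onto an alternating RG/BG column layout, keeping green at full resolution while red and blue alternate column by column. The layout and function name are illustrative assumptions, not one of the subpixel matrices simulated in the talk.

```python
import numpy as np

def to_rgbg_mosaic(img):
    """Map a full-resolution RGB image (H, W, 3) onto a two-subpixel
    RG/BG column layout: green stays at full resolution, while red and
    blue alternate column by column. Relative to RGB stripes this drops
    one of three colour subpixels per pixel. A toy layout for counting
    purposes only, not a layout from the talk."""
    h, w, _ = img.shape
    mosaic = np.zeros((h, w, 2), dtype=img.dtype)
    mosaic[:, :, 0] = img[:, :, 1]         # green, full resolution
    mosaic[:, 0::2, 1] = img[:, 0::2, 0]   # red on even columns
    mosaic[:, 1::2, 1] = img[:, 1::2, 2]   # blue on odd columns
    return mosaic
```

Counting samples confirms the savings: the mosaic holds 2/3 of the original subpixel count, which is the "about 1/3" reduction the abstract refers to.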
High-end PC monitors and TVs continue to increase their native display resolution to 4k by 2k and beyond. Consequently, uncompressed pixel amplitude processing becomes costly, not only when transmitting over cable or wireless communication channels but also when processing with array processor architectures. We recently presented a block-based memory compression architecture for text, graphics, and video, named parametric functional compression (PFC), which enables multi-dimensional error minimization with context-sensitive control of visually noticeable artifacts. The underlying architecture was limited to small block sizes of 4x4 pixels. Although well suited to random access, its overall compression ratio ranges between 1.5 and 2.0. To increase both the compression ratio and image quality, we propose a new hybrid approach. Within an extended block size we apply two complementary methods using a set of vectors with orientation and curvature attributes across a 3x3 kernel of pixel positions. The first method searches for linear interpolation candidate pixels that yield very low interpolation errors using vectorized linear interpolation (VLI). The second method calculates the local probability of orientation and curvature (POC) to predict and minimize PFC coding errors. A detailed performance estimation in comparison with the prior algorithm highlights the effectiveness of our new approach, identifies its current limitations with regard to high-quality color rendering at fewer bits per pixel, and illustrates the remaining visual artifacts.
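The VLI search described above can be sketched as follows: for each interior pixel, the four straight neighbor pairs of the 3x3 kernel (horizontal, vertical, and both diagonals) are tested, and the pixel is flagged as an interpolation candidate when some pair's midpoint reproduces it within a tolerance. The tolerance value and function name are assumptions for illustration; the paper's actual orientation and curvature attributes are richer.

```python
import numpy as np

def vli_candidates(block, tol=2.0):
    """For each interior pixel of a grayscale block, test the four
    straight-line neighbour pairs of the 3x3 kernel (horizontal,
    vertical, two diagonals). A pixel is an interpolation candidate if
    some pair's midpoint reproduces it within `tol` -- a sketch of the
    VLI candidate search; `tol` is an assumed parameter."""
    h, w = block.shape
    pairs = [((0, -1), (0, 1)), ((-1, 0), (1, 0)),
             ((-1, -1), (1, 1)), ((-1, 1), (1, -1))]
    cand = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            for (dy1, dx1), (dy2, dx2) in pairs:
                pred = 0.5 * (block[y + dy1, x + dx1] + block[y + dy2, x + dx2])
                if abs(pred - block[y, x]) <= tol:
                    cand[y, x] = True
                    break
    return cand
```

On a locally linear signal such as a luminance ramp, every interior pixel qualifies, which is exactly the situation where interpolation-based coding saves bits.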
This paper proposes two novel approaches to Video Quality Assessment (VQA). Both approaches attempt to develop video evaluation techniques capable of replacing human judgment when rating video quality in subjective experiments. The underlying study consists of selecting fundamental quality metrics based on Human Visual System (HVS) models and using artificial intelligence solutions as well as advanced statistical analysis. This new combination enables suitable video quality ratings while taking multiple quality metrics as input. The first method uses a neural-network-based machine learning process. The second method evaluates video quality using a non-linear regression model. The efficiency of the proposed methods is demonstrated by comparing their results with those of existing work on synthetic video artifacts. The results obtained by each method are also compared with scores from a database resulting from subjective experiments.
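A minimal sketch of the second approach, mapping several quality metrics to subjective scores with a regression model: here an ordinary least-squares fit on quadratic features stands in for the paper's non-linear regression, and all names and data are illustrative placeholders rather than the metrics or subjective database used in the study.

```python
import numpy as np

def fit_quality_model(metrics, mos):
    """Fit a regression from HVS-metric vectors (N, M) to mean opinion
    scores (N,). Quadratic features give a simple non-linear mapping;
    this is a generic stand-in, not the paper's exact model."""
    X = np.hstack([metrics, metrics ** 2, np.ones((len(mos), 1))])
    coef, *_ = np.linalg.lstsq(X, mos, rcond=None)
    return coef

def predict_quality(metrics, coef):
    """Predict scores for new metric vectors with the fitted model."""
    X = np.hstack([metrics, metrics ** 2, np.ones((metrics.shape[0], 1))])
    return X @ coef
```

Because the model combines multiple metrics, it can rate quality where any single metric alone would fail, which is the motivation stated in the abstract.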
High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k by 2k and beyond. Consequently, uncompressed pixel amplitude processing becomes costly, not only when transmitting over cable or wireless communication channels but also when processing with array processor architectures. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content are heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 444 compression based on mathematical error coding schemes quite often lacks optimal adaptation to visually significant image content. We present a block-based memory compression architecture for text, graphics, and video enabling multidimensional error minimization with context-sensitive control of visually noticeable artifacts. As a result of analyzing image context locally, the number of operations per pixel can be significantly reduced, especially when implemented on array processor architectures. A comparative analysis based on several competing solutions highlights the effectiveness of our approach, identifies its current limitations with regard to high-quality color rendering, and illustrates remaining visual artifacts.
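The 444-to-420 preprocessing mentioned above can be sketched as a 2x2 chroma average followed by subsampling. The version below is a generic textbook form (real systems use better low-pass filters), but it already shows how a sharp chroma edge in small text is smeared to its midpoint while luma is untouched.

```python
import numpy as np

def ycbcr444_to_420(y, cb, cr):
    """Sketch of the widely used 444 -> 420 preprocessing: chroma is
    low-pass filtered by 2x2 averaging and subsampled in both
    directions, while luma stays at full resolution. High-contrast
    chroma edges lose half their bandwidth, which is why small
    colored text suffers."""
    def down(c):
        return 0.25 * (c[0::2, 0::2] + c[1::2, 0::2] +
                       c[0::2, 1::2] + c[1::2, 1::2])
    return y, down(cb), down(cr)
```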
High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k2k and beyond. Consequently, uncompressed pixel data transmission becomes costly when transmitting over cable or wireless communication channels. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content are heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 444 compression based on mathematical error coding schemes quite often lacks optimal adaptation to visually significant image content. Therefore, we present the idea of detecting synthetic small text fonts and fine graphics and applying contour phase predictive coding for improved text and graphics rendering at the decoder side. Using a predictive parametric (text) contour model and transmitting correlated phase information in vector format across all three color channels, combined with foreground/background color vectors of a local color map, promises to overcome weaknesses in compression schemes that process luminance and chrominance channels separately. The residual error of the predictive model is minimized more easily since the decoder is an integral part of the encoder. A comparative analysis based on several competing solutions highlights the effectiveness of our approach, discusses current limitations with regard to high-quality color rendering, and identifies remaining visual artifacts.
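The closed-loop principle behind the last point, minimizing the residual because the decoder is embedded in the encoder, can be sketched with generic scalar DPCM (not the paper's contour phase predictor): predicting from the *reconstructed* values keeps the error of every sample bounded by half the quantizer step, with no accumulation.

```python
def dpcm_closed_loop(samples, step=8):
    """Closed-loop predictive coding sketch: the encoder embeds the
    decoder and predicts each sample from the previous *reconstruction*,
    so quantization error cannot accumulate. Generic DPCM with a
    uniform quantizer; `step` is an illustrative parameter."""
    codes, recon, prev = [], [], 0
    for s in samples:
        residual = s - prev            # predict from reconstructed value
        q = round(residual / step)     # quantize the residual
        codes.append(q)
        prev = prev + q * step         # decoder state, kept in the encoder
        recon.append(prev)
    return codes, recon
```

An open-loop encoder (predicting from the original samples) would let errors drift at the decoder; the embedded-decoder structure is what makes the residual easy to keep small.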
High-end monitors based on LCD technology increasingly address wide color gamut implementations featuring precise color calibration within a variety of color spaces such as extended sRGB or AdobeRGB. However, images are often reconstructed from digitally compressed image files such as JPEG or MPEG, where color quality can be questionable. In particular, when such image files are scaled up or zoomed in, different types of image artifacts become visually noticeable. Among these artifacts we find pixel repetition, blockiness, ringing, and color blotching. While pixel repetition and ringing appear due to insufficient adaptation to image context by a static or context-adaptive filter kernel in the temporal domain, blockiness and ringing occur due to image compression in the frequency domain when image compression factors are significant. In addition, the chrominance channels often undergo an even higher compression ratio, which amplifies the visibility of artifacts such as color blotches. Consequently, we are interested in improving the quality of the displayed images depending on the image zoom factor. We propose to discriminate the most relevant visual artifacts using power spectrum analysis in the DCT domain as well as kernel-based rescaling combined with statistical analysis, taking into account the characteristic non-stationary behavior of image content and identifiable visual artifacts. A comparative analysis based on several competing solutions highlights the effectiveness of our approach and identifies its current limitations with regard to wide color gamut representation, primarily due to the mathematical uncertainty of the studied artifacts.
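A sketch of the DCT-domain power spectrum analysis: the orthonormal 2-D DCT-II of a block is computed from an explicit basis matrix, and the squared coefficients give the per-frequency energy on which an artifact discriminator could operate (blockiness concentrates energy in the lowest basis functions, ringing in the highest). The statistic is a plausible building block, not the paper's exact measure.

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)   # DC row scaled for orthonormality
    return m

def block_power_spectrum(block):
    """2-D DCT-II power spectrum of a square block. Because the
    transform is orthonormal, the spectrum's total energy equals the
    block's energy (Parseval), so per-band ratios are well defined."""
    m = dct2_matrix(block.shape[0])
    coef = m @ block @ m.T
    return coef ** 2
```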
High-end monitors based on LCD technology increasingly address wide color gamut implementations featuring precise color calibration within a variety of color spaces such as extended sRGB or AdobeRGB. Combining a Look-Up-Table method with linear interpolation in RGB component space using 3x3 matrix multiplication provides optimized means of tone curve adjustment as well as independent adjustment of the device primaries. The proposed calibration method completes within seconds, compared with traditional color calibration procedures that easily take several minutes. In addition, the user can be given subjective control over color gamut boundary settings based on dynamic gamut boundary visualization. The proposed component architecture not only provides independent control over 8 color vertices but also enables adjustments in quantities of 10^-4 of the full amplitude range. User-defined color patches can be adjusted manually while simultaneously tracking color gamut boundaries and visualizing gamut boundary violations in real time. All this provides a convenient approach to fine-tuning tone curves and matching particular constraints with regard to user preferences, for example specific ambient lighting conditions, across different devices such as monitors and printers.
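A minimal sketch of the pipeline described above: per-channel tone-curve Look-Up-Tables followed by a 3x3 matrix that adjusts the device primaries independently. The array shapes and the identity values used for testing are illustrative, not a real monitor profile.

```python
import numpy as np

def calibrate(rgb, tone_lut, primary_matrix):
    """Apply per-channel tone curves then a primary-adjustment matrix.
    `rgb`: integer codes with shape (..., 3); `tone_lut`: shape (3, 256),
    one curve per channel; `primary_matrix`: 3x3 mixing matrix. This is
    a sketch of the LUT + 3x3 matrix architecture, not the authors'
    implementation."""
    idx = np.clip(np.asarray(rgb), 0, 255).astype(int)
    shaped = np.stack([tone_lut[c][idx[..., c]] for c in range(3)], axis=-1)
    return shaped @ primary_matrix.T
```

Because the LUT handles the non-linear tone curves and the matrix handles the primaries, the two adjustments stay independent, which is what makes per-vertex fine-tuning at a 10^-4 granularity tractable.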
KEYWORDS: Visualization, Spatial frequencies, Visual process modeling, Transparency, 3D displays, Modulation, RGB color model, 3D modeling, High dynamic range imaging, Color vision
3D-LCD technology is beginning to become popular in video entertainment display devices. For the ultimate viewing experience, it appears attractive to better understand the influence of color within 3D scenery on the visual perception of shape and depth. 3D graphics renderers offer a suitable experimental approach to this task: simple synthetic scenarios combining 3D modeling and color component synthesis can be compared and then extended to naturally looking scenarios. Relevant color parameter modifications, carried out successively and independently, may thus lead to optimal discrimination of their influence. Taking into account mathematical color mapping techniques that map 3-dimensional color space into monochromatic space, we address valuable findings such as the subtle blue shift in aerial perspective (due to light scattering) often reported by artists as an important visual cue. After searching for counter-examples that contradict the effect of each parameter, one can predict the influence of color on depth cues in relative quantities. Further experiments focus on color-dependent depth perception of an object moving within synthetic or natural scenarios. Placing synthetic objects in such scenarios, we searched for the visually perceived depth location as a function of object color. Subsequently, a comparison of color-dependent depth perception between static and moving objects is also discussed.
KEYWORDS: Light emitting diodes, Visualization, LED backlight, LCDs, Modulation, RGB color model, Visual process modeling, Image quality, LED displays, Visibility
Large-scale, direct-view TV screens, in particular those based on liquid crystal technology, are beginning to use LED (Light Emitting Diode) backlight technology. Conservative estimates show that the additional cost of LED-LCD TVs will be compensated by reduced power consumption over the average operational lifetime. Local dimming promises not only power savings but also improvements in features such as overall image contrast and color gamut. Based on a simplified model of human vision with regard to content-dependent color discrimination and local adaptation, possible tradeoffs with regard to color uniformity, viewing angle, local contrast, and brightness variations are discussed. Comparing an 'ideal reference' (the input to the elaborated model) with the output of a tunable model enabled us to estimate a threshold of artifact visibility for still images as well as video clips that were considered relevant. Overall image quality with regard to dynamic contrast and dynamic color gamut, both spatially and temporally, was optimized after the visibility thresholds had been obtained. Finally, some cost functions are proposed that enable optimization of the most important color quality parameters.
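A toy local-dimming model along the lines discussed above: each backlight zone dims to the peak luminance it must display, the LC layer compensates for the reduced backlight, and the power saving follows from the mean backlight level. Zone size and backlight floor are assumed parameters, not values from the study.

```python
import numpy as np

def local_dimming(luma, zone=8, floor=0.02):
    """Toy local-dimming model: per-zone backlight level is the zone's
    peak luma (with a small floor so dark zones stay drivable), and the
    LC transmittance compensates so backlight * LC reproduces the
    image. Returns (backlight map, LC drive map, power saving)."""
    h, w = luma.shape
    bl = np.ones((h, w))
    for y in range(0, h, zone):
        for x in range(0, w, zone):
            level = max(luma[y:y + zone, x:x + zone].max(), floor)
            bl[y:y + zone, x:x + zone] = level
    lc = np.clip(luma / bl, 0.0, 1.0)   # compensated transmittance
    return bl, lc, 1.0 - bl.mean()      # saving vs. full-on backlight
```

Real tradeoffs (halo artifacts, viewing angle, color uniformity) arise precisely where this idealized compensation breaks down, which is what the elaborated visual model in the abstract is used to quantify.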
With the development of fast liquid crystal cells and the advent of backlight units with separate red, green, and blue light emitting diodes, the first commercial color-sequential displays are emerging. Another technique in commercial application implements more than three subpixels, achieving an enhanced color gamut through multiple color primaries. A combination of both color mixture techniques is also possible. It is thus desirable to have a simulation workbench at hand that is flexible enough to adapt to the various possibilities of subpixel design and color sequences, so that the displayed image can be evaluated in advance. The combination of a multiprimary display model, which can emulate a multiprimary display on a standard RGB LC display, and a spatio-temporal model, which describes LC pixel response to arbitrary input signals over time, provides the means for simulating the perceived image of a color-sequential display. This article describes the combined model and also gives simulation results that compare advanced displays to a conventional vertical-stripe RGB LC display.
Large-scale, direct-view TV screens, in particular those based on liquid crystal technology, are beginning to use subpixel structures with more than three subpixels to implement a multi-primary display with up to six primaries. Since their input color space is likely to remain tri-stimulus RGB, we first focus on some fundamental constraints. Among them, we elaborate simplified gamut mapping architectures as well as color filter geometry, transparency, and chromaticity coordinates in color space. Based on a 'display centric' RGB color space tetrahedrization combined with linear interpolation, we describe a simulation framework that enables optimization for up to 7 primaries. We evaluated the performance by mapping the multi-primary design back onto an RGB LC display gamut without building a prototype multi-primary display. As long as we kept the RGB-equivalent output signal within the display gamut, we could analyze all desirable multi-primary configurations with regard to colorimetric variance and visually perceived quality. Not only does our simulation tool enable us to verify a novel concept, it also demonstrates how carefully one needs to design a multiprimary display for LCD TV applications.
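The tetrahedrization-plus-linear-interpolation step can be sketched with the standard tetrahedral 3D-LUT interpolation: sorting the fractional coordinates selects one of the six tetrahedra of a cube cell, and the output is a barycentric blend of four lattice points. This is the generic, well-known technique; the paper's 'display centric' tetrahedrization and 7-primary extension are not reproduced here.

```python
import numpy as np

def tetra_interp(rgb, lut):
    """Tetrahedral interpolation in a 3D LUT of shape (n, n, n, k) for
    an input in [0, 1]^3. Sorting the fractional coordinates in
    descending order picks one of six tetrahedra per cube cell; the
    result is a barycentric blend of its four vertices."""
    n = lut.shape[0]
    p = np.clip(np.asarray(rgb, float), 0.0, 1.0) * (n - 1)
    base = np.minimum(p.astype(int), n - 2)
    f = p - base
    o = np.argsort(-f)                  # descending fractional parts
    e = np.eye(3, dtype=int)
    v = [base,
         base + e[o[0]],
         base + e[o[0]] + e[o[1]],
         base + e[o[0]] + e[o[1]] + e[o[2]]]
    w = [1.0 - f[o[0]], f[o[0]] - f[o[1]], f[o[1]] - f[o[2]], f[o[2]]]
    return sum(wi * lut[tuple(vi)] for wi, vi in zip(w, v))
```

With an identity LUT the blend reproduces the input exactly, which is the sanity check used before optimizing non-trivial multi-primary mappings.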
KEYWORDS: Quantization, Visualization, Signal to noise ratio, Filtering (signal processing), LCDs, Signal processing, Diamond, Image quality, Spatial resolution, Human vision and color perception
Large-scale, direct-view TV screens, in particular those based on liquid crystal technology, are beginning to use structures with more than three subpixels to further reduce the visibility of defective subpixels, increase spatial resolution, and perhaps even implement a multi-primary display with up to six different primaries. This newly available "subpixel" resolution enables us to improve color edge contrast, thereby allowing for shorter viewing distances through a reduction of perceived blur. However, not only noise but also amplitude quantization can lead to undesirable additional visual artifacts along contours. Using our recently introduced method of contour phase synthesis in combination with non-linear color channel processing, we propose a simple method that maximizes color edge contrast while keeping visual artifacts as well as noise below the threshold of visual perception. To demonstrate the advantages of our method, we compare it with some classical contrast enhancement techniques such as cubic spline interpolation and color transient improvement.
KEYWORDS: Image quality, Spiral phase plates, Image resolution, Diamond, Large screens, Visualization, Image enhancement, Zoom lenses, Human vision and color perception, Linear filtering
To display a low-resolution digital image on a large-screen display whose native resolution reaches or exceeds the well-known HDTV standard, it is desirable to rescale the image to full screen resolution. However, many well-known rescaling algorithms, such as cubic spline interpolation or bi-cubic interpolation filters, reach only limited performance in terms of perceived sharpness and leave contour artifacts like jaggedness or haloing, which easily appear above the threshold of visual perception at reduced viewing distances. Since human vision demonstrates non-linear behavior in the detection of these artifacts, we elaborate some simple image operations that adapt better to the image quality constraints than most of the linear methods known to us today. In particular, we propose a nonlinear method that can reduce or hide the above-mentioned contour artifacts in the spatial frequency domain of the display raster and increase perceived sharpness to a very noticeable degree. As a result, low-resolution images look better on large-screen displays due to enhanced local contrast and show fewer annoying artifacts than some of the classical approaches reviewed in this paper.
KEYWORDS: Skin, Computer programming, Visualization, Video processing, Algorithm development, Analog electronics, Video, Visual process modeling, Digital electronics, Integrated circuits
With the advent of digital TV, sophisticated video processing algorithms have been developed to improve the rendering of motion or colors. However, the perceived subjective quality of these new systems sometimes conflicts with the objective, measurable improvement we expect to get. In this presentation, we show examples where algorithms should visually improve the skin tone rendering of decoded pictures under normal conditions, but surprisingly fail when the quality of MPEG encoding drops below a just-noticeable threshold. In particular, we demonstrate that simple objective criteria used for the optimization, such as SAD, PSNR, or histograms, sometimes fail, partly because they are defined on a global scale, ignoring local characteristics of the picture content. We then integrate a simple human visual model to measure potential artifacts with regard to spatial and temporal variations of the objects' characteristics. Tuning some of the model's parameters allows us to correlate the perceived quality with compression metrics of various encoders. We show the evolution of our reference parameters with respect to the compression ratios. Finally, using the output of the model, we can control the parameters of the skin tone algorithm to reach an improvement in overall system quality.
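For reference, the global criteria named above can be written in a few lines. The example also illustrates the abstract's point: being defined globally, PSNR cannot distinguish a single concentrated (and highly visible) error from the same error energy spread thinly over the image.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences -- a global distortion criterion."""
    return float(np.abs(a.astype(float) - b.astype(float)).sum())

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB, also defined globally."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```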
KEYWORDS: Visualization, RGB color model, Visibility, Linear filtering, LCDs, Signal to noise ratio, Nonlinear filtering, Video, Visual process modeling, Plasma
As large-scale direct-view TV screens such as LCD flat panels or plasma displays become more and more affordable, consumers not only expect to buy a ‘big screen’ but also to get ‘great picture quality’. To enjoy the big picture, the viewing distance is significantly reduced. Consequently, more artifacts related to digital compression techniques show above the threshold of visual detectability. The artifact that caught our attention can be noticed within uniform color patches. It presents itself as ‘color blobs’ or color pixel clustering. We analyze the artifact’s color characteristics in RGB and CIELAB color spaces and underline them by re-synthesizing an artificial color patch. To reduce the visibility of the artifact, we elaborate several linear methods, such as low-pass filtering and additive white Gaussian noise, and verify whether they can correct or mask the visible artifacts. From the huge list of nonlinear filter methods we analyze the effect of high-frequency dithering and pixel shuffling, based on the idea that spatial visual masking should dominate signal correction. By applying shuffling, we generate artificial high-frequency components within the uniform color patch. As a result, the artifact characteristics change significantly and its visibility is strongly reduced.
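The pixel-shuffling idea can be sketched as random swaps of neighboring pixels: because shuffling only permutes values, the patch's mean color is preserved while high-frequency components are injected for visual masking. The radius and seed parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def shuffle_patch(patch, radius=1, seed=0):
    """Swap each pixel of a 2-D patch with a randomly chosen neighbour
    within `radius` (toroidal wrap at the borders). A permutation of
    the pixel values: injects high-frequency energy into a slowly
    varying patch without changing its mean."""
    rng = np.random.default_rng(seed)
    h, w = patch.shape[:2]
    out = patch.copy()
    for y in range(h):
        for x in range(w):
            dy = rng.integers(-radius, radius + 1)
            dx = rng.integers(-radius, radius + 1)
            ny, nx = (y + dy) % h, (x + dx) % w
            out[[y, ny], [x, nx]] = out[[ny, y], [nx, x]]   # swap the pair
    return out
```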
During recent years, color reproduction systems for consumer needs have experienced various difficulties. In particular, flat panels and printers could not reach a satisfactory color match: an RGB image stored on a retailer's Internet server did not show the desired colors on a consumer display or printer device.
STMicroelectronics addresses this important color reproduction issue inside its advanced display engines using novel algorithms targeted at low-cost consumer flat panels. Using a novel RGB color space transformation, which combines a gamma-correction Look-Up-Table, tetrahedrization, and linear interpolation, we satisfy market demands.