This PDF file contains the front matter associated with SPIE
Proceedings Volume 6808, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Image Quality Standards for Capture, Print, and Display
In September 2000, INCITS W1 (the U.S. representative of ISO/IEC JTC1/SC28, the standardization committee for office equipment) was chartered to develop an appearance-based image quality standard [1, 2]. The resulting W1.1 project is based on a proposal [3] that perceived image quality can be described by a small set of broad-based attributes. There are currently six ad hoc teams, each working towards the development of standards for evaluation of perceptual image quality of color printers for one or more of these image quality attributes. This paper summarizes the work in progress of the teams addressing the attributes of Macro-Uniformity, Colour Rendition, Gloss & Gloss Uniformity, Text & Line Quality, and Effective Resolution.
A flatbed reflection scanner is a tempting device to use as a surrogate for a microdensitometer in the evaluation of print image quality. Since reflection scanners were never designed with this purpose in mind, many concerns exist regarding their usefulness as a microdensitometer surrogate. This paper addresses the scan-uniformity concerns that must be resolved in order to qualify a reflection scanner for use in print image quality evaluation.
Edition 2 of ISO 12233, Resolution and Spatial Frequency Response (SFR) for Electronic Still Picture Imaging, is likely to offer a choice of techniques for determining the spatial resolution of digital cameras that differs from the initial standard. These choices include 1) the existing slanted-edge gradient SFR protocols, but with low-contrast features, 2) a polar-coordinate sine-wave SFR technique using a Siemens star element, and 3) visual resolution threshold criteria using continuous linear spatial frequency bar-pattern features. A comparison of these methods will be provided. To establish the level of consistency between the results of these methods, theoretical and laboratory experiments were performed by members of the ISO TC42/WG18 committee. Test captures were performed on several consumer and SLR digital cameras using the on-board image processing pipelines. All captures were done in a single session using the same lighting conditions and camera operator. Generally, there was good agreement between methods, albeit with some notable differences. Speculation on the reasons for these differences, and on how they can be diagnostic in digital camera evaluation, will be offered.
One of the first ISO digital camera standards to address image microstructure was ISO 12233, which introduced the spatial frequency response (SFR), based on the analysis of edge features in digital images. The SFR, whether derived from edges or periodic signals, describes the capture of image detail as a function of spatial frequency. Often during camera testing, however, there is interest in distilling SFR results down to a single value that can be compared against acceptance tolerances. As a measure of limiting resolution, it has been suggested that the frequency at which the SFR falls to, e.g., 10% can be used. We use this limiting resolution to introduce a sampling efficiency measure being considered under the current ISO 12233 standard revision effort. The measure is the ratio of the limiting-resolution frequency to that implied by the image (sensor) sampling alone. The rationale and details of this measure are described, as are example measurements. One-dimensional sampling efficiency calculations for multiple directions are included in a two-dimensional analysis.
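As a rough illustration of the sampling-efficiency idea described above, the sketch below (assuming an SFR curve is already available as frequency and response arrays) finds a 10% limiting-resolution frequency and divides it by the half-sampling frequency of 0.5 cycles/pixel implied by the pixel pitch alone; the function names and the 10% threshold are illustrative, not the standard's exact procedure.

```python
import numpy as np

def sampling_efficiency(freq_cy_per_px, sfr, threshold=0.10):
    """Ratio of the limiting-resolution frequency (where the SFR first
    falls to `threshold`) to the Nyquist frequency of 0.5 cycles/pixel.

    freq_cy_per_px : 1-D array of spatial frequencies in cycles/pixel
    sfr            : SFR values at those frequencies (1.0 at DC)
    """
    below = np.where(sfr <= threshold)[0]
    if below.size == 0:                  # SFR never drops to threshold:
        return 1.0                       # resolution is limited by sampling alone
    i = below[0]
    if i == 0:
        return 0.0                       # degenerate case: SFR already below threshold at DC
    # linear interpolation between the last point above and first point below threshold
    f_limit = np.interp(threshold, [sfr[i], sfr[i - 1]],
                        [freq_cy_per_px[i], freq_cy_per_px[i - 1]])
    return min(f_limit / 0.5, 1.0)

# Example: a Gaussian-like SFR that reaches 10% near 0.35 cycles/pixel
f = np.linspace(0.0, 0.5, 51)
sfr = np.exp(-(f / 0.23) ** 2)
print(round(sampling_efficiency(f, sfr), 2))   # ~0.7
```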
Image Quality Attributes Characterization and Measurement: Printer
We explore two recent methods for measuring the modulation transfer function (MTF) of a printing system [1, 2]. We investigate the dependency on amplitude when using the sinusoidal patches of the method proposed in [1] and show that for amplitudes that are too small the MTF measurement is not trustworthy. For the method proposed in [2] we discuss the underlying theory and, in particular, the use of a significance test for a statistical analysis. Finally, we compare both methods with respect to our application: the processing and printing of photographic images.
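A rough sketch of the sinusoidal-patch idea (not the exact procedure of either referenced method): the MTF at each printed frequency can be estimated as the ratio of the output modulation, measured from a scanned reflectance trace, to the modulation of the input sinusoid. The function names and parameters below are illustrative assumptions.

```python
import numpy as np

def modulation(trace):
    """Michelson modulation (max - min) / (max + min) of a 1-D reflectance trace."""
    hi, lo = np.max(trace), np.min(trace)
    return (hi - lo) / (hi + lo)

def printer_mtf(input_amplitude, mean_level, scanned_patches):
    """Estimate MTF values for a set of printed sinusoidal patches.

    input_amplitude : amplitude of the printed input sinusoid
    mean_level      : mean level of the input sinusoid (same units)
    scanned_patches : dict {frequency: 1-D scanned reflectance trace}
    """
    m_in = input_amplitude / mean_level   # modulation of the input signal
    # Note: for very small input amplitudes the scanned modulation is dominated
    # by noise, so the estimate becomes unreliable (the concern raised above).
    return {f: modulation(trace) / m_in for f, trace in scanned_patches.items()}
```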
Printer resolution is an important attribute for determining print quality, and it has frequently been equated with hardware optical resolution. However, the spatial addressability of hardcopy is not directly related to optical resolution, because it is affected by the printing mechanism, the media, and software data processing such as resolution enhancement techniques (RET). The international standardization body ISO/IEC SC28 is addressing this issue and is working to develop a new metric for measuring this effective resolution. As part of that development process, this paper proposes a candidate metric for measuring printer resolution. The slanted-edge method has been used to evaluate image sharpness for scanners and digital still cameras; in this paper, it is applied to monochrome laser printers. A test chart is modified to reduce the effect of halftone patterns. Using a flatbed scanner, the spatial frequency response (SFR) is measured and modeled with a spline function. The frequency corresponding to an SFR of 0.1 is used as the metric for printer resolution. The stability of the metric is investigated in five separate experiments: (1) page-to-page variations, (2) different ROI locations, (3) different ROI sizes, (4) variations of toner density, and (5) correlation with visual quality. The 0.1 SFR frequencies of ten printers are analyzed. Experimental results show a strong correlation between the proposed metric and perceptual quality.
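A minimal sketch of the "0.1 SFR frequency" step, assuming the SFR has already been measured at discrete frequencies: fit a smoothing spline and locate the frequency where the fitted curve first falls to 0.1. The function name and smoothing parameter are illustrative, not those of the proposed metric.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def freq_at_sfr_01(freq, sfr, target=0.1):
    """Frequency at which a spline fit to the measured SFR first falls to `target`.

    freq : 1-D array of spatial frequencies (strictly increasing)
    sfr  : measured SFR values at those frequencies
    """
    spline = UnivariateSpline(freq, sfr, s=1e-4)     # light smoothing of measurement noise
    f_fine = np.linspace(freq[0], freq[-1], 2000)
    s_fine = spline(f_fine)
    below = np.where(s_fine <= target)[0]
    return f_fine[below[0]] if below.size else None  # None: SFR never reaches 0.1
```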
Mottle is a common defect in printing, and mottle evaluation is crucial in image quality assessment and system diagnosis. In this paper, we present a new automatic mottle estimation method that improves on existing techniques in two respects. First, a modified mottle noise frequency range is proposed, which further separates the banding and streak spectra from the mottle spectrum. Second, a robust estimation algorithm is introduced. It is less sensitive to outliers that may appear in the measurement. These outliers include other defects within the mottle frequency range, such as spots, as well as defects outside the mottle frequency range that are strong enough that they cannot be completely eliminated by normal spatial filtering.
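One way to make a band-limited noise estimate robust to outliers, in the spirit of the second contribution above, is to replace the standard deviation with a median-absolute-deviation (MAD) estimate computed over the mottle frequency band. The band limits and scale factor below are illustrative assumptions, not the authors' values.

```python
import numpy as np

def robust_mottle_index(gray_image, dpi, f_lo=0.1, f_hi=1.0):
    """Robust estimate of fluctuation in an assumed mottle band (cycles/mm).

    Band-pass the image in the Fourier domain, then use the median absolute
    deviation (MAD) instead of the standard deviation so isolated defects
    (spots, residual streaks) do not dominate the estimate.
    """
    h, w = gray_image.shape
    fy = np.fft.fftfreq(h, d=25.4 / dpi)[:, None]   # cycles per mm (vertical)
    fx = np.fft.fftfreq(w, d=25.4 / dpi)[None, :]   # cycles per mm (horizontal)
    radius = np.hypot(fx, fy)
    mask = (radius >= f_lo) & (radius <= f_hi)      # keep only the mottle band
    spectrum = np.fft.fft2(gray_image - gray_image.mean())
    band = np.real(np.fft.ifft2(spectrum * mask))
    mad = np.median(np.abs(band - np.median(band)))
    return 1.4826 * mad                              # MAD scaled to match sigma for Gaussian noise
```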
In this paper, we propose new techniques for detecting and quantifying print defects. In our previous work, we
introduced a scanner-based print quality system to characterize directional print defects, such as banding, jitter,
and streaking. We extend our previous print quality work in two ways. First, we introduce techniques for detecting
2-D isotropic, mottled print defects such as grain and mottle. Wavelet pre-filtering is used to limit the defect's
size or frequency range. Then we analyze the L* variation in the wavelet-processed images. The methods used
to quantify grain and mottle are similar to ISO/IEC 13660 techniques. The second part of this paper provides
techniques for detecting and quantifying low-frequency directional defects, which we call left-to-right and top-to-bottom L* variation. Since these defects extend less than two cycles across the page, and probably less than
a complete cycle, we fit a 4th-degree polynomial to the defect profile. To measure the strength of the defect,
we use variational analysis of the fitted polynomial. Experimental results on 10 printers and 100 print samples
showed an average correlation for isotropic defects of 0.85 between the proposed measures and experts' visual
evaluation, and 0.97 for low-frequency defects.
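A minimal sketch of the low-frequency directional measure described above, under the assumption that the defect profile is the mean L* across the page as a function of position: fit a 4th-degree polynomial and take the peak-to-valley range of the fitted curve as the strength measure (the authors' variational measure may differ in detail).

```python
import numpy as np

def directional_defect_strength(lstar_image, direction="left_to_right", degree=4):
    """Strength of a slow left-to-right or top-to-bottom L* variation.

    The profile is the mean L* along the chosen direction; a low-order
    polynomial captures the slow variation, and its peak-to-valley range
    over the page serves as the defect magnitude (delta L*).
    """
    if direction == "left_to_right":
        profile = lstar_image.mean(axis=0)   # average each column -> profile vs horizontal position
    else:
        profile = lstar_image.mean(axis=1)   # average each row -> profile vs vertical position
    x = np.linspace(0.0, 1.0, profile.size)  # normalized page position
    coeffs = np.polyfit(x, profile, degree)
    fitted = np.polyval(coeffs, x)
    return fitted.max() - fitted.min()
```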
Fine-pitch banding is one of the most unwanted artifacts in laser electrophotographic (EP) printers. It is
perceived as a quasiperiodic fluctuation in the process direction. Therefore, it is essential for printer vendors to
know how banding is perceived by humans in order to improve print quality. Monochrome banding has been
analyzed and assessed by many researchers, but there is no literature that deals with the banding of color laser
printers as measured from actual prints. The study of color banding is complicated by the fact that the color
banding signal is physically defined in a three-dimensional color space, while banding perception is described
in a one-dimensional sense such as more banding or less banding. In addition, the color banding signal arises
from the independent contributions of the four primary colorant banding signals. It is not known how these four
distinct signals combine to give rise to the perception of color banding. In this paper, we develop a methodology
to assess the banding visibility of the primary colorant cyan based on human visual perception. This is our
first step toward studying the more general problem of color banding in combinations of two or more colorants.
In our method, we print and scan a cyan test patch and extract the banding profile as a one-dimensional signal so that we can freely adjust the intensity of the banding. Thereafter, by exploiting the pulse-width modulation capability of the laser printer, the extracted banding profile is used to modulate a pattern
consisting of periodic lines oriented in the process direction, to generate extrinsic banding. This avoids the
effect of the halftoning algorithm on the banding. Furthermore, to conduct various banding assessments more
efficiently, we also develop a softcopy environment that emulates a hardcopy image on a calibrated monitor, which
requires highly accurate device calibration throughout the whole system. To achieve the same color appearance
as the hardcopy, we perform haploscopic matching experiments that allow each eye to independently adapt to
different viewing conditions; and we find an appearance mapping function in the adapted XYZ space. Finally, to
validate the accuracy of the softcopy environment, we conduct a banding matching experiment at three different
banding levels by the memory matching method, and confirm that our softcopy environment produces the same
banding perception as the hardcopy. In addition, we perform two more separate psychophysical experiments
to measure the differential threshold of the intrinsic banding in both the hardcopy and softcopy environments,
and confirm that the two thresholds are statistically identical. The results show that with our target printer,
human subjects can see a just noticeable difference with a 9% reduction in the banding magnitude for the cyan
colorant.
Print mottle is one of the most significant defects in modern offset printing, influencing overall print quality. Mottling can be defined as undesired unevenness in perceived print density. Previous research in the field has focused on designing and improving perception models for evaluating print mottle, and mottle has traditionally been evaluated by estimating the reflectance variation in the print. In our work, we present an approach for estimating the mottling effect prior to printing. Our experiments included imaging non-printed media under various lighting conditions, printing the samples with sheet-fed offset printing, and imaging the prints afterwards. For the preprint examinations we used a set of preprint images, and for the outcome testing we used high-resolution scans. Of the papers used in the experiment, only uncoated mechanical speciality paper showed a good chance of print mottle prediction; the other tested paper types had a low correlation between non-printed and printed images. The results allow the amount of mottling on the final print to be predicted from preprint area images for a given paper type. The current experimental settings suited uncoated paper well, but other settings need to be tested for the coated samples. The results show that the estimation can be made only on a coarse scale, and that better results will require extra parameters, i.e., paper type, coating, and the printing process in question.
Image Quality Attributes Characterization and Measurement: Capture and Display
In recent years, several methods for the evaluation or quantification of video image quality have been studied, such as MPRT (Moving Picture Response Time) for quantifying the motion blur that occurs on hold-type displays. These methods and criteria need to be improved to account for human visual characteristics, especially the anisotropy and spatio-temporal dependence of contrast sensitivity. In this study, we quantify the motion blur of a display by comparing it with a static blurred edge. We examine the influence of edge-presentation conditions, such as the speed and direction of edge motion, on perceived blurriness. According to the results of the assessment, the anisotropy of the display had a significant influence on the perception of motion blur. This result suggests that multidirectional measurement is required to improve motion blur criteria.
It is important to estimate the noise of a digital image quantitatively and efficiently for many applications, such as noise removal, compression, feature extraction, pattern recognition, and image quality assessment. For these applications, it is necessary to estimate the noise accurately from a single image. Ce et al. proposed a method that uses Bayesian MAP estimation for noise. In this method, the noise level function (NLF), which gives the standard deviation of the noise as a function of image intensity, was estimated from the input image itself. Many NLFs were generated by computer simulation to construct a priori information for the Bayesian MAP estimator. This a priori information was effective for accurate noise estimation but not sufficient for practical applications, since it did not reflect the characteristics of an individual camera, which vary with exposure and shutter speed.
In this paper, therefore, we propose a new method of constructing a priori information for a specific camera in order to improve the accuracy of noise estimation. To construct the a priori noise information, NLFs were measured and calculated from images captured under various conditions. We compared the accuracy of noise estimation between the proposed method and Ce's model. The results showed that our model improved the accuracy of noise estimation.
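A simplified sketch of building camera-specific NLF data, assuming flat (uniform) patches have been captured under each condition: record each patch's mean intensity and noise standard deviation, then interpolate sigma as a function of intensity. The patch-based construction and names are illustrative; the paper's Bayesian prior construction is more involved.

```python
import numpy as np

def measure_nlf(flat_patches):
    """Estimate a noise level function from a list of uniform-patch images.

    Returns (intensities, sigmas) sorted by intensity, where sigma is the
    standard deviation of the noise at each mean intensity level.
    """
    means = np.array([p.mean() for p in flat_patches])
    sigmas = np.array([p.std(ddof=1) for p in flat_patches])
    order = np.argsort(means)
    return means[order], sigmas[order]

def noise_sigma(intensity, nlf):
    """Interpolate the measured NLF at an arbitrary intensity value."""
    means, sigmas = nlf
    return np.interp(intensity, means, sigmas)
```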
Can images from professional digital SLR cameras be made equivalent in color using simple colorimetric
characterization? Two cameras were characterized; the characterizations were applied to a variety of images, and
the results were evaluated both colorimetrically and psychophysically. A Nikon D2x and a Canon 5D were used. The
colorimetric analyses indicated that accurate reproductions were obtained. The median CIELAB color differences
between the measured ColorChecker SG and the reproduced image were 4.0 and 6.1 for the Canon (chart and spectral
respectively) and 5.9 and 6.9 for the Nikon. The median differences between cameras were 2.8 and 3.4 for the chart and
spectral characterizations, near the expected threshold for reliable image difference perception. Eight scenes were
evaluated psychophysically in three forced-choice experiments in which a reference image from one of the cameras was
shown to observers in comparison with a pair of images, one from each camera. The three experiments were (1) a
comparison of the two cameras with the chart-based characterizations, (2) a comparison with the spectral
characterizations, and (3) a comparison of chart vs. spectral characterization within and across cameras. The results for
the three experiments were 64%, 64%, and 55% correct, respectively. Careful and simple colorimetric characterization of
digital SLR cameras can result in visually equivalent color reproduction.
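A minimal sketch of a simple chart-based colorimetric characterization, assuming linearized camera RGB values and measured XYZ values are available for the ColorChecker SG patches: fit a 3x3 matrix by least squares and apply it to image data. This illustrates the general idea only; the paper's chart and spectral characterizations are more elaborate.

```python
import numpy as np

def fit_characterization_matrix(camera_rgb, measured_xyz):
    """Least-squares 3x3 matrix M such that XYZ ~= RGB @ M.

    camera_rgb   : (N, 3) linear camera responses for the chart patches
    measured_xyz : (N, 3) measured tristimulus values for the same patches
    """
    M, *_ = np.linalg.lstsq(camera_rgb, measured_xyz, rcond=None)
    return M

def apply_characterization(rgb_pixels, M):
    """Map linear camera RGB pixels (..., 3) to estimated XYZ values."""
    return rgb_pixels @ M
```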
The paper focuses on the implementation of a modular color image difference model, as described in [1], with the aim of predicting visual difference magnitudes between pairs of uncompressed images and images compressed using lossy JPEG and JPEG 2000. The work involved programming each pre-processing step, processing each image file, and deriving the error map, which was further reduced to a single metric. Three contrast sensitivity function implementations were tested; a Laplacian filter was implemented for spatial localization, and the contrast-masking-based local contrast enhancement method suggested by Moroney was used for local contrast detection. The error map was derived using the CIEDE2000 color difference formula on a pixel-by-pixel basis. A final single value was obtained by calculating the median value of the error map. This metric was then tested against relative quality differences between original and compressed images, derived from psychophysical investigations on the same dataset. The outcomes revealed a grouping of images, which was attributed to correlations between the busyness of the test scenes (defined as an image property indicating the presence or absence of high frequencies) and the different clustered results. In conclusion, a method for accounting for the amount of detail in test scenes is required for a more accurate prediction of image quality.
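A short sketch of the error-map pooling step only, using scikit-image as an assumed dependency for the colour conversions (this is not the paper's own implementation, and the CSF filtering and local-contrast steps of the full model are omitted): compute a per-pixel CIEDE2000 difference and pool it with the median, as described above.

```python
import numpy as np
from skimage import color

def median_ciede2000(reference_rgb, test_rgb):
    """Median CIEDE2000 difference between two images given as float RGB in [0, 1]."""
    lab_ref = color.rgb2lab(reference_rgb)
    lab_test = color.rgb2lab(test_rgb)
    error_map = color.deltaE_ciede2000(lab_ref, lab_test)  # per-pixel colour difference
    return float(np.median(error_map))                     # median pooling to a single value
```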
The mobile imaging market is rapidly developing and has outgrown the traditional imaging market. This market is dominated by CMOS sensors, with pixels getting smaller and smaller. As pixel size is reduced, sensitivity is lowered and must be compensated for by longer exposure times. In the mobile market, however, this amounts to increased motion blur. We characterize hand motion in a typical shooting scenario. These data can be used to create an evaluation procedure for image stabilization solutions, and we present one such procedure.
Quality assessment methods are classified into three types depending on the availability of the reference image or video:
full-reference (FR), reduced-reference (RR), or no-reference (NR). This paper proposes efficient RR visual quality
metrics, called motion vector histogram based quality metrics (MVHQMs). In assessing a video, the overall impression tends to be regarded as its visual quality. To compare two motion vectors
(MVs) extracted from reference and distorted videos, we define the one-dimensional (horizontal and vertical) MV
histograms as features, which are computed by counting the number of occurrences of MVs over all frames of a video.
For testing the similarity between MV histograms, two different MVHQMs using the histogram intersection and
histogram difference are proposed. We evaluate the effectiveness of the two proposed MVHQMs by comparing their
results with differential mean opinion score (DMOS) data for 46 video clips of common intermediate format
(CIF)/quarter CIF (QCIF) that are coded under varying bit rates/frame rates with H.263. We compare the performance of
the proposed metrics and conventional quality measures. Experimental results with various test video sequences show that the proposed MVHQMs outperform the conventional methods in aspects such as accuracy, stability, and data size.
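A minimal sketch of the histogram-intersection variant, under the assumption that motion vectors have already been extracted from both videos: build normalized one-dimensional histograms of the horizontal and vertical MV components pooled over all frames, and compare them by histogram intersection. The bin limits and equal weighting of the two components are illustrative assumptions.

```python
import numpy as np

def mv_histograms(motion_vectors, bins=np.arange(-32.5, 33.5, 1.0)):
    """Normalized horizontal/vertical MV histograms over all frames.

    motion_vectors : (N, 2) array of (dx, dy) vectors pooled from every frame
    """
    hx, _ = np.histogram(motion_vectors[:, 0], bins=bins, density=True)
    hy, _ = np.histogram(motion_vectors[:, 1], bins=bins, density=True)
    return hx, hy

def mvhqm_intersection(mv_reference, mv_distorted):
    """Similarity in [0, 1]; 1 means identical motion-vector statistics."""
    rx, ry = mv_histograms(mv_reference)
    dx, dy = mv_histograms(mv_distorted)
    bin_width = 1.0
    inter_x = np.sum(np.minimum(rx, dx)) * bin_width
    inter_y = np.sum(np.minimum(ry, dy)) * bin_width
    return 0.5 * (inter_x + inter_y)
```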
The study of the Human Visual System (HVS) is of great interest for quantifying the quality of a picture, predicting which information will be perceived in it, applying suitably adapted tools, and so on. The Contrast Sensitivity Function (CSF) is one of the major ways to integrate HVS properties into an imaging system. It characterizes the sensitivity of the visual system to spatial and temporal frequencies and predicts this behavior for the three channels. Common constructions of the CSF have been performed by estimating the detection threshold beyond which it is possible to perceive a stimulus. In this work, we developed a novel approach for spatio-chromatic CSF construction based on matching experiments to estimate the perception threshold. It consists of matching the contrast of a test stimulus with that of a reference one. The results are quite different from those of the standard approaches, as the chromatic CSFs have band-pass rather than low-pass behavior. The obtained model has been integrated into a perceptual color difference metric inspired by s-CIELAB. The metric is then evaluated with both objective and subjective procedures.
The method of paired comparisons is often used in image quality evaluations. Psychometric scale values for quality
judgments are modeled using Thurstone's Law of Comparative Judgment in which distance in a psychometric scale
space is a function of the probability of preference. The transformation from psychometric space to probability is a
cumulative probability distribution.
The major drawback of a complete paired comparison experiment is that every treatment is compared to every other, so the number of comparisons grows quadratically. We ameliorate this difficulty by performing paired comparisons in two stages: first precisely estimating anchors in the psychometric scale space, spaced to cover the range of scale values, and then comparing treatments against those anchors.
In this model, we employ a generalized linear model where the regression equation has a constant offset vector
determined by the anchors. The result of this formulation is a straightforward statistical model easily analyzed using
any modern statistics package. This enables model fitting and diagnostics.
This method was applied to overall preference evaluations of color pictorial hardcopy images. The results were
found to be compatible with complete paired comparison experiments, but with significantly less effort.
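For context, a minimal Thurstone Case V scaling for a complete paired-comparison experiment is sketched below; the two-stage anchored GLM described above is a refinement of this baseline, and the function names are illustrative. Preference proportions are converted to z-scores via the inverse cumulative normal and averaged per treatment.

```python
import numpy as np
from scipy.stats import norm

def thurstone_case_v(preference_counts):
    """Psychometric scale values from a matrix of paired-comparison counts.

    preference_counts[i, j] = number of times treatment i was preferred over j.
    Returns zero-mean scale values (the origin of the scale is arbitrary).
    """
    counts = np.asarray(preference_counts, dtype=float)
    totals = counts + counts.T
    with np.errstate(divide="ignore", invalid="ignore"):
        p = np.where(totals > 0, counts / totals, 0.5)  # preference proportions
    p = np.clip(p, 0.01, 0.99)                          # avoid infinite z-scores
    z = norm.ppf(p)                                     # distances in psychometric scale space
    np.fill_diagonal(z, 0.0)
    scale = z.mean(axis=1)
    return scale - scale.mean()
```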
In this paper we discuss the application of online surveys to the field of image quality. We examine an online survey
technique that measures the subject's response time, in particular the time it takes for the subject to identify which of two
otherwise identical images contains an image defect. The efficiency and accuracy of the results will be discussed for a
case where subjects are asked to identify defects known as graininess and mottle in photographs with varying amounts of "quiet regions".
Due to the rising performance of digital printing, image-based applications are gaining popularity. This creates a need for specifying the quality potential of printers and materials in more detail than before. Both production and end-use standpoints are relevant. This paper gives an overview of an on-going study whose goal is to determine a framework model for the visual quality potential of paper in color image printing. The approach is top-down, and it is founded on the concept of a layered network model. The model and its subjective, objective, and instrumental measurement layers are discussed. Some preliminary findings are presented. These are based on data from samples obtained by printing natural image contents and simple test fields on a wide range of paper grades by ink-jet in a color-managed process. Color profiles were paper specific. Visual mean opinion score data from human observers could be accounted for by two or three dimensions, which are related in the first place to brightness and color brightness. Image content has a marked effect on the dimensions, which underlines the challenges in designing the test images.
This study presents a methodology for forming contextually valid scales for subjective video quality measurement. Any single quality value, e.g., a Mean Opinion Score (MOS), can have multiple underlying causes; hence this kind of quality measure is not sufficient, for example, for describing the performance of a video capture device. By applying the Interpretation Based Quality (IBQ) method as a qualitative/quantitative approach, we collected attributes that are familiar to the end user and extracted directly from the observers' comments. Based on these findings, we formed contextually valid assessment scales from the typically used quality attributes. A large set of data was collected from 138 observers to generate the video quality vocabulary. Video material was shot by three types of cameras: digital video cameras (4), digital still cameras (9), and mobile phone cameras (9). From the quality vocabulary, we formed 8 unipolar 11-point scales to gain better insight into video quality. Viewing conditions were adjusted to meet the ITU-T Rec. P.910 requirements. It is suggested that the applied qualitative/quantitative approach is especially efficient for finding image quality differences in video material where the quality variations are multidimensional in nature, and especially when image quality is rather high.
The subjective quality of an image is a non-linear product of several simultaneously contributing subjective factors, such as experienced naturalness, colorfulness, lightness, and clarity. We have studied subjective image quality using a hybrid qualitative/quantitative method in order to disclose the attributes relevant to experienced image quality. We describe our approach to mapping the image quality attribute space in three cases: a still studio image, video clips of a talking head and moving objects, and image processing pipes applied to 15 still image contents. Naive observers participated in three image quality research contexts in which they were asked to freely and spontaneously describe the quality of the presented test images. Standard viewing conditions were used. The data show which attributes are most relevant for each test context, and how they differentiate between the selected image contents and processing systems. The role of non-HVS-based image quality analysis is discussed.
In psychological research, it is common to perform investigations on the World Wide Web in the form of questionnaires in order to collect data from a large number of participants. By comparison, visual experiments have mainly been performed in the laboratory, where it is possible to use calibrated devices and controlled viewing conditions. Recently, the Web has also been exploited for "uncontrolled" visual experiments, despite the lack of control over image rendering at the client side, on the assumption that the large number of participants involved in a Web investigation "averages out" the parameters that the experiment would need to keep fixed if, following a traditional approach, it were performed under controlled conditions. This paper describes the design and implementation of a Web-based visual experiment management system, which acts as a repository of visual experiments and is designed to facilitate the publishing of online investigations.
The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape, and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations for the test image content, but not for the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.
Images and videos are subject to a wide variety of distortions during acquisition, digitizing, processing, restoration,
compression, storage, transmission and reproduction, any of which may result in degradation in visual quality.
That is why image quality assessment plays a major role in many image processing applications.
Image and video quality metrics can be classified using a number of criteria, such as the type of application domain, the predicted distortion (noise, blur, etc.), and the type of information needed to assess the quality (original
image, distorted image, etc.).
In the literature, the most reliable way of assessing the quality of an image or of a video is subjective evaluation [1],
because human beings are the ultimate receivers in most applications. The subjective quality metric, obtained from a
number of human observers, has been regarded for many years as the most reliable form of quality measurement.
However, this approach is too cumbersome, slow and expensive for most applications [2].
In recent years, therefore, a great effort has been made towards the development of quantitative measures. Objective quality evaluation is automated, can be done in real time, and needs no user interaction. Ideally, however, such a quality
assessment system would perceive and measure image or video impairments just like a human being [3].
Quality assessment is important, and it remains an active and evolving research topic, because it is a central issue in the design, implementation, and performance testing of all such systems [4, 5].
The relevant literature and related work usually present only a state of the art of metrics limited to a specific application domain. The major goal of this paper is to present a wider state of the art of the most widely used
metrics in several application domains such as compression [6], restoration [7], etc.
In this paper, we review the basic concepts and methods in subjective and objective image/video quality assessment
research, and we discuss their performance and drawbacks in each application domain. We show that, while in some domains much work has been done and several metrics have been developed, in other domains much work remains and specific metrics still need to be developed.
To overcome shortcomings of digital images, or to reproduce the grain of traditional silver halide photographs, some photographers add noise (grain) to digital images. In an effort to find a factor of preferable noise, we analyzed how a professional photographer introduces noise into B&W digital images and found two noticeable characteristics: 1) there is more noise in the mid-tones, gradually decreasing in highlights and shadows toward the ends of the tonal range, and 2) the noise histograms in highlights are skewed toward the shadows and vice versa, while they are almost symmetrical in the mid-tones. Next, we examined whether the professional's noise could be reproduced. The symmetrical histograms were approximated by a Gaussian distribution and the skewed ones by a chi-square distribution. The images on which the noise was reproduced were judged by the professional himself to be satisfactory. As the professional said he added the noise so that "it looked like the grain of B&W gelatin silver photographs," we compared the two kinds of noise and found that they have the following in common: 1) more noise in mid-tones but almost none in the brightest highlights and deepest shadows, and 2) asymmetrical histograms in highlights and shadows. We think these common characteristics might be one condition for "good" noise.
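A rough, assumption-laden illustration of noise with the two observed characteristics (not the photographer's actual recipe): the noise amplitude peaks in mid-tones and falls toward the tonal extremes, and the distribution is symmetric (Gaussian) in mid-tones but skewed toward the mid-tones, via a chi-square shape, in highlights and shadows. The tone breakpoints and amplitude scaling are arbitrary choices.

```python
import numpy as np

def add_tone_dependent_grain(image, max_sigma=0.04, dof=4, rng=None):
    """Add grain whose amplitude peaks in mid-tones and whose distribution is
    skewed toward the mid-tones in highlights and shadows.

    image : float array with values in [0, 1]
    """
    rng = np.random.default_rng(rng)
    amp = max_sigma * 4.0 * image * (1.0 - image)   # characteristic 1: more noise in mid-tones

    gauss = rng.standard_normal(image.shape)        # symmetric noise for mid-tones
    chi = rng.chisquare(dof, image.shape)
    skewed = (chi - dof) / np.sqrt(2.0 * dof)       # zero-mean, unit-variance, right-skewed

    # characteristic 2: highlights skew toward shadows (negative), shadows toward highlights (positive)
    noise = np.where(image > 0.7, -skewed, np.where(image < 0.3, skewed, gauss))
    return np.clip(image + amp * noise, 0.0, 1.0)
```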
Most image quality metrics are derived from feature values of specified test charts. However, such test charts can capture only a small portion of the comprehensive image quality performance of imaging systems. Thus, designers of imaging systems need to check every possible type of natural image to verify performance, even if they check every image quality factor with test charts. But it is not clear how many images, and of what types, should be used to verify performance. Meanwhile, a number of studies have shown that the amplitude spectrum of natural images falls inversely with spatial frequency. This paper proposes a new image quality evaluation methodology using a quasi-random noise image that has a 1/f spectrum as a generalized natural image. After the image is processed by image processing operations, its power spectra show reasonable responses to the operations and their parameters. In addition, a metric derived from this image can predict subjective judgments of the spatial reproducibility of imaging systems with a high correlation coefficient. The results suggest that this image can be used to evaluate the comprehensive image quality performance of imaging systems.
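A minimal sketch of generating a quasi-random test image with an approximately 1/f amplitude spectrum, the kind of generalized natural image proposed above; the image size, random-phase construction, and rescaling to [0, 1] are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def one_over_f_image(size=512, rng=None):
    """Gray-scale image whose amplitude spectrum falls roughly as 1/f, scaled to [0, 1]."""
    rng = np.random.default_rng(rng)
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                                # avoid division by zero at DC
    amplitude = 1.0 / radius                          # 1/f amplitude spectrum
    phase = np.exp(2j * np.pi * rng.random((size, size)))   # random phases
    img = np.real(np.fft.ifft2(amplitude * phase))
    img -= img.min()
    return img / img.max()
```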
Quality assessment is a very challenging problem and will remain so, since it is difficult to define universal tools. Subjective assessment is one suitable approach, but it is tedious, time consuming, and requires a normalized viewing room. Objective metrics can be full-reference, reduced-reference, or no-reference. This paper presents a study carried out for the development of a no-reference objective metric dedicated to the quality evaluation of display devices. Initially, a subjective study was devoted to this problem by asking a representative panel (15 male and 15 female; 10 young adults, 10 adults, and 10 seniors) to answer questions regarding their perception of several criteria for quality assessment. These quality factors were hue, saturation, contrast, and texture. The aim was to define the importance of these perceptual criteria in human judgments of quality. Following the study, the factors that impact the quality evaluation of display devices were proposed. The no-reference objective metric was then developed using statistical tools that separate out the important axes. This no-reference metric, based on perceptual criteria and integrating some specificities of the human visual system (HVS), correlates highly with the subjective data.
A quality metric based on a classification process is introduced. The main idea of the proposed method is to avoid the error-pooling step over many factors (in the frequency and spatial domains) that is commonly applied to obtain a final quality score. Instead, a classification process assigns each image to a final quality class with respect to the standard quality scale provided by the ITU. Thus, for each degraded color image, a feature vector is computed that includes several Human Visual System characteristics, such as the contrast masking effect, color correlation, and so on. The selected features are of two kinds: 1) full-reference features and 2) no-reference characteristics. In this way, a machine learning expert that provides a final class number is designed.
Multifunctional devices (MFDs) are increasingly used as a document hub. The MFD is used as a copier, scanner, and printer, and it facilitates digital document distribution and sharing. This imposes new requirements on the design of the data path and its image processing. Various design aspects need to be taken into account, including system performance, features, image quality, and cost price. A good balance is required in order to develop a competitive MFD. A modular datapath architecture is presented that supports all the envisaged use cases. Besides copying, colour scanning is becoming an important use case of a modern MFD. The copy-path use case is described, and it is shown how colour scanning can also be supported with minimal adaptation to the architecture. The key idea is to convert the scanner data to an opponent colour space representation at the beginning of the image processing pipeline. Sub-sampling the chromatic information saves scarce hardware resources without significant perceptual loss of quality. In particular, we have shown that functional FPGA modules from the copy application can also be used for the scan-to-file application. This makes the presented approach very cost-effective while complying with market-conformant image quality standards.
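A small sketch of the key idea, with an assumed RGB-to-opponent transform (one luminance channel plus two chrominance channels, similar in spirit to YCbCr) and 2x2 chroma subsampling; the actual colour transform and FPGA datapath used in the paper are not specified here.

```python
import numpy as np

# Assumed opponent transform: luminance plus red-green and blue-yellow channels.
RGB_TO_OPPONENT = np.array([
    [0.299, 0.587, 0.114],   # luminance
    [0.5,  -0.5,   0.0],     # red-green
    [0.25,  0.25, -0.5],     # blue-yellow
])

def scan_to_opponent_subsampled(rgb_scan):
    """Convert scanner RGB (H, W, 3) to an opponent representation and
    subsample the two chromatic channels by 2 in each direction, saving
    bandwidth with little perceptual loss."""
    opponent = rgb_scan @ RGB_TO_OPPONENT.T
    luma = opponent[..., 0]                 # full-resolution luminance
    chroma = opponent[::2, ::2, 1:]         # 2x2 subsampled chrominance
    return luma, chroma
```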
Ideally, a quality assessment system would perceive and measure image or video impairments just like a human
being. In reality, however, objective quality metrics do not necessarily correlate well with perceived quality [1]. Moreover, some measures assume that there exists a reference in the form of an "original" to compare against, which prevents their use in the digital restoration field, where there is often no reference available. That is why subjective evaluation has been the most widely used and most effective approach up to now. However, subjective assessment is expensive and time consuming, and therefore does not meet economic requirements [2, 3].
Thus, reliable automatic methods for visual quality assessment are needed in the field of digital film restoration.
The ACE method, for Automatic Color Equalization [4, 6], is an algorithm for the unsupervised enhancement of digital images. It is based on a new computational approach that tries to model the perceptual response of our visual system by merging the Gray World and White Patch equalization mechanisms in a global and local way. Like our visual system, ACE is able to adapt to widely varying lighting conditions and to extract visual information from the environment effectively. Moreover, ACE can be run in an unsupervised manner; hence it is very useful as a digital film restoration tool, since no a priori information is available.
In this paper we deepen the investigation of using the ACE algorithm as a basis for reference-free image quality evaluation. This new metric, called DAF for Differential ACE Filtering [7], is an objective quality measure that can be used in several image restoration and image quality assessment systems. We compare, on different image databases, the results obtained with DAF against subjective image quality assessments (Mean Opinion Score, MOS, as a measure of perceived image quality), and we also study the correlation between the objective measure and MOS.
In our experiments, we used the Single Stimulus Continuous Quality Scale (SSCQS) method for the first image test set and the Double Stimulus Continuous Quality Scale (DSCQS) method for the second. The users, who were non-experts, were asked to identify their preferred image (between the original and the ACE-filtered image) according to contrast, naturalness, colorfulness, quality, chromatic diversity, and overall subjective preference. Tests and results are presented.
Image evaluation and quality measurements are fundamental components in all image processing applications and techniques. Recently, a no-reference perceptual blur metric (PBM) was suggested for the numerical evaluation of blur effects. The method is based on computing the intensity variations between neighboring pixels of the input image before and after low-pass filtering, and it has been shown to provide a very good correlation between the quantitative measure it produces and visual evaluation of perceptual image quality. However, this quantitative image blurriness measure has no intuitive meaning and no association with conventionally accepted imaging system design parameters such as, for instance, image bandwidth.
In this paper, we suggest an extended modification of the PBM method that provides such a direct association and allows an image to be evaluated in terms of its effective bandwidth. To this end, we apply the PBM method to a series of test pseudo-random images with uniform spectra of different spreads within the image base-band defined by the image sampling rate, and we map the blur measurements obtained for this set of test images to corresponding measures of their bandwidths. In this way we obtain a new image feature, which evaluates an image in terms of its effective bandwidth measured as a fraction, from 0 to 1, of the image base-band. In addition, we show that the effective bandwidth measure provides a good estimate of the potential JPEG encoder compression rate, which allows one to choose the best compression quality for a requested compressed image size.
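A simplified sketch of the underlying perceptual blur measure, in the spirit of the no-reference PBM cited above but not its exact formulation: blur the image with a low-pass filter and compare the loss of neighbouring-pixel variation before and after filtering. The filter size and normalization below are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def perceptual_blur(gray, kernel=9):
    """No-reference blur estimate in [0, 1]; higher means blurrier.

    An already-blurry image changes little when blurred again, so the loss of
    horizontal/vertical neighbour variation is small relative to the original.
    """
    gray = gray.astype(float)
    blurred = uniform_filter(gray, size=kernel)

    def variation(img):
        dh = np.abs(np.diff(img, axis=1)).sum()   # horizontal neighbour differences
        dv = np.abs(np.diff(img, axis=0)).sum()   # vertical neighbour differences
        return dh + dv

    v_orig = variation(gray)
    v_blur = variation(blurred)
    return 1.0 - max(v_orig - v_blur, 0.0) / max(v_orig, 1e-12)
```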
Colour information is not faithfully maintained by a CCTV imaging chain. Since colour can play an important role in identifying objects, it is beneficial to be able to account accurately for changes to colour introduced by components in the
chain. With this information it will be possible for law enforcement agencies and others to work back along the imaging
chain to extract accurate colour information from CCTV recordings.
A typical CCTV system has an imaging chain that may consist of scene, camera, compression, recording media and
display. The response of each of these stages to colour scene information was characterised by measuring its response to
a known input. The main variables that affect colour within a scene are illumination and the colour, orientation and
texture of objects. The effect of illumination on the colour appearance of a variety of test targets was tested using
laboratory-based lighting, street lighting, car headlights and artificial daylight. A range of typical cameras used in CCTV
applications, common compression schemes and representative displays were also characterised.
The goal of our study is to clarify the realistic viewing conditions surrounding flat panel display television (FPD TV), and the relationship between preferred luminance and TV screen size, as flat panel display TV becomes increasingly popular. We have conducted an investigation of TV viewing conditions in homes. Our study of viewing conditions indicates that the viewing distance at home is at minimum 3 times the display height, with an average of 2.5 m, and that the mean screen illuminance is 100-300 lx. We have also conducted an investigation of the relationship between preferred luminance and TV screen size using LCD TVs. Our study indicates that the most preferred luminance depends on visual angle (screen size and viewing distance). At a fixed viewing distance, the most preferred luminance depends on TV screen size: as the screen gets larger, the most preferred luminance becomes lower. In the home, the most preferred luminance at a viewing distance of 3 times the display height (3H) and a screen illuminance of 180 lx is approximately 240 cd/m2. As the screen illuminance decreases, the most preferred luminance also decreases, while still depending on visual angle.
Display image quality, image reproducibility, and compliance with standards are becoming more and more important. It is known that LCDs suffer from viewing angle dependency, meaning that the characteristics of the LCD change with viewing angle.
Display calibration and corresponding quality checks typically take place for on-axis viewing. However, users typically use their display under a rather broad range of viewing angles. Several studies have shown that when calibration is done for on-axis viewing, the display does not accurately comply with the standard when viewed off-axis.
A possible solution is tracking the position of the user in real time and adapting the configuration/characteristics of the display accordingly. In this way the user always perceives the display as calibrated, independently of the viewing angle. However, this method requires an expensive user-tracking technique (such as an infrared, ultrasound, or vision-based head-tracking device) and is not useful for multiple concurrent users.
This paper presents another solution: instead of tracking the user and dynamically changing the behavior of the display,
we develop calibration algorithms that have inherent robustness against change of viewing angle. This new method also
has the advantage that it is a very cheap solution that does not require additional hardware such as head tracking. In
addition it also works for multiple viewers.
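The paper's calibration algorithms are not reproduced here; as one hedged sketch of the general idea, the code below builds a single lookup table from luminance responses measured at several viewing angles, weighting the angles by assumed usage frequencies. All response curves, weights and the gamma-2.2 target are invented for illustration.

```python
import numpy as np

# Hypothetical native luminance responses (cd/m^2) of an LCD measured at
# several viewing angles for driving levels 0..255 (rows = angles).
levels = np.arange(256)
angles_deg = np.array([0, 15, 30, 45])
gamma_per_angle = np.array([2.2, 2.1, 1.9, 1.7])   # made-up angle dependence
peak_per_angle = np.array([450, 430, 380, 300])    # made-up peak luminances
native = peak_per_angle[:, None] * (levels / 255.0) ** gamma_per_angle[:, None]

# Assumed frequencies with which each viewing angle occurs in practice.
weights = np.array([0.4, 0.3, 0.2, 0.1])

# Target response: a simple gamma-2.2 curve up to the worst-case peak luminance.
target = native[:, -1].min() * (levels / 255.0) ** 2.2

# Angle-weighted effective response, then a LUT that maps each target value
# to the driving level whose effective luminance is closest.
effective = np.average(native, axis=0, weights=weights)
lut = np.array([int(np.argmin(np.abs(effective - t))) for t in target], dtype=np.uint8)

# Residual error at each angle after applying the single, angle-robust LUT.
for a, resp in zip(angles_deg, native):
    err = np.abs(resp[lut] - target).mean()
    print(f"{a:2d} deg: mean |L - target| = {err:.1f} cd/m^2")
```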
There are many uses for an image quality measure. It is often used to evaluate the effectiveness of an image processing
algorithm, yet there is no single widely used objective measure. It can also be used to compare the similarity of two-dimensional
data. Many papers rely on the mean squared error (MSE) or peak signal-to-noise ratio (PSNR). These
measures operate on pixel intensities rather than image structure; although they are well understood and easy to
implement, they do not correlate well with perceived image quality. This paper presents an image quality metric that
analyzes image structure rather than relying entirely on pixel intensities. It extracts image structure using quadtree
decomposition, and a similarity comparison function based on contrast, luminance, and structure is presented.
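The abstract does not give the metric itself; the sketch below merely illustrates the general approach it describes: a variance-driven quadtree decomposition followed by an SSIM-style luminance/contrast/structure comparison of corresponding blocks. The split threshold, minimum block size and constants are assumptions.

```python
import numpy as np

def quadtree_blocks(img, min_size=8, var_thresh=100.0, y0=0, x0=0):
    """Recursively split `img` into blocks of low intensity variance,
    returning (y, x, h, w) tuples in the full-image coordinate frame."""
    h, w = img.shape
    if h <= min_size or w <= min_size or img.var() <= var_thresh:
        return [(y0, x0, h, w)]
    h2, w2 = h // 2, w // 2
    blocks = []
    blocks += quadtree_blocks(img[:h2, :w2], min_size, var_thresh, y0, x0)
    blocks += quadtree_blocks(img[:h2, w2:], min_size, var_thresh, y0, x0 + w2)
    blocks += quadtree_blocks(img[h2:, :w2], min_size, var_thresh, y0 + h2, x0)
    blocks += quadtree_blocks(img[h2:, w2:], min_size, var_thresh, y0 + h2, x0 + w2)
    return blocks

def block_similarity(a, b, c1=6.5025, c2=58.5225):
    """SSIM-style comparison of luminance, contrast and structure for one block."""
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2))

def structural_quality(reference, test):
    """Average block similarity over a quadtree driven by the reference image."""
    blocks = quadtree_blocks(reference)
    scores = [block_similarity(reference[y:y+h, x:x+w], test[y:y+h, x:x+w])
              for (y, x, h, w) in blocks]
    return float(np.mean(scores))

# Example on synthetic data: a reference image against a noisy copy.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (128, 128))
print(structural_quality(ref, ref))                                  # ~1.0
print(structural_quality(ref, ref + rng.normal(0, 20, ref.shape)))   # < 1.0
```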
Two internet-based psychophysical experiments were conducted to investigate the performance of an image sharpness enhancement method based on adjusting spatial frequencies in the image according to the contrast sensitivity function and compensating for MTF losses of the display. The method was compared with the widely used unsharp mask
(USM) filter from Photoshop. The experiment was performed in two locations with different groups of observers: one in the UK and the second in the USA. Three Apple LCD displays (15" studio, 23" HD cinema and 15" PowerBook) were used at both sites. Observers assessed the sharpness and pleasantness of the displayed images. Analysis of the results led to conclusions in four major areas: (1) performance of the sharpening methods; (2) influence of MTF compensation; (3) image
dependency; and (4) comparison between sharpness perception and preference judgement at the two sites.
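For reference, the unsharp-mask baseline can be sketched as below. This is a generic USM formulation, not the Photoshop implementation or the proposed CSF/MTF-based method, and the radius and amount values are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=2.0, amount=0.8):
    """Classic unsharp masking: add back a scaled copy of the high-pass detail.
    `image` is a float array in [0, 1]; `radius` is the Gaussian sigma."""
    blurred = gaussian_filter(image, sigma=radius)
    sharpened = image + amount * (image - blurred)
    return np.clip(sharpened, 0.0, 1.0)

# Example on synthetic data: a soft edge becomes steeper after USM.
x = np.linspace(0, 1, 256)
soft_edge = np.tile(gaussian_filter((x > 0.5).astype(float), 8.0), (64, 1))
print(np.abs(np.diff(soft_edge[32])).max(),
      np.abs(np.diff(unsharp_mask(soft_edge)[32])).max())
```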
We propose a statistical technique for autonomously detecting defective pixels in a CCD sensor array. Our data-driven
analysis technique can autonomously identify a wide range of faulty and 'suspect' pixels (hypo-sensitive
or hyper-sensitive pixels), without the need for any defect model or prior knowledge of the nature of pixel faults.
We apply our technique to the autonomous detection of defective pixels in regular images captured with a
CCD-equipped camera.
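The abstract does not detail the statistical technique; as one possible illustration of model-free defect screening, the sketch below averages a stack of ordinary captures and flags pixels whose temporal mean deviates from the local spatial median by a robust (MAD-based) z-score. The window size and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def find_suspect_pixels(frames, window=5, z_thresh=6.0):
    """Flag hypo-/hyper-sensitive pixels from a stack of ordinary images.

    frames: array of shape (n_frames, H, W). A pixel is suspect when its
    temporal mean deviates from the local spatial median by more than
    z_thresh robust standard deviations."""
    mean_img = frames.mean(axis=0)
    local_median = median_filter(mean_img, size=window)
    residual = mean_img - local_median
    mad = np.median(np.abs(residual - np.median(residual)))
    robust_sigma = 1.4826 * mad + 1e-12
    return np.abs(residual) > z_thresh * robust_sigma

# Synthetic example: plant one hyper- and one hypo-sensitive pixel.
rng = np.random.default_rng(1)
frames = rng.normal(120, 5, (50, 64, 64))
frames[:, 10, 10] += 80.0   # hyper-sensitive ("hot") pixel
frames[:, 40, 40] -= 80.0   # hypo-sensitive pixel
print(np.argwhere(find_suspect_pixels(frames)))   # expect [[10 10] [40 40]]
```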
In traditional super-resolution methods, researchers generally assume that accurate subpixel image registration parameters are given a priori. In reality, accurate image registration on a subpixel grid is the single most important step for the accuracy of super-resolution image reconstruction. In this paper, we introduce affine invariant features to improve subpixel image registration, which considerably reduces the number of mismatched points and hence makes traditional image registration more efficient and more accurate for super-resolution video enhancement. Affine invariant interest points are corners that are invariant to affine transformations, including scale, rotation, and translation. They are extracted from the second moment matrix through the integration and differentiation covariance matrices. Our tests are based on two sets of real video captured by a small Unmanned Aircraft System (UAS), which is highly susceptible to vibration from even light winds. The experimental results from real UAS surveillance video show that affine invariant interest points are more robust to perspective distortion and yield more accurate matching than traditional Harris/SIFT corners. In our experiments on real video, all matching affine invariant interest points were found correctly. In addition, for the same super-resolution problem, we can use far fewer affine invariant points than Harris/SIFT corners to obtain good super-resolution results.
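The second-moment (structure-tensor) matrix underlying such interest points can be sketched as follows: image derivatives at a differentiation scale, Gaussian smoothing at an integration scale, then a Harris-style corner response. This is a generic sketch, not the authors' affine-invariant detector, and the scales, k and threshold are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def second_moment_response(img, sigma_d=1.0, sigma_i=2.0, k=0.06):
    """Harris-style corner response from the second moment matrix.
    sigma_d: differentiation scale, sigma_i: integration scale."""
    # Image derivatives at the differentiation scale.
    ix = gaussian_filter(img, sigma_d, order=(0, 1))
    iy = gaussian_filter(img, sigma_d, order=(1, 0))
    # Second moment matrix entries, smoothed at the integration scale.
    sxx = gaussian_filter(ix * ix, sigma_i)
    syy = gaussian_filter(iy * iy, sigma_i)
    sxy = gaussian_filter(ix * iy, sigma_i)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2

# Synthetic example: a white square on black gives strong responses at its corners.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
r = second_moment_response(img)
corners = np.argwhere(r > 0.2 * r.max())
print(corners.min(axis=0), corners.max(axis=0))   # clustered near the square's corners
```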
The quality of encoded image sequences is often assessed by a subjective test. Depending on the test paradigm
used in the subjective campaign, observers get as much time as they need to reach a stable judgement. In video
sequences, three parameters have to be managed carefully: the bitrate, the frame rate and the motion contained
in the video itself. It is therefore desirable to know the relation between frame rate, motion, bitrate and
quality. This study aims to allow the selection of coherent contents for subjective assessment experiments,
for example to determine at which frame rate a sequence has to be displayed for its quality to be judged correctly.
The image sequences were presented at five different frame rates ranging from 10 to 40 frames per second and four
bitrates ranging between 0.12 bpp and 0.32 bpp. The video sequences were chosen with regard to their motion content. The
motion was characterized using the MPEG-7 specification in order to organize the optical flow into several
classes. Special care was taken to avoid the memorization effect usually present after short presentations.
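To relate the bits-per-pixel figures to conventional bitrates, the conversion is simply bpp × pixels per frame × frame rate. The CIF resolution used below is an assumption for illustration only; the abstract does not state the frame size.

```python
# bitrate [bit/s] = bpp * width * height * frames_per_second
def bitrate_kbps(bpp, width, height, fps):
    return bpp * width * height * fps / 1000.0

# Assuming CIF (352x288) frames purely for illustration:
for bpp in (0.12, 0.32):
    for fps in (10, 25, 40):
        print(f"{bpp:.2f} bpp @ {fps:2d} fps -> {bitrate_kbps(bpp, 352, 288, fps):7.1f} kbit/s")
```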
From image retrieval to image classification, all research shares one common requirement: a good image database with which to test or train the algorithms. In order to create a large database of images, we set up a project to gather a collection of more than 33000 photographs with keywords and tags from all over the world. This project was part of the "We Are All Photographers Now!" exhibition at the Musee de l'Elysee in Lausanne, Switzerland. The "Flux," as it was called, gave all photographers, professional or amateur, the opportunity to have their images shown in the museum. Anyone could upload pictures on a website. We required that a few simple tags be filled in; keywords were optional. The information was collected in a MySQL database along with the original photos. The pictures were projected at the museum at five-second intervals. A webcam snapshot was taken and sent back to the photographers via email to show how and when their image was displayed at the museum.
During the 14 weeks of the exhibition, we collected more than 33000 JPEG pictures with tags and keywords. These pictures come from 133 countries and were taken by 9042 different photographers. This database can be used for non-commercial research at EPFL. We present some preliminary analysis here.
Assessing the perceptual quality of pictures remains a difficult task, even for humans. This is especially true when
there are many interesting regions to look at (e.g. sea and foreground subject) or when the differences among the pictures
are subtle. Despite that, trends in user preference do exist, and they can be a valuable source of information for designing
enhancement algorithms. A major problem, however, is to assess preference trends and translate them into an algorithm
with a formal methodology. The approach we describe in this paper proposes a multi-step solution. First, we
relate the space of possible enhancement sequences (intended as chains of enhancement algorithms) to the
content of the image and then reduce the number of sequences through an iterative selection that penalizes sequences
that produce artifacts or that generate very similar results. We then present users with pairs of images enhanced with the
various sequences and ask them to select the better image in each comparison. Finally, we perform a statistical analysis of the
users' votes. Preliminary results show a preference for saturated and colorful sea and sky and for "de-saturated"
snow.
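The abstract does not name the statistical method used; purely as an illustration, paired-comparison votes can be summarised by tallying a win matrix and reporting the proportion of comparisons each enhancement sequence wins (the vote data below are invented).

```python
import numpy as np

# Hypothetical votes: (winner, loser) pairs for comparisons between
# enhancement sequences labelled 0..3 (made-up data).
votes = [(0, 1), (0, 2), (1, 2), (3, 2), (0, 3), (3, 1), (0, 1), (3, 2)]
n_sequences = 4

wins = np.zeros((n_sequences, n_sequences))   # wins[i, j]: times i beat j
for winner, loser in votes:
    wins[winner, loser] += 1

comparisons_per_seq = (wins + wins.T).sum(axis=1)
score = wins.sum(axis=1) / comparisons_per_seq   # proportion of comparisons won
for i, s in enumerate(score):
    print(f"sequence {i}: won {s:.0%} of {int(comparisons_per_seq[i])} comparisons")
```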
It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on
their pixel numbers. However, their performance varies considerably according to the conditions of image capture.
Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce
underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions,
a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the
photospace distribution is known, the user-experienced distribution of quality can be determined either by direct
measurement of subjective quality or by photospace-weighting of objective attributes.
Populating a photospace distribution requires examining large numbers of images taken under typical camera
phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary
photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective
evaluations of image quality and failure modes for low quality images can be entered into ImagePhi.
ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in
resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of
failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on image
quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an
ill-framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in
photospace.
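The photospace-weighting mentioned above amounts to a weighted sum over capture-condition bins, Q = Σ p(bin)·q(bin). The illuminance and distance bins, frequencies and quality values below are invented solely to show the mechanics.

```python
import numpy as np

# Hypothetical photospace: rows = subject illuminance bins (lux),
# columns = camera-subject distance bins (m). Frequencies sum to 1.
frequency = np.array([[0.20, 0.10, 0.05],   # < 50 lx
                      [0.15, 0.15, 0.05],   # 50-500 lx
                      [0.10, 0.10, 0.10]])  # > 500 lx

# Hypothetical objective quality values measured for a camera phone
# under each combination of conditions.
quality = np.array([[2.0, 1.5, 1.0],
                    [3.5, 3.0, 2.5],
                    [4.5, 4.0, 3.5]])

# User-experienced quality = photospace-weighted average of the measurements.
weighted_quality = float((frequency * quality).sum())
print(f"photospace-weighted quality: {weighted_quality:.2f}")
```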
The image quality circle is a commonly accepted framework to model the relation between the technology variables of a display and the resulting image quality. 3D-TV systems, however, go beyond the concept of image quality. Research has shown that, although 3D scenes are clearly more appreciated by subjects, the concept 'image quality' does not take this added value of depth into account. Concepts such as 'naturalness' and 'viewing experience' have turned out to be more useful when assessing the overall performance of 3D displays. In this paper, experiments are described that test 'perceived depth', 'perceived image quality' and 'perceived naturalness' in images with different levels of blur and different depth levels. Results show that naturalness incorporates both blur level and depth level, while image quality does not include depth level. These results confirm that image quality is not a good measure for assessing the overall performance of 3D displays; naturalness is a more promising concept.
Texture is an important element of the world around us. It can convey information about the object at hand.
Although embossing has been used in a limited way, to enhance the appearance of greeting cards and book covers
for example, texture is something that printed material traditionally lacks. Recently, techniques have been
developed that allow the incorporation of texture in printed material. Prints made using such processes are similar to
traditional 2D prints but have added texture such that a reproduction of an oil painting can have the texture of oil
paint on canvas or a picture of a lizard can actually have the texture of lizard skin. It seems intuitive that the added
dimensionality would add to the perceived quality of the image, but to what degree? To examine the question of the
impact of a third dimension on the perceived quality of printed images, a survey was conducted asking participants
to determine the relative worth of sets of print products. Pairs of print products were created, where one print of each
pair was 2D and the other was the same image with added texture. Using these print pairs, thirty people from the
Rochester Institute of Technology community were surveyed. The participants were shown seven pairs of print
products and asked to rate the relative value of each pair by apportioning a specified amount of money between the
two items according to their perception of what each item was worth. The results indicated that the addition of a
third dimension or texture to the printed images gave a clear boost to the perceived worth of the printed products.
The rating results were 50% higher for the 3D products than the 2D products, with the participants apportioning
approximately 60% of each dollar to the 3D product and 40% to the 2D product. About 80% of the time participants
felt that the 3D items had at least some added value over their 2D counterparts; about 15% of the time they felt the
products were essentially equivalent in value; and 4% of the time they rated the 3D product as having lower value
than the 2D product. The comments of the participants indicated that they were clearly impressed with the 3D
technology, and their ratings indicated that they might be willing to pay more for it, suggesting that advertisers and
package designers will be interested in using this technology in their products. As 3D printing technology emerges
it will add yet another dimension to the work of print quality analysis.
Stereoscopic technologies have developed significantly in recent years. These advances also require a better understanding
of the experiential dimensions of stereoscopic content. In this article we describe experiments in which we explore the
experiences that viewers have when they view stereoscopic contents. We used eight different contents that were shown
to the participants in a paired comparison experiment where the task of the participants was to compare the same content
in stereoscopic and non-stereoscopic form. The participants indicated their preference but were also interviewed about
the arguments they used when making the decision. By conducting a qualitative analysis of the interview texts we
categorized the significant experiential factors related to viewing stereoscopic material. Our results indicate that reality-likeness
as well as artificiality were often used as arguments in comparing the stereoscopic materials. Also, there were
more emotional terms in the descriptions of the stereoscopic films, which might indicate that the stereoscopic projection
technique enhances the emotions conveyed by the film material. Finally, the participants indicated that the three-dimensional
material required a longer presentation time, as there were more interesting details to see.