Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of
utmost importance since a high percentage of images are made under low lighting conditions where image quality failure
may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they
share common roots in the physics of the imager, the constraints of image processing, and the general trade-offs made
in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions
affect overall image quality.
Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target
blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial
frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for
sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image
quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The
impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels.
The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a
well-defined input signal for the measurement of a power-spectrum-based signal-to-noise ratio (SNR) that characterizes
overall imaging performance.
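The paper does not spell out the computation, but a power-spectrum-based SNR of this kind can be sketched in Python/NumPy as follows, assuming a grayscale capture of the information target (target_img) and a flat-field capture at the same exposure (flatfield_img) as a noise-floor estimate; the array names and binning choices are illustrative, not the authors' implementation.

```python
import numpy as np

def radial_power_spectrum(img, n_bins=64):
    """Radially averaged 2D power spectrum of a grayscale image array."""
    img = img - img.mean()                              # drop the DC term
    ps = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)                  # distance from DC
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=ps.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    freqs = 0.5 * (bins[:-1] + bins[1:]) / max(h, w)    # ~cycles/pixel
    return freqs, sums / np.maximum(counts, 1)

def spectral_snr(target_img, flatfield_img):
    """SNR(u): signal power over noise power, per radial frequency band."""
    f, total = radial_power_spectrum(target_img)
    _, noise = radial_power_spectrum(flatfield_img)
    signal = np.clip(total - noise, 0.0, None)          # remove noise floor
    return f, signal / np.maximum(noise, 1e-12)
```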
Photospace data previously measured on large image sets have shown that a high percentage of camera phone pictures
are taken under low-light conditions. Corresponding image quality measurements linked the lowest quality to these
conditions, and subjective analysis of image quality failure modes identified image blur as the most important
contributor to image quality degradation.
Camera phones without flash have to manage a trade-off when adjusting shutter time to low-light conditions. The shutter
time has to be long enough to avoid extreme underexposure, yet short enough that hand-held picture taking remains
possible without excessive motion blur. Quantitative data on motion blur are still scarce. Camera phones often do
not record basic operating parameters such as shutter speed in their image metadata, and when recorded, the data are
often inaccurate. We introduce a device and process for tracking camera motion and measuring its Point Spread Function
(PSF). Vision-based metrics are introduced to assess the impact of camera motion on image quality so that the low-light
performance of different cameras can be compared. Statistical distributions of user variability will be discussed.
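As an illustration of how a tracked motion path could be turned into a PSF and a scalar blur number, here is a minimal Python sketch; the trajectory format, grid size, and RMS-radius metric are assumptions for exposition, not the device's actual processing.

```python
import numpy as np

def motion_psf(traj_xy, exposure_weights=None, size=65):
    """Rasterize a sampled image-plane motion trajectory (in pixels,
    sampled over the exposure) into a normalized PSF image."""
    traj = np.asarray(traj_xy, dtype=float)
    traj -= traj.mean(axis=0)                       # center the path
    if exposure_weights is None:                    # uniform shutter
        exposure_weights = np.ones(len(traj))
    psf = np.zeros((size, size))
    c = size // 2
    for (x, y), wgt in zip(traj, exposure_weights):
        ix, iy = int(round(c + x)), int(round(c + y))
        if 0 <= ix < size and 0 <= iy < size:
            psf[iy, ix] += wgt                      # accumulate dwell time
    return psf / psf.sum()

def blur_extent(psf):
    """RMS radius of the PSF: a simple scalar blur metric in pixels."""
    size = psf.shape[0]
    y, x = np.indices(psf.shape) - size // 2
    return float(np.sqrt(((x ** 2 + y ** 2) * psf).sum()))
```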
It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on
their pixel numbers. However, their performance varies considerably according to the conditions of image capture.
Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce
underexposed, noisy, and blurred images. Camera utilization can be quantitatively described by photospace distributions:
statistical descriptions of how frequently pictures are taken at given light levels and camera-subject distances. If the
photospace distribution is known, the user-experienced distribution of quality can be determined either directly, by
measurement of subjective quality, or by photospace-weighting of objective attributes.
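A minimal sketch of photospace-weighting, assuming a coarse illumination-by-distance grid with made-up occupancy fractions p and objective quality scores q; the weighted sum gives the user-experienced average quality.

```python
import numpy as np

# Hypothetical 3x3 photospace grid: rows = subject illuminance bands,
# columns = subject distance bands. p holds the fraction of pictures
# taken in each cell; q holds an objective quality score measured under
# that cell's conditions (1-10 scale). Both tables are illustrations.
p = np.array([[0.20, 0.15, 0.05],    # < 100 lux
              [0.15, 0.15, 0.05],    # 100-1000 lux
              [0.10, 0.10, 0.05]])   # > 1000 lux
q = np.array([[3.0, 2.5, 2.0],
              [6.0, 5.5, 5.0],
              [8.0, 7.5, 7.0]])

assert np.isclose(p.sum(), 1.0)      # a photospace distribution sums to 1
user_experienced_quality = (p * q).sum()
print(f"photospace-weighted quality: {user_experienced_quality:.2f}")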
Populating a photospace distribution requires examining large numbers of images taken under typical camera
phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary
photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective
evaluations of image quality and failure modes for low-quality images can be entered into ImagePhi.
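ImagePhi's estimation is interactive and image-based; as one illustrative building block (not the paper's method), subject illuminance can be bounded from exposure metadata, where trustworthy, using the standard reflected-light exposure equation. The calibration constant and reflectance assumption below are conventional values, not taken from the paper.

```python
import math

def estimate_scene_lux(f_number, exposure_s, iso, k=12.5):
    """Estimate average scene luminance from exposure settings via the
    reflected-light meter equation L = K * N^2 / (t * S), then convert
    to an approximate illuminance assuming an 18% reflectance scene.
    K = 12.5 is a common meter calibration constant."""
    luminance = k * f_number ** 2 / (exposure_s * iso)   # cd/m^2
    # E = pi * L / rho for a Lambertian scene of reflectance rho
    return math.pi * luminance / 0.18                    # lux (approx.)

# Hypothetical example: f/2.8, 1/30 s, ISO 400 -> roughly 130 lux
print(f"{estimate_scene_lux(2.8, 1 / 30, 400):.0f} lux")
```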
ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in
resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of
failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on the
image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth
than an ill-framed conditional assertion, whose conditions are to a large extent specified by the camera's operational
state in photospace.
For more than thirty years imaging scientists have constructed metrics to predict psychovisually perceived image quality. Such metrics are based on a set of objectively measurable basis functions such as Noise Power Spectrum (NPS), Modulation Transfer Function (MTF), and characteristic curves of tone and color reproduction. Although these basis functions constitute a set of primitives that fully describe an imaging system from the standpoint of information theory, we found that in practical imaging systems the basis functions themselves are determined by system-specific primitives, i.e., technology parameters. In the example of a printer, MTF and NPS are largely determined by dot structure; in addition, MTF is affected by color registration, and NPS by streaking and banding. Since any given imaging system is only a single representative of a class of more or less identical systems, the family of imaging systems and the single system are not described by a unique set of image primitives. For an image produced by a given imaging system, the set of image primitives describing that particular image is a single instantiation of the underlying statistical distribution of each primitive. If we knew precisely the set of imaging primitives that describe the given image, we could predict its image quality. Since only the distributions are known, we can only predict the distribution of image quality for a given image as produced by the larger class of 'identical systems'. We will demonstrate the combinatorial effect of the underlying statistical variations in the image primitives on the objectively measured image quality of a population of printers, as well as on the perceived image quality of a set of test images. We will also discuss the choice of test image sets and the impact of scene content on the distribution of perceived image quality.
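The combinatorial effect described above can be illustrated with a small Monte Carlo sketch in Python; the scalar quality model and the primitive distributions below are invented for exposition and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def quality(mtf50, noise_sigma, banding_amp):
    """Toy scalar quality model: higher MTF50 helps, noise and banding
    hurt. The functional form and constants are illustrative only."""
    return 10.0 * mtf50 / (1.0 + 5.0 * noise_sigma + 3.0 * banding_amp)

# Assumed distributions of the primitives across a population of
# nominally identical printers (all parameters are hypothetical).
n = 10_000
mtf50   = rng.normal(0.35, 0.03, n)            # cycles/mm at 50% modulation
noise   = rng.lognormal(np.log(0.05), 0.2, n)  # RMS granularity
banding = rng.exponential(0.02, n)             # banding amplitude

q = quality(mtf50, noise, banding)
print(f"mean quality {q.mean():.2f}, 5th-95th percentile "
      f"[{np.percentile(q, 5):.2f}, {np.percentile(q, 95):.2f}]")
```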
In a companion paper we discuss the impact of statistical variability on perceived image quality. Early in a development program, systems may not be capable of rendering images suitable for quality testing. This does not diminish the program's need to estimate the perceived quality of the imaging system. During the development of imaging systems, simulations are extremely effective for demonstrating the visual impact of design choices, allowing both the development process to prioritize these choices and management to understand the risks and benefits of such choices. Where the simulation mirrors the mechanisms of image formation, it not only improves the simulation but also informs the understanding of the image formation process. Clearly, the simulation process requires display or printing devices whose quality does not limit the simulation. We will present a generalized methodology which, when used with common profile-making and color management tools, provides simulations of both source and destination devices. The device to be simulated is modeled by its response to a fixed set of input stimuli: in the case of a digital still camera (DSC), the reflection spectra of a fixed set of color patches (e.g., the MacBeth DCC), and in the case of a printer, the set of image RGBs. We will demonstrate this methodology with examples of various print media systems.
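As a sketch of the modeling step, a device's response to the fixed patch set can be fit with a simple linear transform; real profile-making tools add tone curves and higher-order terms, so this least-squares fit is illustrative only, and the array names are hypothetical.

```python
import numpy as np

def fit_device_model(stimuli_xyz, device_rgb):
    """Fit a 3x3 matrix M with device_rgb ~= stimuli_xyz @ M by least
    squares over the measured patch set (one row per patch)."""
    M, *_ = np.linalg.lstsq(stimuli_xyz, device_rgb, rcond=None)
    return M

def simulate_device(image_xyz, M):
    """Apply the fitted model to predict the device's rendering."""
    return np.clip(image_xyz @ M, 0.0, 1.0)

# Hypothetical measurements for a 24-patch chart:
# stimuli_xyz: (24, 3) patch colorimetry; device_rgb: (24, 3) device output.
stimuli_xyz = np.random.rand(24, 3)
device_rgb = stimuli_xyz @ np.array([[0.90, 0.10, 0.00],
                                     [0.05, 0.85, 0.10],
                                     [0.00, 0.10, 0.90]])
M = fit_device_model(stimuli_xyz, device_rgb)
```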
A number of observer-based objective measures of image sharpness have been proposed, including those of Higgins,
Granger, Cohen & Carlson, and Barten. Many of these measures, though seemingly different in concept, are highly
correlated with the psychovisual perception of sharpness. We re-examine the basis for these sharpness measures and
will show that:
The signal power spectra of a number of real scenes exhibit a 1/u² dependence. The corresponding amplitude spectra can
be interpreted as the frequency distribution of spatial frequency content in average scenes. The relationship of these
results to simple models of image structure will be discussed.
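The 1/u² claim can be checked numerically by fitting the log-log slope of a radially averaged power spectrum; a slope near -2 is consistent with the claim. The following Python sketch assumes a grayscale float image and makes arbitrary binning choices.

```python
import numpy as np

def spectrum_slope(img, n_bins=64):
    """Log-log slope of the radially averaged power spectrum; a value
    near -2 matches the 1/u^2 behavior reported for natural scenes."""
    img = img - img.mean()                          # drop the DC term
    ps = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(1, r.max(), n_bins + 1)      # skip the DC bin
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=ps.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    spec = sums / np.maximum(counts, 1)
    freqs = 0.5 * (bins[:-1] + bins[1:])
    slope, _ = np.polyfit(np.log(freqs), np.log(np.maximum(spec, 1e-12)), 1)
    return slope
```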
The log spatial-frequency integration used in all the measures is equivalent to a linear spatial-frequency integration
with a weighting function (1/u) given by the frequency of occurrence of information in real scenes.
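Written out, the equivalence is a change of variables; in LaTeX, with M(u) the cascaded system-and-visual MTF and [u1, u2] the visually relevant band (notation assumed here, not the papers' exact definitions):

```latex
% Log-frequency sharpness integral rewritten as a linear-frequency
% integral: substituting v = \log u, so dv = du/u, yields the 1/u weight.
\[
  \mathrm{SQF} \;\propto\; \int_{\log u_1}^{\log u_2} M(u)\, d(\log u)
  \;=\; \int_{u_1}^{u_2} M(u)\,\frac{1}{u}\, du ,
\]
where $M(u)$ is the cascaded system and visual MTF and $[u_1, u_2]$
is the visually relevant spatial-frequency band.
```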
While the forms of the visual MTF they employ are substantially different, the measures belong to a class of generalized
log-frequency sharpness measures that, for equivalent viewing conditions, provide equivalent objective measures of
sharpness.
The linear dependence of subjective sharpness on the generalized SQF (Subjective Quality Factor) makes this class of
sharpness measures a useful tool for product development in both the photographic and display industries.