Analysis of Sampled Imaging Systems

Author(s): Richard H. Vollmerhausen; Ronald G. Driggers
Published: 2000
DOI: 10.1117/3.353257
eISBN: 9780819478580 | Print ISBN-13: 9780819434890 | Print ISBN-10: 0819434892
Description:

Advances in solid state detector arrays, flat panel displays, and digital image processing have prompted an increasing variety of sampled imaging products and possibilities. These technology developments provide new opportunities and problems for the design engineer and system analyst—this tutorial's intended reader.


This tutorial is written for the design engineer or system analyst who is interested in quantifying the effect of sampling on imager performance. It is assumed that the reader has at least some background in linear systems and Fourier transform methods. However, a review of these subjects is included in Chapter 2 and the appendices.

Sampled imagers are not new, and optimizing the design of sampled imagers is not a new topic. Television imagery has always been vertically sampled, for example. From its inception as a mechanically scanned Nipkow disk, television imagery consisted of a raster of horizontal lines. Television raster effects were analyzed by Otto Schade (Perception of Displayed Information, ed. Biberman, 1973) and many others. Over the years, these analysts developed “rules of thumb” for minimizing raster effects and optimizing the viewed image. Television manufacturers have undoubtedly developed their own proprietary design rules. Whatever the analytical or experiential basis, designers have done an outstanding job of minimizing visible raster and other sampling artifacts in commercial television.

However, advancing technology in solid state detector arrays, flat panel displays, and digital image processing has led to a greatly increased variety of sampled imaging products and possibilities. These technology developments provide new opportunities and problems for the design engineer and system analyst.

An extensive variety of InSb and PtSi mid-wave focal plane arrays has been developed in the last few years, and this trend continues with HgCdTe, quantum wells, and thermal focal plane detectors. A common characteristic of the imagers that use these focal planes is that they are undersampled. The typical InSb imager circa 1998 uses a 256 × 256 detector array and the best available array is 512 × 512 or 640 × 480. The relatively sparse sample spacing provided by these detector arrays can lead to artifacts which degrade the displayed imagery.

Poor sampling can corrupt the image by generating localized disturbances or artifacts. This corruption can shift the apparent positions of object points, lines, and edges. Poor sampling can also modify the apparent width of an object or make a small object or detail disappear. A fence post imaged by an undersampled sensor, for example, can appear thicker, thinner, or slightly misplaced.
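
To make the fence-post example concrete, the short sketch below (our illustration, not from the text; the post width, sample spacing, and phases are arbitrary values) averages a thin bar over each detector footprint at several sample phases. Depending on phase, the same 1.5-unit post spans two or three nonzero samples, and its apparent centroid shifts.

```python
import numpy as np

def sample_post(width=1.5, spacing=1.0, phase=0.0, n=10):
    """Average a bright bar ("fence post") over each detector footprint.

    Each detector integrates the scene over one sample spacing (a crude
    pre-sample blur); `phase` slides the sample grid relative to the post.
    """
    edges = np.arange(n + 1) * spacing + phase      # detector boundaries
    left, right = 3.0, 3.0 + width                  # post location in the scene
    overlap = np.clip(np.minimum(edges[1:], right)
                      - np.maximum(edges[:-1], left), 0.0, None)
    return overlap / spacing                        # per-detector signal

for phase in (0.0, 0.25, 0.5):
    s = sample_post(phase=phase)
    print(f"phase={phase:4.2f}  nonzero samples:", s[s > 0].round(2))
```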

Although larger, better-sampled arrays are under development, we expect that the lower resolution arrays will be used in many systems for years to come. In addition to good availability, the low resolution arrays are cheaper, smaller, and require less electronic processing hardware. Further, in many applications, multispectral or color imagery is more useful than high-resolution imagery, and it is often necessary to sacrifice high resolution in order to achieve a multispectral or color capability.

The cost and difficulty of displaying high-resolution imagery is also a significant design consideration. While the cathode ray tube (CRT) is still the preeminent choice for display quality, flat panel displays have size and form-factor advantages that can override other considerations. However, flat panel displays lack both the addressability and Gaussian blur characteristics that make the CRT such a flexible and desirable display medium. Flat panel displays can have sharply demarcated pixels (that is, pixels with sharply defined edges), and the pixels are at fixed, uniformly spaced locations in the display area. These flat panel characteristics can lead to serious sampling artifacts in the displayed image.

Images on flat panel displays can exhibit artifacts such as blocky representations of objects, stair-stepping in lines and arcs, jagged edges, and luminance gaps or bands. A CRT image can exhibit visible raster lines. These display artifacts make it difficult for the human visual system to spatially integrate the underlying image. These artifacts do not arise from corruption of the baseband image by aliasing; these artifacts arise from the display characteristics.

The quality of a sampled image depends as much on the display technique as on the number of samples taken by the sensor. While the display cannot overcome the fundamental limitations of the sensor, sensor information is often hidden by a poor display choice.

In this text, Fourier transform theory is used to describe and quantify sampling artifacts like display raster, blocky images, and the loss or alteration of image detail due to aliasing. The theory is then used to predict the type and level of sampling artifacts expected for a particular sensor and display combination. Knowing the sensor and display design, the various kinds of sampling artifacts are predictable and can be quantified using the analytical technique described in this book.

This book also provides metrics (that is, the design rules) that can be used to optimize the design of a sampled imager. In practical systems, control of sampling artifacts often entails increased pre-sample or post-sample filtering. Increased pre-sample filtering can be accomplished by defocusing the objective lens, for example, and increased post-sample filtering can be accomplished by defocusing the CRT display spot. This increased filtering can help performance by removing sampling artifacts but degrades performance by blurring the displayed image. We present methods that can be used to quantify these sampled-imager design trade-offs.
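
As a rough illustration of that trade-off, the sketch below assumes a Gaussian pre-sample MTF with illustrative numbers (not the book's values). It compares the baseband response with the aliased replicas that sampling folds back into the band: increasing the blur width shrinks the spurious term, but only at the cost of baseband response.

```python
import numpy as np

def presample_mtf(f, sigma):
    # Gaussian blur-spot MTF: exp(-2*(pi*sigma*f)^2)
    return np.exp(-2 * (np.pi * sigma * f) ** 2)

f = np.linspace(0.0, 20.0, 401)    # spatial frequency, cycles per unit distance
fs = 10.0                          # sample frequency
i = np.argmin(np.abs(f - 2.5))     # evaluate in the baseband, at fs/4

for sigma in (0.03, 0.05, 0.08):   # increasing pre-sample blur
    baseband = presample_mtf(f, sigma)
    spurious = sum(presample_mtf(f - k * fs, sigma) for k in (-2, -1, 1, 2))
    print(f"sigma={sigma}: baseband={baseband[i]:.3f}  spurious={spurious[i]:.4f}")
```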

There are a number of excellent texts on the application of Fourier transform theory to imaging systems, including Linear Systems, Fourier Transforms, and Optics (Wiley) by J. Gaskill and Introduction to Fourier Optics (McGraw-Hill) by J. Goodman. These texts discuss ideal sampling and the Sampling Theorem, but they do not address either the sampling artifacts found in practical sensors or sampled system design optimization. This book addresses the application of Fourier theory to practical, sampled systems.

One important topic not covered in this book is the existence of the Fourier transform when dealing with physical systems. For an excellent discussion of this topic, we cannot do better than refer the reader to the preface and first two chapters of Bracewell's The Fourier Transform and Its Applications (McGraw-Hill, 1986).

In this book, it is assumed that the spatial and temporal functions associated with real (realizable, physical) systems have a Fourier transform. In some cases, the Fourier transform cannot be found by directly evaluating the Fourier integral as described in Chapter 2 of this book. This happens, for example, when the function to be integrated is periodic and does not decay to zero as its argument approaches infinity. Such functions are not absolutely integrable, and the Fourier integral cannot be evaluated directly.

It turns out, however, that the ability to evaluate the Fourier integral is a sufficient, but not necessary, condition for the existence of the Fourier transform. Mathematical “tricks” can be used to discover the Fourier transform even when the integral does not exist.
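
A standard example of such a trick (ours, not the book's; any linear-systems text treats it) is the cosine, which is not absolutely integrable. Damping it, transforming, and passing to the limit yields its transform as a pair of impulse functions:

```latex
\[
  \mathcal{F}\{\cos(2\pi f_0 x)\}(f)
    = \lim_{a \to 0^{+}} \int_{-\infty}^{\infty}
        e^{-a|x|}\cos(2\pi f_0 x)\, e^{-i 2\pi f x}\, dx
    = \tfrac{1}{2}\,\delta(f - f_0) + \tfrac{1}{2}\,\delta(f + f_0)
\]
```

For every a > 0 the damped integral exists and evaluates to a pair of Lorentzians; as a approaches zero they narrow onto impulses of weight one-half, which is the transform assigned to the cosine.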

We assume that all functions of physical interest have a Fourier transform. As stated by Bracewell on pages 8 and 9 of The Fourier Transform and Its Applications:

“A circuit expert finds it obvious that every waveform has a spectrum, and the antenna designer is confident that every antenna has a radiation pattern. It sometimes comes as a surprise to those whose acquaintance with Fourier transforms is through physical experience rather than mathematics that there are some functions without Fourier transforms. Nevertheless, we may be confident that no one can generate a waveform without a spectrum or construct an antenna without a radiation pattern. … The question of the existence of transforms may be safely ignored when the function to be transformed is an accurately specified description of a physical quantity. Physical possibility is a valid sufficient condition for the existence of a transform.”

Sampling theory and sampled imager analysis techniques are covered in Chapters 1 through 4. Chapter 1 introduces several subjects important to the discussion of sampled imagers.

Chapter 2 describes the Fourier representation of the imaging process. Fourier electro-optics theory relies on the principle of linear superposition. An imager is characterized by its response to a point of light. The image of a scene is the sum of the responses to the individual points of light constituting the original scene. The Fourier transform of the blur spot produced by the optics (and the detector and other parts of the imaging system) when imaging a point of light is called the Optical Transfer Function (OTF). The magnitude of the OTF is the Modulation Transfer Function, or MTF. Experience has shown that MTF is a good way to characterize the quality of an imaging system. An image cannot be defined until the scene is described, but the characterization of the imager's response to a point source provides a good indication of the quality of the images that can be expected under a variety of conditions.
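
As a small numerical illustration of this characterization (a sketch assuming a Gaussian blur spot; the widths and units are arbitrary, not values from the book), the MTF can be computed as the normalized magnitude of the discrete Fourier transform of the sampled point response:

```python
import numpy as np

dx = 0.01                                   # sample spacing (arbitrary units)
x = np.arange(-512, 512) * dx
sigma = 0.05                                # blur-spot width, same units
psf = np.exp(-x**2 / (2 * sigma**2))        # Gaussian point spread function

otf = np.fft.fft(psf)                       # OTF (up to a linear phase from the shift)
mtf = np.abs(otf) / np.abs(otf)[0]          # MTF = |OTF|, normalized to 1 at f = 0
freq = np.fft.fftfreq(x.size, d=dx)         # spatial frequency, cycles per unit

# For a Gaussian PSF the MTF is also Gaussian: exp(-2*(pi*sigma*f)^2).
i = np.argmin(np.abs(freq - 2.0))
print(mtf[i], np.exp(-2 * (np.pi * sigma * freq[i]) ** 2))   # should agree closely
```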

In Chapter 3, Fourier theory is extended to sampled imaging systems. A response function for sampled imagers is derived by examining the image formed on the display by a point source of light in the scene. The response of a sampled system to a point source depends on sample phase; that is, the response depends on the distance between the point source and a sample location. It is true, therefore, that the image expected from a sampled imager cannot be defined without first specifying the scene. However, the response function is a good way to characterize both the quality of the sampled imager and its tendency to generate sampling artifacts.

The design of sampled imaging systems is discussed in Chapter 4. First, the effect of interpolation on display quality is described. Next, the optimization of sampled imaging systems is performed using a number of classical design guidelines. Finally, a new optimization technique, the MTF squeeze, is described, and optimization using this technique is compared to the classical techniques.

Chapter 5 describes interlace and dither. Interlace and dither (dither is sometimes called microscanning) improve sensor sampling without increasing detector count. A high-resolution frame image is composed of two or more lower-resolution field sub-images taken in time sequence. Between field sub-images, a nodding mirror or other mechanical means moves the locations where the scene is sampled. Interlace and dither achieve high resolution while minimizing focal plane array complexity. Chapter 5 describes the sampling benefits of interlace and dither and also discusses the display artifacts that can arise when scene-to-sensor motion occurs.
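
A minimal sketch of two-point dither for a static scene (our illustration; the sinusoidal test scene and sample pitch are arbitrary): two fields taken half a detector pitch apart interleave into a frame with twice the sample rate, with no added detectors.

```python
import numpy as np

def scene(x):
    # Test scene at 0.7 cycles/unit: above the single-field Nyquist
    # rate (0.5) but below the interleaved-frame Nyquist rate (1.0).
    return np.sin(2 * np.pi * 0.7 * x)

pitch = 1.0
x_a = np.arange(8) * pitch          # field A sample positions
x_b = x_a + pitch / 2               # field B: sensor nods half a pitch

frame = np.empty(x_a.size + x_b.size)
frame[0::2] = scene(x_a)            # interleave the two fields into one
frame[1::2] = scene(x_b)            #   frame sampled at twice the rate
print(frame.round(3))
```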

Dynamic sampling is covered in Chapter 6. This chapter was contributed by Jonathon Schuler and Dean Scribner of the Naval Research Laboratory. With high-performance computers and digital processors, resolution enhancement is now possible by combining multiple frames of an undersampled imager to construct a well-sampled image. First, the optical flow of the image is computed to determine the placement of the samples. “Super-resolution” can then be achieved by combining the dynamic sampling techniques with an image restoration process.
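
The sketch below illustrates the idea in one dimension (a toy "shift and add" in which the inter-frame shifts are assumed known exactly; in the chapter's framework, they come from the computed optical flow):

```python
import numpy as np

hi = np.sin(2 * np.pi * np.arange(64) / 16.0)   # stand-in high-resolution scene

shifts = [0, 1, 2, 3]                 # sub-sample shifts, in fine-grid pixels
frames = [hi[s::4] for s in shifts]   # four 4x-undersampled frames

recon = np.empty_like(hi)
for s, frame in zip(shifts, frames):
    recon[s::4] = frame               # place each frame's samples on the fine grid

print(np.allclose(recon, hi))         # True: the shifts are known exactly here
```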

Chapter 7 is devoted to the Sampling Theorem. The Sampling Theorem is described, and an example is given of a near-ideal reconstruction of a sampled waveform. The primary purpose of the chapter, however, is to discuss the limited value of the Sampling Theorem in evaluating real systems. The Sampling Theorem assumes that the signal is band-limited before sampling and that an ideal filter is used to reconstruct the signal. In practical systems, neither criterion is met. The Sampling Theorem does not provide a basis for evaluating real systems, since it is the compromises to the rules of the Sampling Theorem that are the essence of practical system design.
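
For reference, the sketch below implements textbook sinc reconstruction for a band-limited test tone (our example; the rates are arbitrary). Even here the error is small but nonzero, because the sinc sum must be truncated in practice, which is exactly the kind of compromise the chapter discusses.

```python
import numpy as np

fs = 4.0                                    # sample rate, samples per unit time
n = np.arange(-32, 33)
t_n = n / fs                                # sample instants
samples = np.cos(2 * np.pi * 1.3 * t_n)     # 1.3 < fs/2 = 2.0: band-limited

t = np.linspace(-2.0, 2.0, 101)             # reconstruction instants
# Whittaker-Shannon: x(t) = sum_n x[n] sinc(fs*(t - n/fs));
# note np.sinc(u) = sin(pi*u)/(pi*u)
recon = samples @ np.sinc(fs * (t[None, :] - t_n[:, None]))

err = np.max(np.abs(recon - np.cos(2 * np.pi * 1.3 * t)))
print(err)   # small but nonzero: the sinc sum is truncated at |n| = 32
```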

The techniques and problems associated with the measurement of sampled imaging system performance are presented in Chapter 8. Sampled imaging systems are not shift-invariant, and the output imagery contains spurious response artifacts. Measurements on these systems require special procedures.

Finally, the appendices provide a summary of the Fourier integrals and series along with the characteristics of impulse functions. These two appendices are intended to be reference sources for the mathematics described in the main text.

Richard H. Vollmerhausen

Ronald G. Driggers

February 2000

© 2000 Society of Photo-Optical Instrumentation Engineers
