A new technique for temporally decimating an image sequence is presented in which the amount of decimation in the time direction varies with the local temporal activity. The technique is used to reduce the average frame rate of an image sequence before compression to achieve high compression ratios. One trivial form of temporal decimation is to periodically drop whole frames from the image sequence. However, dropping whole frames may cause objectionable temporal aliasing artifacts, especially in regions where temporal activity is high. Hence, to avoid these artifacts, we propose to adaptively adjust the amount of decimation in the time direction depending on the local activity. The temporal decimation procedure can be combined with several strategies for reduction of spatial redundancy; we have used subband coding and block-based transform coding for still-image compression in our experiments. The proposed International Standards Organization (ISO) standards, the Consultative Committee on International Telephony and Telegraphy (CCITT) H.261 standard, and the proposed Moving Picture Experts Group (MPEG) standard are described, and we illustrate how our contribution fits into these frameworks. The algorithm is demonstrated for a wide range of compression ratios. The examples include CCITT video sequences and medical ultrasound.
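For illustration, the following minimal sketch (Python with NumPy) shows one way activity-adaptive frame dropping could be organized; the mean-absolute-frame-difference activity measure, the threshold, and the maximum skip length are illustrative assumptions, not the parameters used in the paper.

import numpy as np

def decimate_temporally(frames, activity_thresh=4.0, max_skip=3):
    """Keep a frame whenever the accumulated temporal activity since the
    last kept frame exceeds a threshold, or the skip limit is reached."""
    kept = [0]                      # always keep the first frame
    accumulated = 0.0
    for t in range(1, len(frames)):
        # mean absolute frame difference as a simple local activity measure
        activity = np.mean(np.abs(frames[t].astype(float) -
                                  frames[t - 1].astype(float)))
        accumulated += activity
        if accumulated >= activity_thresh or t - kept[-1] >= max_skip:
            kept.append(t)
            accumulated = 0.0
    return kept                     # indices of frames passed to the still-image coder

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(30)]
    print(decimate_temporally(frames))

Regions with little motion thus contribute long runs of dropped frames, while highly active regions are sampled at or near the original frame rate.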
We address the restoration of noisy and blurred images that are scanned from photographic paper. In this case, we need to consider the nonlinearities introduced by both the photographic film and the paper. To this end, we first transform the measured reflection density of the photographic paper into the scene exposure domain, in which a linear convolutional relationship between the original scene and the degraded image can be established. As a result of this transformation, any additive noise in the reflection density domain becomes multiplicative in the scene exposure domain. We then propose a linear filter for deconvolution in the exposure domain in the presence of multiplicative observation noise. Experiments with actual noisy and blurred images scanned from photographic paper indicate that the nonlinear sensor characteristics must be incorporated into image restoration to obtain satisfactory results. We also compare the quality of the restoration results when the same blurred image is scanned from the photographic negative (film) and from the print (paper) obtained from this negative.
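As a rough illustration of the processing chain, the sketch below (Python/NumPy) converts reflection density to the scene exposure domain with an assumed straight-line D-logE model and then applies a Wiener-type deconvolution with a constant noise-to-signal term; the gamma, d_min, and nsr values are hypothetical stand-ins for the calibrated sensor characteristics and the multiplicative-noise filter derived in the paper.

import numpy as np

def density_to_exposure(density, gamma=1.8, d_min=0.1):
    # assumed straight-line D-logE model: D = d_min + gamma * log10(E)
    return 10.0 ** ((density - d_min) / gamma)

def wiener_deconvolve(exposure, psf, nsr=0.01):
    # frequency-domain Wiener filter; nsr approximates the noise-to-signal
    # ratio in the exposure domain; psf is assumed centered at the array origin
    H = np.fft.fft2(psf, s=exposure.shape)
    G = np.fft.fft2(exposure)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))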
Several methods of increasing the speed and simplicity of the computation of off-axis transmission holograms are presented, with applications to the real-time display of holographic images. The bipolar intensity approach allows for the real-valued linear summation of interference fringes, a factor of 2 speed increase, and the elimination of image noise caused by object self-interference. An order of magnitude speed increase is obtained through the use of a precomputed look-up table containing a large array of elemental interference patterns corresponding to point source contributions from each of the possible locations in image space. Results achieved using a data-parallel supercomputer to compute horizontal-parallax-only holographic patterns containing six megasamples indicate that an image comprised of 10,000 points with arbitrary brightness (gray scale) can be computed in under 1 s. Implemented on a common workstation, the look-up table approach increases computation speed by a factor of 43.
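A minimal sketch of the precomputed look-up-table idea follows (Python/NumPy), assuming a simplified chirp-like elemental fringe and a small set of quantized depths; both are illustrative stand-ins, not the authors' fringe kernel or table dimensions. Elemental fringes are tabulated once, then scaled by point brightness and summed as real-valued (bipolar) contributions.

import numpy as np

N_SAMPLES = 1024                          # hologram line length (illustrative)
DEPTHS = np.linspace(0.2, 0.5, 16)        # quantized object depths (illustrative)
x = np.arange(N_SAMPLES)

# precompute one elemental fringe pattern per quantized depth
table = np.array([np.cos(np.pi * (x - N_SAMPLES / 2) ** 2 / (1e4 * z))
                  for z in DEPTHS])

def hologram_line(points):
    """points: list of (x_offset, depth_index, brightness) tuples."""
    line = np.zeros(N_SAMPLES)
    for dx, zi, b in points:
        # bipolar intensity: real-valued, brightness-weighted summation,
        # with no object self-interference terms
        line += b * np.roll(table[zi], dx)
    return line

# example: three gray-scale points on one horizontal-parallax-only line
print(hologram_line([(0, 3, 1.0), (120, 7, 0.5), (-200, 12, 0.8)])[:8])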
The proliferation of electronic photographic images and electronic imaging systems has raised questions and concerns regarding the long-term or archival electronic storage of image files. Various options and methodologies are discussed, together with the limitations of the presently available approaches.
The Gabor transform is very useful for image compression, but its implementation is very complicated and time consuming because the Gabor elementary functions are not mutually orthogonal. An efficient algorithm that combines the successive overrelaxation iteration and the look-up table techniques can be used to carry out the Gabor transform. The performance of the Gabor transform can be improved by using a modified transform, a Gabor discrete cosine transform (DCT). We present an adaptive Gabor DCT image coding algorithm. Experimental results show that a better performance can be achieved with the adaptive Gabor DCT than with the Gabor DCT.
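As a sketch of the successive overrelaxation (SOR) step, the Python/NumPy fragment below solves for the expansion coefficients of a small, illustrative 1-D Gabor dictionary; because the elementary functions are not mutually orthogonal, the coefficients satisfy a linear system with a non-diagonal Gram matrix. The dictionary, relaxation factor, and iteration count are assumptions, and neither the look-up-table acceleration nor the Gabor DCT variant is shown here.

import numpy as np

def gabor_dictionary(n=64, n_freq=8, sigma=8.0):
    # illustrative 1-D Gabor atoms: Gaussian envelopes times cosines
    t = np.arange(n)
    atoms = []
    for k in range(n_freq):
        for c in range(0, n, n // n_freq):
            env = np.exp(-0.5 * ((t - c) / sigma) ** 2)
            atoms.append(env * np.cos(2 * np.pi * k * t / n))
    return np.array(atoms)                # shape: (n_atoms, n)

def sor_coefficients(signal, atoms, omega=1.2, n_iter=50):
    G = atoms @ atoms.T                   # Gram matrix (non-diagonal: non-orthogonal atoms)
    p = atoms @ signal
    c = np.zeros(len(atoms))
    for _ in range(n_iter):
        for i in range(len(c)):
            # standard SOR update for the i-th coefficient
            r = p[i] - G[i] @ c + G[i, i] * c[i]
            c[i] = (1 - omega) * c[i] + omega * r / G[i, i]
    return c

# example: expand a random test signal on the illustrative dictionary
atoms = gabor_dictionary()
coeffs = sor_coefficients(np.random.default_rng(1).standard_normal(64), atoms)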
For high-definition television postproduction, the demand for a digital video cassette recorder capable of handling a data rate of approximately 1.2 Gbit/s is substantial. An experimental cassette recorder with a 19-mm D-2-type cassette has been developed that conforms to the European HDTV EUREKA EU95 1250/50 interlaced standard. Results from initial prototypes indicate that the concept is feasible.
A practical colorimetric calibration method using tetrahedral interpolation techniques is proposed for electronic imaging devices that do not have accurate analytical models. By dividing the color gamut into many tetrahedra and using linear matrices, 3-D forward and backward transformations are performed without iterative calculation. To reduce the number of measurement points, a nonlinear interpolation technique is also proposed. The interpolation errors were simulated on hypothetical devices obeying known analytical models, so as to avoid measurement error and device stability problems. According to the simulations, a 33 × 33 × 33 look-up table was enough to model the analytical models with a color difference of ΔE*uv ≤ 0.5 at worst, and 5 × 5 × 5 colors were enough to predict the color of the output devices in practice.
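The tetrahedral interpolation step could look roughly like the following Python/NumPy sketch, which uses one common six-tetrahedron decomposition of each lattice cube; the decomposition, table layout, and lut/rgb names are illustrative assumptions rather than the exact scheme evaluated in the paper.

import numpy as np

def tetra_interp(lut, rgb):
    """lut: (N, N, N, 3) table mapping device RGB to output color;
    rgb: input coordinates in [0, 1]^3."""
    n = lut.shape[0] - 1
    p = np.clip(np.asarray(rgb, float) * n, 0, n - 1e-9)
    i = p.astype(int)                 # base lattice point of the enclosing cube
    f = p - i                         # fractional position inside the cube
    order = np.argsort(-f)            # sorting f selects one of the 6 tetrahedra
    step = i.copy()
    prev = lut[tuple(i)].astype(float)
    result = prev.copy()
    for axis in order:
        # walk along the tetrahedron edges, accumulating weighted differences
        step[axis] += 1
        nxt = lut[tuple(step)].astype(float)
        result += f[axis] * (nxt - prev)
        prev = nxt
    return result

# example with a 5 x 5 x 5 identity-like table
lut = np.stack(np.meshgrid(*[np.linspace(0, 1, 5)] * 3, indexing="ij"), axis=-1)
print(tetra_interp(lut, (0.3, 0.7, 0.1)))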
A dot-to-dot error diffusion algorithm is presented that combines traditional halftoning (clustered-dot ordered dithering) and error diffusion. Error diffusion can create a large number of nonprintable individual pixels and is therefore not well suited to hard-copy printing. On the other hand, traditional halftoning offers only a limited number of output gray levels. The proposed algorithm avoids the problems of both techniques while retaining their merits: error diffusion is performed on halftone dots instead of pixels, and only on full dots. To accommodate the dot-to-dot error propagation, a modified but equivalent version of error diffusion is introduced. The success of the algorithm is demonstrated by experimental results.
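A minimal sketch of error diffusion carried out on halftone dots (cells) rather than individual pixels is given below (Python/NumPy); the 2 × 2 clustered-dot screen and the Floyd-Steinberg-style cell weights are illustrative assumptions, not the modified diffusion kernel introduced in the paper.

import numpy as np

SCREEN = np.array([[3, 1],            # clustered-dot threshold order (0..3)
                   [0, 2]])
CELL = SCREEN.shape[0]

def dot_to_dot_diffusion(gray):
    """gray: 2-D array in [0, 1], with dimensions divisible by the cell size."""
    h, w = gray.shape
    out = np.zeros_like(gray)
    # per-cell mean intensity plus a buffer for diffused cell errors
    cells = gray.reshape(h // CELL, CELL, w // CELL, CELL).mean(axis=(1, 3))
    err = np.zeros_like(cells)
    levels = CELL * CELL              # number of printable dot sizes per cell
    for r in range(cells.shape[0]):
        for c in range(cells.shape[1]):
            target = np.clip(cells[r, c] + err[r, c], 0, 1)
            k = int(round(target * levels))           # printable (full) dot size
            e = target - k / levels                   # cell-level quantization error
            # distribute the error to not-yet-processed neighboring cells
            if c + 1 < cells.shape[1]:
                err[r, c + 1] += e * 7 / 16
            if r + 1 < cells.shape[0]:
                if c > 0:
                    err[r + 1, c - 1] += e * 3 / 16
                err[r + 1, c] += e * 5 / 16
                if c + 1 < cells.shape[1]:
                    err[r + 1, c + 1] += e * 1 / 16
            # render the dot: turn on the k lowest-threshold pixels in the cell
            dot = (SCREEN < k).astype(float)
            out[r * CELL:(r + 1) * CELL, c * CELL:(c + 1) * CELL] = dot
    return out

Because each cell is rendered as a clustered dot of an integer size, every output pattern is printable, while the cell-level diffusion preserves mean gray level across the image.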
An automated procedure for measuring the size and number of microscopic particles contaminating a fluid is described. The procedure, based on edge detection and region filling, exhibits substantially improved accuracy when compared to simple thresholding techniques, while being only slightly more complicated. The procedure is much less complex than model-based approaches and is shown to be more robust than simple threshold-based techniques in the presence of image blur caused by out-of-focus particles. A clever filling scheme is presented that efficiently segments particles from the background by first identifying pixels that are not particles. Results for two particle types and several focus conditions are given for comparison.
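The edge-detection-plus-region-filling idea could be sketched as follows (Python with SciPy's ndimage); the Sobel gradient, the fixed edge threshold, and the hole-filling step are illustrative choices standing in for the procedure described above.

import numpy as np
from scipy import ndimage

def measure_particles(image, edge_thresh=0.2):
    """image: 2-D grayscale array in [0, 1]; returns particle count and sizes."""
    # gradient-magnitude edge map
    gx = ndimage.sobel(image, axis=1)
    gy = ndimage.sobel(image, axis=0)
    edges = np.hypot(gx, gy) > edge_thresh
    # fill the interiors enclosed by edge contours; pixels connected to the
    # image border (i.e., pixels that are not particles) remain background
    filled = ndimage.binary_fill_holes(edges)
    labels, n = ndimage.label(filled)
    sizes = np.bincount(labels.ravel())[1:]          # skip the background label
    return n, sizes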