Simulations of flatbed scanners can shorten the development cycle of new designs, estimate image quality, and lower manufacturing costs. In this paper, we present a simulation of a flatbed scanner that uses a strobe RGB scanning method, and we investigate the effect of the sensor height on color artifacts. The image chain model from the remote sensing community was adapted and tailored to fit flatbed scanning applications. This model allows the user to study the relationship between various internal elements of the scanner and the final image quality. Modeled parameters include: sensor height, intensity and duration of the illuminant, scanning rate, sensor aperture, detector modulation transfer function (MTF), and motion blur created by the movement of the sensor during the scanning process. These variables are modeled mathematically using Fourier analysis, functions that describe the physical components, convolutions, sampling theory, and gamma correction. Special targets used to validate the simulation include a single-frequency pattern, a radial chirp-like pattern, and a high-resolution scanned document. The simulation is demonstrated to model the scanning process effectively at both the theoretical and experimental levels.
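As a rough illustration of the kind of frequency-domain modeling the image chain approach involves, the sketch below applies a detector-aperture MTF and a motion-blur MTF to a one-dimensional scan line. This is not the authors' simulation code: the rectangular-aperture and linear-motion sinc models, the sampling pitch, and the bar-target scene are assumptions chosen only to illustrate the Fourier-analysis step.

```python
import numpy as np

def simulate_scan_line(scene, dx_mm, aperture_mm, motion_mm):
    """Apply a detector-aperture MTF and a motion-blur MTF to a 1-D scene.

    scene       : 1-D array of scene radiance samples
    dx_mm       : sample spacing in millimeters
    aperture_mm : width of the (assumed rectangular) detector aperture
    motion_mm   : distance the sensor travels during one exposure
    """
    n = scene.size
    f = np.fft.rfftfreq(n, d=dx_mm)                  # spatial frequency, cycles/mm
    mtf_detector = np.abs(np.sinc(f * aperture_mm))  # sinc MTF of a rectangular aperture
    mtf_motion = np.abs(np.sinc(f * motion_mm))      # sinc MTF of linear motion blur
    spectrum = np.fft.rfft(scene) * mtf_detector * mtf_motion
    return np.fft.irfft(spectrum, n=n)

if __name__ == "__main__":
    dx = 25.4 / 600.0                                # ~600 dpi sampling pitch in mm
    x = np.arange(0, 25.0, dx)
    scene = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * 2.0 * x))   # 2 cycles/mm bar target
    blurred = simulate_scan_line(scene, dx_mm=dx, aperture_mm=dx, motion_mm=0.02)
    print(scene.max() - scene.min(), blurred.max() - blurred.min())  # modulation drops
```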
Uniformity is one of the issues of most critical concern for laser electrophotographic (EP) printers. Typically, full coverage constant-tint test pages are printed to assess uniformity. Exemplary nonuniformity defects include mottle, grain, pinholes, and "finger prints". It is a real challenge to make an overall Print Quality (PQ) assessment due to the large coverage of a letter-size, constant-tint printed test page and the variety of possible nonuniformity defects. In this paper, we propose a novel method that uses a block-based technique to analyze the page both visually and metrically. We use a grid of 150 pixels × 150 pixels (¼ inch × ¼ inch at 600-dpi resolution) square blocks throughout the scanned page. For each block, we examine two aspects: behavior of its pixels within the block (metrics of graininess) and behavior of the blocks within the printed page (metrics of nonuniformity). Both ΔE (CIE 1976) and the L* lightness channel are employed. For an input scanned page, we create eight visual outputs, each displaying a different aspect of nonuniformity. To apply machine learning, we train scanned pages of different 100% solid colors separately with the support vector machine (SVM) algorithm. We use two metrics as features for the SVM: average dispersion of page lightness and standard deviation in dispersion of page lightness. Our results show that we can predict, with 83% to 90% accuracy, the assignment by a print quality expert of one of two grades of uniformity in the print.
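A minimal sketch of the block-based feature extraction and SVM training described above is given below. It assumes scikit-learn, takes the within-block standard deviation of L* as the "dispersion" measure (the paper's exact definition may differ), and uses synthetic pages and hypothetical expert labels purely for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def page_features(lightness, block=150):
    """Two page-level features from an L* image: the mean within-block dispersion
    and the standard deviation of the per-block dispersions across the page.
    A 150 x 150 block corresponds to 1/4 inch x 1/4 inch at 600 dpi."""
    h, w = lightness.shape
    disp = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            disp.append(np.std(lightness[r:r + block, c:c + block]))
    disp = np.asarray(disp)
    return np.array([disp.mean(), disp.std()])

# Hypothetical training set: one feature pair per scanned constant-tint page,
# with expert labels 0 = acceptable uniformity, 1 = unacceptable.
rng = np.random.default_rng(0)
X = np.array([page_features(rng.normal(50.0, s, (600, 600)))
              for s in (1.0, 1.2, 3.0, 3.5)])
y = np.array([0, 0, 1, 1])
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X))
```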
Laser electrophotographic printers are complex systems that can generate prints with a number of possible
artifacts that are very different in nature. It is a challenging task to develop a single processing algorithm that
can effectively identify such a wide range of print quality defects.
In this paper, we describe a general image processing and analysis pipeline that can effectively assess the
presence of a wide range of artifacts. We discuss in detail the algorithm that comprises the image processing
and analysis pipeline, and illustrate the efficacy of the pipeline with a number of examples.
Accurate estimation of toner usage is an area of ongoing importance for laser electrophotographic (EP) printers. We
propose a new two-stage approach in which we first predict, on a pixel-by-pixel basis, the absorptance from printed and
scanned pages. We then form a weighted sum of these pixel values to predict overall toner usage on the printed page.
The weights are chosen by least-squares regression to toner usage measured with a set of printed test pages. Our two-stage
predictor significantly outperforms existing methods that are based on a simple pixel-counting strategy, in terms of
both accuracy and robustness of the predictions.
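The second stage of such a predictor amounts to an ordinary least-squares fit, sketched below under assumed data shapes: each training page is summarized by a histogram of its predicted per-pixel absorptance values, and the regression chooses one weight per histogram bin. The bin count, page count, and synthetic data are illustrative, not the authors' setup.

```python
import numpy as np

def fit_toner_weights(pixel_histograms, measured_toner):
    """Choose per-bin weights by least-squares regression so that a weighted sum
    over a page's absorptance histogram predicts its measured toner usage.

    pixel_histograms : (n_pages, n_bins) counts of predicted per-pixel absorptance
    measured_toner   : (n_pages,) measured toner usage for the training pages
    """
    weights, *_ = np.linalg.lstsq(pixel_histograms, measured_toner, rcond=None)
    return weights

def predict_toner(histogram, weights):
    return float(histogram @ weights)

# Illustrative data: 6 training pages, absorptance quantized to 4 bins.
rng = np.random.default_rng(1)
H = rng.integers(0, 1000, size=(6, 4)).astype(float)
toner = H @ np.array([0.0, 0.2, 0.6, 1.0])        # pretend ground-truth usage
w = fit_toner_weights(H, toner)
print(predict_toner(H[0], w), toner[0])
```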
KEYWORDS: Raster graphics, Image segmentation, Image processing algorithms and systems, Halftones, RGB color model, Printing, Binary data, Nonimpact printing, Simulation of CCA and DLA aggregates, Image processing
We describe a segmentation-based object map correction algorithm, which can be integrated in a new imaging
pipeline for laser electrophotographic (EP) printers. This new imaging pipeline incorporates the idea of
object-oriented halftoning, which applies different halftone screens to different regions of the page, to improve the
overall print quality. In particular, smooth areas are halftoned with a low-frequency screen to provide more
stable printing; whereas detail areas are halftoned with a high-frequency screen, since this will better reproduce
the object detail. In this case, the object detail also serves to mask any print defects that arise from the use of
a high-frequency screen. These regions are defined by the initial object map, which is translated from the page
description language (PDL). However, the object type information obtained from the PDL may be incorrect.
Some smooth areas may be labeled as raster, causing them to be halftoned with a high-frequency screen, rather
than being labeled as vector, which would result in their being rendered with a low-frequency screen. To
correct the misclassification, we propose an object map correction algorithm that combines information from the
incorrect object map with information obtained by segmentation of the continuous-tone RGB rasterized page
image. Finally, the rendered image can be halftoned by the object-oriented halftoning approach, based on the
corrected object map. Preliminary experimental results indicate the benefits of our algorithm combined with
the new imaging pipeline, in terms of correction of misclassification errors.
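A minimal sketch of one way such a correction could work is shown below: each region found by segmenting the continuous-tone rasterized page votes on its object type, and the majority label replaces the PDL labels inside that region. This majority-vote rule is an assumption for illustration; the algorithm described above combines the two sources of information in its own way.

```python
import numpy as np

def correct_object_map(object_map, segment_labels):
    """Relabel each segmented region with the majority object type it contains.

    object_map     : 2-D array of object-type codes from the PDL
                     (here 0 = vector/smooth, 1 = raster/detail)
    segment_labels : 2-D array of region IDs from segmenting the rasterized page
    """
    corrected = object_map.copy()
    for region in np.unique(segment_labels):
        mask = segment_labels == region
        types, counts = np.unique(object_map[mask], return_counts=True)
        corrected[mask] = types[np.argmax(counts)]     # majority vote per region
    return corrected

# Toy page: the PDL mislabels part of a smooth region as raster (1),
# but segmentation sees the whole area as one region.
pdl = np.zeros((4, 8), dtype=int)
pdl[:, :2] = 1
segments = np.zeros((4, 8), dtype=int)
print(correct_object_map(pdl, segments))               # mislabeled pixels become 0
```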
Laser electrophotographic printers are complex systems with many rotating components that are used to advance
the media, and facilitate the charging, exposure, development, transfer, fusing, and cleaning steps. Irregularities
that are constant along the axial direction of a roller or drum, but localized in circumference, can give rise
to distinct isolated bands in the output print that are constant in the scan direction, and which may or may
not be observed to repeat at an interval in the process direction that corresponds to the circumference of the
roller or drum that is responsible for the artifact.
In this paper, we describe an image processing and analysis pipeline that can effectively assess the presence
of isolated periodic and aperiodic bands in the output from laser electrophotographic printers. We discuss in
detail the algorithms that comprise the image processing and analysis pipeline, and illustrate the efficacy of the
pipeline with an example.
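One simple building block for such a pipeline, sketched below under assumed conventions (rows in the process direction, a synthetic page with a band every inch), is to collapse the scanned page to a one-dimensional process-direction profile and estimate any repeat interval by autocorrelation; the interval can then be compared with known roller and drum circumferences. This is an illustration, not the authors' algorithm.

```python
import numpy as np

def band_profile_and_period(page, dpi=600):
    """Collapse a scanned page to a 1-D process-direction profile and estimate a
    dominant repeat interval by autocorrelation.

    page : 2-D lightness image; rows run in the process direction, columns in the
           scan direction (bands are constant along the columns).
    Returns the mean-removed profile and the repeat interval in inches.
    """
    profile = page.mean(axis=1)
    profile = profile - profile.mean()
    ac = np.correlate(profile, profile, mode="full")[profile.size - 1:]
    lag = 1 + int(np.argmax(ac[1:]))                  # strongest nonzero lag
    return profile, lag / dpi

rng = np.random.default_rng(2)
page = rng.normal(80.0, 0.3, size=(3000, 400))        # synthetic constant-tint page
page[::600, :] -= 2.0                                 # a dark band every 600 rows
profile, period = band_profile_and_period(page)
print(round(period, 2))                               # ~1.0 inch repeat interval
```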
In this paper, we consider a dual-mode halftoning process for the electrophotographic laser printer: low-frequency
halftoning for smooth regions and high-frequency halftoning for detail regions. These regions are
described by an object map that is extracted from the page description language (PDL) version of the document.
This manner of switching screens depending on the local content provides a stable halftone without artifacts
in smooth areas and preserves detail rendering in detail or texture areas. However, when switching between
halftones with two different frequencies, jaggies may occur along the boundaries between areas halftoned with
low- and high-frequency screens. To reduce the jaggies, our screens obey a harmonic relationship. In addition, we
implement a blending process based on a transition region. We propose a nonlinear blending process in which
at each pixel, we choose the maximum of the two weighted halftones where the weights vary according to the
position in the transition region. Moreover, we describe an on-line tone-mapping for the boundary blending
process, based on an off-line calibration procedure that effectively assures the desired tone values within the
transition region.
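The nonlinear blending rule can be stated compactly in code. The sketch below is a toy illustration under assumed inputs (binary halftones and a linear weight ramp across the transition region); in the actual pipeline the blended values would additionally pass through the calibrated tone mapping described above.

```python
import numpy as np

def blend_halftones(low_ht, high_ht, weight_high):
    """Nonlinear boundary blending: at each pixel, take the maximum of the two
    weighted halftones.

    low_ht, high_ht : halftones from the low- and high-frequency screens (0 or 1)
    weight_high     : per-pixel weight in [0, 1]; 0 well inside the smooth region,
                      1 well inside the detail region, ramping across the
                      transition region
    """
    return np.maximum((1.0 - weight_high) * low_ht, weight_high * high_ht)

# Toy transition region: weights ramp left to right across an 8-pixel strip.
low = np.tile([1, 1, 0, 0], (4, 2))        # coarse (low-frequency) pattern
high = np.tile([1, 0], (4, 4))             # fine (high-frequency) pattern
w = np.tile(np.linspace(0.0, 1.0, 8), (4, 1))
print(blend_halftones(low, high, w))
```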
In this paper, we present an algorithm for image stitching that avoids performance and memory issues in
diverse image processing applications and environments. High-resolution images could be cut into smaller pieces by
various applications for ease of processing, especially if they are sent over a computer network. Image pieces (from
several high-resolution images) could be stored as a single image-set with no information about the original images. We propose a
robust stitching methodology to reconstruct the original high-resolution image(s) from a target image-set that contains
components of various sizes and resolutions. The proposed algorithm consists of three major modules. The first step
sorts image pieces into different planes according to their spatial position, size, and resolution. It avoids sorting
overlapped pieces of the same resolution in the same plane. The second module sorts the pieces from different planes
according to their content by minimizing a cost function based on Mean Absolute Difference (MAD). The third module
relates neighboring pieces and determines the output images. The proposed algorithm could be used as a pre-processing
stage in applications such as rendering, enhancement, and retrieval, since these cannot be carried out without access to
the original images as individual whole components.
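The MAD-based matching in the second module can be illustrated with a small sketch. The assumption here, made only for illustration, is that candidate neighbors are compared by the mean absolute difference between their abutting edge columns; the full algorithm also accounts for plane assignment, piece sizes, and resolutions.

```python
import numpy as np

def edge_mad(piece_a, piece_b):
    """Mean Absolute Difference between the right edge of piece_a and the left
    edge of piece_b, used as the cost of placing piece_b to the right of piece_a."""
    right = piece_a[:, -1, :].astype(float)
    left = piece_b[:, 0, :].astype(float)
    return float(np.mean(np.abs(right - left)))

def best_right_neighbor(piece, candidates):
    """Index and cost of the candidate with the lowest edge MAD."""
    costs = [edge_mad(piece, c) for c in candidates]
    return int(np.argmin(costs)), min(costs)

# Toy example: split one gradient image into two pieces and recover the order.
img = np.dstack([np.tile(np.arange(16, dtype=np.uint8), (8, 1))] * 3)
left_half, right_half = img[:, :8], img[:, 8:]
print(best_right_neighbor(left_half, [right_half, left_half]))   # picks index 0
```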
We propose a novel unsupervised multiresolution adaptive and progressive gradient-based color-image segmentation algorithm (MAPGSEG) that takes advantage of gradient information in an adaptive and progressive framework. The proposed methodology is initiated with a dyadic wavelet decomposition scheme of an arbitrary input image accompanied by a vector gradient calculation of its color-converted counterpart in the 1976 Commission Internationale de l'Eclairage (CIE) L*a*b* color space. The resultant gradient map is used to automatically and adaptively generate thresholds to segregate regions of varying gradient densities at different resolution levels of the input image pyramid. At each level, the classification obtained by a progressively thresholded growth procedure is integrated with an entropy-based texture model by using a unique region-merging procedure to obtain an interim segmentation. A confidence map and nonlinear spatial filtering techniques are combined, and regions of high confidence are passed from one resolution level to another until the final segmentation at the highest (original) resolution is achieved. A performance evaluation of our results on several hundred images with a recently proposed metric called the normalized probabilistic Rand index demonstrates that the proposed work computationally outperforms published segmentation techniques with superior quality.
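The gradient-map computation that drives the adaptive thresholding can be sketched as follows. The code assumes scikit-image for the RGB to CIE L*a*b* conversion and combines per-channel gradients in quadrature, which is a simplification of the weighted vector gradient used in the paper; the quantiles at the end only suggest how thresholds could be drawn adaptively from the gradient distribution.

```python
import numpy as np
from skimage import color, data

def lab_gradient_map(rgb):
    """Gradient magnitude of a color image computed in CIE L*a*b*.
    Per-channel gradients are combined in quadrature (a simplified stand-in
    for the weighted vector gradient formulation)."""
    lab = color.rgb2lab(rgb)
    gy, gx = np.gradient(lab, axis=(0, 1))
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))

grad = lab_gradient_map(data.astronaut())
# Adaptive thresholds could be taken from the gradient distribution itself,
# e.g. low quantiles to admit low-gradient regions first.
print(np.quantile(grad, [0.1, 0.5, 0.9]))
```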
In this paper, we propose a novel unsupervised color image segmentation algorithm named GSEG. This Gradient-based
SEGmentation method is initialized by a vector gradient calculation in the CIE L*a*b* color space. The obtained
gradient map is utilized for initially clustering low gradient content, as well as automatically generating thresholds for a
computationally efficient dynamic region growth procedure, to segment regions of subsequent higher gradient densities
in the image. The resultant segmentation is combined with an entropy-based texture model in a statistical merging
procedure to obtain the final result. Qualitative and quantitative evaluation of our results on several hundred images,
utilizing a recently proposed evaluation metric called the Normalized Probabilistic Rand index, shows that the GSEG
algorithm is robust to various image scenarios and performs favorably against published segmentation techniques.
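The entropy-based texture model can be illustrated with local entropy as a per-region texture descriptor, as sketched below using scikit-image; comparing the mean local entropy of adjacent regions is one plausible merging cue, though the published merging criterion may differ.

```python
import numpy as np
from skimage import color, data, img_as_ubyte
from skimage.filters.rank import entropy
from skimage.morphology import disk

def region_texture(gray_u8, labels, region_id, radius=5):
    """Mean local entropy of one region: a simple texture descriptor that a
    merging procedure could compare between adjacent regions."""
    ent = entropy(gray_u8, disk(radius))       # local entropy in a disk neighborhood
    return float(ent[labels == region_id].mean())

gray = img_as_ubyte(color.rgb2gray(data.astronaut()))
labels = np.zeros(gray.shape, dtype=int)
labels[:, gray.shape[1] // 2:] = 1             # toy two-region "segmentation"
print(region_texture(gray, labels, 0), region_texture(gray, labels, 1))
```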
In this paper, we present an image understanding algorithm for automatically identifying and ranking different image
regions into several levels of importance. Given a color image, specialized maps for classifying image content, namely
weighted similarity, weighted homogeneity, image contrast, and memory colors, are generated and combined to provide a
metric for perceptual importance classification. Further analysis yields a region ranking map, which sorts the image
content into different levels of significance. The algorithm was tested on a large database of color images that consists of
the Berkeley segmentation dataset as well as many other internal images. Experimental results show that our technique
matches human manual ranking with 90% efficiency. Applications of the proposed algorithm include image rendering,
classification, indexing and retrieval.
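As a sketch of how the specialized maps could be combined and the regions ranked, the code below uses a weighted sum of normalized feature maps followed by a per-region mean; the weights, the synthetic maps, and the toy region labels are assumptions for illustration only.

```python
import numpy as np

def importance_map(maps, weights):
    """Combine per-pixel feature maps (each normalized to [0, 1]) into a single
    perceptual-importance metric by a weighted sum."""
    stack = np.stack(maps, axis=0)
    w = np.asarray(weights, dtype=float)[:, None, None]
    return (w * stack).sum(axis=0) / w.sum()

def rank_regions(importance, labels):
    """Order region IDs by mean importance, most significant first."""
    ids = np.unique(labels)
    means = np.array([importance[labels == i].mean() for i in ids])
    return [int(i) for i in ids[np.argsort(means)[::-1]]]

rng = np.random.default_rng(3)
maps = [rng.random((64, 64)) for _ in range(4)]   # similarity, homogeneity, contrast, memory colors
labels = (np.arange(64)[:, None] // 32) * 2 + (np.arange(64)[None, :] // 32)
imp = importance_map(maps, weights=[0.3, 0.2, 0.3, 0.2])
print(rank_regions(imp, labels))
```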
KEYWORDS: Image segmentation, Image processing algorithms and systems, Color image segmentation, Color image processing, Electrical engineering, Human vision and color perception, Electronic imaging, Image processing, RGB color model, Radiation oncology
We propose a novel algorithm for unsupervised segmentation of color images. The proposed approach utilizes a dynamic
color gradient thresholding scheme that guides the region growing process. Given a color image, a weighted
vector-based color gradient map is generated. Seeds are identified and a dynamic threshold is then used to perform
reliable growing of regions on the weighted gradient map. Over-segmentation, if any, is addressed by a Similarity
Measure-based region merging stage to produce the final segmented image. Comparative results demonstrate the effectiveness of
this algorithm for color image segmentation.
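A minimal sketch of threshold-guided region growing on a gradient map is given below; the staged raising of the threshold stands in for the dynamic thresholding scheme, and the seed, step sizes, and toy gradient map are assumptions for illustration.

```python
import numpy as np
from collections import deque

def grow_region(gradient, seed, start_thresh, step, max_thresh):
    """Grow one region from a seed, admitting 4-connected neighbors whose gradient
    falls below a threshold that is raised in stages."""
    h, w = gradient.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    thresh = start_thresh
    while thresh <= max_thresh:
        frontier = deque(zip(*np.nonzero(region)))
        while frontier:
            r, c = frontier.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < h and 0 <= cc < w and not region[rr, cc]
                        and gradient[rr, cc] < thresh):
                    region[rr, cc] = True
                    frontier.append((rr, cc))
        thresh += step
    return region

grad = np.ones((8, 8))
grad[2:6, 2:6] = 0.1                              # a low-gradient pocket
print(grow_region(grad, seed=(4, 4), start_thresh=0.2,
                  step=0.2, max_thresh=0.6).astype(int))
```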
KEYWORDS: Error analysis, Image segmentation, Data conversion, CMYK color model, Statistical analysis, Data analysis, RGB color model, Visualization, Color imaging, Instrument modeling
This paper proposes a method of gamut estimation using data segmentation and 2D surface splines. The device data is first segmented into hue intervals, and then each hue interval is analyzed iteratively to construct a 2D gamut boundary descriptor for that hue interval. The accuracy and smoothness of the gamut boundary estimate can be controlled by the number of hue intervals selected and the sampling within each hue interval.
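The hue-interval segmentation step can be sketched as below. For brevity the per-interval boundary is taken as a convex hull in the (chroma, L*) plane rather than the 2D surface spline fit described in the paper, and the uniformly random CIELAB samples stand in for measured device data.

```python
import numpy as np
from scipy.spatial import ConvexHull

def gamut_boundaries(lab_points, n_hue_intervals=24):
    """Segment CIELAB device data into hue intervals and return a simple 2-D
    boundary for each interval in the (chroma, L*) plane."""
    L, a, b = lab_points[:, 0], lab_points[:, 1], lab_points[:, 2]
    hue = np.degrees(np.arctan2(b, a)) % 360.0
    chroma = np.hypot(a, b)
    edges = np.linspace(0.0, 360.0, n_hue_intervals + 1)
    boundaries = {}
    for i in range(n_hue_intervals):
        sel = (hue >= edges[i]) & (hue < edges[i + 1])
        if sel.sum() >= 3:
            pts = np.column_stack([chroma[sel], L[sel]])
            boundaries[i] = pts[ConvexHull(pts).vertices]   # boundary vertices, CCW
    return boundaries

rng = np.random.default_rng(4)
lab = np.column_stack([rng.uniform(0, 100, 2000),
                       rng.uniform(-80, 80, 2000),
                       rng.uniform(-80, 80, 2000)])
print(len(gamut_boundaries(lab)))                           # one descriptor per hue interval
```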
KEYWORDS: Printing, CMYK color model, Data conversion, Inkjet technology, Image quality, Color difference, Principal component analysis, Graphic arts, Composites, Reflectivity
The use of four process inks (CMYK) is common practice in the graphic arts and provides the foundation for many output device technologies. In commercial applications, the number of inks is sometimes extended beyond the process inks, depending on the customer's requirements and cost constraints. In inkjet printing, extra inks have been used to extend the color gamut and/or improve the image quality in the highlight regions by using "light" inks. The addition of "light" inks is sometimes treated as an extension of the existing Cyan or Magenta inks, with the Cyan tone scale smoothly transitioning from the light to the dark ink as the required density increases, or is sometimes treated independently.
If one is to treat the light ink as an extension of the dark ink, a simple blend can work well when the light and dark inks fall at the same hue angle, but will exhibit problems if the hues of the light and dark inks deviate significantly. The method documented in this paper addresses the problem where the hues of the light and dark inks are significantly different. An ink interaction model is built for the light and dark inks; then a composite primary is constructed that smoothly transitions from the light ink to the dark ink, preventing the blended ink from over-inking while ensuring a smooth transition in lightness, chroma, and hue.
The method was developed and tested on an XES (Xerox Engineering Systems) ColorGraphx X2 printer, on multiple substrates, and found to provide superior results to the alternative linear blending techniques.
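A toy version of a light/dark split with an over-inking guard is sketched below; the break points, the cap, and the piecewise-linear form are illustrative assumptions, not the calibrated composite primary built from the ink interaction model described above.

```python
def composite_cyan(density, transition_start=0.3, transition_end=0.7, max_total=1.0):
    """Map a requested cyan density in [0, 1] to (light_ink, dark_ink) amounts.
    The light ink carries the highlights alone, the dark ink takes over smoothly
    across a transition band, and the combined amount is capped."""
    if density <= transition_start:
        light, dark = density / transition_start, 0.0
    elif density >= transition_end:
        light, dark = 0.0, density
    else:
        t = (density - transition_start) / (transition_end - transition_start)
        light, dark = 1.0 - t, t * density
    total = light + dark
    if total > max_total:                      # simple over-inking guard
        light, dark = light * max_total / total, dark * max_total / total
    return light, dark

for d in (0.1, 0.4, 0.6, 0.9):
    print(d, composite_cyan(d))
```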