Being able to track the image quality of a printing system and detect changes as they occur is of great value. To do that effectively, image quality data must be gathered and processed continuously. A common approach is to print and measure test patterns on a predetermined schedule and then analyze the measured image quality data to detect changes. However, because of other printer noise, such as page-to-page instability and mottle, the measured data for a given image quality attribute of interest (e.g., streaks) at a given time are better described by a statistical model than by a deterministic one. This makes it difficult to detect image quality changes reliably unless sufficient test samples are collected. Yet these test samples add no value for customers and should be minimized. An alternative is to measure and assess the image quality attributes of interest directly from customer pages and post-process them to detect changes. Beyond the difficulty caused by other sources of printer noise, the variable image content of customer pages imposes further challenges for change detection. This paper addresses these issues and presents a feasible solution in which change points are detected by statistical model ranking.
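The model-ranking idea can be illustrated with a minimal stdlib sketch: fit a "no change" model and a family of "change at k" models to a noisy measurement series, score each by BIC, and report the change point only if some changed model ranks best. This is a generic hypothetical example (i.i.d. Gaussian noise around a piecewise-constant mean), not the paper's actual models.

```python
import math

def detect_change_point(x):
    """Rank a no-change model against single-change-point models by BIC.

    Hypothetical sketch: each model assumes i.i.d. Gaussian noise around a
    piecewise-constant mean; the lowest BIC wins. Returns the change index,
    or None if the no-change model ranks best.
    """
    n = len(x)

    def gauss_ll(seg):
        # Maximized Gaussian log-likelihood of one constant-mean segment.
        m = sum(seg) / len(seg)
        var = max(sum((v - m) ** 2 for v in seg) / len(seg), 1e-12)
        return -0.5 * len(seg) * (math.log(2 * math.pi * var) + 1)

    # BIC = k*ln(n) - 2*ln(L); the no-change model has 2 parameters.
    best_bic = 2 * math.log(n) - 2 * gauss_ll(x)
    best_k = None
    for k in range(2, n - 1):  # candidate change points
        ll = gauss_ll(x[:k]) + gauss_ll(x[k:])
        bic = 5 * math.log(n) - 2 * ll  # 2 means + 2 variances + change point
        if bic < best_bic:
            best_bic, best_k = bic, k
    return best_k
```

With a clear level shift in the series, the two-segment model at the true index dominates the ranking.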
The method of paired comparisons is often used in image quality evaluations. Psychometric scale values for quality
judgments are modeled using Thurstone's Law of Comparative Judgment in which distance in a psychometric scale
space is a function of the probability of preference. The transformation from psychometric space to probability is a
cumulative probability distribution.
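Under Thurstone's Case V assumptions (equal discriminal dispersions), the transformation from preference probability back to psychometric distance is simply the inverse normal CDF. A small stdlib sketch, with an illustrative function name:

```python
from statistics import NormalDist

def thurstone_scale_distance(p_prefer, sigma=1.0):
    """Thurstone Case V: map the probability that A is preferred to B
    into a signed distance on the psychometric scale via the inverse
    normal CDF. `sigma` is the assumed common discriminal dispersion."""
    return sigma * NormalDist().inv_cdf(p_prefer)
```

A preference probability of 0.5 maps to zero distance; a probability of about 0.84 (one standard normal deviate) maps to a distance of one scale unit.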
The major drawback of a complete paired-comparison experiment is that every treatment is compared to every other, so the number of comparisons grows quadratically. We ameliorate this difficulty by performing paired comparisons in two stages: first precisely estimating anchors in the psychometric scale space, spaced apart to cover the range of scale values, and then comparing treatments against those anchors.
In this model, we employ a generalized linear model in which the regression equation has a constant offset vector determined by the anchors. This formulation yields a straightforward statistical model that can be fitted and diagnosed with any modern statistics package.
This method was applied to overall preference evaluations of color pictorial hardcopy images. The results were found to be consistent with complete paired-comparison experiments, but required significantly less effort.
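The anchored formulation can be sketched in stdlib Python, assuming known anchor locations and binomial win/loss counts of one treatment against each anchor. The probit link plays the role of the cumulative distribution, and the anchor value enters the linear predictor as the constant offset. The function name and the ternary-search optimizer are illustrative, not the paper's implementation.

```python
import math
from statistics import NormalDist

def fit_anchored_scale(anchors, wins, losses, lo=-5.0, hi=5.0, iters=100):
    """Estimate one treatment's scale value s against fixed anchors by
    maximizing a probit (Thurstonian) likelihood. Each anchor value acts
    as a known offset in the linear predictor s - a_j. Uses ternary
    search, valid because the probit log-likelihood is concave in s."""
    phi = NormalDist().cdf

    def ll(s):
        out = 0.0
        for a, w, l in zip(anchors, wins, losses):
            p = min(max(phi(s - a), 1e-12), 1 - 1e-12)
            out += w * math.log(p) + l * math.log(1 - p)
        return out

    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if ll(m1) < ll(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2
```

Because only treatment-versus-anchor comparisons are needed, the number of judgments grows linearly in the number of treatments rather than quadratically.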
Computational mathematical morphology (CMM) is a nonlinear filter representation
particularly amenable to real-time image processing. In the state-of-the-art
implementation, each pixel value in a windowed observation is
indexed into a separate lookup table to retrieve a set of bit vectors.
Each bit in a vector corresponds to a basis element in the CMM filter representation. All retrieved bit
vectors are ANDed together to produce a bit vector with a unique nonzero
bit. The position of that bit corresponds to the basis element containing the observation and
is used to look up a filter value in a table. The number of stored
bit vectors is a linear function of the image or signal bit depth. We present
an architecture for CMM implementation that uses a minimal number of bit
vectors and whose memory requirement is less sensitive to bit depth.
In the proposed architecture, basis elements are projected to subspaces and only
bit vectors unique to each subspace are stored. With the addition of an intermediate
lookup table to map observations to unique bit vectors, filter memory is greatly reduced.
Simulations show that the architecture provides an advantage for random tessellations of the
observation space. A 50% memory savings is shown for a practical application to digital
darkness control in electronic printing.
Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment a Pioneer Plasma Display and an Apple Cinema Display were used for stimulus presentation. Observers performed rank-order and paired-comparison tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability from a set of digital cameras or scanners. Nineteen subjects (5 females, 14 males) ranging from 19 to 51 years of age participated in this experiment. Using a paired-comparison model and a ranking model, scales were estimated for each display and image combination, yielding ten scale pairs, ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired-comparison data and the Bradley-Terry-Mallows model was used for the ranking data. Each model was fit using maximum-likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack-of-fit.
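The Bradley-Terry fit mentioned above can be reproduced in a few lines with the classical MM (Zermelo) iteration, a standard way to obtain the maximum-likelihood strengths without a general-purpose optimizer. This is a generic sketch of the model, not the authors' code; the win-matrix layout is an assumption.

```python
def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry strengths by the classical MM (Zermelo) iteration.
    `wins[i][j]` counts how often item i was preferred to item j; the model
    sets P(i beats j) = w_i / (w_i + w_j). Returns strengths normalized to
    sum to 1; log-strengths give an interval scale."""
    n = len(wins)
    w = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            total_wins = sum(wins[i])
            # Denominator of the MM update: sum over opponents of
            # n_ij / (w_i + w_j), where n_ij is the comparison count.
            denom = sum((wins[i][j] + wins[j][i]) / (w[i] + w[j])
                        for j in range(n) if j != i)
            new.append(total_wins / denom if denom else w[i])
        s = sum(new)
        w = [v / s for v in new]
    return w
```

Likelihood ratio tests and confidence intervals, as used in the experiment, then follow from the log-likelihood at the fitted strengths.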
We present a real-time compact architecture for translation-invariant windowed nonlinear discrete filters represented in computational mathematical morphology (CMM). The architecture enables filter values to be computed in a deterministic number of operations and thus can be pipelined. Memory requirements are proportional to the size of the filter basis. A filter is implemented by three steps: 1) each component of a vector observation is used as an index into a table of bit vectors; 2) all retrieved bit vectors are ANDed together; and 3) the position of the unique nonzero bit is used as an index to a table of filter values. We motivate and describe CMM and illustrate the architecture through examples. We also formally analyze the representation upon which the architecture rests. A modification of the basic architecture provides for increasing filters.
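The three table-lookup steps can be sketched directly, modeling each bit vector as a Python int used as a bitmask. The toy tables and function name are illustrative; a hardware implementation would of course use fixed-width words rather than Python ints.

```python
def cmm_filter(observation, tables, filter_values):
    """Evaluate a CMM-represented filter in three table lookups:
    (1) index each component of the windowed observation into its own
    bit-vector table, (2) AND the retrieved bit vectors together,
    (3) use the position of the single surviving bit to look up the
    filter value. Bit i marks membership in basis element i."""
    acc = ~0  # all-ones bit vector
    for value, table in zip(observation, tables):
        acc &= table[value]
    # Exactly one bit survives when the basis tessellates the space.
    assert acc != 0 and acc & (acc - 1) == 0, "basis must tessellate"
    return filter_values[acc.bit_length() - 1]
```

For a two-pixel binary window whose basis elements are the four singleton cells, ANDing the per-component masks isolates the cell containing the observation, and the filter table supplies the output value.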
A table in a document is a rectilinear arrangement of cells where each cell contains a sequence of words. Several lines of text may compose one cell. Cells may be delimited by horizontal or vertical lines, but often this is not the case. A table analysis system is described which reconstructs table formatting information from table images whether or not the cells are explicitly delimited. Inputs to the system are word bounding boxes and any horizontal and vertical lines that delimit cells. Using a sequence of carefully-crafted rules, multi-line cells and their interrelationships are found even though no explicit delimiters are visible. This robust system is a component of a commercial document recognition system.
Image processing is properly viewed as modeling and estimation in two dimensions. Images are often projections of higher-dimensional phenomena onto a 2D grid. The scope of phenomena that can be imaged is unbounded, thus a wealth of image models is required. In addition, models should be constructed according to rigorous mathematics from first principles. One such approach is random-set modeling. The fit between random sets and our intuitive notion of image formation is natural, but poses difficult mathematical and statistical problems. We review the foundation of the random set approach in the continuous and discrete setting and present several highlights in estimation and filtering for binary images.
The Boolean random set model is a tractable random set model used in image analysis, geostatistics, and queueing systems, among others. It can be formulated in the continuous and discrete settings, each of which offers certain advantages with regard to modeling and estimation. The continuous model enjoys more elegant theory but often results in intractable formulas. The discrete model, especially in the 1D directional case, provides flexible models, tractable estimators, and optimal filters.
Gray-scale textures can be viewed as random surfaces in gray-scale space. One method of constructing such surfaces is the Boolean random function model wherein a surface is formed by taking the maximum of shifted random functions. This model is a generalization of the Boolean random set model in which a binary image is formed by the union of randomly positioned shapes. The Boolean random set model is composed of two independent random processes: a random shape process and a point process governing the placement of grains. The union of the randomly shifted grains forms a binary texture of overlapping objects. For the Boolean random function model, the random set or grain is replaced by a random function taking values among the admissible gray values. The maximum over all the randomly shifted functions produces a model of a rough surface that is appropriate for some classes of textures. The Boolean random function model is analyzed by viewing its behavior on intersecting lines. Under mild conditions in the discrete setting, 1D Boolean random set models are induced on intersecting lines. The discrete 1D model has been completely characterized in previous work. This analysis is used to derive a maximum-likelihood estimator for the Boolean random function.
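The max-of-shifted-functions construction is easy to simulate in 1D. In this sketch a fixed triangular bump stands in for the random primary function, and the germ process is Bernoulli; both choices are illustrative assumptions, not the model's general form.

```python
import random

def boolean_random_function(length, p, bump, seed=0):
    """Simulate a 1D discrete Boolean random function: each site is a
    germ with Bernoulli probability p; a shifted copy of `bump` (a fixed
    stand-in for the random primary function) is placed at each germ,
    and the surface is the pointwise maximum over all placed copies.
    Background level is 0."""
    rng = random.Random(seed)
    surface = [0] * length
    for x in range(length):
        if rng.random() < p:
            for dx, h in enumerate(bump):
                if 0 <= x + dx < length:
                    surface[x + dx] = max(surface[x + dx], h)
    return surface
```

Thresholding such a surface at any level yields a realization of a Boolean random *set* model, which is the link exploited by the line-intersection analysis.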
The Boolean model is a random set process in which random shapes are positioned according to the outcomes of an independent point process. In the discrete case, the point process is Bernoulli. To do estimation on the two-dimensional discrete Boolean model, we sample the germ-grain model at widely spaced points. An observation using this procedure consists of jointly distributed horizontal and vertical runlengths. An approximate likelihood of each cross observation is computed. Since the observations are taken at widely spaced points, they are considered independent and are multiplied to form a likelihood function for the entire sampled process. Estimation for the two-dimensional process is done by maximizing the grand likelihood over the parameter space. Simulations on random-rectangle Boolean models show significant decrease in variance over the method using horizontal and vertical linear samples. Maximum-likelihood estimation can also be used to fit models to real textures.
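The per-point observation used in this sampling scheme, the jointly distributed horizontal and vertical runlengths through a sample pixel, can be extracted as follows. This is a hypothetical helper written for illustration, not the paper's code.

```python
def cross_observation(img, i, j):
    """Extract the jointly distributed horizontal and vertical runlengths
    through pixel (i, j) of a binary image (list of 0/1 rows): the
    per-point 'cross' observation used in widely-spaced sampling.
    Returns (pixel value, horizontal runlength, vertical runlength)."""
    v = img[i][j]
    # Horizontal run of equal-valued pixels through (i, j).
    l = j
    while l > 0 and img[i][l - 1] == v:
        l -= 1
    r = j
    while r < len(img[i]) - 1 and img[i][r + 1] == v:
        r += 1
    # Vertical run of equal-valued pixels through (i, j).
    t = i
    while t > 0 and img[t - 1][j] == v:
        t -= 1
    b = i
    while b < len(img) - 1 and img[b + 1][j] == v:
        b += 1
    return v, r - l + 1, b - t + 1
```

Taking such observations at widely spaced points makes them approximately independent, so their approximate likelihoods can be multiplied into the grand likelihood that is maximized for estimation.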
Region-based coding is applied to images composed of disjoint texture regions, where in each region the image is generated by a discrete random Boolean model. The image is segmented into regions by applying pixelwise granulometric classification and the region boundaries are chain encoded. Maximum-likelihood estimation based upon induced 1D Boolean models is used to estimate the parameters of the governing processes in each region. The regions are coded by these parameters. Decoding is accomplished by generating in each region a realization of the appropriate random set.
A one-dimensional discrete Boolean model is a random process on the discrete line where random-length line segments are positioned according to the outcomes of a Bernoulli process. Points on the discrete line are either covered or left uncovered by a realization of the process. An observation of the process consists of runs of covered and not-covered points, called black and white runlengths, respectively. The black and white runlengths form an alternating sequence of independent random variables. We show how the Boolean model is completely determined by probability distributions of these random variables by giving explicit formulas linking the marking probability of the Bernoulli process and segment length distribution with the runlength distributions. The black runlength density is expressed recursively in terms of the marking probability and segment length distribution, and white runlengths are shown to have a geometric probability law. Filtering for the Boolean model can also be done via runlengths. The optimal minimum mean absolute error filter for union noise is computed as the binary conditional expectation for windowed observations, expressible as a function of the observed black runlengths.
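The 1D model and its runlength observation are compact enough to simulate directly. In this sketch the segment length is drawn uniformly from a supplied list, a stand-in for the general length distribution.

```python
import random
from itertools import groupby

def boolean_1d(n, p, seg_lengths, seed=0):
    """Realize a 1D discrete Boolean model: each site is a germ with
    Bernoulli probability p; a segment of random length (drawn uniformly
    from `seg_lengths`, a stand-in for the true length law) covers sites
    to the right of each germ. Returns the 0/1 coverage sequence."""
    rng = random.Random(seed)
    covered = [0] * n
    for x in range(n):
        if rng.random() < p:
            for dx in range(rng.choice(seg_lengths)):
                if x + dx < n:
                    covered[x + dx] = 1
    return covered

def runlengths(seq):
    """Alternating (value, runlength) pairs of a 0/1 sequence: the black
    (1) and white (0) runlength observation of the process."""
    return [(v, len(list(g))) for v, g in groupby(seq)]
```

Collecting white runlengths over many realizations and checking them against a geometric law is a quick empirical sanity check of the result stated above.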
Parametric estimation is achieved for the discrete 2D Boolean model by applying maximum-likelihood estimation on linear samples. Under certain conditions, a 2D Boolean model induces a 1D Boolean model so that the likelihood function of a 1D observation is expressed in terms of the parameters of the 2D inducing model, thereby enabling maximum-likelihood estimation to be performed on the 2D model using linear samples.
The exact probability density for a windowed observation of a discrete 1D Boolean process having convex grains is found via recursive probability expressions. This observation density is used as the likelihood function for the process and numerically yields the maximum-likelihood estimator for the process intensity and the parameters governing the distribution of the grain lengths. Maximum-likelihood estimation is applied in the case of Poisson distributed lengths.
Restoration is ultimately a problem of statistical estimation. An optimal filter is estimated to restore binary fax images. The filter is an approximation of the binary conditional expectation that minimizes the expected absolute error between the degraded image and the ideal image. It is implemented as a morphological hit-or-miss filter. Estimation methodology employs a model-based simulation of the degradation due to the fax process. Simulated images are used to generate data from which the filter is estimated. The methodology presented can be used for other classes of images. Restoration examples are given.
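A hit-or-miss filter of the kind described can be sketched as a union of window templates, each specifying which pixels must be set, which must be clear, and which are don't-cares. This toy stands in for the estimated conditional-expectation filter; the template format and border convention are assumptions for illustration.

```python
def hit_or_miss_filter(img, templates):
    """Apply a binary filter expressed as a union of 3x3 hit-or-miss
    templates (a toy stand-in for an estimated conditional-expectation
    filter). Each template entry is 1 (must be set), 0 (must be clear),
    or None (don't care); an output pixel is set iff some template
    matches the window around it. Pixels outside the image are read as 0."""
    h, w = len(img), len(img[0])

    def pix(i, j):
        return img[i][j] if 0 <= i < h and 0 <= j < w else 0

    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            for t in templates:
                if all(t[di + 1][dj + 1] is None
                       or pix(i + di, j + dj) == t[di + 1][dj + 1]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)):
                    out[i][j] = 1
                    break
    return out
```

In the restoration setting, the template set is chosen so that the filter output approximates the conditional expectation of the ideal pixel given the degraded window.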