EM and level set algorithms are competing methods for segmenting MRI brain images. This paper presents a
fair comparison of the two techniques using the Montreal Neurological Institute's software phantom.
There are many flavors of level set algorithms for segmentation into multiple regions (e.g., multi-phase and multi-layer algorithms). The specific algorithm we evaluate is a variant of the multi-layer level set algorithm: it uses a single level set function to segment the image into multiple classes and can be run to completion without restarting. The EM-based algorithm is standard.
Both algorithms have the capacity to model a variable number of partial volume classes as well as image
inhomogeneity (bias field). Our evaluation consists of systematically changing the number of partial volume
classes, additive image noise, and regularization parameters. The results suggest that the performances of both
algorithms are comparable across noise, number of partial volume classes, and regularization. The segmentation
errors of both algorithms are around 5-10% for cerebrospinal fluid, gray matter, and white matter. The level set algorithm appears to have a slight advantage for gray matter segmentation. This may be beneficial in studying brain diseases such as multiple sclerosis or Alzheimer's disease, where small changes in gray matter volume are significant.
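As an illustration of the kind of per-tissue error measurement used in such an evaluation, the sketch below computes a Dice overlap for cerebrospinal fluid, gray matter, and white matter against phantom ground-truth labels. The Dice measure and the label encoding are assumptions made for illustration; the paper's exact error definition may differ.

```python
import numpy as np

def dice_per_class(seg, truth, labels=("csf", "gm", "wm")):
    """Dice overlap between a segmentation and phantom ground truth.

    seg, truth: integer label volumes of identical shape; label value i
    corresponds to labels[i]. (Assumed encoding, for illustration only.)
    """
    scores = {}
    for value, name in enumerate(labels):
        s = (seg == value)
        t = (truth == value)
        denom = s.sum() + t.sum()
        scores[name] = 2.0 * np.logical_and(s, t).sum() / denom if denom else 1.0
    return scores

# Toy usage: a ground-truth volume with one deliberately mislabeled voxel.
rng = np.random.default_rng(0)
truth = rng.integers(0, 3, size=(8, 8, 8))
seg = truth.copy()
seg[0, 0, 0] = (seg[0, 0, 0] + 1) % 3
print(dice_per_class(seg, truth))
```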
This paper evaluates the quality of segmentation achieved by a level set evolution strategy called Tunneling Descent. Level sets often evolve and become stationary at the nearest local minimum of an energy function. A
comparison of the local level set minimum with a global minimum is important for many applications. This is
especially true of ultrasound segmentation where ultrasound speckle can introduce many local minima which trap
the level set. In this paper, we compare the quality of the level set segmentation with the quality of segmentation
achieved by (1) simulated annealing (with three different cooling schedules), (2) random sampling, and (3)
three experts (manual segmentation). Simulated annealing and random sampling offer global minimization.
The quality of segmentation is compared for 21 clinically obtained images. The comparison
is carried out using two performance measures: the amount by which the global minimizers can further decrease
the level set energy, and the contour distance between the segmentations themselves. The results show that level
set segmentation is within one ultrasound resolution cell of the global minimum. The results also show that the
level set segmentation is quite close to manual segmentation.
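A minimal sketch of the two performance measures follows, under the assumptions that contours are represented as sampled point sets and that the contour distance is a symmetric mean closest-point distance; the paper's exact definitions may differ.

```python
import numpy as np

def mean_contour_distance(contour_a, contour_b):
    """Symmetric mean closest-point distance between two contours.

    contour_a, contour_b: (N, 2) arrays of (x, y) points sampled along each
    contour. Distances are in pixels; dividing by the size of an ultrasound
    resolution cell would express them in resolution cells.
    """
    # Pairwise Euclidean distances between the two point sets.
    d = np.linalg.norm(contour_a[:, None, :] - contour_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1).mean()   # each point of A to its nearest point of B
    b_to_a = d.min(axis=0).mean()   # and vice versa
    return 0.5 * (a_to_b + b_to_a)

def energy_gap(level_set_energy, global_energy):
    """How much further a global minimizer lowered the level set energy."""
    return level_set_energy - global_energy
```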
The presence of speckle (a spatial stochastic process in an ultrasound image) makes ultrasound segmentation difficult. Speckle
introduces local minima in the MAP energy function of an active contour, and when evolving under gradient descent, the contour gets trapped in a spurious local minimum. In this paper, we propose an alternative technique for evolving a MAP active contour. The technique has two parts: a deterministic evolution strategy called tunneling descent, which escapes from spurious local minima, and a stopping rule for terminating the evolution. The combination yields an algorithm that is robust and produces good segmentations, with only a few free parameters that do not require tuning. We present the conceptual framework of the algorithm, apply it to segmentation of the endocardium in cardiac ultrasound images, and report segmentation results together with an experimental evaluation of how different stopping rules affect performance. Although the algorithm is presented as an ultrasound segmentation technique, it can be used to segment any first-order texture boundary.
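Below is a schematic of the two-part strategy described above, with the descent step, the tunneling (escape) move, and the stopping rule left as placeholder callables; the paper defines the actual moves and rules, so this is only a structural sketch.

```python
def tunneling_descent(contour, energy, descend, tunnel, should_stop,
                      max_rounds=100):
    """Schematic of the two-part strategy: descend, test the stopping rule, escape.

    descend(contour)      -> local minimum reached by gradient descent
    tunnel(contour)       -> deterministic escape move out of that minimum
    should_stop(history)  -> stopping rule applied to the energy history
    All four callables are placeholders; the paper specifies the actual
    descent, escape move, and stopping rules.
    """
    history = [energy(contour)]
    for _ in range(max_rounds):
        contour = descend(contour)            # settle into the nearest local minimum
        history.append(energy(contour))
        if should_stop(history):              # stopping rule terminates the evolution
            break
        contour = tunnel(contour)             # "tunnel" out of the spurious minimum
    return contour, history
```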
Fast retrieval using complete or partial shapes of
organs is an important functionality in medical image databases.
Shapes of organs can be defined as points in shape spaces, which,
in turn, are curved manifolds with a well-defined metric. In this
paper, we experimentally compare two indexing techniques for shape
spaces: first, we re-embed the shape space in a Euclidean space
and use coordinate-based indexing, and second, we use metric-based hierarchical clustering to index the shape space directly. The relative performance is evaluated on a shape similarity query using lumbar and cervical spine X-ray images from the NHANES II database. The experiments show that indexing using
re-embedding is superior to cluster-based indexing.
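A sketch of the re-embedding route follows, assuming classical multidimensional scaling of the pairwise shape-space distances as the re-embedding and an off-the-shelf k-d tree (scipy's cKDTree) for coordinate-based indexing; the paper's specific embedding and index structure may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def reembed_and_index(pairwise_dist, dim=10):
    """Classical MDS re-embedding of a shape space, then k-d tree indexing.

    pairwise_dist: (N, N) matrix of shape-space (geodesic) distances.
    Classical MDS is one standard way to re-embed a metric space in
    Euclidean coordinates; it is an assumption here, not the paper's method.
    """
    n = pairwise_dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    b = -0.5 * j @ (pairwise_dist ** 2) @ j        # double-centered Gram matrix
    eigval, eigvec = np.linalg.eigh(b)
    order = np.argsort(eigval)[::-1][:dim]         # keep the largest eigenvalues
    coords = eigvec[:, order] * np.sqrt(np.maximum(eigval[order], 0.0))
    return coords, cKDTree(coords)

# Usage: embed the database shapes once, then answer similarity queries with
# coordinate-based nearest-neighbour search, e.g.
#   coords, tree = reembed_and_index(dist_matrix)
#   distances, indices = tree.query(coords[query_id], k=5)
```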
A method for constructing transitive nonrigid image registration
algorithms is presented. Although transitivity is a desirable property
of nonrigid image registration algorithms, the algorithms available in
the literature are not transitive. The proposed approach can be
applied to any nonrigid image registration algorithm and generalizes to any number of dimensions. The transitivity property is achieved exactly, up to numerical implementation error, which can be made arbitrarily small. To the best of our knowledge, this is the
first time that transitivity of image registration algorithms has been
achieved.
An existing 2D nonrigid image registration algorithm was made
transitive using the presented method. The algorithm was tested on two
sequences of cardiac short-axis MR images. The maximal transitivity
error (defined in the paper) for several triples of images randomly
selected from the two sequences of cardiac images was on the order of
a millionth of a pixel.
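As an illustration of how a transitivity error for a triple of images could be measured, the sketch below assumes the registrations are available as point-mapping functions and takes the error to be the largest displacement after the round trip A -> B -> C -> A; the paper's formal definition may differ in detail.

```python
import numpy as np

def transitivity_error(t_ab, t_bc, t_ca, points):
    """Maximal transitivity error over a set of sample points.

    t_ab, t_bc, t_ca: callables mapping (N, 2) point arrays from image A to B,
    B to C, and C back to A. For a transitive registration method the
    composition is the identity, so the error is the largest displacement of
    any sample point after the round trip A -> B -> C -> A.
    """
    round_trip = t_ca(t_bc(t_ab(points)))
    return np.linalg.norm(round_trip - points, axis=1).max()

# Example with exactly transitive affine maps: the error is ~0 (floating point).
a = np.array([[1.1, 0.0], [0.05, 0.95]])
b = np.array([[0.9, 0.1], [0.0, 1.05]])
t_ab = lambda p: p @ a.T
t_bc = lambda p: p @ b.T
t_ca = lambda p: p @ np.linalg.inv(b @ a).T
pts = np.random.default_rng(0).uniform(0, 256, size=(100, 2))
print(transitivity_error(t_ab, t_bc, t_ca, pts))
```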
Interaction with image databases is facilitated by using example images in a query. Query-by-example often requires comparing the features of the query image with the features of database images. The appropriate comparison function need not be the Euclidean distance between the two feature vectors; several non-Euclidean similarity measures have been shown to be visually more appropriate. This paper considers the problem of efficient retrieval of images using such similarity measures. A classical k-d tree based indexing algorithm is extended to such similarity measures, and an experimental performance evaluation of the algorithm is provided.
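One way a k-d tree search can be adapted to a non-Euclidean measure is to prune nodes using a lower bound that remains valid for that measure. The sketch below assumes a quadratic-form distance as the similarity measure and uses its smallest eigenvalue to turn the usual Euclidean box distance into a valid pruning bound; both the measure and the bound are illustrative assumptions, not necessarily the paper's construction.

```python
import numpy as np

def quad_form_dist(x, y, a_mat):
    """Quadratic-form (non-Euclidean) similarity: d(x, y) = sqrt((x-y)^T A (x-y))."""
    d = x - y
    return float(np.sqrt(d @ a_mat @ d))

def box_lower_bound(query, lo, hi, lam_min):
    """Lower bound on the quadratic-form distance from `query` to any point
    inside the axis-aligned box [lo, hi] of a k-d tree node.

    Uses (x-y)^T A (x-y) >= lambda_min(A) * ||x-y||^2, so scaling the ordinary
    Euclidean box distance by sqrt(lambda_min) gives a valid pruning bound.
    """
    nearest = np.clip(query, lo, hi)                 # closest box point (Euclidean)
    return np.sqrt(lam_min) * np.linalg.norm(query - nearest)

# Pruning rule inside the tree search (sketch): skip a node whenever
#   box_lower_bound(q, node.lo, node.hi, lam_min) > best_distance_so_far
```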
Recent developments in edge detection have proposed different criteria for gauging the performance of edge detectors in the presence of noise. One of these criteria is "localization": the ability of the edge detector to produce, from noisy data, a detected edge that is as close as possible to the true edge in the image. In this paper, we show the limitations of the localization criterion as previously formulated and propose an alternative. This new performance measure is based on the theory of zero-crossings of stochastic processes. We show that the derivative of a Gaussian is the optimal edge detector for this new measure.
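The localization notion can be illustrated empirically: the sketch below filters a noisy 1-D step edge with a derivative-of-Gaussian kernel and reports how far the detected extremum lies from the true edge. This is only an empirical illustration; the paper's criterion is a statistical one based on zero-crossings of stochastic processes.

```python
import numpy as np

def deriv_gaussian(sigma, half_width=None):
    """First derivative of a Gaussian, sampled on an integer grid."""
    if half_width is None:
        half_width = int(4 * sigma)
    x = np.arange(-half_width, half_width + 1, dtype=float)
    return -x / sigma**2 * np.exp(-x**2 / (2 * sigma**2))

rng = np.random.default_rng(1)
n, true_edge, noise_std = 256, 128, 0.3
signal = (np.arange(n) >= true_edge).astype(float) + noise_std * rng.standard_normal(n)

response = np.convolve(signal, deriv_gaussian(sigma=3.0), mode="same")
detected = np.argmax(np.abs(response))        # edge = extremum of the filter response
print("localization error (pixels):", abs(detected - true_edge))
```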
Classical shape-from-shading and photometric stereo theories assume that diffuse reflection from real-world surfaces is Lambertian. However, there is considerable evidence that diffuse reflection from a large class of surfaces is non-Lambertian. Using a Lambertian model to reconstruct such surfaces can cause serious errors in the reconstruction. In this paper, we propose a theory of non-Lambertian shading and photometric stereo. First, we explore the physics of scattering and obtain a realistic model for the reflectance map of non-Lambertian surfaces. The reflectance map is significantly nonlinear. We then explore the number of light sources and the conditions on their placement for a globally unique inversion of the photometric stereo equation for this reflectance map, and we theoretically establish the minimum number of light sources needed to achieve this. These results are then extended in several directions. The main extension is the joint estimation of the surface normal along with the surface albedo; in the literature, this problem has been addressed only for Lambertian surfaces. We establish some basic results on the joint estimation problem using the manifold structure of intensities obtained from photometric stereo. We show that the joint estimation problem is ill-posed and propose a regularization scheme for it. Our experiments show that, using the techniques proposed here, the fidelity of reconstruction can be increased by an order of magnitude over existing techniques.
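For context, the Lambertian baseline that the proposed theory generalizes can be inverted with a per-pixel linear system; the sketch below recovers the albedo-scaled normal from three known light directions. The non-Lambertian reflectance map discussed in the paper makes this inversion nonlinear, so the code shows only the classical baseline.

```python
import numpy as np

def lambertian_photometric_stereo(intensities, lights):
    """Baseline Lambertian inversion that the non-Lambertian theory generalizes.

    intensities: (K,) intensities of one pixel under K >= 3 light sources.
    lights:      (K, 3) unit light directions.
    Lambertian model: I_k = albedo * max(l_k . n, 0); when all lights
    illuminate the pixel, the system is linear in g = albedo * n.
    """
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)
    albedo = np.linalg.norm(g)
    normal = g / albedo
    return normal, albedo

# Toy usage: recover a known normal and albedo from synthetic Lambertian data.
lights = np.array([[0.0, 0.0, 1.0],
                   [0.7, 0.0, 0.7],
                   [0.0, 0.7, 0.7]])
lights /= np.linalg.norm(lights, axis=1, keepdims=True)
true_n = np.array([0.2, 0.1, 1.0]); true_n /= np.linalg.norm(true_n)
intensities = 0.8 * lights @ true_n           # albedo 0.8; all dot products positive
print(lambertian_photometric_stereo(intensities, lights))
```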