Multimodal image registration is needed to align medical images collected from different protocols or imaging sources, thereby allowing complementary information to be mapped between images. One challenge of multimodal image registration is that typical similarity measures rely on statistical correlations between image intensities to determine anatomical alignment. Alternate image representations could map intensities into a space in which the multimodal images appear more similar, thus facilitating their co-registration. In this work, we present a spectral embedding based registration (SERg) method that uses non-linearly embedded representations, obtained from independent components of statistical texture maps of the original images, to facilitate multimodal image registration. Our methodology comprises the following main steps: 1) image-derived textural representation of the original images, 2) dimensionality reduction using independent component analysis (ICA), 3) spectral embedding to generate the alternate representations, and 4) image registration. The rationale behind our approach is that SERg yields embedded representations in which very different looking images appear more similar, thereby facilitating improved co-registration. Statistical texture features are derived from the image intensities and then reduced to a smaller set using ICA to remove redundant information. Spectral embedding generates a new representation by eigendecomposition, from which only the most important eigenvectors are selected. This helps to accentuate areas of salience based on modality-invariant structural information and therefore better identifies corresponding regions in both the template and target images. The premise of SERg is that image registration driven by these areas of salience and correspondence should improve alignment accuracy.
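The pipeline above can be sketched minimally in numpy. This toy version covers steps 1 and 3 only: local mean and standard deviation stand in for the statistical texture features, and the ICA reduction (step 2) is omitted for brevity. The window size, Gaussian affinity, and kernel bandwidth are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def texture_maps(img, w=3):
    # Step 1: simple statistical texture maps (local mean and std. dev.)
    # as stand-ins for the paper's texture features.
    pad = w // 2
    p = np.pad(img, pad, mode="edge")
    win = sliding_window_view(p, (w, w))           # (H, W, w, w)
    return np.stack([win.mean(axis=(2, 3)), win.std(axis=(2, 3))], axis=-1)

def spectral_embedding(feats, n_ev=2, sigma=1.0):
    # Step 3: eigendecomposition of a graph Laplacian built on
    # per-pixel feature vectors; keep the leading non-trivial eigenvectors.
    X = feats.reshape(-1, feats.shape[-1])
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))             # Gaussian affinity
    L = np.diag(W.sum(axis=1)) - W                 # unnormalised Laplacian
    _, evecs = np.linalg.eigh(L)
    # eigenvector 0 is (near-)constant; the next n_ev form the embedding
    return evecs[:, 1:1 + n_ev].reshape(feats.shape[0], feats.shape[1], n_ev)

img = np.random.rand(8, 8)
emb = spectral_embedding(texture_maps(img))
print(emb.shape)                                   # (8, 8, 2)
```

The dense affinity matrix is O(N^2) in the number of pixels, so this form is only practical for small patches; real implementations use sparse neighbourhood graphs.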
In this work, SERg is implemented within the Demons framework to allow the algorithm to more effectively register multimodal images; SERg is also tested within the free-form deformation framework driven by mutual information. Nine pairs of synthetic T1-weighted and T2-weighted brain MRI were registered under the following conditions: five levels of noise (0%, 1%, 3%, 5%, and 7%) and two levels of bias field (20% and 40%), each with and without noise. We demonstrate that across all of these conditions, SERg yields a mean squared error 81.51% lower than that of Demons driven by MRI intensity alone. We also spatially align twenty-six ex vivo histology sections and in vivo prostate MRI in order to map the spatial extent of prostate cancer onto corresponding radiologic imaging. SERg outperforms intensity-based registration by decreasing the root mean squared distance of annotated landmarks in the prostate gland via both the Demons algorithm and mutual information-driven free-form deformation. In both synthetic and clinical experiments, the observed improvement in alignment of the template and target images suggests the utility of parametric eigenvector representations, and hence of SERg, for multimodal image registration.
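For reference, the classic Thirion Demons update that such a framework builds on computes a per-pixel displacement force u = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2). A minimal sketch, assuming 2-D images; in SERg the same force would be computed on the embedded representations rather than on raw intensities:

```python
import numpy as np

def demons_step(fixed, moving, eps=1e-9):
    # One Thirion Demons force computation:
    # u = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2)
    gy, gx = np.gradient(fixed)
    diff = moving - fixed
    denom = gx ** 2 + gy ** 2 + diff ** 2 + eps    # eps guards flat regions
    return np.stack([diff * gy / denom, diff * gx / denom])  # (2, H, W): (uy, ux)

fixed = np.zeros((16, 16))
fixed[4:12, 4:12] = 1.0
moving = np.roll(fixed, 1, axis=1)                 # moving image shifted one pixel
u = demons_step(fixed, moving)
print(u.shape)                                     # (2, 16, 16)
```

A full Demons registration iterates this step, Gaussian-smooths the accumulated displacement field for regularisation, and warps the moving image before the next iteration.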
Spectral embedding (SE), a graph-based manifold learning method, has previously been shown to be useful in high-dimensional data classification. In this work, we present a novel SE-based active contour (SEAC) segmentation
scheme and demonstrate its application to lesion segmentation on breast dynamic contrast-enhanced magnetic
resonance imaging (DCE-MRI). We employ SE on DCE-MRI on a per-voxel basis to embed the
high-dimensional time-series intensity vector into a reduced-dimensional space, where the reduced embedding
space is characterized by the principal eigenvectors. The orthogonal eigenvector-based data representation allows
for computation of strong tensor gradients in the spectrally embedded space and also yields improved region
statistics that serve as optimal stopping criteria for SEAC. We demonstrate both analytically and empirically
that the tensor gradients in the spectrally embedded space are stronger than the corresponding gradients in the
original grayscale intensity space. On a total of 50 breast DCE-MRI studies, SEAC yielded a mean absolute
difference (MAD) of 3.2±2.1 pixels and mean Dice similarity coefficient (DSC) of 0.74±0.13 compared to manual
ground truth segmentation. An active contour in conjunction with fuzzy c-means (FCM+AC), a commonly used
segmentation method for breast DCE-MRI, produced a corresponding MAD of 7.2±7.4 pixels and mean DSC
of 0.58±0.32. In conjunction with a set of 6 quantitative morphological features automatically extracted from
the SEAC derived lesion boundary, a support vector machine (SVM) classifier yielded an area under the curve
(AUC) of 0.73 for discriminating between 10 benign and 30 malignant lesions; the corresponding SVM classifier
with the FCM+AC-derived morphological features yielded an AUC of 0.65.
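The per-voxel embedding and tensor gradient described above can be sketched as follows. Each voxel's time-series intensity vector is embedded via graph-Laplacian eigenvectors, and a Di Zenzo-style multichannel magnitude stands in for the tensor gradient. The synthetic enhancement curves, Gaussian affinity, and dense graph are illustrative assumptions suitable only for toy sizes.

```python
import numpy as np

def per_voxel_embedding(dce, n_ev=3, sigma=0.5):
    # Embed each voxel's time-series vector with spectral embedding:
    # eigenvectors of the graph Laplacian over all voxels.
    H, W, T = dce.shape
    X = dce.reshape(-1, T)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Wg = np.exp(-d2 / (2 * sigma ** 2))            # Gaussian affinity
    L = np.diag(Wg.sum(axis=1)) - Wg
    _, evecs = np.linalg.eigh(L)
    return evecs[:, 1:1 + n_ev].reshape(H, W, n_ev)

def tensor_gradient(emb):
    # Di Zenzo-style multichannel gradient magnitude across the
    # embedding channels (a simple form of "tensor gradient").
    g = 0.0
    for c in range(emb.shape[-1]):
        gy, gx = np.gradient(emb[..., c])
        g = g + gx ** 2 + gy ** 2
    return np.sqrt(g)

H, W, T = 10, 10, 6
t = np.arange(T)
dce = np.tile(0.1 * t, (H, W, 1)).astype(float)    # slow background uptake
dce[3:7, 3:7, :] = 1 - np.exp(-t)                  # rapid lesion enhancement
mag = tensor_gradient(per_voxel_embedding(dce))
print(mag.shape)                                   # (10, 10)
```

The gradient magnitude is largest along the lesion boundary, where neighbouring voxels land far apart in the embedded space; this is the edge signal an active contour would lock onto.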
Dynamic contrast enhanced (DCE) MRI has emerged as a promising new imaging modality for breast cancer
screening. Currently, radiologists evaluate breast lesions based on qualitative description of lesion morphology
and contrast uptake profiles. However, the subjectivity associated with qualitative description of breast lesions
on DCE-MRI introduces a high degree of inter-observer variability. In addition, the high sensitivity of MRI
results in poor specificity and thus a high rate of biopsies on benign lesions. Computer aided diagnosis (CAD)
methods have previously been proposed for breast MRI, but research in the field is far from comprehensive. Most
previous work has focused on quantifying morphological attributes used by radiologists, characterizing
lesion intensity profiles that reflect contrast dye uptake, or characterizing lesion texture. While there has
been much debate on the relative importance of the different classes of features (e.g., morphological, textural,
and kinetic), comprehensive quantitative comparisons between the different lesion attributes have been rare.
In addition, although kinetic signal enhancement curves may give insight into the underlying physiology of the
lesion, signal intensity is susceptible to MRI acquisition artifacts such as bias field and intensity non-standardness.
In this paper, we introduce a novel lesion feature that we call the kinetic texture feature, which we demonstrate
to be superior to lesion intensity profile dynamics. Our hypothesis is that since lesion intensity is
susceptible to artifacts, changes in lesion texture better reflect lesion class (benign or malignant). We
quantitatively demonstrate the superiority of kinetic texture features for lesion classification on 18 breast
DCE-MRI studies compared to over 500 different morphological, kinetic intensity, and lesion texture features.
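A hypothetical sketch of the kinetic texture idea: a texture statistic is computed inside the lesion at every time point of the contrast series, and the resulting curve is summarised by its kinetics. Here local standard deviation stands in for the texture statistic and a linear fit for the kinetic summary; both are illustrative assumptions, not the paper's feature set.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def kinetic_texture(series, mask, w=3):
    # series: (H, W, T) DCE time series; mask: (H, W) lesion mask.
    # At each time point, compute a texture map (local std. dev.) and
    # average it over the lesion; then summarise the curve's kinetics.
    pad = w // 2
    curve = []
    for t in range(series.shape[-1]):
        p = np.pad(series[..., t], pad, mode="edge")
        local_std = sliding_window_view(p, (w, w)).std(axis=(2, 3))
        curve.append(local_std[mask].mean())       # mean lesion texture at t
    slope, intercept = np.polyfit(np.arange(len(curve)), curve, 1)
    return slope, intercept                        # kinetic summary of texture

T = 6
img = np.random.rand(12, 12, T)
mask = np.zeros((12, 12), dtype=bool)
mask[4:8, 4:8] = True
s, b = kinetic_texture(img, mask)
```

Because the feature tracks how texture changes over the uptake curve rather than the raw signal level, it is less sensitive to bias field and intensity non-standardness than kinetic intensity features.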
In conjunction with linear and non-linear dimensionality reduction methods, a support vector machine (SVM)
classifier yielded classification accuracy and positive predictive values of 78% and 86% with kinetic texture
features compared to 78% and 73% with morphological features and 72% and 83% with textural features, respectively.