A robust multi-volume three-dimensional (3D) registration algorithm is introduced to improve the contrast of optical coherence tomography (OCT) volumes. Our method registers multiple volumes to a selected reference volume to correct for the translational and rotational differences between each target and the reference, and then averages the registered volumes. We tested our registration algorithm on volumes obtained from three OCT systems with different fields of view and resolutions. To demonstrate its accuracy, the method is evaluated using two different metrics, and its advantages over other registration algorithms and its limitations are discussed.
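To make the registration-and-averaging idea concrete, the following minimal sketch aligns each target volume to the reference by 3D translation only (using off-the-shelf phase correlation) and then averages. The translation-only model, and all function and variable names, are simplifying assumptions rather than the authors' actual pipeline, which also corrects rotation.

```python
# Hedged sketch: translation-only alignment and averaging of OCT volumes.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def average_registered_volumes(reference, targets):
    """Align each target volume to `reference` by 3D translation, then average.

    `reference` and each element of `targets` are 3D numpy arrays of the same
    shape (z, x, y). This is an illustration, not the authors' full method.
    """
    accum = reference.astype(np.float64)
    for vol in targets:
        # Estimate the subpixel 3D shift between the two volumes.
        offset, _, _ = phase_cross_correlation(reference, vol, upsample_factor=4)
        # Resample the target volume onto the reference grid.
        aligned = nd_shift(vol.astype(np.float64), offset, order=1, mode="nearest")
        accum += aligned
    return accum / (len(targets) + 1)
```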
We present a novel approach that leverages deep learning to reconstruct high-resolution OCT B-scans from reduced axial resolution data. In this work, the original OCT signal is used as the ground truth, and lower axial resolution was simulated by windowing the interference fringes. A super-resolution pixel-to-pixel generative adversarial network (GAN) was investigated for reconstructing high-resolution OCT data in the spatial domain and compared against reconstruction in the spectral domain.
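As an illustration of how reduced axial resolution can be simulated by windowing the interference fringes, here is a minimal numpy sketch. The window shape, the kept fraction of the spectrum, and the array names are illustrative assumptions, not the authors' exact preprocessing.

```python
# Hedged sketch: simulating reduced axial resolution by windowing the
# spectral interferogram before the Fourier transform.
import numpy as np

def simulate_low_axial_resolution(fringes, keep_fraction=0.5):
    """fringes: (n_alines, n_samples) real-valued spectral interferograms.

    Keeping only a central fraction of the spectral samples narrows the
    effective bandwidth, which broadens the axial point spread function.
    """
    n = fringes.shape[-1]
    window = np.zeros(n)
    lo = int(n * (1 - keep_fraction) / 2)
    hi = lo + int(n * keep_fraction)
    window[lo:hi] = np.hanning(hi - lo)      # taper to limit ringing
    narrowed = fringes * window              # restrict spectral bandwidth
    # Reconstruct A-scans: magnitude of the FFT along the spectral axis.
    return np.abs(np.fft.fft(narrowed, axis=-1))[:, : n // 2]
```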
Purpose: Alzheimer’s disease (AD) is a prevalent age-related neurodegenerative disease worldwide, with no cure yet available. Early prognosis is therefore crucial for planning proper clinical intervention. This is especially true for people diagnosed with mild cognitive impairment, for whom predicting whether and when future disease onset will occur is particularly valuable. However, such prognostic prediction has proven to be challenging, and previous studies have achieved only limited success.
Approach: In this study, we seek to extract the principal component of the longitudinal disease progression trajectory in the early stage of AD, measured as the magnetic resonance imaging (MRI)-derived structural volume, to predict the onset of AD in patients with mild cognitive impairment two years ahead.
Results: Cross-validation results of LASSO regression using the longitudinal functional principal component (FPC) features show significantly improved predictive power compared to training using the baseline volume 12 months before AD conversion [area under the receiver operating characteristic curve (AUC) of 0.802 versus 0.732] and 24 months before AD conversion (AUC of 0.816 versus 0.717).
Conclusions: We present a framework using functional principal component analysis (FPCA) to extract features from MRI-derived information collected at multiple timepoints. The results of our study demonstrate the advantageous predictive power of the population-based longitudinal FPC features for predicting disease onset compared with using only cross-sectional volumetric features extracted from a single timepoint.
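A minimal sketch of the kind of evaluation reported above, assuming an L1-penalized (LASSO-style) logistic regression over FPC or baseline-volume features and a cross-validated AUC; the feature layout, regularization strength, and variable names are assumptions rather than the authors' code.

```python
# Hedged sketch: L1-penalized logistic regression with cross-validated AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_validated_auc(features, converted, n_folds=5):
    """features: (n_subjects, n_features) FPC scores or baseline volumes.
    converted: binary labels, 1 if the subject converted to AD."""
    model = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    )
    scores = cross_val_score(model, features, converted,
                             cv=n_folds, scoring="roc_auc")
    return scores.mean(), scores.std()
```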
We present novel approaches to implementing state-of-the-art deep learning techniques for the processing of optical coherence tomography angiography (OCT-A) images for the classification of diabetic retinopathy (DR) severity. A deep neural network’s classification performance on feature-engineered inputs is compared against its performance on unprocessed OCT-A images. We investigate the effects of lower axial resolution (simulated by using a narrower spectral bandwidth) on the classification of DR severity, and the recovery of lost features using a generative adversarial network. We also explore the relationship between DR severity classification and lateral resolution.
Spaceborne synthetic aperture radar (SAR) and optical sensors are among the main sources of Earth observation in the present age. Both data types have inherent advantages and disadvantages. Spaceborne optical sensors are restricted by clouds but can offer strong information content in ideal conditions. SAR sensors, on the other hand, rely on their own illumination and can see through clouds, making SAR potentially an all-weather, day/night imager; however, SAR sensors have limitations in terms of data collection geometry and algorithmic approximations. The two sensor types offer complementary information that can be exploited through data fusion for enhanced results. This research focuses on capitalizing on the fusion potential of spaceborne high-resolution SAR and optical data in urban settings: the strong reflection of SAR energy from urban areas and the optical features of such areas can be combined to enhance urban infrastructure detection and monitoring in a SAR/optical fused scenario. SAR/optical fusion can take place at three levels: 1) pixel level, 2) feature level, and 3) information level. Pixel-level fusion is often considered the most difficult for high-resolution data because precise registration to the subpixel level is required, and even slight misregistration leads to unfavorable results. The Simon Fraser University (SFU) Burnaby Mountain campus was chosen as the area of interest because of its ongoing student housing and university infrastructure development projects. TerraSAR-X High Resolution Spotlight (TSX-HS) Single Look Complex (SLC) images of 1.0 m resolution have been acquired continuously over SFU, along with high-resolution optical (RGB) and infrared (IR) images (3.0 m resolution each) from “The Planet” acquisitions. A limited number of high-resolution “Google Earth” (GE) images coinciding with the period of the TSX-HS acquisitions were also obtained for the study. Six fusion techniques have been studied for urban infrastructure detection and categorized based on their performance. Precision change maps will be created based on time-series analysis of the SAR/optical fused data, in conjunction with interferometric SAR (InSAR) analysis, to study the long-term effect of urban infrastructure developments over a period of two years.
We present updates upon our novel machine-learning methods for the acquisition, processing, and classification of Optical Coherence Tomography Angiography (OCT-A) images. Transitioning from traditional registration methods to machine-learning-based methods provided significant reductions in computation time for serial image acquisition and averaging. Through a vessel segmentation network, clinically useful parameters were extracted and then fed to our classification network, which was able to classify different diabetic retinopathy severities. The deep neural network (DNN) pipeline was also implemented on data acquired with Sensorless Adaptive Optics OCT-A. This work has the potential to reduce clinical overhead and help expedite treatments, resulting in improved patient prognoses.
High quality visualization of the retinal microvasculature can improve our understanding of the onset and development of retinal vascular diseases, especially Diabetic Retinopathy (DR), which is a major cause of visual morbidity and is increasing in prevalence. Optical Coherence Tomography Angiography (OCT-A) images are acquired over multiple seconds and are particularly susceptible to motion artifacts, which are more prevalent when imaging individuals with DR whose ability to fixate is limited due to deteriorating vision. The sequential acquisition and averaging of multiple OCT-A images can be performed to remove motion artifacts and increase the contrast of the vascular network. As motion artifacts often irreversibly corrupt OCT-A images of DR eyes, a robust registration pipeline is needed before feature-preserving image averaging can be performed.
In this report, we present an improvement upon a novel method for the acquisition, processing, segmentation, registration, and averaging of sequentially acquired OCT-A images to correct for motion artifacts in images of DR eyes. Image discontinuities caused by rapid micro-saccadic movements and image warping due to smoother reflex movements were corrected by strip-wise affine registration and subsequent local similarity-based non-rigid registration. Where our previous work was limited by the need for at least one image containing no motion artifact, thus reducing its clinical relevance, this novel template-less method stitches together partial images to form complete, motion-free images. These techniques significantly improve image quality, increasing the value for clinical diagnosis and expanding the range of patients for whom high quality OCT-A images can be acquired.
High quality visualization of the retinal microvasculature can improve our understanding of the onset and development of retinal vascular diseases, which are a major cause of visual morbidity and are increasing in prevalence. Optical Coherence Tomography Angiography (OCT-A) images are acquired over multiple seconds and are particularly susceptible to motion artifacts, which are more prevalent when imaging patients with pathology whose ability to fixate is limited. Multiple OCT-A images can be acquired sequentially and averaged to remove motion artifacts and increase the contrast of the vascular network. Because of these motion artifacts, a robust registration pipeline is needed before feature-preserving image averaging can be performed.
In this report, we present a novel GPU-accelerated pipeline for the acquisition, processing, segmentation, and registration of multiple, sequentially acquired OCT-A images to correct for the motion artifacts in individual images for the purpose of averaging. High performance computing, blending CPU and GPU, was introduced to accelerate processing in order to provide high quality visualization of the retinal microvasculature and to enable a more accurate quantitative analysis in a clinically useful time frame. Specifically, image discontinuities caused by rapid micro-saccadic movements and image warping due to smoother reflex movements were corrected by strip-wise affine registration estimated using Scale Invariant Feature Transform (SIFT) keypoints and subsequent local similarity-based non-rigid registration. These techniques improve the image quality, increasing the value for clinical diagnosis and increasing the range of patients for whom high quality OCT-A images can be acquired.
The visibility of retinal microvasculature in optical coherence tomography angiography (OCT-A) images is negatively affected by the small dimension of the capillaries, pulsatile blood flow, and motion artifacts. Serial acquisition and time-averaging of multiple OCT-A images can enhance the definition of the capillaries and result in repeatable and consistent visualization. We demonstrate an automated method for registration and averaging of serially acquired OCT-A images. Ten OCT-A volumes from six normal control subjects were acquired using our prototype 1060-nm swept source OCT system. The volumes were divided into microsaccade-free en face angiogram strips, which were affine registered using scale-invariant feature transform keypoints, followed by nonrigid registration by pixel-wise local neighborhood matching. The resulting averaged images were presented for all of the retinal layers combined, as well as for the superficial and deep plexus layers separately. The contrast-to-noise ratio and signal-to-noise ratio of the angiograms with all retinal layers (reported as average±standard deviation) increased from 0.52±0.22 and 19.58±4.04 dB for a single image to 0.77±0.25 and 25.05±4.73 dB, respectively, for the serially acquired images after registration and averaging. The improved visualization of the capillaries can enable robust quantification and study of minute changes in retinal microvasculature.
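The strip-wise affine step described above can be illustrated with a short OpenCV sketch that matches SIFT keypoints between two en face angiogram strips and robustly estimates an affine transform. The subsequent pixel-wise non-rigid refinement is omitted, and all names and parameter choices are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch: SIFT-keypoint-based affine registration of one angiogram
# strip to a reference strip.
import cv2
import numpy as np

def affine_register_strip(reference, moving):
    """reference, moving: 2D uint8 en face angiogram strips of equal size."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(reference, None)
    kp_mov, des_mov = sift.detectAndCompute(moving, None)

    # Match descriptors and keep good matches via Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_mov, des_ref, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    src = np.float32([kp_mov[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robustly estimate rotation, uniform scale, and translation (RANSAC).
    A, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = reference.shape
    return cv2.warpAffine(moving, A, (w, h))
```

Each aligned strip could then be averaged pixel-wise with the reference; in the full pipeline the non-rigid refinement would be applied before averaging.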
Accurate segmentation of the retinal microvasculature is a critical step in the quantitative analysis of the retinal circulation, which can be an important marker in evaluating the severity of retinal diseases. As manual segmentation remains the gold standard for segmentation of optical coherence tomography angiography (OCT-A) images, we present a method for automating the segmentation of OCT-A images using deep neural networks (DNNs). Eighty OCT-A images of the foveal region in 12 eyes from 6 healthy volunteers were acquired using a prototype OCT-A system and subsequently manually segmented. The automated segmentation of the blood vessels in the OCT-A images was then performed by classifying each pixel into a vessel or nonvessel class using deep convolutional neural networks. When the automated results were compared against the manual segmentation results, a maximum mean accuracy of 0.83 was obtained. When the automated results were compared with inter- and intrarater accuracies, they were shown to be comparable to those of the human raters, suggesting that segmentation using DNNs is comparable to a second manual rater. As manually segmenting the retinal microvasculature is a tedious task, a reliable automated alternative such as DNN-based segmentation is an important step toward automated quantitative analysis of OCT-A images.
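To illustrate the pixel-wise classification idea, here is a minimal patch-based convolutional classifier that labels the center pixel of each OCT-A patch as vessel or non-vessel. The architecture, patch size, and names are assumptions and not the authors' exact network.

```python
# Hedged sketch: patch-based CNN for vessel vs. non-vessel pixel classification.
import torch
import torch.nn as nn

class PatchVesselClassifier(nn.Module):
    def __init__(self, patch_size=25):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat_dim = 32 * (patch_size // 4) ** 2
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),   # vessel / non-vessel logits
        )

    def forward(self, x):        # x: (batch, 1, patch, patch)
        return self.classifier(self.features(x))

# Example: score a batch of 25x25 patches sampled around candidate pixels.
model = PatchVesselClassifier(patch_size=25)
logits = model(torch.randn(8, 1, 25, 25))
vessel_probability = torch.softmax(logits, dim=1)[:, 1]
```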
Many studies using T1 magnetic resonance imaging (MRI) data have found associations between changes in global metrics (e.g., volume) of brain structures and preterm birth. In this work, we use the surface displacement feature extracted from deformations of the surface models of the third ventricle, fourth ventricle, and brainstem to capture the variation in shape of these structures at 8 years of age that may be due to differences in the trajectory of brain development as a result of very preterm birth (24-32 weeks gestation). Understanding the spatial patterns of shape alterations in these structures in children who were born very preterm, as compared to those who were born at full term, may lead to better insights into the mechanisms of differing brain development between these two groups. The T1 MRI data were acquired from children born full term (FT, n=14, 8 males) and preterm (PT, n=51, 22 males) at 8 years of age. Accurate segmentation labels for these structures were obtained via a multi-template fusion-based segmentation method. A high-dimensional non-rigid registration algorithm was utilized to register the target segmentation labels to a set of segmentation labels defined on an average template. The surface displacement data for the brainstem and the third ventricle were found to be significantly different (p < 0.05) between the PT and FT groups. Further, spatially localized clusters with inward and outward deformation were found to be associated with lower gestational age. The results from this study present a shape analysis method for pediatric MRI data and reveal shape changes that may be due to preterm birth.
Transgenic mouse models have been instrumental in the elucidation of the molecular mechanisms behind many genetically based cardiovascular diseases such as Marfan syndrome (MFS). However, the characterization of their cardiac morphology has been hampered by the small size of the mouse heart. In this report, we adapted optical coherence tomography (OCT) for imaging fixed adult mouse hearts, and applied tools from computational anatomy to perform morphometric analyses. The hearts were first optically cleared and imaged from multiple perspectives. The acquired volumes were then corrected for refractive distortions, and registered and stitched together to form a single, high-resolution OCT volume of the whole heart. From this volume, various structures such as the valves and myofibril bundles were visualized. The volumetric nature of our dataset also allowed parameters such as wall thickness, ventricular wall masses, and luminal volumes to be extracted. Finally, we applied the entire acquisition and processing pipeline in a preliminary study comparing the cardiac morphology of wild-type mice and a transgenic mouse model of MFS.
Manual segmentation of anatomy in brain MRI data, generally taken to be the closest to a “gold standard” in quality, is often used in automated registration-based segmentation paradigms to transfer template labels onto unlabeled MRI images. This study presents a library of template data in which 16 subcortical structures in the central brain area were manually labeled for MRI data from 22 children (8 male, mean age=8±0.6 years). The lateral ventricle, thalamus, caudate, putamen, hippocampus, cerebellum, third ventricle, fourth ventricle, brainstem, and corpus callosum were segmented by two expert raters. Cross-validation experiments with randomized template subset selection were conducted to test the templates’ ability to accurately segment MRI data under an automated segmentation pipeline. A high Dice similarity coefficient (0.86±0.06, min=0.74, max=0.96) and a small Hausdorff distance (3.33±4.24, min=0.63, max=25.24) of the automated segmentation against the manual labels were obtained on this template library data. Additionally, comparison with segmentation obtained from adult templates showed significant improvement in accuracy with the use of an age-matched library in this cohort. A manually delineated pediatric template library such as the one described here could provide a useful benchmark for testing segmentation algorithms.
KEYWORDS: Image segmentation, Optical coherence tomography, 3D scanning, 3D metrology, Imaging systems, Prototyping, 3D image processing, Retina, Visualization, 3D acquisition
Retinal imaging with optical coherence tomography (OCT) has rapidly advanced in ophthalmic applications with the broad availability of Fourier domain (FD) technology in commercial systems. The high sensitivity afforded by FD-OCT has enabled imaging of the choroid, a layer of blood vessels serving the outer retina. Improved visualization of the choroid and the choroid-sclera boundary has been investigated using techniques such as enhanced depth imaging (EDI), and also with OCT systems operating in the 1060-nm wavelength range. We report on a comparison of imaging the macular choroid with commercial and prototype OCT systems, and present automated 3D segmentation of the choroid-scleral layer using a graph cut algorithm. The thickness of the choroid is an important measurement to investigate for possible correlation with severity, or possibly early diagnosis, of diseases such as age-related macular degeneration.
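As a rough illustration of a graph-cut segmentation, the sketch below runs a generic min-cut/max-flow binary segmentation of a B-scan with the PyMaxflow library. The authors' actual 3D graph construction and cost terms for the choroid-sclera boundary are not reproduced here; the intensity-based unary terms and the smoothness weight are assumptions.

```python
# Hedged sketch: generic min-cut/max-flow binary segmentation of a B-scan.
import numpy as np
import maxflow

def graph_cut_segment(bscan, smoothness=50.0):
    """bscan: 2D array of intensities scaled to [0, 255]."""
    g = maxflow.Graph[float]()
    nodeids = g.add_grid_nodes(bscan.shape)
    # Pairwise (smoothness) edges between neighboring pixels.
    g.add_grid_edges(nodeids, smoothness)
    # Unary (data) terms: bright pixels lean to one label, dark to the other.
    g.add_grid_tedges(nodeids, bscan, 255.0 - bscan)
    g.maxflow()
    return g.get_grid_segments(nodeids)   # boolean label per pixel
```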
We demonstrate how compressive sampling can be used to expedite volumetric optical coherence tomography (OCT) image acquisition. We propose a novel method to interpolate OCT volumetric images in the Cartesian coordinate system from data acquired as radial B-scans. Due to the inherent polar symmetry of the human eye, the (r, θ, z) coordinate system provides a natural domain in which to perform the interpolation. We demonstrate that the method has minimal effect on image quality even when up to 88% of the data is not acquired. The potential outcome of this work could lead to significant reductions in OCT volume acquisition time in clinical practice.
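A minimal sketch of the interpolation idea, assuming the unacquired radial B-scans are filled in by linear interpolation along the angular direction of the (r, θ, z) domain; the array layout and function names are assumptions, and the paper's actual recovery scheme may differ.

```python
# Hedged sketch: angular interpolation of missing radial B-scans.
import numpy as np
from scipy.interpolate import interp1d

def interpolate_missing_angles(acquired, acquired_angles, all_angles):
    """acquired: (n_acquired, n_r, n_z) radial B-scans sampled at
    `acquired_angles` (radians). Returns a stack at `all_angles`."""
    order = np.argsort(acquired_angles)
    th = np.asarray(acquired_angles, dtype=float)[order]
    data = np.asarray(acquired, dtype=float)[order]

    # Pad one scan on each side so the interpolation wraps around 2*pi.
    th_pad = np.concatenate(([th[-1] - 2 * np.pi], th, [th[0] + 2 * np.pi]))
    data_pad = np.concatenate((data[-1:], data, data[:1]), axis=0)

    # Linear interpolation along the angular axis for every (r, z) sample.
    return interp1d(th_pad, data_pad, axis=0, kind="linear")(np.asarray(all_angles))
```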
We apply the initial momentum shape representation of diffeomorphic metric mapping from a template region of interest (ROI) to a given ROI as a morphometric marker in Parkinson's disease. We used a three-step segmentation-registration-momentum process to derive feature vectors from ROIs in a group of 42 subjects consisting of 19 Parkinson's Disease (PD) subjects and 23 normal control (NC) subjects. Significant group differences between PD and NC subjects were detected in four basal ganglia structures, including the caudate, putamen, thalamus, and globus pallidus. The magnitude of regionally significant between-group differences ranged from 34% to 75%. Visualization of the differing structural deformation patterns between groups revealed that some parts of the basal ganglia structures actually hypertrophy, presumably as a compensatory response to more widespread atrophy. Our finding of both hypertrophy and atrophy in the same structures further demonstrates the importance of morphological measures, as opposed to overall volume, in the assessment of neurodegenerative disease.
In this paper, we present imaging and morphometric analysis of a myopic Optic Nerve Head (ONH) using an 830-nm wavelength Fourier-domain optical coherence tomography system. The thinner prelaminar neural tissue and shallower optic cup in the myopic subject allow visualization of tissue structures, such as the anterior laminar surface and lamina cribrosa, that are often challenging to image because of their depth. From these structures, we measured volumetric anatomical parameters and topographical tissue thickness correlated with glaucomatous structural damage in the ONH.
Optical coherence tomography (OCT) is a powerful tool for diagnostic imaging of the posterior segment of the eye. Recent advances in OCT technology have facilitated the acquisition of high-resolution volumetric images of the retina and optic nerve head. In this report, we investigate optic nerve head imaging in humans using a home-built, laboratory-grade OCT system in the 800-nm wavelength region. We also introduce the development of a computational model of the optic nerve head morphology in order to study physiological changes that may be associated with elevated intraocular pressure.
We apply a recently developed automated brain segmentation method, FS+LDDMM, to brain MRI scans from Parkinson's Disease (PD) subjects and normal age-matched controls, and compare the results to manual segmentation done by trained neuroscientists. The data set consisted of 14 PD subjects and 12 age-matched control subjects without neurologic disease, and comparison was done on six subcortical brain structures (left and right caudate, putamen, and thalamus). Comparison between automatic and manual segmentation was based on the Dice Similarity Coefficient (overlap percentage), L1 error, symmetrized Hausdorff distance, and symmetrized mean surface distance. Results suggest that FS+LDDMM is well-suited for subcortical structure segmentation and further shape analysis in Parkinson's Disease. The asymmetry of the Dice Similarity Coefficient over shape change is also discussed based on the observation and measurement of FS+LDDMM segmentation results.
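For reference, the overlap and surface-distance metrics named above can be computed from binary masks roughly as in the sketch below; taking surface points as all boundary voxels is an illustrative simplification rather than the authors' exact implementation.

```python
# Hedged sketch: Dice coefficient and symmetrized Hausdorff distance between
# two binary segmentation masks.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a, b):
    """a, b: boolean 3D masks of the same structure."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def boundary_points(mask):
    # Boundary voxels: in the mask but not in its erosion.
    surface = mask & ~binary_erosion(mask)
    return np.argwhere(surface).astype(float)

def symmetrized_hausdorff(a, b):
    pa, pb = boundary_points(a), boundary_points(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```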
Non-invasive estimation of regional cardiac function is important for the assessment of myocardial contractility. The MR tagging technique enables acquisition of intra-myocardial tissue motion by placing a spatially modulated pattern of magnetization whose deformation with the myocardium over the cardiac cycle can be imaged. Quantitative computation of parameters such as wall thickening, shearing, rotation, torsion, and strain within the myocardium is traditionally achieved by processing the tag-marked MR image frames to 1) segment the tag lines and 2) detect the correspondence between points across the time-indexed frames. In this paper, we describe our approach to solving this problem using the Large Deformation Diffeomorphic Metric Mapping (LDDMM) algorithm, in which tag-line segmentation and motion reconstruction occur simultaneously. Our method differs from earlier non-rigid-registration-based cardiac motion estimation methods in that our matching cost incorporates image intensity overlap via the L2 norm and the estimated transformations are diffeomorphic. We also present a novel method of generating synthetic tag-line images with known ground truth and motion characteristics that closely follow those in the original data; these can be used for validation of motion estimation algorithms. Initial validation shows that our method is able to accurately segment tag lines and estimate a dense 3D motion field describing the motion of the myocardium in both the left and the right ventricle.
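The synthetic tagged data idea can be sketched in 2D as below, assuming a SPAMM-like sinusoidal grid pattern and a small analytic rotation standing in for myocardial twist; both choices, and all names, are illustrative assumptions rather than the paper's generator.

```python
# Hedged sketch: 2D synthetic tagged frame with a known deformation field.
import numpy as np
from scipy.ndimage import map_coordinates

def synthetic_tagged_frame(size=256, tag_period=8.0, twist=0.05):
    y, x = np.mgrid[0:size, 0:size].astype(float)
    # SPAMM-like grid tags: product of two sinusoidal stripe patterns.
    tags = (1 + np.cos(2 * np.pi * x / tag_period)) * \
           (1 + np.cos(2 * np.pi * y / tag_period)) / 4.0

    # Known ground-truth motion: a small rotation about the image center
    # whose angle falls off with radius, loosely mimicking myocardial twist.
    cx = cy = size / 2.0
    dx, dy = x - cx, y - cy
    r = np.hypot(dx, dy)
    theta = twist * np.exp(-r / (size / 4.0))
    x_src = cx + np.cos(theta) * dx + np.sin(theta) * dy
    y_src = cy - np.sin(theta) * dx + np.cos(theta) * dy

    deformed = map_coordinates(tags, [y_src, x_src], order=1, mode="nearest")
    return tags, deformed, (x_src - x, y_src - y)   # image pair + ground truth
```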
KEYWORDS: Diffusion, Magnetic resonance imaging, Heart, Tissues, Algorithm development, Medical imaging, Signal attenuation, Visualization, Binary data, 3D image processing
Diffusion tensor MR image data gives, at each voxel in the image, a symmetric, positive definite matrix that is denoted as the diffusion tensor at that voxel location. The eigenvectors of the tensor represent the principal directions of anisotropy in water diffusion. The eigenvector with the largest eigenvalue indicates the local orientation of tissue fibers in 3D, as water is expected to diffuse preferentially up and down along the fiber tracts. Although there is no anatomically valid positive or negative direction to these fiber tracts, for many applications it is of interest to assign an artificial direction to the fiber tract by choosing one of the two signs of the principal eigenvector in such a way that in local neighborhoods the assigned directions are consistent and vary smoothly in space.
We demonstrate here an algorithm for realigning the principal eigenvectors by flipping their sign so that it assigns a locally consistent and spatially smooth fiber direction to the eigenvector field, based on a Monte Carlo algorithm adapted from updating clusters of spin systems. We present results that show the success of this algorithm on 11 available unsegmented canine cardiac volumes of both healthy and failing hearts.
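A minimal sketch of the sign-realignment problem, cast as an Ising-like model with couplings given by the dot products of neighboring eigenvectors; for simplicity it uses single-site Metropolis sweeps rather than the cluster-update scheme described above, and the array layout is an assumption.

```python
# Hedged sketch: Ising-style sign realignment of a principal-eigenvector field.
import numpy as np

def realign_signs(vectors, n_sweeps=10, beta=5.0, rng=None):
    """vectors: (X, Y, Z, 3) array of unit principal eigenvectors. Returns a
    sign field in {-1, +1} so that signs * vectors varies smoothly."""
    rng = np.random.default_rng() if rng is None else rng
    shape = vectors.shape[:3]
    signs = np.ones(shape)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    def local_field(idx):
        # Sum of couplings (v_i . v_j) * s_j over the 6-neighborhood.
        h = 0.0
        for off in offsets:
            j = tuple(idx[k] + off[k] for k in range(3))
            if all(0 <= j[k] < shape[k] for k in range(3)):
                h += float(np.dot(vectors[idx], vectors[j])) * signs[j]
        return h

    for _ in range(n_sweeps):
        for idx in np.ndindex(*shape):
            # Energy change if the sign at idx were flipped.
            delta_e = 2.0 * signs[idx] * local_field(idx)
            if delta_e <= 0 or rng.random() < np.exp(-beta * delta_e):
                signs[idx] = -signs[idx]
    return signs
```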
KEYWORDS: Heart, Image segmentation, Image processing, 3D magnetic resonance imaging, Binary data, 3D image processing, Magnetic resonance imaging, Chemical elements, Interfaces, Spherical lenses
A method for measuring the thickness of the ventricular heart wall from 3D MR images is presented. The quantification of thickness could be useful clinically to measure the health of the heart muscle. The method involves extending a Laplace-equation-based definition of thickness between two surfaces to the ventricular heart wall geometry. Based on the functional organization of the heart, it is proposed that the heart be segmented into two volumes, the left ventricular wall, which completely encloses the left ventricle, and the right ventricular wall, which attaches to the left ventricular wall to enclose the right ventricle, and that the thickness of these two volumes be calculated separately. An algorithm for performing this segmentation automatically is presented. The results of the automatic segmentation algorithm were compared to the results of manual segmentations of both normal and failing hearts, and an average of 99.28% of ventricular wall voxels were assigned the same label in both the automatic and the manual segmentations. The thickness of eleven hearts, seven normal and four failing, was measured.
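The Laplace-equation step of the thickness definition can be sketched as below, assuming boolean masks for the wall and for its inner and outer boundary voxels; thickness itself would then be obtained by integrating streamlines of the resulting gradient field, which is not shown, and all names are illustrative assumptions.

```python
# Hedged sketch: relax Laplace's equation inside the wall with Dirichlet
# boundary conditions (0 on the inner surface, 1 on the outer surface).
import numpy as np

def solve_laplace(wall, inner, outer, n_iter=500):
    """wall, inner, outer: boolean 3D masks of the wall interior and of the
    voxels on the inner and outer surfaces (wall assumed away from edges)."""
    phi = np.zeros(wall.shape, dtype=float)
    phi[outer] = 1.0
    interior = wall & ~inner & ~outer
    for _ in range(n_iter):
        # Jacobi update: average of the six face neighbors.
        avg = (
            np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
            np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
            np.roll(phi, 1, 2) + np.roll(phi, -1, 2)
        ) / 6.0
        phi[interior] = avg[interior]
        phi[inner], phi[outer] = 0.0, 1.0   # re-impose boundary conditions
    return phi
```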