Purpose: Diffusion-weighted magnetic resonance imaging (DW-MRI) is a critical imaging method for capturing and modeling tissue microarchitecture at a millimeter scale. A common practice to model the measured DW-MRI signal is via the fiber orientation distribution function (fODF). This function is the essential first step for downstream tractography and connectivity analyses. With recent advances in data sharing, large-scale multisite DW-MRI datasets are being made available for multisite studies. However, measurement variabilities (e.g., inter- and intrasite variability, hardware performance, and sequence design) are inevitable during the acquisition of DW-MRI. Most existing model-based methods [e.g., constrained spherical deconvolution (CSD)] and learning-based methods (e.g., deep learning) do not explicitly consider such variabilities in fODF modeling, which consequently leads to inferior performance on multisite and/or longitudinal diffusion studies. Approach: In this paper, we propose a data-driven deep CSD method to explicitly constrain the scan–rescan variabilities for a more reproducible and robust estimation of brain microstructure from repeated DW-MRI scans. Specifically, the proposed method introduces a three-dimensional volumetric scanner-invariant regularization scheme during the fODF estimation. We study the Human Connectome Project (HCP) young adults test–retest group as well as the MASiVar dataset (with inter- and intrasite scan/rescan data). The Baltimore Longitudinal Study of Aging dataset is employed for external validation. Results: The proposed data-driven framework outperforms the existing benchmarks in repeated fODF estimation. By introducing a contrastive loss with scan/rescan data, the proposed method achieved higher consistency while maintaining higher angular correlation coefficients with the CSD modeling. The proposed method is also assessed on downstream connectivity analysis and shows increased performance in distinguishing subjects with different biomarkers. Conclusion: We propose a deep CSD method to explicitly reduce the scan–rescan variabilities, so as to model a more reproducible and robust brain microstructure from repeated DW-MRI scans. The plug-and-play design of the proposed approach is potentially applicable to a wider range of data harmonization problems in neuroimaging.
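For intuition, one way such a scan-rescan regularized objective can be written is sketched below; the weighting λ and the squared-L2 form of the consistency term are illustrative assumptions (the paper's contrastive formulation over scan/rescan pairs may differ):

```latex
% Hedged sketch: supervised fODF reconstruction toward the CSD target
% plus a consistency penalty between the fODFs predicted from a scan
% and its rescan of the same subject/voxel.
\[
\mathcal{L}
  = \underbrace{\left\lVert \hat{f}_{\text{scan}} - f_{\text{CSD}} \right\rVert_2^2}_{\text{fODF reconstruction}}
  \;+\; \lambda \,
    \underbrace{\left\lVert \hat{f}_{\text{scan}} - \hat{f}_{\text{rescan}} \right\rVert_2^2}_{\text{scan--rescan consistency}}
\]
```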
KEYWORDS: Diffusion, Voxels, Deep learning, Education and training, Data modeling, Spherical harmonics, Tolerancing, White matter, Spherical lenses, Reconstruction algorithms
Diffusion-weighted magnetic resonance imaging (DW-MRI) captures tissue microarchitecture at a millimeter scale. With recent advances in data sharing, large-scale multi-site DW-MRI datasets are being made available for multi-site studies. However, DW-MRI suffers from measurement variability (e.g., inter- and intra-site variability, hardware performance, and sequence design), which consequently yields inferior performance on multi-site and/or longitudinal diffusion studies. In this study, we propose a novel, deep learning-based method to harmonize DW-MRI signals for a more reproducible and robust estimation of microstructure. Our method introduces a data-driven scanner-invariant regularization scheme for a more robust fiber orientation distribution function (FODF) estimation. We study the Human Connectome Project (HCP) young adults test-retest group as well as the MASiVar dataset (with inter- and intra-site scan/rescan data). The 8th-order spherical harmonic coefficients are employed as the data representation. The results show that the proposed harmonization approach maintains higher angular correlation coefficients (ACC) with the ground truth signals (0.954 versus 0.942), while achieving higher consistency of FODF signals for intra-scanner data (0.891 versus 0.826), as compared with the baseline supervised deep learning scheme. Furthermore, the proposed data-driven framework is flexible and potentially applicable to a wider range of data harmonization problems in neuroimaging.
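For reference, the angular correlation coefficient (ACC) used above is conventionally computed from the spherical harmonic coefficients of two fODFs with the l = 0 term excluded; a minimal NumPy sketch under that convention (the coefficient ordering and function name are assumptions, not the authors' code):

```python
import numpy as np

def angular_correlation_coefficient(sh_u, sh_v):
    """ACC between two fODFs given as real, even-order spherical harmonic
    coefficient vectors (e.g., 45 coefficients for 8th order).

    The single l = 0 (isotropic) coefficient is assumed to come first and
    is excluded, so the ACC reflects agreement of angular structure only.
    """
    u = np.asarray(sh_u, dtype=float)[1:]
    v = np.asarray(sh_v, dtype=float)[1:]
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(np.dot(u, v) / denom) if denom > 0 else 0.0

# Toy check: a coefficient vector against a slightly perturbed copy
rng = np.random.default_rng(0)
a = rng.normal(size=45)
b = a + 0.1 * rng.normal(size=45)
print(angular_correlation_coefficient(a, b))  # close to 1.0
```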
The purpose of this paper is to introduce a practical framework for using proxy data in automatic hyperparameter optimization for 3D multi-organ segmentation. The automated segmentation of abdominal organs from CT volumes is a central task in the medical image analysis field, and a large body of machine learning research has addressed it. Deep learning approaches require extensive experimentation to design the optimal configurations for the best performance. Automatic machine learning (AutoML), which uses hyperparameter optimization to search for the optimal training strategy, makes it possible to find appropriate settings without extensive hands-on experience. However, biases in the training data can strongly affect AutoML performance and efficiency. In this paper, we propose an AutoML framework that uses pre-selected proxy data to represent the entire dataset, which has the potential to reduce the computation time needed for efficient hyperparameter optimization. Both quantitative and qualitative results showed that our framework can build more powerful segmentation models than manually designed deep-learning-based methods and AutoML, which use carefully tuned hyperparameters and randomly selected training subsets, respectively. The average Dice score for 10-class abdominal organ segmentation was 85.9%.
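The reported Dice score is the standard volume-overlap metric averaged over the organ classes; a minimal NumPy sketch (the label convention, with 0 as background and 1..10 as organs, is an assumption for illustration):

```python
import numpy as np

def mean_dice(pred, target, num_classes=10):
    """Average Dice similarity coefficient over organ classes 1..num_classes
    (label 0 is treated as background and ignored)."""
    scores = []
    for c in range(1, num_classes + 1):
        p, t = (pred == c), (target == c)
        denom = p.sum() + t.sum()
        if denom == 0:                      # class absent from both volumes
            continue
        scores.append(2.0 * np.logical_and(p, t).sum() / denom)
    return float(np.mean(scores)) if scores else 0.0

# Toy check on a small 3D label volume
rng = np.random.default_rng(0)
target = rng.integers(0, 11, size=(16, 16, 16))
pred = target.copy()
pred[:2] = 0                                # corrupt a few slices
print(mean_dice(pred, target))              # < 1.0 due to the corruption
```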
KEYWORDS: Tissues, Medical research, Brain, Network architectures, Deconvolution, Magnetic resonance imaging, Diffusion, Diffusion tensor imaging, In vivo imaging
Diffusion-weighted magnetic resonance imaging (DW-MRI) is the only non-invasive approach for estimation of intravoxel tissue microarchitecture and reconstruction of in vivo neural pathways for the human brain. With improvements in accelerated MRI acquisition technologies, DW-MRI protocols that make use of multiple levels of diffusion sensitization have gained popularity. A well-known advanced method for reconstruction of white matter microstructure that uses multishell data is multi-tissue constrained spherical deconvolution (MT-CSD). MT-CSD substantially improves the resolution of intravoxel structure over the traditional single-shell version, constrained spherical deconvolution (CSD). Herein, we explore the possibility of using deep learning on single-shell data (the b=1000 s/mm2 shell from the Human Connectome Project, HCP) to estimate the information content captured by 8th-order MT-CSD using the full three-shell data (b=1000, 2000, and 3000 s/mm2 from HCP). Briefly, we examine two network architectures: (1) a sequential network of fully connected dense layers with a residual block in the middle (ResDNN), and (2) a patch-based convolutional neural network with a residual block (ResCNN). For both networks, an additional output block for estimation of voxel fractions was used with a modified loss function. Each approach was compared against the baseline of using MT-CSD on all data for 15 subjects from the HCP, divided into 5 training, 2 validation, and 8 testing subjects, with a total of 6.7 million voxels. The fiber orientation distribution function (fODF) can be recovered with high correlation (0.77 versus 0.74 and 0.65) and low root mean squared error (ResCNN: 0.0124, ResDNN: 0.0168, sCSD: 0.0323) as compared with the ground truth of MT-CSD, which was derived from the multi-shell DW-MRI acquisitions. The mean squared error between the MT-CSD estimates of white matter tissue fraction and the predictions is ResCNN: 0.0249 versus ResDNN: 0.0264. We illustrate the applicability of high-definition fiber tractography on a single testing subject with arcuate and corpus callosum tractography. In summary, the proposed approach provides a promising framework to estimate MT-CSD with limited single-shell data. Source code and models have been made publicly available.
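A rough NumPy sketch of the kind of voxelwise mapping the ResDNN variant performs; the use of an SH representation of the single-shell signal as input, the hidden width of 400, the residual placement, and the softmax tissue-fraction head are illustrative assumptions rather than the published architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def res_dnn_forward(sh_in, p):
    """Voxelwise forward pass: 45 single-shell SH coefficients (8th order)
    in, 45 fODF SH coefficients plus 3 tissue fractions out. Each entry of
    p is a (W, b) pair; the middle pair of layers carries a skip
    connection, i.e., the residual block."""
    h = relu(sh_in @ p["in"][0] + p["in"][1])              # 45 -> 400
    r = relu(h @ p["res1"][0] + p["res1"][1])              # 400 -> 400
    r = h + (r @ p["res2"][0] + p["res2"][1])              # skip connection
    h = relu(r)
    fodf = h @ p["fodf"][0] + p["fodf"][1]                 # 400 -> 45
    logits = h @ p["frac"][0] + p["frac"][1]               # 400 -> 3
    e = np.exp(logits - logits.max())
    return fodf, e / e.sum()                               # softmax fractions

# Randomly initialized illustrative parameters for a single voxel
rng = np.random.default_rng(0)
def init(n_in, n_out):
    return rng.normal(scale=0.05, size=(n_in, n_out)), np.zeros(n_out)

p = {"in": init(45, 400), "res1": init(400, 400), "res2": init(400, 400),
     "fodf": init(400, 45), "frac": init(400, 3)}
fodf, fractions = res_dnn_forward(rng.normal(size=45), p)
print(fodf.shape, fractions.sum())  # (45,) and fractions summing to ~1
```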
Diffusion-weighted magnetic resonance imaging (DW-MRI) is interpreted as a quantitative method that is sensitive to tissue microarchitecture at a millimeter scale. However, the sensitization is dependent on acquisition sequences (e.g., diffusion time, gradient strength, etc.) and susceptible to imaging artifacts. Hence, comparison of quantitative DW-MRI biomarkers across field strengths (including different scanners, hardware performance, and sequence design considerations) is a challenging area of research. We propose a novel method to estimate microstructure using DW-MRI that is robust to scanner differences between 1.5T and 3T imaging. We propose to use a null space deep network (NSDN) architecture to model the DW-MRI signal as fiber orientation distributions (FOD) to represent tissue microstructure. The NSDN approach is consistent with histologically observed microstructure (on a previously acquired ex vivo squirrel monkey dataset) and scan-rescan data. The contribution of this work is that we incorporate identical dual networks (IDN) to minimize the influence of scanner effects via scan-rescan data. Briefly, our estimator is trained on two datasets. First, a histology dataset was acquired on three squirrel monkeys with corresponding DW-MRI and confocal histology (512 independent voxels). Second, 37 control subjects from the Baltimore Longitudinal Study of Aging (67-95 y/o) were identified who had been scanned at 1.5T and 3T scanners (b-value of 700 s/mm2, voxel resolution of 2.2 mm, 30-32 gradient volumes) with an average interval of 4 years (standard deviation 1.3 years). After image registration, we used paired white matter (WM) voxels for 17 subjects and 440 histology voxels for training, and 20 subjects and 72 histology voxels for testing. We compare the proposed estimator with super-resolved constrained spherical deconvolution (CSD) and a previously presented regression deep neural network (DNN). NSDN outperformed CSD and DNN in angular correlation coefficient (ACC; 0.81 versus 0.28 and 0.46), mean squared error (MSE; 0.001 versus 0.003 and 0.03), and generalized fractional anisotropy (GFA; 0.05 versus 0.05 and 0.09). Further validation and evaluation with contemporaneous imaging are necessary, but NSDN is a promising avenue for building an understanding of microarchitecture in a consistent and device-independent manner.
Machine learning models are becoming commonplace in the domain of medical imaging, and with these methods comes an ever-increasing need for more data. However, to preserve patient anonymity it is frequently impractical or prohibited to transfer protected health information (PHI) between institutions. Additionally, due to the nature of some studies, there may not be a large public dataset available on which to train models. To address this conundrum, we analyze the efficacy of transferring the model itself, in lieu of data, between different sites. By doing so we accomplish two goals: (1) the model gains access to training on a larger dataset than it could normally obtain, and (2) the model generalizes better, having trained on data from separate locations. In this paper, we implement multi-site learning with disparate datasets from the National Institutes of Health (NIH) and Vanderbilt University Medical Center (VUMC) without compromising PHI. Three neural networks are trained to convergence on a computed tomography (CT) brain hematoma segmentation task: one only with NIH data, one only with VUMC data, and one multi-site model alternating between NIH and VUMC data. Resultant lesion masks from the multi-site model attain an average Dice similarity coefficient of 0.64, and the automatically segmented hematoma volumes correlate with manually delineated volumes with a Pearson correlation coefficient of 0.87, corresponding to 8% and 5% improvements, respectively, over the single-site model counterparts.
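The multi-site scheme above keeps the data at each institution and moves only the model; a toy sketch of the alternating update pattern, with a simple least-squares model standing in for the segmentation network (all names, sizes, and the synthetic data are illustrative assumptions):

```python
import numpy as np

def sgd_step(w, X, y, lr=0.05):
    """One gradient step of least-squares regression on a shared model."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Toy stand-ins for the two institutional datasets; in practice the raw
# data never leave their sites and only the model parameters w travel.
rng = np.random.default_rng(0)
w_true = rng.normal(size=16)
X_nih = rng.normal(size=(200, 16))
y_nih = X_nih @ w_true + 0.1 * rng.normal(size=200)
X_vumc = rng.normal(size=(200, 16))
y_vumc = X_vumc @ w_true + 0.1 * rng.normal(size=200)

w = np.zeros(16)
for epoch in range(200):
    w = sgd_step(w, X_nih, y_nih)    # update at site 1
    w = sgd_step(w, X_vumc, y_vumc)  # update at site 2
print(np.linalg.norm(w - w_true))    # small: both sites informed the model
```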
Coronary artery calcium (CAC) is a biomarker of advanced subclinical coronary artery disease and predicts myocardial infarction and death prior to age 60 years. Slice-wise manual delineation has been regarded as the gold standard of coronary calcium detection. However, manual efforts are time- and resource-consuming and even impractical to apply to large-scale cohorts. In this paper, we propose the attention identical dual network (AID-Net) to perform CAC detection using scan-rescan longitudinal non-contrast CT scans with weakly supervised attention, using only per-scan-level labels. To boost performance, 3D attention mechanisms were integrated into the AID-Net to provide complementary information for the classification task. Moreover, 3D gradient-weighted class activation mapping (Grad-CAM) was also proposed at the testing stage to interpret the behavior of the deep neural network. 5075 non-contrast chest CT scans were used as the training, validation, and testing datasets. Baseline performance was assessed on the same cohort. The proposed AID-Net achieved superior performance in classification accuracy (0.9272) and AUC (0.9627).
Diffusion-weighted MRI (DW-MRI) depends on accurate quantification of signal intensities that reflect directional apparent diffusion coefficients (ADC). Signal drift and fluctuations during imaging can cause systematic non-linearities that manifest as ADC changes if not corrected. Here, we present a case study on a large longitudinal dataset of typical diffusion tensor imaging. We investigate observed variation in the cerebrospinal fluid (CSF) regions of the brain, which should represent compartments with isotropic diffusivity. The study contains 3949 DW-MRI acquisitions of the human brain from 918 subjects, 542 of whom have repeated scan sessions. We provide an analysis of the inter-scan, inter-session, and intra-session variation and an analysis of the associations with the applied diffusion gradient directions. We investigate the hypothesis that CSF models could be used in lieu of an interspersed minimally diffusion-weighted image (b0) correction. Variation in CSF signal is not largely attributable to within-scan dynamic anatomical changes (3.6%), but rather has substantial variation across scan sessions (10.6%) and increased variation across individuals (26.6%). Unfortunately, CSF intensity is not solely explained by a main drift model or a gradient model, but rather has statistically significant associations with both possible explanations. Further exploration is necessary before CSF drift can be used as an effective harmonization technique.
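One simple way to summarize variation at the three levels described above is to compare repeated scans within a session, session means within a subject, and subject means across the cohort; the sketch below on synthetic data is an illustrative decomposition (the coefficient-of-variation summaries, column names, and toy data are assumptions, not necessarily the paper's statistical model):

```python
import numpy as np
import pandas as pd

# Synthetic long-format table: one row per scan with its mean CSF intensity.
rng = np.random.default_rng(0)
rows = []
for subject in range(30):
    subj_level = 1000.0 + 250.0 * rng.normal()
    for session in range(2):
        sess_level = subj_level + 100.0 * rng.normal()
        for scan in range(2):
            rows.append({"subject": subject, "session": session,
                         "scan": scan,
                         "csf_mean": sess_level + 30.0 * rng.normal()})
df = pd.DataFrame(rows)

def percent_cv(x):
    return 100.0 * x.std(ddof=1) / x.mean()

# intra-session: repeated scans within the same session
intra = df.groupby(["subject", "session"])["csf_mean"].apply(percent_cv).mean()
# inter-session: session means within the same subject
sess_means = df.groupby(["subject", "session"])["csf_mean"].mean()
inter_sess = sess_means.groupby("subject").apply(percent_cv).mean()
# inter-subject: subject means across the cohort
inter_subj = percent_cv(sess_means.groupby("subject").mean())
print(round(intra, 1), round(inter_sess, 1), round(inter_subj, 1))
```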
KEYWORDS: Diffusion, Gold, Signal to noise ratio, Data modeling, Magnetic resonance imaging, Biological research, Brain, Spherical lenses, In vivo imaging, Data acquisition
The diffusion tensor model is nonspecific in regions where micrometer structural patterns are inconsistent at the millimeter scale (i.e., brain regions with pathways that cross, bend, branch, fan, etc.). Numerous models have been proposed to represent crossing fibers and complex intravoxel structure from in vivo diffusion-weighted magnetic resonance imaging [e.g., high angular resolution diffusion imaging (HARDI)]. Here, we present an empirical comparison of two HARDI approaches, persistent angular structure MRI (PAS-MRI) and Q-ball, using a newly acquired reproducibility dataset. Briefly, a single subject was scanned 11 times with 96 diffusion-weighted directions and 10 reference volumes for each of two b-values (1000 and 3000 s/mm2, for a total of 2144 volumes). Empirical reproducibility of intravoxel fiber fractions (number/strength of peaks), angular orientation, and fractional anisotropy was compared with metrics from a traditional tensor analysis approach, focusing on b-values of 1000 and 3000 s/mm2. PAS-MRI is shown to be more reproducible than Q-ball and offers advantages at low b-values. However, there are substantial and biologically meaningful differences between the estimated intravoxel structures, both in terms of analysis method and b-value. The two methods suggest a fundamentally different microarchitecture of the human brain; therefore, it is premature to perform meta-analyses or combine results across HARDI studies that use different analysis models or acquisition sequences.
An understanding of the bias and variance of diffusion-weighted magnetic resonance imaging (DW-MRI) acquisitions across scanners, study sites, or over time is essential for the incorporation of multiple data sources into a single clinical study. Studies that combine samples from various sites may introduce confounding factors due to site-specific artifacts and patterns. Differences in bias and variance across sites may render the scans incomparable, and, without correction, inferences obtained from these data may be misleading. We present an analysis of the bias and variance of scans of the same subjects across different sites and evaluate their impact on statistical analyses. In previous work, we presented a simulation extrapolation (SIMEX) technique for bias estimation as well as a wild bootstrap technique for variance estimation in metrics obtained from a Q-ball imaging (QBI) reconstruction of empirical high angular resolution diffusion imaging (HARDI) data. We now apply those techniques to data acquired from 5 healthy volunteers on 3 independent scanners under closely matched acquisition protocols. The bias and variance of generalized fractional anisotropy (GFA) measurements were estimated on a voxel-wise basis for each scan and compared across study sites to identify site-specific differences. Further, we provide modeling recommendations that can be used to determine the extent of the impact of bias and variance, as well as aspects of the analysis that should account for these differences. We include a decision tree to help researchers determine whether model adjustments are necessary based on the bias and variance results.
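As a reminder of the variance-estimation machinery referenced above, the wild bootstrap perturbs model residuals with random signs and refits; the sketch below applies it to a toy linear model and coefficient (in the study itself the statistic is a voxelwise GFA from a QBI reconstruction, which is not reproduced here):

```python
import numpy as np

def wild_bootstrap_variance(X, y, statistic, n_boot=500, seed=0):
    """Wild bootstrap variance of a statistic of a least-squares fit:
    residuals are flipped with random +/-1 (Rademacher) weights, the model
    is refit, and the statistic is recomputed across replicates."""
    rng = np.random.default_rng(seed)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    resid = y - fitted
    stats = []
    for _ in range(n_boot):
        signs = rng.choice([-1.0, 1.0], size=len(y))
        beta_star, *_ = np.linalg.lstsq(X, fitted + resid * signs, rcond=None)
        stats.append(statistic(beta_star))
    return float(np.var(stats, ddof=1))

# Toy check: variance of the slope of a simple linear fit
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 0.5]) + 0.2 * rng.normal(size=100)
print(wild_bootstrap_variance(X, y, statistic=lambda b: b[1]))
```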
High angular resolution diffusion imaging (HARDI) models are used to capture complex intra-voxel microarchitectures. The magnetic resonance imaging sequences that are sensitized to diffusion are often highly accelerated and prone to motion, physiologic, and imaging artifacts. In diffusion tensor imaging, robust statistical approaches have been shown to greatly reduce these adverse factors without human intervention. Similar approaches would be possible with HARDI methods, but robust versions of each distinct HARDI approach would be necessary. To avoid the computational and pragmatic burdens of creating individual robust HARDI analysis variants, we propose a robust outlier imputation model to mitigate outliers prior to traditional HARDI analysis. This model uses a weighted spherical harmonic fit of diffusion-weighted magnetic resonance imaging scans to estimate, and thereby restore, the values that were corrupted during acquisition. Briefly, spherical harmonics of 6th order were used to generate basis functions, which were weighted by the diffusion signal for detection of outliers. For validation, a single healthy volunteer was scanned in a single session comprising two scans, one without head movement and the other with deliberate head movement, at a b-value of 3000 s/mm2 with 64 diffusion-weighted directions and a single b0 (5 averages) per scan. The deliberate motion created natural artifacts in one of the scans. The imputation model shows a reduction in root mean squared error of the raw signal intensities and an improvement for the HARDI method Q-ball in terms of the angular correlation coefficient. The results reveal both quantitative and qualitative improvement. The proposed model can be used as a general pre-processing step before implementing any HARDI model, restoring measurements corrupted by outlier diffusion signal in certain gradient volumes.
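A minimal sketch of a weighted spherical-harmonic outlier-imputation step of the kind described above, assuming a precomputed 6th-order SH design matrix B and a simple residual-based reweighting rule (the threshold, iteration count, and weighting scheme are assumptions, not the authors' exact model):

```python
import numpy as np

def impute_outliers(signal, B, n_iter=3, k=3.0):
    """Weighted SH fit to detect and impute outlier DWI measurements in
    one voxel.

    signal : (n_dirs,) diffusion-weighted intensities for one voxel
    B      : (n_dirs, n_coef) real SH design matrix, assumed precomputed
             (6th-order even SH -> 28 coefficients)
    k      : outlier threshold in robust standard deviations
    """
    w = np.ones(len(signal))
    for _ in range(n_iter):
        W = np.diag(w)
        coef = np.linalg.solve(B.T @ W @ B, B.T @ W @ signal)  # weighted LS
        resid = signal - B @ coef
        scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))
        w = np.where(np.abs(resid) > k * scale, 0.0, 1.0)      # drop outliers
    imputed = signal.copy()
    outliers = w == 0.0
    imputed[outliers] = (B @ coef)[outliers]  # replace with SH prediction
    return imputed, outliers

# Toy check with a random stand-in for the SH basis
rng = np.random.default_rng(2)
B = rng.normal(size=(64, 28))
signal = B @ rng.normal(size=28) + 0.05 * rng.normal(size=64)
signal[[5, 40]] += 3.0                    # simulate two corrupted volumes
fixed, flagged = impute_outliers(signal, B)
print(np.flatnonzero(flagged))            # ideally flags volumes 5 and 40
```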
Crossing fibers are prevalent in human brains and a subject of intense interest for neuroscience. Diffusion tensor imaging (DTI) can resolve tissue orientation but is blind to crossing fibers. Many advanced diffusion-weighted magnetic resonance imaging (MRI) approaches have been presented to extract crossing fibers from high angular resolution diffusion imaging (HARDI), but the relative sensitivity and specificity of these approaches remain unclear. Here, we examine two leading approaches (PAS and q-ball) in the context of a large-scale, single-subject reproducibility study. A single healthy individual was scanned 11 times with 96 diffusion-weighted directions and 10 reference volumes for each of five b-values (1000, 1500, 2000, 2500, 3000 s/mm2) for a total of 5830 volumes (over the course of three sessions). We examined the reproducibility of the number of fibers per voxel, volume fraction, and crossing-fiber angles. For each method, we determined the minimum resolvable angle for each acquisition. Reproducibility of fiber counts per voxel was generally high (~80% consensus for PAS and ~70% for q-ball), but there was substantial bias between individual repetitions and the model estimated with all data (~10% lower consensus for PAS and ~15% lower for q-ball). Both PAS and q-ball predominantly discovered fibers crossing at near 90 degrees, but reproducibility was higher for PAS across most measures. Within voxels with low anisotropy, q-ball finds more intra-voxel structure; meanwhile, PAS resolves multiple fibers at greater than 75 degrees in more voxels. These results can inform researchers when deciding between HARDI approaches or interpreting findings across studies.
High-angular-resolution diffusion-weighted imaging (HARDI) MRI acquisitions have become common for use with higher-order models of diffusion. Despite successes in resolving complex fiber configurations and probing microstructural properties of brain tissue, there is no common consensus on the optimal b-value and number of diffusion directions to use for these HARDI methods. While this question has been addressed by analysis of the diffusion-weighted signal directly, it is unclear how this translates to the information and metrics derived from the HARDI models themselves. Using a high angular resolution dataset acquired at a range of b-values and repeated 11 times on a single subject, we study how the b-value and number of diffusion directions impact the reproducibility and precision of metrics derived from Q-ball imaging, a popular HARDI technique. We find that Q-ball metrics associated with tissue microstructure and white matter fiber orientation are sensitive to both the number of diffusion directions and the spherical harmonic representation of the Q-ball, and are often biased when undersampled. These results can advise researchers on appropriate acquisition and processing schemes, particularly when it comes to optimizing the number of diffusion directions needed for metrics derived from Q-ball imaging.