Machine learning and deep learning are ubiquitous across a wide variety of scientific disciplines, including medical imaging. An overview of multiple application areas along the imaging chain where deep learning methods are utilized in discovery and clinical quantitative imaging trials is presented. Example application areas include quality control, preprocessing, segmentation, and scoring. Within each area, one or more specific applications are demonstrated, such as automated structural brain MRI quality control assessment in a core lab environment, super-resolution MRI preprocessing for neurodegenerative disease quantification in translational clinical trials, and multimodal PET/CT tumor segmentation in prostate cancer trials. The quantitative output of these algorithms is described, including their impact on decision making and relationship to traditional read-based methods. Development and deployment of these techniques for use in quantitative imaging trials presents unique challenges. The interplay between technical and scientific domain knowledge required for algorithm development is highlighted. The infrastructure surrounding algorithm deployment is critical, given regulatory, method robustness, computational, and performance considerations. The sensitivity of a given technique to these considerations, and thus the complexity of deployment, is task- and phase-dependent. Context is provided for the infrastructure surrounding these methods, including common strategies for data flow, storage, access, and dissemination as well as application-specific considerations for individual use cases.
The utility of pulmonary functional imaging techniques, such as hyperpolarized 3He MRI, has encouraged their inclusion in research studies for longitudinal assessment of disease progression and the study of treatment effects. We present methodology for performing voxelwise statistical analysis of ventilation maps derived from hyperpolarized 3He MRI which incorporates multivariate template construction using simultaneous acquisition of 1H and 3He images. Additional processing steps include intensity normalization, bias correction, 4-D longitudinal segmentation, and generation of expected ventilation maps prior to voxelwise regression analysis. The analysis is demonstrated on a cohort of eight individuals with diagnosed cystic fibrosis (CF) undergoing treatment, imaged at five biweekly time points under a prescribed treatment schedule.
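The voxelwise regression step above can be sketched in a few lines of numpy. This is a minimal illustration only, assuming ventilation maps have already been warped into a common template space and flattened to voxel vectors; the array names, the synthetic data, and the pooled ordinary-least-squares design (ventilation against visit week) are assumptions for the sketch, not the paper's exact statistical model, which would include subject effects and covariates.

```python
import numpy as np

# Hypothetical data: ventilation maps for 8 subjects x 5 biweekly visits,
# each flattened to n_vox voxels after warping into a common template.
rng = np.random.default_rng(0)
n_subj, n_time, n_vox = 8, 5, 1000
ventilation = rng.random((n_subj, n_time, n_vox))
weeks = np.array([0.0, 2.0, 4.0, 6.0, 8.0])  # biweekly visit schedule

# Voxelwise ordinary least squares of ventilation against time, pooling
# subjects; a real analysis would add subject-level terms and covariates.
X = np.column_stack([np.ones(n_subj * n_time),
                     np.tile(weeks, n_subj)])       # (N, 2) design matrix
Y = ventilation.reshape(n_subj * n_time, n_vox)     # (N, V) responses
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)        # (2, V) coefficients

# Per-voxel t-statistic for the slope (change in ventilation per week).
resid = Y - X @ beta
dof = X.shape[0] - X.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof
se_slope = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_slope = beta[1] / se_slope
```

The resulting `t_slope` map can then be thresholded (with appropriate multiple-comparisons correction) to localize regions of significant ventilation change.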
The recent discovery of methodological flaws in experimental design and analysis in neuroscience research has raised concerns over the validity of certain techniques used in routine analyses and their corresponding findings. Such concerns have centered around selection bias whereby data are inadvertently manipulated such that the resulting analysis produces falsely increased statistical significance, i.e. type I errors. This has been illustrated recently in fMRI studies exhibiting excessive flexibility in data collection and general experimental design issues. Current work from our group has shown how this problem extends to generic voxel-based analysis (and certain technique derivatives such as tract-based spatial statistics) using fractional anisotropy images derived from diffusion tensor imaging. In this work, we demonstrate how this circularity principle can potentially extend to the well-known optimized voxel-based morphometry technique for assessing cortical density differences, whereby the principal cause of experimental corruption is due to normalization strategy. Specifically, the popular sum-of-squared-differences (SSD) metric explicitly optimizes statistical findings, potentially inflating type I errors. Additional experimentation demonstrates that this problem is not restricted to the SSD metric but extends to other commonly used metrics such as mutual information, neighborhood cross correlation, and Demons.
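For concreteness, the similarity metrics named above can be written down compactly; the implementations below are simplified global (whole-image) sketches, not the local windowed or registration-framework versions used in practice, and the synthetic images are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
fixed = rng.random((32, 32))
moving = 0.8 * fixed + 0.1 * rng.random((32, 32))  # correlated "moving" image

def ssd(a, b):
    """Sum-of-squared-differences similarity metric (lower is better)."""
    return float(np.sum((a - b) ** 2))

def ncc(a, b):
    """Global normalized cross correlation (1 = perfect linear agreement)."""
    az, bz = a - a.mean(), b - b.mean()
    return float((az * bz).sum() / np.sqrt((az ** 2).sum() * (bz ** 2).sum()))

def mutual_information(a, b, bins=16):
    """Histogram-based mutual information estimate in nats."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum())
```

Because registration drives one of these quantities to an optimum, residual intensity variance at each voxel is reduced by construction, which is precisely the mechanism by which downstream voxelwise statistics can become circular.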
Numerous studies have explored the relationship between cortical structure and brain development, cognitive function, and functional connectivity. The highly convoluted cortical topography makes manual measurements arduous and often impractical given the population sizes necessary for sufficient statistical power. Computational techniques have permitted large-scale studies as they provide robust and reliable localized measurements characterizing the cortex with little or no human intervention. Particularly useful to the neuroscience community are publicly available tools, such as the popular surface-based FreeSurfer, which facilitate the testing and refinement of hypotheses. In this paper, we introduce the volume-based Advanced Normalization Tools (ANTs) cortical thickness automated pipeline comprising well-vetted components such as SyGN (multivariate template construction), SyN (image registration), N4 (bias correction), Atropos (n-tissue segmentation), and DiReCT (cortical thickness), all developed as part of the ANTs open science effort. Complementing the open-source aspect of ANTs, we demonstrate its utility using the publicly available IXI data set.
Although 3He MRI permits compelling visualization of the pulmonary air spaces, quantitation of absolute ventilation is difficult due to confounds such as field inhomogeneity and relative intensity differences between image acquisitions, the latter complicating longitudinal investigations of ventilation variation with respiratory alterations. To address these potential difficulties, we present a 4-D segmentation and normalization approach for intra-subject quantitative analysis of lung hyperpolarized 3He MRI. After normalization, which combines bias correction and relative intensity scaling between longitudinal data, partitioning of the lung volume time series is performed by iterating between modeling of the combined intensity histogram as a Gaussian mixture model and modulating the spatial heterogeneity of tissue class assignments through Markov random field modeling. The algorithm was retrospectively evaluated on a cohort of 10 asthmatics aged 19-25 years in which spirometry and 3He MR ventilation images were acquired both before and after respiratory exacerbation by a bronchoconstricting agent (methacholine). Acquisition was repeated under the same conditions from 7 to 467 days (mean ± standard deviation: 185 ± 37.2) later. Several techniques were evaluated for matching intensities between the pre- and post-methacholine images, with 95th-percentile histogram matching demonstrating superior correlations with spirometry measures. Subsequent analysis evaluated segmentation parameters for assessing ventilation change in this cohort. Current findings also support previous research showing that areas of poor ventilation in response to bronchoconstriction are relatively consistent over time.
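The 95th-percentile intensity matching found to work best above amounts to a simple linear rescaling; a minimal sketch, in which the function name and the synthetic pre/post arrays are illustrative rather than taken from the paper:

```python
import numpy as np

def match_95th_percentile(reference, image):
    """Linearly rescale `image` so its 95th-percentile intensity equals
    that of `reference` (illustrative percentile-based matching)."""
    return image * (np.percentile(reference, 95) / np.percentile(image, 95))

# Hypothetical pre/post acquisitions with different global intensity scales.
rng = np.random.default_rng(0)
pre = rng.random(10_000)
post = 2.5 * rng.random(10_000)
matched = match_95th_percentile(pre, post)
```

Using a high but sub-maximal percentile makes the scaling robust to isolated bright outliers while still anchoring on well-ventilated lung, which full-range or maximum-based scaling would not be.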
We develop a softassign method for application to curve matching. Softassign uses deterministic annealing to iteratively optimize the parameters of an energy function. It also incorporates outlier rejection by converting the energy into a stochastic matrix with entries for rejection probability. Previous applications of the method focused on finding transformations between unordered point sets. Thus, no topological constraints were required. In our application, we must consider the topology of the matching between the reference and the target curve. Our energy function also depends upon the rotation and scaling between the curves. Thus, we develop a topologically correct algorithm to update the arc length correspondence, which is then used to update the similarity transformation. We further enhance robustness by using a scale-space description of the curves. This results in a curve-matching tool that, given an approximate initialization, is invariant to similarity transformations. We demonstrate the reliability of the technique by applying it to open and closed curves extracted from real patient images (cortical sulci in three dimensions and corpora callosa in two dimensions). The set of transformations is then used to compute anatomical atlases.
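The core softassign step — deterministic annealing plus conversion of the energy into a match matrix with an outlier row and column — can be sketched as follows. This is a generic illustration under assumed parameter values (annealing schedule, fixed outlier cost); it omits the curve-specific arc-length correspondence and similarity-transform updates described above.

```python
import numpy as np

def softassign(cost, beta0=1.0, beta_rate=1.5, beta_max=50.0, n_sinkhorn=30):
    """Deterministic-annealing softassign on a pairwise cost matrix.

    An extra slack row/column models outlier (rejection) probability.
    As the inverse temperature beta grows, the row/column-normalized
    match matrix approaches a hard assignment. Sketch only; parameters
    are illustrative.
    """
    n, m = cost.shape
    M = np.ones((n + 1, m + 1))
    beta = beta0
    while beta < beta_max:
        # Convert costs to positive match strengths at this temperature.
        M[:n, :m] = np.exp(-beta * cost)
        M[n, :m] = M[:n, m] = np.exp(-beta * 1.0)  # fixed outlier cost
        # Sinkhorn iterations: alternately normalize the non-slack rows
        # and columns to obtain an (approximately) doubly stochastic matrix.
        for _ in range(n_sinkhorn):
            M[:n, :] /= M[:n, :].sum(axis=1, keepdims=True)
            M[:, :m] /= M[:, :m].sum(axis=0, keepdims=True)
        beta *= beta_rate
    return M[:n, :m]
```

In the full curve-matching algorithm, each annealing step would recompute `cost` from the current arc-length correspondence and similarity transform before re-running the Sinkhorn normalization.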