Linear registration to a standard space is a crucial early step in the processing of magnetic resonance images (MRIs) of the human brain, and an accurate registration is essential for subsequent image processing steps as well as downstream analyses. Registration failures are not uncommon, however, due to poor image quality, irregular head shapes, and bad initialization. Traditional quality assurance (QA) for registration requires substantial manual assessment of the registration results. In this paper, we propose an automatic QA method for the rigid registration of brain MRIs. Without using any manual annotations during model training, our proposed QA method achieved 99.1% sensitivity and 86.7% specificity in a pilot study on 537 T1-weighted scans acquired from multiple imaging centers.
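For reference, the sensitivity and specificity figures quoted above are computed from binary QA decisions as follows. This is a minimal sketch; the function and variable names are illustrative, not taken from the paper, and "positive" here means a detected registration failure.

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity and specificity for binary QA decisions.

    y_true, y_pred: sequences of 0/1 labels, where 1 = registration failure.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")  # true-positive rate
    spec = tn / (tn + fp) if (tn + fp) else float("nan")  # true-negative rate
    return sens, spec
```

High sensitivity matters most in this setting: a missed registration failure silently corrupts every downstream analysis, whereas a false alarm only costs a manual re-check.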
Image quality control (IQC) can be used in automated magnetic resonance (MR) image analysis to exclude erroneous results caused by poorly acquired or artifact-laden images. Existing IQC methods for MR imaging generally require human effort to craft meaningful features or label large datasets for supervised training. The involvement of human labor can be burdensome and biased, as labeling MR images based on their quality is a subjective task. In this paper, we propose an automatic IQC method that evaluates the extent of artifacts in MR images without supervision. In particular, we design an artifact encoding network that learns representations of artifacts based on contrastive learning. We then use a normalizing flow to estimate the density of learned representations for unsupervised classification. Our experiments on large-scale multi-cohort MR datasets show that the proposed method accurately detects images with high levels of artifacts, which can inform downstream analysis tasks about potentially flawed data.
The cranial meninges are membranes enveloping the brain. The space between these membranes contains mainly cerebrospinal fluid. It is of interest to study how the volumes of this space change with normal aging. In this work, we propose to combine convolutional neural networks (CNNs) with nested topology-preserving geometric deformable models (NTGDMs) to reconstruct meningeal surfaces from magnetic resonance (MR) images. We first use CNNs to predict implicit representations of these surfaces and then refine them with NTGDMs to achieve sub-voxel accuracy while maintaining spherical topology and the correct anatomical ordering. MR contrast harmonization is used to match the contrasts between training and testing images. We applied our algorithm to a subset of healthy subjects from the Baltimore Longitudinal Study of Aging for demonstration purposes and conducted longitudinal statistical analysis of the intracranial volume (ICV) and subarachnoid space (SAS) volume. We found a statistically significant decrease in the ICV and an increase in the SAS volume with normal aging.
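The "correct anatomical ordering" constraint enforced by the nested deformable models can be sketched very simply: given hypothetical radial distance maps for two nested surfaces from a common center, clamp the outer surface so it lies at or outside the inner one. This illustrates only the nesting constraint, not the NTGDM evolution itself; all names are illustrative.

```python
import numpy as np

def enforce_ordering(inner, outer, min_gap=0.0):
    # inner, outer: radial distances of two nested surfaces, sampled
    # at matching directions. The outer surface (e.g. outer meningeal
    # boundary) must lie at least `min_gap` outside the inner one.
    outer_fixed = np.maximum(outer, inner + min_gap)
    return inner, outer_fixed
```

In the full method this constraint is maintained continuously during surface evolution rather than applied as a one-shot projection.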
KEYWORDS: Super resolution, Magnetic resonance imaging, Image resolution, Image segmentation, Signal to noise ratio, Performance modeling, Medical imaging
Robust and accurate segmentation of high resolution (HR) 3D magnetic resonance (MR) images is desirable in many clinical applications. State-of-the-art deep learning methods for image segmentation require external HR atlas image and label pairs for training. However, the availability of such HR labels is limited by annotation accuracy and the time required for manual labeling. In this paper, we propose a 3D label super resolution (LSR) method which does not use an external image or label from HR atlas data and can reconstruct HR annotation labels relying only on an LR image and its corresponding label. In our method, we present a Deformable U-net, which uses synthetic data with multiple deformations for training and an iterative topology check during testing, to learn a label slice evolution process. This network requires no external HR data because a deformed version of the input label slice, acquired from the LR data itself, is used for training. The trained Deformable U-net is then applied to through-plane slices to estimate HR label slices. The estimated HR label slices are further combined by a label fusion method to obtain the 3D HR label. Our results show significant improvement over competing methods in both 2D and 3D scenarios with real data.
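The final fusion step can be illustrated with per-voxel majority voting across the per-direction HR label estimates. The paper's exact fusion method may differ; this is a minimal sketch with illustrative names.

```python
import numpy as np

def fuse_labels(estimates):
    # estimates: list of integer label volumes of identical shape,
    # e.g. one HR estimate per through-plane direction.
    # Each voxel receives the label that the most estimates agree on.
    stack = np.stack(estimates, axis=0)
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)], axis=0)
    return votes.argmax(axis=0)
```

Ties are broken toward the smaller label index by `argmax`; more elaborate fusion schemes (e.g. weighting estimates by local confidence) follow the same structure.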
Deep learning approaches have been used extensively for medical image segmentation tasks. Training deep networks for segmentation, however, typically requires manually delineated examples which provide a ground truth for optimization of the network. In this work, we present a neural network architecture that segments vascular structures in retinal OCTA images without the need for direct supervision. Instead, we propose a variational intensity cross-channel encoder that finds vessel masks by exploiting the common underlying structure shared by two OCTA images of the same region acquired on different devices. Experimental results demonstrate significant improvement over three commonly used existing methods.
Monitoring retinal thickness in persons with multiple sclerosis (MS) provides important biomarkers for disease progression. However, changes in retinal thickness can be small and concealed by noise in the acquired data. Consistent longitudinal retinal layer segmentation methods for optical coherence tomography (OCT) images are crucial for identifying real longitudinal retinal changes in individuals with MS. In this paper, we propose an iterative registration and deep learning based segmentation method for longitudinal 3D OCT scans. Since 3D OCT scans are usually anisotropic with large slice separation, we extract B-scan features using 2D deep networks and capture inter-B-scan context with convolutional long short-term memory (LSTM) modules. To incorporate longitudinal information, we perform fundus registration and interpolate the smooth retinal surfaces of the previous visit to use as a prior on the current visit.
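The prior-construction step, in its simplest form, interpolates the previous visit's (registered) surface positions to the current visit's B-scan locations. A minimal 1D sketch with hypothetical coordinates, standing in for the full fundus-registered interpolation:

```python
import numpy as np

def surface_prior(prev_surface, prev_positions, cur_positions):
    # prev_surface: retinal surface heights at the previous visit's
    # B-scan positions (after fundus registration aligns the visits).
    # Linearly interpolate to the current visit's B-scan positions to
    # obtain a smooth prior for the current segmentation.
    return np.interp(cur_positions, prev_positions, prev_surface)
```

Because the prior comes from the same eye at an earlier visit, it biases the segmentation toward longitudinally consistent surfaces rather than letting per-visit noise masquerade as thickness change.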
Deep networks provide excellent image segmentation results given copious amounts of supervised training data (source data). However, when a trained network is applied to data acquired at a different clinical center or on a different imaging device (target data), a significant drop in performance can occur due to the domain shift between the test data and the network training data. To solve this problem, unsupervised domain adaptation methods retrain the model with labeled source data and unlabeled target data. In practice, retraining the model is time consuming, and the labeled source data may not be available to those deploying the model. In this paper, we propose a straightforward unsupervised domain adaptation method for multi-device retinal OCT image segmentation which requires neither labeled source data nor retraining of the segmentation model. The segmentation network is trained with labeled Spectralis images and tested on Cirrus images. The core idea is to use a domain adaptor to convert target domain images (Cirrus) to a domain that can be segmented well by the already trained segmentation network. Unlabeled Spectralis and Cirrus images are used to train this domain adaptor. The domain adaptor is applied before the trained network, and a discriminator is used to differentiate the segmentation results from Spectralis and Cirrus. The domain adaptation portion of our network is fully unsupervised and does not change the previously trained segmentation network.
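To make the "convert target images into the source domain before the frozen segmenter" idea concrete, the sketch below uses classical histogram matching as a non-learned stand-in for the adversarially trained domain adaptor. The paper's adaptor is a network trained with a discriminator; this is only an intensity-level analogy, and all names are illustrative.

```python
import numpy as np

def match_histogram(target_img, source_ref):
    # Map target-domain (e.g. Cirrus) intensities onto the source
    # (e.g. Spectralis) intensity distribution, so the frozen
    # segmentation network sees contrast it was trained on.
    t_vals, t_idx, t_counts = np.unique(
        target_img.ravel(), return_inverse=True, return_counts=True)
    s_vals, s_counts = np.unique(source_ref.ravel(), return_counts=True)
    t_cdf = np.cumsum(t_counts) / target_img.size  # target CDF at t_vals
    s_cdf = np.cumsum(s_counts) / source_ref.size  # source CDF at s_vals
    # For each target intensity, find the source intensity with equal CDF.
    mapped = np.interp(t_cdf, s_cdf, s_vals)
    return mapped[t_idx].reshape(target_img.shape)
```

A learned adaptor can correct geometry- and texture-level domain differences that a global intensity remapping like this cannot, which is the motivation for training it adversarially against the segmentation outputs.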