Special Section Guest Editorial: Radiomics and Deep Learning
Abstract
This guest editorial introduces and summarizes the JMI Special Section on Radiomics and Deep Learning.

Over the past decades, advances in imaging analytics, from computer-aided diagnosis and quantitative imaging through to radiomics, have provided the ability to extract clinically useful quantitative measures from medical imaging data, with the goal of augmenting interpretation in diagnosis, risk assessment, and response prediction.1–3 In addition, deep learning methods, such as convolutional neural networks (CNNs), continue to advance and demonstrate success in learning directly from input image data in a variety of medical tasks. The use of CNNs in medical image analysis was first introduced by Zhang et al. in 1994 (initially called a “shift-invariant neural network”) and was later translated into a clinical product for the detection of microcalcifications on digital mammograms.4 Such imaging analytics and machine learning techniques, fueled by advances in efficient training algorithms and computational resources as well as by larger datasets, now offer an unprecedented opportunity to rapidly extract and process vast amounts of information (i.e., radiomics) from medical images and to train CNNs with large numbers of layers and connections. This quantitative imaging information, especially when coupled with other biomedical data and high-dimensional machine learning tools, can yield methods for use in clinical decision making and can contribute to discovery, offering new insights into genetic traits and molecular subtyping of disease, particularly in cancer, that can serve as precision medicine imaging biomarkers of disease prognosis and response to treatment.

This special section of the Journal of Medical Imaging presents contributions on the subject of radiomics and deep learning that highlight a wide spectrum of research areas, including quantitative image analysis, high-dimensional feature extraction, convolutional neural networks and deep learning, computer-assisted diagnosis and prognosis, machine learning and classification, and imaging genomics.

Machine Learning and Radiomics

St-Pierre et al. present a mathematical approach to reduce the dimensionality of the input data for classification while preserving important volumetric features from reconstructed three-dimensional optical imaging data. Comparing a range of classifiers, including support vector machines, they demonstrate that their algorithm can optimally explore the original feature space while yielding accurate and robust classification of healthy fallopian tubes versus ovarian cancer cells, as well as further differentiation between high-grade serous, endometrioid, and clear cell cancers.
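
The general pattern of reducing a high-dimensional imaging feature space before classification can be sketched as follows. This is an illustrative sketch only, not the authors' pipeline: PCA stands in for their dimensionality-reduction step, an RBF support vector machine stands in for one of the compared classifiers, and the data are synthetic placeholders.

```python
# Minimal sketch of dimensionality reduction followed by SVM classification.
# PCA stands in for the paper's reduction method; data are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for high-dimensional volumetric features (samples x features).
X, y = make_classification(n_samples=200, n_features=500, n_informative=20,
                           random_state=0)

# Reduce dimensionality, then classify; scaling keeps the SVM well conditioned.
pipeline = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```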

Bakr et al. conducted a radiomics study to predict microvascular invasion, a predictor of poor prognosis, in patients with primary liver cancer. From CT scans, the authors extracted computational features describing tumor shape, image intensity, and texture, as well as difference features across phases of intravenous contrast enhancement, and found that a subset of these features accurately predicted microvascular invasion.
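
To give a flavor of such hand-crafted radiomic descriptors, the numpy-only sketch below computes simple shape, intensity, and cross-phase difference features from a masked tumor volume. The arrays, voxel spacing, and feature choices are illustrative assumptions, not the feature set used by the authors.

```python
# Illustrative radiomic-style features from a masked tumor volume (numpy only).
# The arrays and voxel spacing below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
ct_arterial = rng.normal(60, 15, size=(64, 64, 32))                  # stand-in CT phase 1 (HU)
ct_venous = ct_arterial + rng.normal(10, 5, size=ct_arterial.shape)  # stand-in CT phase 2
tumor_mask = np.zeros_like(ct_arterial, dtype=bool)
tumor_mask[20:40, 20:40, 10:20] = True                               # stand-in tumor segmentation
voxel_volume_mm3 = 0.7 * 0.7 * 2.5                                   # assumed voxel spacing

roi1 = ct_arterial[tumor_mask]
roi2 = ct_venous[tumor_mask]

features = {
    "volume_mm3": tumor_mask.sum() * voxel_volume_mm3,        # shape/size
    "mean_intensity": roi1.mean(),                            # first-order intensity
    "intensity_std": roi1.std(),                              # simple texture proxy
    "intensity_skew": ((roi1 - roi1.mean()) ** 3).mean() / roi1.std() ** 3,
    "phase_difference_mean": (roi2 - roi1).mean(),            # cross-phase enhancement
}
print(features)
```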

Deep Learning for Detection and Classification

Li et al. evaluated the role of CNNs and transfer learning in breast cancer risk assessment, in which digital screening mammograms were fed directly into a CNN architecture and performance was compared to that of hand-crafted features extracted with conventional parenchymal texture analysis. They showed that CNNs can outperform conventional texture descriptors in classifying cancer cases versus low-risk women and, furthermore, that fusing CNN-based and hand-crafted texture features achieves the best performance.
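
The general idea of fusing learned and hand-crafted descriptors can be illustrated with a simple late-fusion sketch: train one classifier on CNN-derived features and one on texture features, then average their output probabilities. The fusion rule, classifier, feature dimensions, and data below are assumptions for illustration, not the authors' configuration; the "CNN features" are simulated rather than extracted from a real network.

```python
# Minimal late-fusion sketch: average classifier scores from CNN-derived and
# hand-crafted texture features. All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
labels = rng.integers(0, 2, size=n)                                # case vs. low-risk (stand-in)
cnn_feats = rng.normal(size=(n, 256)) + labels[:, None] * 0.3      # pretend CNN features
texture_feats = rng.normal(size=(n, 30)) + labels[:, None] * 0.2   # pretend texture features

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

clf_cnn = LogisticRegression(max_iter=1000).fit(cnn_feats[idx_train], labels[idx_train])
clf_tex = LogisticRegression(max_iter=1000).fit(texture_feats[idx_train], labels[idx_train])

# Soft fusion: average the two posterior probabilities for the positive class.
p_fused = 0.5 * (clf_cnn.predict_proba(cnn_feats[idx_test])[:, 1]
                 + clf_tex.predict_proba(texture_feats[idx_test])[:, 1])
print("Fused AUC:", roc_auc_score(labels[idx_test], p_fused))
```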

Utilizing low-dose screening chest CT data, Liu et al. applied a 3D CNN, both as a single-architecture network and in an ensemble configuration, to classify the likelihood of malignancy of pulmonary nodules, aiming to minimize the number of potentially unnecessary follow-up examinations. They compared the two CNN configurations with a series of conventional machine learning models relying on domain-specific feature extraction, showing that the 3D CNNs, especially in their ensemble configuration, achieve the best classification performance.
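
The PyTorch sketch below shows the shape of such an approach under stated assumptions: a small 3D CNN operating on nodule patches and an ensemble that averages the softmax outputs of several independently initialized copies. The architecture, patch size, and ensemble size are placeholders, not those of the published models.

```python
# Sketch of a small 3D CNN and a softmax-averaging ensemble (PyTorch).
# Layer sizes, patch size, and ensemble size are illustrative assumptions.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

def ensemble_predict(models, x):
    """Average softmax probabilities across ensemble members."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=1) for m in models])
    return probs.mean(dim=0)

# Toy usage: a batch of 4 single-channel nodule patches of size 32^3.
patches = torch.randn(4, 1, 32, 32, 32)
ensemble = [Small3DCNN().eval() for _ in range(3)]
print(ensemble_predict(ensemble, patches))   # (4, 2) class probabilities
```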

CNNs were also applied by Ordóñez et al. to fundus retinal images to classify microaneurysms, early lesions of diabetic retinopathy that present as small local features and are difficult to classify. Their CNN architecture, utilizing data augmentation, achieved high sensitivity and specificity, which can substantially reduce false-positive tests.
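
Data augmentation for such small lesion patches is commonly built from simple geometric transforms; a minimal sketch using torchvision is shown below. The specific transforms, patch size, and random patch are assumptions for illustration, not the augmentation scheme of the published work.

```python
# Sketch of simple data augmentation for small retinal lesion patches (torchvision).
# The transform choices and patch size are illustrative assumptions.
import numpy as np
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.ToTensor(),               # to a (C, H, W) float tensor in [0, 1]
])

# Stand-in for a 48x48 RGB fundus patch centered on a candidate lesion.
patch = Image.fromarray(np.random.randint(0, 255, (48, 48, 3), dtype=np.uint8))
augmented = [augment(patch) for _ in range(8)]   # eight augmented copies
print(augmented[0].shape)                        # torch.Size([3, 48, 48])
```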

Shafiee et al. presented an evolutionary deep radiomic sequencer for pathologically proven lung cancer detection.

Deep Learning in Segmentation

Alex et al. used stacked denoising autoencoders (SDAEs) to achieve accurate low-grade glioma segmentation by transfer learning: a network originally trained on high-grade glioma images was fine-tuned using low-grade glioma training labels. They demonstrated the ability to obtain good segmentation results that generalized well to independent data while requiring only a minimal amount of patient data.
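
The sketch below illustrates the denoising-autoencoder building block and the transfer step in generic form: pretrain on one dataset, then fine-tune the same weights on a smaller one. Layer sizes, noise level, training loop, and data are assumptions for illustration only.

```python
# Sketch of a denoising autoencoder and a pretrain-then-fine-tune transfer step (PyTorch).
# Sizes, noise level, and data are illustrative placeholders.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_inputs: int = 1024, n_hidden: int = 256, noise_std: float = 0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_inputs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        noisy = x + self.noise_std * torch.randn_like(x)   # corrupt the input
        return self.decoder(self.encoder(noisy))           # reconstruct the clean input

def train(model, data, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(data), data)
        loss.backward()
        opt.step()
    return model

# "Pretrain" on a larger stand-in dataset, then fine-tune on a smaller one
# (a stand-in for the high-grade to low-grade glioma transfer).
dae = DenoisingAutoencoder()
dae = train(dae, torch.randn(512, 1024))          # pretraining data (placeholder)
dae = train(dae, torch.randn(32, 1024), lr=1e-4)  # fine-tuning data (placeholder)
```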

Kovacs et al. demonstrated a deep learning architecture to improve segmentation of the lung in cine MRI by utilizing sequence-specific prior information. Their approach was applied to cine MRIs in the axial, sagittal, and coronal views, where the segmentations were also used to extract patterns of lung motion during breathing to assist diagnosis. Comparison with a conventional registration-based method showed robust and superior performance for the deep learning approach.

Khalvati et al. conducted a deep learning study of fully automated segmentation of the prostate transition zone and whole gland on diffusion-weighted MRI. On a dataset of 104 patients, the authors achieved Dice similarity coefficients of 0.88 and 0.93 for the transition zone and whole gland, respectively. The algorithm used two different deep convolutional neural networks, one to determine whether prostate tissue was present and the second to perform the actual segmentation. The segmentation deep network was a highly modified version of the popular U-Net architecture.
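
For reference, the Dice similarity coefficient reported above measures overlap between the predicted and reference masks, DSC = 2|A ∩ B| / (|A| + |B|). A minimal numpy implementation on toy binary masks is sketched below; the masks are placeholders unrelated to the study data.

```python
# Dice similarity coefficient between two binary segmentation masks (numpy).
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy example: two overlapping rectangular masks.
a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[15:45, 15:45] = True
print(f"Dice: {dice_coefficient(a, b):.3f}")
```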

Cheng et al. presented a method for the automatic segmentation of the prostate on MRI using deep learning. Their algorithm uses holistically nested networks that automatically learn a hierarchical representation of prostate MRI scans, and it achieved high segmentation performance in five-fold cross-validation.
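
A compact sketch of the holistically nested idea, under assumed layer sizes, is given below: side outputs are taken at multiple scales, upsampled to the input resolution, and fused, with each side output (and the fused map) eligible for its own supervision. This is a generic 2-D illustration, not the authors' network.

```python
# Sketch of holistically nested, deeply supervised side outputs (PyTorch, 2-D).
# Channel counts and depths are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHolisticNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        # 1x1 convs produce one side output (a segmentation logit map) per stage.
        self.side = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])
        self.fuse = nn.Conv2d(3, 1, 1)   # learned fusion of the upsampled side outputs

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        size = x.shape[-2:]
        sides = [F.interpolate(s(f), size=size, mode="bilinear", align_corners=False)
                 for s, f in zip(self.side, (f1, f2, f3))]
        fused = self.fuse(torch.cat(sides, dim=1))
        # Side outputs and the fused map can each receive a segmentation loss.
        return sides, fused

# Toy usage on a single-channel 128x128 slice (random placeholder).
sides, fused = TinyHolisticNet()(torch.randn(1, 1, 128, 128))
print(fused.shape)   # (1, 1, 128, 128)
```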

Summary

Research in quantitative analytics continues to expand through radiomics, machine learning, and deep learning. As in the past with CAD, considerations and challenges remain, for example, with respect to dataset size and distribution, appropriate handling of inputs and outputs (to avoid “garbage in, garbage out”), robustness assessment, training, and statistical testing and validation. Publication of the latest methods and findings will continue through JMI in order to disseminate methods and lessons learned as well as to expedite their development and translation.

References

1. M. L. Giger, H. P. Chan, and J. Boone, “Anniversary paper: history and status of CAD and quantitative image analysis: the role of medical physics and AAPM,” Med. Phys. 35(12), 5799–5820 (2008). https://doi.org/10.1118/1.3013555

2. M. L. Giger, N. Karssemeijer, and J. Schnabel, “Breast image analysis for risk assessment, detection, diagnosis, and treatment of cancer,” Annu. Rev. Biomed. Eng. 15, 327–357 (2013). https://doi.org/10.1146/annurev-bioeng-071812-152416

3. P. Lambin et al., “Radiomics: extracting more information from medical images using advanced feature analysis,” Eur. J. Cancer 48(4), 441–446 (2012). https://doi.org/10.1016/j.ejca.2011.11.036

4. W. Zhang et al., “Computerized detection of clustered microcalcifications in digital mammograms using a shift-invariant artificial neural network,” Med. Phys. 21, 517–524 (1994). https://doi.org/10.1118/1.597177
© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE)
Despina Kontos, Ronald M. Summers M.D., and Maryellen L. Giger "Special Section Guest Editorial: Radiomics and Deep Learning," Journal of Medical Imaging 4(4), 041301 (4 January 2018). https://doi.org/10.1117/1.JMI.4.4.041301
Keywords: machine learning, image segmentation, magnetic resonance imaging, feature extraction, medical imaging, prostate, analytics
