Although deep learning models have been widely used in the medical imaging research field to perform lesion segmentation and classification tasks, several challenges remain in optimally applying deep learning models and improving model performance. The objective of this study is to investigate a new joint model and assess how model performance improves as training dataset size increases. Specifically, we select and modify a novel J-Net as a joint model, which uses a two-way CNN architecture that combines a U-Net model with an image classification model. A skin cancer dataset with 1200 images, along with the annotated lesion masks and ground truth of “mild” and “severe” status, is used. From this dataset, 11 subsets of 200 to 1200 images are randomly generated in increments of 100 images. Each subset is then divided into training, validation, and testing groups using a ratio of 70:20:10, respectively. The performance of the new joint model is compared with two independent models that separately perform lesion segmentation and classification. The study results show that, when training the models using data subsets of 200 to 1200 images, lesion segmentation accuracy increases from 0.80 to 0.92 with the two single models and from 0.86 to 0.95 with the joint J-Net model, while lesion classification accuracy increases from 0.80 to 0.90 and from 0.82 to 0.93, respectively. Thus, this study demonstrates that the new J-Net joint model achieves higher lesion segmentation and classification accuracy than two single models. Additionally, model performance also improves as training dataset size increases.
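As a rough illustration of the joint-model idea described above, the following sketch (written in Keras; layer sizes and names are illustrative assumptions, not the authors' J-Net implementation) shares one CNN encoder between a U-Net-style decoder that predicts the lesion mask and a classification head that predicts “mild” versus “severe” status:

from tensorflow.keras import layers, Model

def build_joint_model(input_shape=(256, 256, 3)):
    inputs = layers.Input(shape=input_shape)

    # Shared encoder
    c1 = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, activation="relu", padding="same")(p1)
    p2 = layers.MaxPooling2D()(c2)
    bottleneck = layers.Conv2D(128, 3, activation="relu", padding="same")(p2)

    # Segmentation branch: U-Net-style decoder with skip connections
    u1 = layers.UpSampling2D()(bottleneck)
    u1 = layers.Concatenate()([u1, c2])
    u1 = layers.Conv2D(64, 3, activation="relu", padding="same")(u1)
    u2 = layers.UpSampling2D()(u1)
    u2 = layers.Concatenate()([u2, c1])
    u2 = layers.Conv2D(32, 3, activation="relu", padding="same")(u2)
    seg_out = layers.Conv2D(1, 1, activation="sigmoid", name="segmentation")(u2)

    # Classification branch: "mild" vs. "severe" from the shared bottleneck
    g = layers.GlobalAveragePooling2D()(bottleneck)
    g = layers.Dense(64, activation="relu")(g)
    cls_out = layers.Dense(1, activation="sigmoid", name="classification")(g)

    model = Model(inputs, [seg_out, cls_out])
    model.compile(
        optimizer="adam",
        loss={"segmentation": "binary_crossentropy",
              "classification": "binary_crossentropy"},
        metrics={"segmentation": "accuracy", "classification": "accuracy"},
    )
    return model

Training both heads jointly forces the two tasks to share learned features, which is the property a joint segmentation-classification model of this kind exploits.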
Applying computer-aided detection (CAD)-generated quantitative image markers has demonstrated significant advantages over subjective qualitative assessment in supporting translational clinical research. However, although many advanced CAD schemes have been developed, achieving high scientific rigor with “black-box” type CAD schemes trained using small datasets remains a big challenge due to the heterogeneity of medical images. To support and facilitate the research effort and progress of physician researchers using quantitative imaging markers, we investigated and tested an interactive approach by developing CAD schemes with interactive functions and visual-aid tools. Thus, unlike fully automated CAD schemes, our interactive CAD (ICAD) tools allow users to visually inspect image segmentation results and provide instructions to correct segmentation errors if needed. Based on the user's instructions, the CAD scheme automatically corrects segmentation errors, recomputes image features, and generates machine learning-based prediction scores. We have installed three interactive CAD tools in clinical imaging reading facilities to date, which help oncologists acquire image markers to predict progression-free survival of ovarian cancer patients undergoing angiogenesis chemotherapies, and help neurologists compute image markers and prediction scores to assess the prognosis of patients diagnosed with aneurysmal subarachnoid hemorrhage and acute ischemic stroke. Using these ICAD tools, clinical researchers have conducted several translational clinical studies by analyzing several diverse study cohorts, which have resulted in seven peer-reviewed papers published in clinical journals in the last three years. Additionally, feedback from physician researchers indicates increased confidence in using the new quantitative image markers and helps medical imaging researchers further improve or optimize the interactive CAD tools.
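The inspect-correct-recompute workflow described above can be summarized with a minimal sketch; all function and variable names here are hypothetical placeholders rather than the deployed ICAD tools:

def interactive_review(image, segment, extract_features, classifier, get_user_correction):
    # Run the automated segmentation first.
    mask = segment(image)
    # Let the user visually inspect the overlay and optionally return a corrected mask.
    correction = get_user_correction(image, mask)
    if correction is not None:
        mask = correction
    # Recompute quantitative image features from the (possibly corrected) mask
    # and regenerate the machine learning-based prediction score.
    features = extract_features(image, mask)
    score = classifier.predict_proba([features])[0, 1]
    return mask, features, score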
Radiomics and deep transfer learning have been attracting broad research interest in developing and optimizing CAD schemes for medical images. However, these two technologies are typically applied in different studies using different image datasets, and the advantages or potential limitations of applying them in CAD applications have not been well investigated. This study aims to compare and assess these two technologies in classifying breast lesions. A retrospective dataset including 2,778 digital mammograms is assembled, in which 1,452 images depict malignant lesions and 1,326 images depict benign lesions. Two CAD schemes are developed to classify breast lesions. First, one scheme is applied to segment lesions and compute radiomics features, while the other applies a pre-trained residual network architecture (ResNet50) as a transfer learning model to extract automated features. Next, the same principal component analysis (PCA) algorithm is used to process both the initially computed radiomics and automated features to create optimal feature vectors by eliminating redundant features. Then, several support vector machine (SVM)-based classifiers are built using the optimized radiomics or automated features. Each SVM model is trained and tested using a 10-fold cross-validation method. Classification performance is evaluated using the area under the ROC curve (AUC). The two SVMs trained using radiomics and automated features yield AUCs of 0.77±0.02 and 0.85±0.02, respectively. In addition, an SVM trained using the fused radiomics and automated features does not yield a significantly higher AUC. This study indicates that (1) using deep transfer learning yields higher classification performance, and (2) radiomics and automated features contain highly correlated information in lesion classification.
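A minimal sketch of the comparison pipeline is shown below, assuming the radiomics features and the ResNet50 automated features have already been computed for each lesion image; the feature matrices, PCA variance target, and SVM settings are placeholder assumptions, not the study's data or configuration:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_features(X, y):
    """PCA to remove redundant features, then an SVM scored by 10-fold CV (AUC)."""
    clf = make_pipeline(
        StandardScaler(),
        PCA(n_components=0.95),            # keep components explaining 95% of variance
        SVC(kernel="rbf"),
    )
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")

# Placeholder random data standing in for the real feature matrices.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                   # benign (0) vs. malignant (1) labels
X_radiomics = rng.normal(size=(200, 60))           # placeholder radiomics features
X_resnet = rng.normal(size=(200, 2048))            # placeholder ResNet50 features
print(evaluate_features(X_radiomics, y).mean())    # mean AUC, radiomics features
print(evaluate_features(X_resnet, y).mean())       # mean AUC, automated features
print(evaluate_features(np.hstack([X_radiomics, X_resnet]), y).mean())  # fused features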
Applications of artificial intelligence (AI) in medical imaging informatics have attracted broad research interest. In ophthalmology, for example, automated analysis of retinal fundus photography helps diagnose and monitor illnesses such as glaucoma, diabetic retinopathy, hypertensive retinopathy, and cancer. However, building a robust AI model requires a large and diverse dataset for training and validation. While a large number of fundus photos are available online, collecting them to create a clean, well-structured dataset is a difficult and manually intensive process. In this work, we propose a two-stage deep-learning system to automatically identify clean retinal fundus images and delete images with severe artifacts. In the two stages, two transfer-learning models based on the ResNet-50 architecture pre-trained using ImageNet data are built with increased softmax threshold values to reduce false positives. The first-stage classifier identifies “easy” images, and the remaining “difficult” (or undetermined) images are further identified by the second-stage classifier. Using the Google Search Engine, we initially retrieve 1,227 retinal fundus images. Using this two-stage deep-learning model yields a positive predictive value (PPV) of 98.56% for the target class, compared to a single-stage model with a PPV of 95.74%. The two-stage model reduces false positives for the retinal fundus image class by two-thirds, and the PPV over all classes increases from 91.9% to 96.6% without reducing the number of images classified by the model. The superior performance of this two-stage model indicates that building an optimal training dataset can play an important role in improving the performance of deep-learning models.
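The two-stage cascade can be sketched as follows; the classifier objects, threshold value, and array shapes are assumptions for illustration, not the trained networks or settings from the study:

import numpy as np

def two_stage_predict(stage1, stage2, images, threshold=0.9):
    """Return class predictions; defer low-confidence stage-1 cases to stage 2."""
    # Stage 1: softmax outputs of shape (N, n_classes); keep only confident ("easy") cases.
    probs1 = stage1.predict(images)
    confident = probs1.max(axis=1) >= threshold
    preds = probs1.argmax(axis=1)

    # Stage 2: re-classify the remaining "difficult" (undetermined) images.
    undecided = ~confident
    if undecided.any():
        probs2 = stage2.predict(images[undecided])
        preds[undecided] = probs2.argmax(axis=1)
    return preds

Raising the stage-1 softmax threshold trades coverage for precision; the second classifier then recovers the deferred images so the overall number of classified images is preserved.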
The advent of advanced imaging technology and better neuro-interventional equipment has enabled timely diagnosis and effective treatment of acute ischemic stroke (AIS) due to large vessel occlusion (LVO). However, an objective clinicoradiologic correlate to identify appropriate candidates and predict their respective clinical outcomes is largely unknown. The purpose of this study is to develop and test a new interactive decision-making support tool to predict the severity of AIS prior to thrombectomy using a CT perfusion imaging protocol. CT image data of 30 AIS patients with LVO, assessed radiologically for their eligibility to undergo mechanical thrombectomy, were retrospectively collected and analyzed in this study. First, a computer-aided scheme automatically categorizes images into multiple sequences and indexes each slice to a specified brain location. Next, consecutive mapping is used to accurately segment the brain region from the skull. The brain is then split into left and right hemispheres, followed by detection of blood in each hemisphere. Additionally, visual tools, including segmentation, blood correction, sequence selection, and an index analyzer, are implemented for deeper analysis. Last, blood volume in the two hemispheres is compared over the sequences to observe wash-in and wash-out rates of blood flow and assess the extent of damaged and “at-risk” brain tissue. By integrating the computer-aided scheme into a graphical user interface, the study builds a unique image feature analysis and visualization tool to observe and quantify delayed or reduced blood flow (brain “at risk” of developing AIS) in the corresponding hemisphere, which has the potential to help radiologists quickly visualize and more accurately assess the extent of AIS.
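The final hemisphere comparison step can be illustrated with a short sketch; the array layout, midline split, and variable names are assumptions for illustration, not the tool's actual implementation:

import numpy as np

def hemisphere_blood_curves(blood_masks, midline):
    """blood_masks: (time, z, y, x) binary masks of detected blood; midline: x index splitting the hemispheres."""
    left = blood_masks[..., :midline].sum(axis=(1, 2, 3))   # blood volume per time point, left hemisphere
    right = blood_masks[..., midline:].sum(axis=(1, 2, 3))  # blood volume per time point, right hemisphere
    return left, right

def flow_asymmetry(left, right):
    """Relative difference in blood volume over the sequence; sustained asymmetry flags the 'at-risk' hemisphere."""
    return (left - right) / np.maximum(left + right, 1)

Comparing the two wash-in/wash-out curves over the perfusion sequence is what exposes delayed or reduced blood flow in the affected hemisphere.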