Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131801 (2020) https://doi.org/10.1117/12.2570206
This PDF file contains the front matter associated with SPIE Proceedings Volume 11318, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131802 (2020) https://doi.org/10.1117/12.2550415
Augmented reality (AR) can enable physicians to “see” inside of patients by projecting cross-sectional imaging directly onto the patient during procedures. To maintain workflow, imaging must be quickly and accurately registered to the patient. We describe a method for automatically registering a CT image set projected from an augmented reality headset to a set of points in the real world as a first step towards real-time registration of medical images to patients. Sterile, radiopaque fiducial markers with unique optical identifiers were placed on a patient prior to acquiring a CT scan of the abdomen. For testing purposes, the same fiducial markers were then placed on a tabletop as a representation of the patient. Our algorithm then automatically located the fiducial markers in the CT image set, optically identified the fiducial markers on the tabletop, registered the markers in the CT image set to the optically detected markers, and finally projected the registered CT image set onto the real-world markers using the augmented reality headset. The registration time for aligning the image set using 3 markers was 0.9 ± 0.2 seconds with an accuracy of 5 ± 2 mm. These findings demonstrate the feasibility of fast and accurate registration using unique radiopaque markers for aligning patient imaging onto patients for procedural planning and guidance.
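The paper does not publish its registration code; as a hedged illustration only, rigid point-set registration between the CT-space fiducial centroids and the optically detected marker positions can be sketched with the Kabsch (SVD) algorithm, assuming at least three matched markers:

```python
import numpy as np

def rigid_register(ct_points, world_points):
    """Estimate rotation R and translation t mapping CT-space fiducials onto
    optically detected world-space fiducials (Kabsch algorithm).
    Both inputs are (N, 3) arrays of corresponding marker centroids."""
    ct_c = ct_points.mean(axis=0)
    w_c = world_points.mean(axis=0)
    H = (ct_points - ct_c).T @ (world_points - w_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = w_c - R @ ct_c
    return R, t

# Synthetic example: three fiducials in CT space (mm) and a translated world copy
ct = np.array([[10.0, 0.0, 0.0], [0.0, 12.0, 0.0], [0.0, 0.0, 15.0]])
world = ct + np.array([5.0, -3.0, 2.0])
R, t = rigid_register(ct, world)
residual = np.linalg.norm((ct @ R.T + t) - world, axis=1).mean()  # mean target registration error
```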
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131803 (2020) https://doi.org/10.1117/12.2549575
Purpose: Intracranial aneurysm (IA) treatment using flow diverters (FDs) has become a widely used endovascular therapy, with occlusion rates between 70% and 90% resulting in reduced mortality and morbidity. This significant variation in occlusion rates could be due to variations in patient anatomy, which cause different flow regimes in the IA dome. We propose to perform detailed in-vitro studies to observe the relation between FD geometrical properties and changes in IA hemodynamics. Materials and Methods: Idealized and patient-specific phantoms were 3D-printed, treated with FDs, and connected into a flow loop where intracranial hemodynamics were simulated using a programmable pump. Pressure measurements were acquired before and after treatment in the main arteries and IA domes for optimal and sub-optimal FD diameter sizing relative to the main artery. The 3D-printed phantoms were scanned with micro-CT to measure the ostium coverage, calculate the theoretical FD hydraulic resistance, and study its effect on flow. Results: The pressure difference between the arteries and the IA dome for an optimally sized FD, with a hydraulic resistance of 3.4, was ~7 mmHg. When the FD was undersized, the hydraulic resistance was 4.2 and the pressure difference increased to ~11 mmHg. Conclusion: 3D printing allows the development of very precise benchtop experiments in which pressure sensors can be embedded in vascular phantoms to study hemodynamic changes due to various therapies such as IA treatment with FDs. In addition, precise imaging such as micro-CT can be used to evaluate complex deployment geometries and study their correlation with flow.
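The abstract does not define how the hydraulic resistance values (3.4 vs. 4.2) are computed; as a hedged reference only, the conventional lumped definition relates the measured pressure drop across the device to the flow through it:

```latex
R \;=\; \frac{\Delta P}{Q}, \qquad \Delta P \;=\; P_{\text{artery}} - P_{\text{dome}}
```

where Q is the volumetric flow rate entering the aneurysm; the authors may instead use a screen-flow (Darcy-type) model parameterized by the micro-CT-derived ostium coverage.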
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131805 (2020) https://doi.org/10.1117/12.2549140
Purpose: 3D printing of patient-specific phantoms such as the mitral valve (MV) is challenging due to the inability of current imaging systems to reconstruct fine moving features and due to 3D printing constraints. We investigated methods to 3D-print MV structures using ex-vivo micro-CT. Materials and Methods: A dissected porcine MV was imaged with micro-CT in diastole using a special fixation holder. The holder design was based on a patient ECG-gated cardiac CT scan, using the papillary muscles and annulus as reference points. Next, the micro-CT volume was segmented and 3D-printed in various elastic materials. We tested different post-processing techniques for support material removal and surface coatings to preserve the MV integrity. To quantify the error, a cloud comparison between the porcine valve mesh and the valve mesh from the patient ECG-gated cardiac CT scan was performed. Results: The best results for the 3D-printed models were achieved using TangoPlus poly-jet material with an Objet Eden printer. The error computation yielded a 2.6 mm deviation distance between the two aligned valves, indicating adequate alignment. The post-processing methods for support removal were challenging and required more than 24 hours of sample immersion in slowly agitating sodium hydroxide baths. Conclusions: The most challenging parts of MV manufacturing are the 3D volume acquisition and the post-printing support cleaning. We developed methods to circumvent both the imaging and the 3D-printing challenges and to ensure that the final phantom includes the fine chordae and valve geometry. Using these solutions, we were able to create complete MV structures which could benefit medical research and device testing.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131806 (2020) https://doi.org/10.1117/12.2548213
Purpose: 3D printed Patient-Specific Neurovascular Phantoms (3DP-PSNP) containing significant portions of the neurovasculature can be used to develop and test new diagnostic tools. The purpose of this research is to assess the use of 3DP-PSNPs to study the correlation between Angiographic Parametric Imaging (API) features and the severity of carotid artery disease. Materials and Methods: We developed 3DP-PSNPs for twenty patients with carotid atherosclerosis and performed two studies. In the first study, we used three phantoms with a complete Circle of Willis (COW) and no, moderate, and severe stenosis, respectively. In the second experiment, all phantoms were used regardless of the COW structure. 3DP-PSNPs were connected in a simulated physiological pulsatile flow loop and Digital Subtraction Angiography (DSA) was performed by injecting 10 mL of contrast at 10 mL/s. API software calculated imaging biomarkers for both carotids: time to peak (TTP), mean transit time (MTT), time to arrival (TTA), peak height (PH), and area under the curve (AUC). Results: For no, moderate, and severe stenosis, respectively, the absolute mean percent differences between the diseased and contralateral carotids were: 1.9%, 8.1%, and 32.3% for TTP; 6.8%, 7.5%, and 41.0% for TTA; 13.2%, 12.4%, and 6.4% for MTT; 8.7%, 11.3%, and 97.6% for PH; and 10.3%, 22.6%, and 100% for AUC. Injection parameters did not significantly change the difference between diseased and contralateral carotids. The second experiment showed a strong correlation between the TTA and the location of the stenosis, regardless of COW configuration. Conclusions: Overall, API was assessed in 3DP-PSNPs and shown to produce increasing differences between diseased and contralateral carotid arteries with increasing stenosis severity.
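The abstract lists the API biomarkers without their formulas; a hedged numpy sketch using conventional time-density-curve definitions (not necessarily the software's exact ones) is:

```python
import numpy as np

def api_features(t, c, arrival_frac=0.1):
    """Compute angiographic parametric imaging features from a time-density
    curve c(t) sampled at times t (seconds). Definitions are conventional
    assumptions, not taken from the paper."""
    c = np.asarray(c, dtype=float) - c[0]            # baseline subtraction
    ph = c.max()                                      # peak height (PH)
    ttp = t[int(np.argmax(c))]                        # time to peak (TTP)
    tta = t[int(np.argmax(c >= arrival_frac * ph))]   # time to arrival (TTA): first sample above 10% of peak
    auc = np.trapz(c, t)                              # area under the curve (AUC)
    mtt = np.trapz(c * t, t) / max(auc, 1e-12)        # mean transit time (MTT) as first moment
    return {"TTP": ttp, "TTA": tta, "PH": ph, "AUC": auc, "MTT": mtt}

t = np.linspace(0, 6, 61)                             # synthetic 6 s acquisition
curve = np.exp(-((t - 2.5) ** 2) / 0.8)               # synthetic bolus-shaped curve
features = api_features(t, curve)
```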
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131807 (2020) https://doi.org/10.1117/12.2549722
Medical imaging, a key component in clinical diagnosis of and research on numerous medical conditions, is very costly and can generate massive datasets. For instance, a single scanned subject produces hundreds of thousands of images and millions of key-value metadata pairs that must be verified to ensure instrument and research protocol compliance. Many projects lack funds to reacquire images if data quality issues are detected later. Data quality assurance (QA) requires continuous involvement by all stakeholders and use of specific quality control (QC) methods to identify data issues likely to require post-processing correction or real-time re-acquisition. While many useful QC methods exist, they are often designed for specific use-cases with limited scope and documentation, making integration with other setups difficult. We present the Scalable Quality Assurance for Neuroimaging (SQAN), an open-source software suite developed by Indiana University for protocol quality control and instrumental validation on medical imaging data. SQAN includes a comprehensive QC Engine that ensures adherence to a research study’s protocol. A modern, intuitive web portal serves a wide range of users including researchers, scanner technologists and data scientists, each of whom approaches QC with unique priorities, expertise, insights and expectations. Since Fall 2017, a fully operational SQAN instance has supported 50+ research projects, and has QC’d ∼3.5 million images and over 700 million metadata tags. SQAN is designed to scale to any imaging center’s QC needs, and to extend beyond protocol QC toward image-level QC and integration with pipeline and non-imaging database systems.
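SQAN's QC Engine is not described at code level; a minimal, hypothetical sketch of protocol-compliance checking on DICOM metadata with pydicom (expected values invented for illustration) is:

```python
import pydicom

# Hypothetical protocol definition: DICOM keyword -> (expected value, tolerance)
PROTOCOL = {
    "RepetitionTime": (2300.0, 10.0),   # ms
    "EchoTime": (2.98, 0.05),           # ms
    "SliceThickness": (1.0, 0.01),      # mm
    "MagneticFieldStrength": (3.0, 0.0),
}

def qc_check(dicom_path, protocol=PROTOCOL):
    """Return a list of (keyword, found, expected) for every deviation from
    the expected protocol in a single DICOM file."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    deviations = []
    for keyword, (expected, tol) in protocol.items():
        found = ds.get(keyword)          # None if the tag is absent
        if found is None or abs(float(found) - expected) > tol:
            deviations.append((keyword, found, expected))
    return deviations

# deviations = qc_check("series0001/IM0001.dcm")   # hypothetical path
```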
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131808 (2020) https://doi.org/10.1117/12.2541540
Advances in computer hardware and software have enabled the automated extraction of biomarkers from large scale imaging studies by means of image processing pipelines. For large cohort studies, ample storage and computing resources are required: pipelines are typically executed in parallel on one or more high-performance computing (HPC) clusters. As processing is distributed, it becomes more cumbersome to obtain detailed progress and status information of large-scale experiments. Especially in a research-oriented environment, where image processing pipelines are often in an experimental stage, debugging is a crucial part of the development process that relies heavily on a tight collaboration between pipeline developers and clinical researchers. Debugging a running pipeline is a challenging and time-consuming process for seasoned pipeline developers, and nearly impossible for clinical researchers, often involving parsing of complex logging systems and text files, and requiring special knowledge of the HPC environment. In this paper, we present the Pipeline Inspection and Monitoring web application (PIM). The goal of PIM is to make it more straightforward and less time-consuming to inspect complex, long-running image processing pipelines, irrespective of the level of technical expertise and the workflow engine. PIM provides an interactive, visualization-based web application to intuitively track progress, view pipeline structure and debug running image processing pipelines. The level of detail is fully customizable, supporting a wide variety of tasks (e.g. quick inspection and thorough debugging) and thereby supporting both clinical researchers and pipeline developers in monitoring and debugging.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131809 (2020) https://doi.org/10.1117/12.2548371
Clinically acquired, multimodal and multi-site MRI datasets are widely used for neuro-oncology research. However, manual preprocessing of such data is extremely tedious and error prone due to high intrinsic heterogeneity. Automatic standardization of such datasets is therefore important for data-hungry applications like deep learning. Despite rapid advances in MRI data acquisition and processing algorithms, only limited effort has been dedicated to automatic methodologies for standardization of such data. To address this challenge, we augment our previously developed Multimodal Glioma Analysis (MGA) pipeline with automation tools to achieve a processing scale suitable for big data applications. The new pipeline implements a natural language processing (NLP) based scan-type classifier, with features constructed from DICOM metadata using a bag-of-words model. The classifier automatically assigns one of 18 pre-defined scan types to all scans in an MRI study. Using the described data model, we trained three types of classifiers: logistic regression, linear SVM, and a multi-layer artificial neural network (ANN) on the same dataset. Their performance was validated on four datasets from multiple sources. The ANN implementation achieved the highest performance, yielding an average classification accuracy of over 99%. We also built a Jupyter notebook based graphical user interface (GUI) which is used to run MGA in semi-automatic mode for progress tracking and quality control, to ensure reproducibility of the analyses based thereon. MGA has been implemented as a Docker container image to ensure portability and easy deployment. The application can run in single or batch study mode, using either local DICOM data or XNAT cloud storage.
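One possible realization of such a classifier (not the authors' code) is a scikit-learn pipeline that builds bag-of-words features from DICOM metadata strings and fits a logistic regression:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training example is a string assembled from DICOM metadata
# (e.g., SeriesDescription + ScanningSequence); labels are scan types.
texts = ["ax t1 mprage pre", "ax t2 flair", "dwi b1000 tracew", "ax t1 post gad"]
labels = ["T1", "FLAIR", "DWI", "T1-post"]

clf = make_pipeline(
    CountVectorizer(lowercase=True, token_pattern=r"[a-z0-9]+"),  # bag-of-words features
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
pred = clf.predict(["axial flair fs"])   # predicted scan type for a new series description
```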
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180A (2020) https://doi.org/10.1117/12.2549565
The overall lower survival rate of patients with rare cancers can be explained, among other factors, by the limitations resulting from the scarce information available about them. Large biomedical data repositories, such as PubMed Central Open Access (PMC-OA), have been made freely available to the scientific community and could be exploited to advance the clinical assessment of these diseases. A multimodal approach using visual deep learning and natural language processing methods was developed to mine 15,028 light-microscopy images of human rare cancers. The resulting data set is expected to foster the development of novel clinical research in this field and help researchers build resources for machine learning.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180B (2020) https://doi.org/10.1117/12.2549622
Understanding of stroke etiology and its genetic pathways is critical for planning, implementation, and evaluation of stroke patient treatments. However, this knowledge discovery requires phenotyping stroke and integrating multiple demographic, clinical, genetic and imaging phenotypes by developing and running sophisticated processing pipelines at massive scale. The Stroke Neuroimaging Phenotype Repository (SNIPR) was developed in 2018 as a large multi-center centralized imaging repository of clinical CT and MRI scans from stroke patients worldwide, based on the Extensible Neuroimaging Archive Toolkit (XNAT). The aims of this repository are to: (i) create a central retrospective repository to host and provide secure access to data from anonymized acute stroke patients with serial clinical imaging; (ii) facilitate integration of independent stroke phenotypic studies via data aggregation techniques; and (iii) expedite the development of containerized deep learning pipelines to perform large-scale analysis of complications after stroke. Currently, SNIPR hosts 8 projects, 1877 subjects and 5281 imaging sessions from Washington University Medical Center’s clinical image archive as well as contributions from collaborators in different countries, including the US, Finland, Poland, and Spain. Moreover, we have used XNAT’s standard XML Schema extension mechanism to create data type extensions to support stroke phenotypic studies, including clinical phenotypes like NIHSS and imaging phenotypes like infarct and cerebrospinal fluid (CSF) volume. We have developed deep learning pipelines to facilitate image processing and analysis and deployed these pipelines through XNAT’s container service. The container service enables these pipelines to execute at large scale with Docker Swarm on an attached compute cluster. Our pipelines include a scan-type classifier comprising a convolutional neural network (CNN) approach and a natural language processing approach to automatically categorize uploaded CT sequences into defined classes to facilitate selection for further analysis. We deployed this containerized classifier within a broader pipeline to facilitate big data analysis of cerebral edema after stroke, achieving 99.4% test accuracy on 10,000 scans. SNIPR enables the developed pipelines to use this automatic scan selection, to develop and validate imaging phenotypes, and to couple them with clinical and genetic data, with the overarching aim of enabling a broad understanding of stroke progression and outcomes.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180C (2020) https://doi.org/10.1117/12.2551279
Crowdsourcing encourages people all over the world to generate ground truth for classification data such as images. While frameworks for binary and multi-label classification exist, crowdsourcing of medical image segmentation is covered by only a few works. In this paper, we present a web-based platform that supports scientists from various domains in obtaining segmentations close to ground-truth references. The system is composed of frontend, authentication, management, processing, and persistence layers, implemented by combining various JavaScript tools, the Django web framework, asynchronous Celery tasks, and a PostgreSQL database, respectively. It is deployed on a Kubernetes cluster. A set of image data accompanied by a task instruction can be uploaded. Users can be invited or subscribe to join in. After passing a guided tutorial of pre-segmented example images, segmentations can be obtained from non-expert users from all over the world. The Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm generates estimated ground truth segmentation masks and evaluates the users' performance continuously in the backend. As a proof of concept, a test study with 75 photographs of human eyes was performed by 44 users. In just a few days, 2,060 segmentation masks with a total of 52,826 vertices along the mask contour were collected.
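The platform's STAPLE backend is not listed in the abstract; a simplified numpy sketch of binary STAPLE (EM over per-user sensitivity and specificity, with a fixed global prior rather than the full formulation used in practice) might look like:

```python
import numpy as np

def staple(masks, n_iter=30, tol=1e-6):
    """Binary STAPLE (Warfield et al.) via EM, assuming independent raters.
    masks: (R, N) array of 0/1 rater decisions over N pixels.
    Returns (W, p, q): consensus foreground probability per pixel and
    per-rater sensitivity p and specificity q."""
    masks = np.asarray(masks, dtype=float)
    R, N = masks.shape
    W = masks.mean(axis=0)                 # initialize consensus with the vote average
    p = np.full(R, 0.9)                    # initial sensitivities
    q = np.full(R, 0.9)                    # initial specificities
    prior = W.mean()                       # fixed global foreground prior (simplification)
    for _ in range(n_iter):
        # E-step: posterior probability that each pixel is foreground
        a = prior * np.prod(np.where(masks == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(masks == 0, q[:, None], 1 - q[:, None]), axis=0)
        W_new = a / np.maximum(a + b, 1e-12)
        # M-step: update rater performance from the soft consensus
        p = (masks @ W_new) / np.maximum(W_new.sum(), 1e-12)
        q = ((1 - masks) @ (1 - W_new)) / np.maximum((1 - W_new).sum(), 1e-12)
        if np.abs(W_new - W).max() < tol:
            W = W_new
            break
        W = W_new
    return W, p, q

# masks = np.stack([user_mask.ravel() for user_mask in user_masks])  # hypothetical rater masks
# consensus, sensitivity, specificity = staple(masks)
```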
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180D (2020) https://doi.org/10.1117/12.2551376
Until recently, digital mammography (DM) was the most common image-guided diagnostic tool for breast cancer detection. However, digital breast tomosynthesis (DBT), which yields more accurate results than DM, is replacing DM in clinical practice. As in many medical image processing applications, artificial intelligence (AI) has shown promise in reducing radiologists' reading time while enhancing cancer diagnostic accuracy. In this paper, we implemented a 3D network using deep learning algorithms to detect breast cancer malignancy using DBT craniocaudal (CC) view images. We created a multi-sub-volume approach, in which the most representative slice (MRS) of each malignant scan is manually selected by expert radiologists. We specifically compared different MRS selections by two radiologists and the resulting variations in model performance. The results indicate that our scheme is relatively robust across all three experiments.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180E (2020) https://doi.org/10.1117/12.2548526
Computerized imaging methods assist doctors at any time and relieve their workload, especially for repetitive processes such as identifying objects of interest, e.g., lesions and anatomical structures, in an image. Detection of microaneurysms (MAs), one type of retinal lesion, is considered a crucial step in retinal image analysis algorithms for identifying diabetic retinopathy (DR), the second most common eye disease in developed countries. The objective of this study is to compare the effect of two preprocessing methods, illumination equalization and top-hat transformation, on MA detection in retinal images using a combination of a matching-based approach and deep learning, in both normal fundus images and images with DR. The detection steps are: 1) preprocessing, 2) vessel segmentation and masking, and 3) MA detection using the combined matching-based and deep learning approach. For accuracy, we compared the method to manual detection performed by ophthalmologists on our large retinal image databases (more than 2200 images). Using the first preprocessing method, illumination equalization and contrast enhancement, MA detection accuracy was about 90% for all databases (one local and two public retinal databases). With top-hat preprocessing (the second method), detection accuracy was more than 80% for all databases.
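The two preprocessing steps are named but not parameterized; a hedged OpenCV sketch (CLAHE-style illumination/contrast equalization on the green channel and a morphological top-hat with an assumed kernel size) is:

```python
import cv2

def preprocess_illumination(img_bgr, clip=2.0, tile=8):
    """Illumination equalization / contrast enhancement on the green channel,
    which usually carries the most vessel and lesion contrast in fundus images."""
    green = img_bgr[:, :, 1]
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tile, tile))
    return clahe.apply(green)

def preprocess_tophat(img_bgr, kernel_size=15):
    """Morphological top-hat on the green channel to emphasize small structures
    such as microaneurysms against the slowly varying background."""
    green = img_bgr[:, :, 1]
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    return cv2.morphologyEx(green, cv2.MORPH_TOPHAT, kernel)

# img = cv2.imread("fundus.png")        # hypothetical input image
# eq = preprocess_illumination(img)
# th = preprocess_tophat(img)
```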
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180F (2020) https://doi.org/10.1117/12.2549651
In the study of complex mental disorders like schizophrenia (SZ), while imaging genetics has achieved great success, imaging epigenetics is attracting increasing attention as it considers the impact of environmental factors on gene expression and the resulting phenotypic changes. In this study, we aimed to fill this gap by jointly analyzing imaging and epigenetics data to study SZ. More specifically, we proposed a novel structure-enforced collaborative regression model (SCoRe) to extract co-expressed discriminative features related to SZ from fMRI and DNA methylation data. SCoRe can utilize phenotypic information while enforcing agreement between multiple data views. Moreover, it also considers the group structure within each view of data. The brain network based on fMRI data can be divided into 116 regions of interest (ROIs) based on the anatomical structures of the brain, and the DNA methylation data can be grouped based on pathway information, which are used as prior knowledge to be incorporated into the learning model. After validation on simulated data, we applied the model to an SZ study with data collected by the MIND Clinical Imaging Consortium (MCIC). By integrating fMRI and DNA methylation data of 184 participants (104 SZ and 80 healthy subjects), we succeeded in identifying 8 important brain regions and 3 genes associated with SZ. This study can shed light on the understanding of SZ from both brain imaging and epigenomics, complementary to imaging genomics.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180G (2020) https://doi.org/10.1117/12.2547635
Approximately two million pediatric deaths occur every year due to pneumonia. Detection and diagnosis of pneumonia play an important role in reducing these deaths. Chest radiography is one of the most commonly used modalities to detect pneumonia. In this paper, we propose a novel two-stage deep learning architecture to detect pneumonia and classify its type in chest radiographs. This architecture contains one network to classify images as either normal or pneumonic, and another deep learning network to classify the type as either bacterial or viral. In this paper, we study and compare the performance of various stage-one networks such as AlexNet, ResNet, VGG16 and Inception-v3 for detection of pneumonia. For these networks, we employ transfer learning to exploit the wealth of information available from prior training. For the second stage, we find that transfer learning with these same networks tends to overfit the data. For this reason we propose a simpler CNN architecture for classification of pneumonic chest radiographs and show that it overcomes the overfitting problem. We further enhance the performance of our system in a novel way by incorporating lung segmentation using a U-Net architecture. We make use of a publicly available dataset comprising 5856 images (1583 normal, 4273 pneumonic). Among the pneumonia cases, 2780 are identified as bacterial and the rest as viral. We test our proposed algorithms on a set of 624 images and achieve an area under the receiver operating characteristic curve of 0.996 for pneumonia detection. We also achieve an accuracy of 97.8% for classification of pneumonic chest radiographs, thereby setting a new benchmark for both detection and diagnosis. We believe the proposed two-stage classification of chest radiographs for pneumonia detection and its diagnosis would enhance the workflow of radiologists.
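The paper compares several pretrained backbones; a generic Keras transfer-learning sketch of a stage-one normal-vs-pneumonia classifier (hyperparameters assumed, not the authors' exact configuration) is:

```python
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False  # freeze ImageNet features for transfer learning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. pneumonic
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # hypothetical datasets
```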
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180H (2020) https://doi.org/10.1117/12.2548550
In many medical image applications, high-resolution images are needed to facilitate early diagnosis. However, due to technical limitations, it may not be easy to obtain an image with ideal resolution, especially for diffusion-weighted imaging (DWI). Super-resolution (SR) technology has been developed to solve this problem by generating high-resolution (HR) images from low-resolution (LR) images. The purpose of this study is to obtain SR-DWI from the original LR image through a deep super-resolution network. The effectiveness of the SR image is assessed by radiomic analysis in predicting the histological grade of breast cancer. To this end, a dataset of 144 breast cancer cases was collected, including 83 cases diagnosed as high-grade malignant (Grade 3) breast cancer and 61 as median-grade malignant (Grade 2). For each case, dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and the apparent diffusion coefficient (ADC) map derived from DWI were obtained. Lesion segmentation was performed on both the original ADC and the SR-ADC, from which 30 texture and 10 statistical features were extracted. The deep SR model was trained end-to-end on the LR DCE-MRI images and their HR counterparts and was applied to the ADC images to obtain SR-ADCs. Univariate and multivariate logistic regression classifiers were implemented to evaluate the performance of individual and collective features, respectively. Model performance was evaluated by the area under the curve (AUC) under leave-one-out cross-validation (LOOCV). For the individual feature analysis, the performance in terms of AUC was significantly better based on the SR-ADC image than on the original ADC image. For the multivariate analysis, the classifier AUCs were 0.848±0.061 and 0.878±0.051 for the original ADC and the SR-ADC, respectively. The results suggest that the enhanced-resolution ADC image has the potential to more accurately predict histological grade in breast cancer.
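As a hedged sketch of the multivariate evaluation protocol described (logistic regression with leave-one-out cross-validation and a pooled AUC; the feature matrix below is a placeholder, not the study data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_cases, n_features) radiomic features from the (SR-)ADC lesion masks
# y: 1 = high grade (Grade 3), 0 = median grade (Grade 2)
rng = np.random.default_rng(0)
X = rng.normal(size=(144, 40))           # placeholder for 30 texture + 10 statistical features
y = rng.integers(0, 2, size=144)         # placeholder labels

scores = np.zeros(len(y), dtype=float)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
for train_idx, test_idx in LeaveOneOut().split(X):
    clf.fit(X[train_idx], y[train_idx])
    scores[test_idx] = clf.predict_proba(X[test_idx])[:, 1]
auc = roc_auc_score(y, scores)           # pooled LOOCV AUC
```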
The cybersecurity landscape continues to rapidly evolve across all industries, including healthcare. Unique to healthcare, however, is the fact that patients' lives can be impacted by simply altering or withholding information. In this session, we will look at the evolution of attacks from personal information, to personal health information, to personal health. In addition, potential methods to balance data sharing with protecting patients' identities will be explored to better understand concepts around federated databases and other anonymization techniques.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180J (2020) https://doi.org/10.1117/12.2549372
Over the past few years, the use of off-the-shelf video game platforms as rehabilitation tools has gained much interest in physiotherapy. In this paper, we describe an avenue for integrating virtual reality (VR) and artificial intelligence (AI) based game tracking techniques aimed at improving the effectiveness of home-care hand physical therapy. We provide an overview of the software and hardware implementation of the prototype based on a LEAP Motion sensor input device, which provides two 850 nm wavelength infrared (IR) tracking cameras, and an Oculus virtual reality headset. In this initial study, an interactive game developed on the Unity VR gaming platform dynamically adjusts the game levels to player performance based on adaptive hand-gesture tracking and analysis AI algorithms. A preliminary evaluation study conducted on a human subject showcases the efficiency of the proposed method.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180K (2020) https://doi.org/10.1117/12.2551297
The Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm is frequently used in medical image segmentation when no ground truth (GT) is available. In this paper, we investigate the number of inexperienced users required to establish a reliable STAPLE-based GT and the number of vertices users should place for a point-based segmentation. We employ “WeLineation”, a novel web-based system for crowdsourcing segmentations. Within the study, 2,060 masks were delivered by 44 users on 75 different photographic images of the human eye, where users had to segment the sclera. For all masks, GT was estimated using STAPLE. Then, STAPLE was computed using fewer user contributions and the results were compared to the GT. Requiring an error rate lower than 2%, the same segmentation performance is obtained with 13 experienced as with 22 rather inexperienced users. More than 10 vertices should be placed on the delineation contour in order to reach an accuracy above 95%. On average, a vertex should be placed every 81 pixels along the segmentation contour. The results indicate that knowledge about the users' performance can reduce the number of segmentation masks per image needed to estimate a reliable GT. Therefore, gathering performance parameters of users during a crowdsourcing study and applying this information to the assignment process is recommended; in this way, the cost-effectiveness of a crowdsourcing segmentation study can be improved.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180L (2020) https://doi.org/10.1117/12.2550933
Dental records play an important role in dental diagnosis and personal identification. Automatic image pre-interpretation can help reduce dentists’ workload and improve diagnostic efficiency. Systematic dental record filing enables effective utilization of the records accumulated at dental clinics for forensic identification. We have been investigating a tooth labeling method on dental cone-beam CT images for the purpose of automatic filing of dental charts. In our previous method, two separate networks were employed for detection and classification of teeth. Although detection accuracy was promising, classification performance had room for improvement. The purpose of this study was to investigate the use of a relation network that exploits the positional relationships between teeth for detection and classification. With the proposed method, both detection and classification performance improved; in particular, tooth-type classification accuracy improved. The proposed method can be useful in automatic filing of dental charts.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180M (2020) https://doi.org/10.1117/12.2549806
For any deep learning (DL) based task, model generalization and prediction performance improve as a function of training data set size and variety. However, application to medical imaging is still challenging because of the limited availability of high-quality and sufficiently diverse annotated data. Data augmentation techniques can improve model performance when the available dataset size is limited. Anatomy region localization from medical images can be automated with deep learning and is important for tasks such as organ segmentation and lesion detection. Different data augmentation methods were compared for DL-based anatomy region localization with computed tomography images. The impact of different neural network architectures was also explored. The prediction accuracy on an independent test set improved from 88% to 97% with optimal selection of data augmentation and architecture while using the same training dataset. Data augmentation steps such as zoom, translation and flips had an incremental effect on classifier performance, whereas samplewise mean shift appeared to degrade performance. Global average pooling improved classifier accuracy compared to a fully connected layer when limited data augmentation was used. All model architectures converged to an optimal performance with the right combination of augmentation steps. Prediction inaccuracies were mostly observed in the boundary regions between anatomies. The networks also successfully localized anatomy for positron emission tomography studies, reaching an accuracy of up to 97%. A similar impact of data augmentation and pooling layer choice was observed.
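A hedged Keras sketch of the kind of augmentation and pooling-layer comparison described (parameter values and architecture assumed, not the authors' exact setup):

```python
import tensorflow as tf

# Augmentations compared in the abstract: zoom, translation, flips, samplewise mean shift
augment = tf.keras.preprocessing.image.ImageDataGenerator(
    zoom_range=0.1,
    width_shift_range=0.1, height_shift_range=0.1,
    horizontal_flip=True,
    samplewise_center=False,   # the mean-shift step that appeared to hurt performance
)

def build_classifier(n_classes, use_gap=True, input_shape=(224, 224, 1)):
    """Anatomy-region classifier: global average pooling head vs. a flattened dense head."""
    base = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
    ])
    head = tf.keras.layers.GlobalAveragePooling2D() if use_gap else tf.keras.layers.Flatten()
    return tf.keras.Sequential([base, head,
                                tf.keras.layers.Dense(n_classes, activation="softmax")])

# model = build_classifier(n_classes=5)
# model.compile("adam", "sparse_categorical_crossentropy", ["accuracy"])
# model.fit(augment.flow(x_train, y_train, batch_size=32), epochs=20)  # hypothetical arrays
```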
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180N (2020) https://doi.org/10.1117/12.2549552
In AI training, a data set is usually divided into training and test sets at random, but clinical image data from hospitals differs from public data sets: public data sets are reasonably divided and evenly distributed after many experiments. An accurate understanding of the data distribution directly affects the quality of the trained model. We therefore propose a new method for dividing clinical data sets, based on distance metric learning and a Gaussian mixture model, to obtain more reasonable data set divisions. The distance metric learning, based on a deep neural network, first embeds the data into a new metric space, then mines the data characteristics in that space, calculates the distances between samples, and finally compares the differences. The method can characterize the data distribution to a certain extent. With the data distribution characteristics known, more reasonable divisions can be obtained, which greatly affects the accuracy and generalization performance of the trained models.
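The abstract stays at a high level; one hedged scikit-learn interpretation (embedding source hypothetical) is to assign samples to Gaussian-mixture components in the learned metric space and stratify the train/test split by component so both sets follow the same distribution:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

def gmm_stratified_split(embeddings, n_components=5, test_size=0.2, seed=0):
    """Assign each sample to a Gaussian-mixture component in the embedding space,
    then split so every component is represented in both the training and test sets."""
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    components = gmm.fit_predict(embeddings)
    idx = np.arange(len(embeddings))
    train_idx, test_idx = train_test_split(
        idx, test_size=test_size, stratify=components, random_state=seed)
    return train_idx, test_idx

# embeddings = metric_network.transform(images)     # hypothetical deep metric embedding
# train_idx, test_idx = gmm_stratified_split(embeddings)
```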
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180O (2020) https://doi.org/10.1117/12.2548723
This paper aimed to investigate whether deep image features extracted via a sparse autoencoder (SAE) could be used to preoperatively predict histologic grade in pancreatic neuroendocrine tumors (pNETs). In this study, a total of 114 patients from two institutions were involved. The deep image features were extracted with the sparse autoencoder network trained for 2000 iterations. Considering the possible prediction error due to the small patient data size, we performed 10-fold cross-validation. To find the optimal hidden-layer size, we varied it over the range 6-10. The maximum relevance minimum redundancy (mRMR) feature selection algorithm was used to select the image features most related to histologic grade. The radiomics signature was then generated from the selected features using support vector machine (SVM), multivariable logistic regression (MLR) and artificial neural network (ANN) methods. The prediction performance was evaluated using the AUC.
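As a hedged sketch of the classification and 10-fold cross-validation stage (using a univariate mutual-information filter as a stand-in for the mRMR selection the paper uses, and placeholder data):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: deep image features from the sparse autoencoder's hidden layer (size 6-10)
# y: histologic grade labels for the 114 pNET patients
rng = np.random.default_rng(0)
X = rng.normal(size=(114, 10))           # placeholder features
y = rng.integers(0, 2, size=114)         # placeholder grades

model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=5),   # stand-in for mRMR feature selection
    SVC(kernel="rbf", probability=True),
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()
```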
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180Q (2020) https://doi.org/10.1117/12.2566332
Automated segmentation of medical imaging is of broad interest to clinicians and machine learning researchers alike. The goal of segmentation is to increase the efficiency and simplicity of visualizing and quantifying regions of interest within a medical image. Image segmentation is a difficult task because of multiparametric heterogeneity within the images, an obstacle that has proven especially challenging in efforts to automate the segmentation of brain lesions from non-contrast head computed tomography (CT). In this research, we experimented with multiple available deep learning architectures to segment different phenotypes of hemorrhagic lesions found after moderate to severe traumatic brain injury (TBI). These include: intraparenchymal hemorrhage (IPH), subdural hematoma (SDH), epidural hematoma (EDH), and traumatic contusions. We achieved a best Dice coefficient score of 0.94 using a 2D UNet++ architecture with a focal Tversky loss function, an increase from 0.85 using a 2D UNet with binary cross-entropy loss, for IPH cases. Furthermore, using the same setting, we achieved Dice coefficient scores of 0.90 and 0.86 for extra-axial bleeds and traumatic contusions, respectively.
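The focal Tversky loss is a published formulation; a short TensorFlow/Keras sketch (the parameter values shown are common defaults, not necessarily the paper's settings) is:

```python
import tensorflow as tf

def focal_tversky_loss(alpha=0.7, beta=0.3, gamma=0.75, smooth=1e-6):
    """Focal Tversky loss for binary segmentation.
    alpha penalizes false negatives, beta false positives; gamma focuses
    training on hard examples. Defaults here are common choices."""
    def loss(y_true, y_pred):
        y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
        y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
        tp = tf.reduce_sum(y_true * y_pred)
        fn = tf.reduce_sum(y_true * (1.0 - y_pred))
        fp = tf.reduce_sum((1.0 - y_true) * y_pred)
        tversky = (tp + smooth) / (tp + alpha * fn + beta * fp + smooth)
        return tf.pow(1.0 - tversky, gamma)
    return loss

# model.compile(optimizer="adam", loss=focal_tversky_loss())   # hypothetical segmentation model
```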
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180R (2020) https://doi.org/10.1117/12.2548803
From children to the elderly, X-ray imaging of the head, chest, abdomen, spine, limbs, and joints is widely used. Radiation technologists often refer to the image coverage and positioning of past X-ray images of the same patient. Quick and accurate positioning and setting of imaging conditions reduce the burden on patients and technologists. We developed a patient-specific X-ray image reference support system. The system uses a classification table for patient-specific image references based on the Ministry of Health, Labour and Welfare standard JJ1017. The system is also useful for sharing image information between multiple facilities. It was evaluated in facilities for children and people with severe disabilities. For examinations with a single radiograph, 31 patients were evaluated before use of the system and 20 after; for examinations with two radiographs, 15 patients were evaluated before use and 16 after. Two radiation technologists, with 22 and 11 years of experience, evaluated both methods. Performance before and after use was compared in terms of average processing time and ease of use, and the usefulness of the system was confirmed.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180S (2020) https://doi.org/10.1117/12.2548921
This paper proposes a new-generation PACS (Picture Archiving and Communication System) based on artificial-intelligence visualization. It is developed from our GRIDPACS (patent number: US8805890), combined with the IHE XDS-I profile, to implement image communication, storage and display. It also uses a 3D anatomical visualization model to extract multi-source data from PACS/RIS/HIS/EMR and express the patient's disease location, size and severity, which was introduced as the Visual Patient (VP) at a previous SPIE Medical Imaging conference (SPIE MI 2018). It can integrate trained AI imaging-diagnosis models to mark lesions and display disease trends. The system not only retains the original PACS functions, but also realizes man-machine interaction (sharing images and electronic medical record information between radiologist and patient) in a personalized, fast, comprehensive, quantitative and easy-to-understand way. It can be used in various medical institutions, image diagnostic centers, and imaging clouds, to support the healthy development of imaging technology in China.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180T (2020) https://doi.org/10.1117/12.2543521
Deep Learning-based medical imaging research has been actively conducted thanks to its high diagnostic accuracy, comparable to that of expert physicians. However, to apply developed Computer Aided Diagnosis (CAD) systems to various data collected from different hospitals, we should prepare sufficient training data in terms of quality/quantity; unfortunately, especially in Japan, we need to overcome each hospital’s different ethical codes to obtain such multi-institutional data. Therefore, we built a cloud platform for (i) collecting multi-modal large-scale medical images from hospitals through medical societies and (ii) conducting various Deep Learning-based CAD research via collaboration between Japanese medical societies and institutes of informatics. Each hospital first provides the data to the corresponding medical society among 6 societies (e.g., Japan Radiological Society and Japanese Society of Pathology) based on its modality among 8 modalities (e.g., Computed Tomography and Whole Slide Imaging (WSI)); then, each society uploads them, possibly with annotation, to our cloud platform established in November 2017. We have collected over 80 million medical images by December 2019, and over 60 registered researchers have conducted CAD research on the platform. We presented the achieved results at major international conferences and in medical journals; their ongoing clinical applications include remote WSI diagnosis. We plan to further increase the number of images/modalities and apply our research results to a clinical environment.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180U (2020) https://doi.org/10.1117/12.2549582
QuantMed is a platform of software components enabling clinical deep learning, which together form the QuantMed infrastructure. It addresses numerous challenges: systematic generation and accumulation of training data; the validation and utilization of quantitative diagnostic software based on deep learning; and, thereby, support for more reliable, accurate, and efficient clinical decisions. QuantMed provides learning and expert-correction capabilities on large, heterogeneous datasets. The platform supports collaboration to extract medical knowledge from large amounts of clinical data among multiple partner institutions via a two-stage learning approach: the sensitive patient data remains on premises and is first analyzed locally in so-called QuantMed nodes. Support for GPU clusters accelerates the learning process. The knowledge is then accumulated through the QuantMed hub and can be re-distributed afterwards. The resulting knowledge modules (algorithmic solution components that contain trained deep learning networks as well as specifications of input data and output parameters) do not contain any personalized data and are thus safe to share under data protection law. In this way, our modular infrastructure makes it possible to efficiently carry out translational research in the context of deep learning, and to deploy results seamlessly into prototypes or third-party software.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180V (2020) https://doi.org/10.1117/12.2549888
In the current study, we aimed to develop an easy-to-use toolbox for data processing in radiomics analysis. The toolbox was designed to conduct data processing for classification (e.g., classifying benign and malignant tumors) and prognosis (e.g., predicting 3-year survival) analyses in radiomics research. The toolbox is composed of data preprocessing, feature extraction, feature selection, radiomics signature construction, clinical variable selection, combined model construction, and performance evaluation. The radiomics signature is obtained through the procedure of data preprocessing, feature extraction, feature selection, and signature construction. Valuable clinical variables are selected by the Akaike information criterion (AIC) or Bayesian information criterion (BIC). The combined model is constructed by integrating the radiomics signature with the selected clinical variables. The toolbox provides an evaluation part to assess the performance of the combined model. For classification analysis, the toolbox provides classification evaluation metrics including the area under the receiver operating characteristic curve (AUC), classification accuracy (ACC), true and false positive rates (TPR and FPR), and positive and negative predictive values (PPV and NPV), together with other evaluation approaches including the receiver operating characteristic (ROC) curve, calibration curve, and decision curve analysis (DCA). For overall survival analysis, the toolbox provides the C-index and Kaplan-Meier (K-M) curves. For survival analysis at a given time point, the toolbox provides evaluation metrics including the time-dependent AUC (TD-AUC) and time-dependent classification accuracy (TD-ACC), together with other evaluation approaches including the time-dependent ROC (TD-ROC) curve, calibration curve, and DCA.
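Most of the listed metrics are standard; as one worked example for the decision curve analysis (DCA) output, the net benefit at a threshold probability p_t can be computed as in this hedged sketch (variable names hypothetical):

```python
import numpy as np

def net_benefit(y_true, y_prob, thresholds):
    """Decision curve analysis: net benefit of treating patients whose predicted
    probability exceeds each threshold p_t.
    NB(p_t) = TP/N - FP/N * p_t / (1 - p_t)."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    n = len(y_true)
    out = []
    for pt in thresholds:
        treat = y_prob >= pt
        tp = np.sum(treat & (y_true == 1))
        fp = np.sum(treat & (y_true == 0))
        out.append(tp / n - fp / n * pt / (1.0 - pt))
    return np.array(out)

# thresholds = np.linspace(0.01, 0.6, 60)
# nb_model = net_benefit(y, combined_model_probabilities, thresholds)        # hypothetical inputs
# nb_all = net_benefit(y, np.ones_like(y, dtype=float), thresholds)          # "treat all" reference
```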
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180W (2020) https://doi.org/10.1117/12.2542984
This tracking-based semi-automatic software, built on MATLAB 2019b, is a user-friendly tool for drawing bounding boxes around focal liver lesions (FLLs) in contrast-enhanced ultrasound (CEUS) cine-loops. It was developed because deep learning has broad prospects in processing FLL CEUS cine-loops, and extracting large amounts of ground truth for detection, tracking and further data analysis is necessary. Until now, there has been no public software dedicated to successively extracting regions of interest (ROIs) from CEUS cine-loops. The tracking algorithm builds on point-based registration techniques (PBRTs), which are widely used for motion compensation in 2-D FLL CEUS imaging. The software requires the user to draw an initial bounding box for every tracking sequence and allows the user to fine-tune or delete bad tracking results as they appear. The software can show dual bounding boxes when the cine-loop is in double display mode, and all functions, such as bounding box drawing and tracking, can also work directly on the ultrasound image in case the CEUS image is unrecognizable. We also added interactive items and sequence analysis algorithms to make processing more efficient. The software can be downloaded at https://github.com/Yuqi-Zest/CEUS-tracking-software.
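The released tool is MATLAB-based and its PBRT implementation is not described here; an analogous, hypothetical Python/OpenCV sketch of point-based tracking for propagating a bounding box through a CEUS cine-loop (parameters assumed) is:

```python
import cv2
import numpy as np

def track_box(frames, box):
    """Propagate an initial bounding box (x, y, w, h) through a list of 8-bit
    grayscale frames using pyramidal Lucas-Kanade tracking of corner points."""
    x, y, w, h = box
    prev = frames[0]
    pts = cv2.goodFeaturesToTrack(prev[y:y + h, x:x + w], maxCorners=50,
                                  qualityLevel=0.01, minDistance=5)
    pts = pts.reshape(-1, 2) + np.array([x, y], dtype=np.float32)
    boxes = [box]
    for frame in frames[1:]:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts.reshape(-1, 1, 2),
                                                  None, winSize=(21, 21), maxLevel=3)
        ok = status.ravel() == 1
        nxt = nxt.reshape(-1, 2)[ok]
        shift = np.median(nxt - pts[ok], axis=0)   # robust estimate of global lesion motion
        x, y = int(round(x + shift[0])), int(round(y + shift[1]))
        boxes.append((x, y, w, h))
        prev, pts = frame, nxt
    return boxes
```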
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180X (2020) https://doi.org/10.1117/12.2567083
Digital Imaging and Communications in Medicine (DICOM)-based picture archiving and communication systems (PACS) primarily collect data that health-care professionals acquire in hospitals for diagnostics or therapeutics, but integration of data from continuous health monitoring is not yet included. Smart wearables or smart clothes, but also smart environments such as apartments, homes, or vehicles, collect such data. While cars already generate automatic alerts, for example through the eCall system in Europe, smart homes and smart wearables will generate such alerts in the near future, too. However, the response to automatic alerts is still operator-based. There is no technical link between the information technology (IT) systems operated by the rescue service, the emergency departments, and the hospitals. We suggest an international standard accident number (ISAN) that is created by the alerting system and supports communication between the systems of the rescue chain as well as secure data sharing. In this paper, we draw a scenario in which we enhance smart vehicles and smart homes with health-related unobtrusive sensing devices for vital signs and simultaneously capture environmental, behavioral, and physiological parameters from the shell-like private environment that cars and homes provide their occupants. Via the ISAN, the data is communicated safely and securely to the hospital. We further analyze recent DICOM extensions for their suitability to capture such data in the hospital PACS. The Vital Sign Template (TID 3510) extends DICOM Structured Reporting but captures such measures only at a particular point in time rather than continuously. DICOM Waveform Data (DICOM 3.0 Supplement 30, added in 2000) was designed particularly for ECG data. It can store continuous recordings of up to five sequence items of up to 13 channels each. However, it does not accommodate information describing the capturing device, its position, or other semantic information. In conclusion, vital sign monitoring cannot be sufficiently handled with DICOM and its extensions as they stand today.
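A hedged sketch of how one might inspect continuous vital-sign data stored as DICOM Waveform Data (Supplement 30) with pydicom. The file path is a placeholder; the attribute keywords follow the DICOM standard, and which of them are present depends on what the creating system actually stored.

```python
import pydicom

ds = pydicom.dcmread("vital_signs_waveform.dcm")   # hypothetical file

for i, wf in enumerate(ds.get("WaveformSequence", [])):
    print(f"multiplex group {i}:")
    print("  channels:", wf.NumberOfWaveformChannels)   # limited to 13 per group
    print("  samples :", wf.NumberOfWaveformSamples)
    print("  fs [Hz] :", wf.SamplingFrequency)
    for ch in wf.ChannelDefinitionSequence:
        src = ch.ChannelSourceSequence[0]
        print("   source:", getattr(src, "CodeMeaning", "unknown"))
```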
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113180Z (2020) https://doi.org/10.1117/12.2551372
Breast magnetic resonance imaging (MRI) plays an important role in high-risk breast cancer screening, clinical problem-solving, and imaging-based outcome prediction. Breast tumor segmentation in MRI is an essential step for quantitative radiomics analysis, where automated and accurate tumor segmentation is needed but very challenging. Automated breast tumor segmentation methods have been proposed and can achieve promising results. However, these methods still need a pre-defined region of interest (ROI) before performing segmentation, which prevents them from running fully automatically. In this paper, we investigated an automated localization and segmentation method for breast tumors in breast dynamic contrast-enhanced MRI (DCE-MRI) scans. The proposed method takes advantage of a kinetic prior and deep learning for automatic tumor localization and segmentation. We implemented our method and evaluated its performance on a dataset consisting of 74 breast MR images. We quantitatively evaluated the proposed method by comparing the segmentation with the manual annotation from an expert radiologist. Experimental results showed that the automated breast tumor segmentation method exhibits promising performance, with an average Dice coefficient of 0.89±0.06.
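A minimal sketch of the Dice similarity coefficient used here to compare automated and manual tumor masks; the array shapes and the 0.5 thresholding step are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """pred, truth: boolean (or 0/1) arrays of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Example on a toy 3-D volume: a network probability map thresholded at 0.5
# against a manual annotation.
prob_map = np.random.rand(4, 64, 64)
manual = np.zeros((4, 64, 64), dtype=bool)
manual[:, 20:40, 20:40] = True
print("Dice:", dice_coefficient(prob_map > 0.5, manual))
```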
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131810 (2020) https://doi.org/10.1117/12.2549267
For automated evaluation of changes on the uterine cervix, the external os (here simply os) is a primary anatomical landmark for locating the transformation zone (T-zone). Abnormal tissue changes typically occur at or within the T-zone, which makes localizing the os on cervical images of great interest for detecting and classifying changes. However, there has been very limited work reported on segmentation of the os region in digitized cervix images, and to our knowledge no work has been done on sets of cervix images acquired from independent data collections exhibiting variabilities due to collection devices, environments, and procedures. In this paper, we present a processing pipeline that consists of deep learning-based os region segmentation over such multiple datasets, followed by comprehensive evaluation of the performance. First, we evaluate two state-of-the-art deep learning-based localization and classification algorithms, viz., Mask R-CNN and MaskX R-CNN, on multiple datasets. Second, in consideration of the os being small and irregularly shaped, and of the variabilities in image quality, we use performance measurements beyond the commonly used Dice/IoU scores. We obtain higher performance on a larger dataset than the work reported in the literature, achieving a highest detection rate of 99.1% and an average minimal distance of 1.02 pixels. Furthermore, the network models obtained in this study show potential for use in quality control of data acquisition.
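A hedged sketch of one way to compute a boundary minimal-distance metric that goes beyond Dice/IoU for small, irregular regions such as the os; the exact distance definition used in the paper may differ, and the masks below are synthetic.

```python
import numpy as np
from scipy import ndimage

def mean_min_boundary_distance(pred, truth):
    """Mean distance (in pixels) from each predicted boundary pixel to the
    nearest ground-truth boundary pixel."""
    def boundary(mask):
        return mask & ~ndimage.binary_erosion(mask)
    pred_b, truth_b = boundary(pred.astype(bool)), boundary(truth.astype(bool))
    # Distance transform of the complement of the ground-truth boundary gives,
    # for every pixel, the distance to the nearest ground-truth boundary pixel.
    dist_to_truth = ndimage.distance_transform_edt(~truth_b)
    return dist_to_truth[pred_b].mean() if pred_b.any() else np.inf

pred = np.zeros((128, 128), dtype=bool); pred[60:70, 58:72] = True
truth = np.zeros((128, 128), dtype=bool); truth[59:71, 60:70] = True
print("mean minimal boundary distance:", mean_min_boundary_distance(pred, truth))
```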
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131811 (2020) https://doi.org/10.1117/12.2549547
Liquid-based cytology (LBC) is an effective technique for cervical cancer screening through the Papanicolaou (Pap) test. Currently, most LBC screening is done by cytologists, which is very time-consuming and expensive. Reliable automated methods are needed to assist cytologists in quickly locating abnormal cells. The state of the art in cell classification assumes that cells have already been segmented; however, clustered cells are very challenging to segment. We noticed that, in contrast to cells, nuclei are relatively easy to segment, and according to The Bethesda System (TBS), the gold standard for cervical cytology reporting, cervical cytology abnormalities are often closely correlated with nucleus abnormalities. We therefore propose a two-step algorithm that avoids cell segmentation: we train a Mask R-CNN model to segment nuclei, and then classify cell patches centered at the segmented nuclei, roughly the size of a healthy cell. Evaluation with a dataset of 25 high-resolution NDPI whole-slide images shows that nuclei segmentation followed by cell patch classification is a promising approach for building practically useful automated Pap test applications.
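A minimal sketch of the second step this abstract describes: cropping cell-sized patches centered on segmented nuclei so a classifier can be run without cell segmentation. The patch size and the source of the centroids are assumptions for illustration.

```python
import numpy as np

def extract_patches(image, nucleus_centroids, patch_size=128):
    """image: (H, W, 3) array; nucleus_centroids: iterable of (row, col)."""
    half = patch_size // 2
    h, w = image.shape[:2]
    patches = []
    for r, c in nucleus_centroids:
        r, c = int(round(r)), int(round(c))
        # Clamp so the patch stays inside the slide tile.
        r0 = min(max(r - half, 0), h - patch_size)
        c0 = min(max(c - half, 0), w - patch_size)
        patches.append(image[r0:r0 + patch_size, c0:c0 + patch_size])
    return np.stack(patches) if patches else np.empty((0, patch_size, patch_size, 3))

tile = np.random.randint(0, 255, (1024, 1024, 3), dtype=np.uint8)
centroids = [(100.3, 220.8), (512.0, 40.0)]     # e.g. from a nucleus segmentation model
print(extract_patches(tile, centroids).shape)   # (2, 128, 128, 3)
```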
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131812 (2020) https://doi.org/10.1117/12.2550368
Automatic segmentation of the coronary artery in coronary computed tomographic angiography (CCTA) is important for clinicians evaluating patients with coronary artery disease (CAD). Traditional visual interpretation of coronary artery stenosis is observer-dependent and time-consuming. In this work, we proposed a 3D attention fully convolutional network (FCN) method to automatically segment the coronary artery in CCTA. The FCN performs end-to-end mapping from the CCTA image to a binary segmentation of the coronary artery. A deep attention strategy was integrated into the FCN model to highlight the informative semantic features extracted from the CCTA image and thus enhance segmentation accuracy. The proposed method was tested on CCTA data from 30 patients. The Dice similarity coefficient (DSC), precision, and recall between manually delineated and automatically segmented coronary artery contours were used to quantify segmentation accuracy. The DSC, precision, and recall were 83%±4%, 84%±4%, and 87%±3%, respectively, demonstrating the segmentation accuracy of the proposed method.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131813 (2020) https://doi.org/10.1117/12.2550374
Pancreatic cancer continues to show poor prognosis, with a 5-year overall survival (OS) of 9% [1]. Stereotactic body radiotherapy (SBRT) has been increasingly adopted as a treatment option for locally advanced pancreatic cancer (LAPC). Accurate and robust segmentation of the abdominal organs on CT is essential to minimize excessive doses to organs-at-risk (OARs) such as the stomach and duodenum; however, this task is tedious and time-consuming. In this work, we aimed to develop a 3D deep attention U-Net based network to automatically segment the pancreatic SBRT OARs, which can significantly expedite the treatment planning process while maintaining segmentation accuracy comparable to contours manually delineated by experienced physicians. Thirty patients previously treated with pancreatic SBRT were included. Their CT images and OAR contours, including small bowel, large bowel, liver, stomach, spinal cord, left kidney, right kidney, and duodenum, were used as the training dataset. Attention gates (AGs) were incorporated into the U-Net based network to effectively differentiate the organ boundaries. The mean Dice similarity coefficients (DSC) for large bowel, small bowel, duodenum, left kidney, right kidney, liver, spinal cord, and stomach were 0.89±0.05, 0.86±0.04, 0.79±0.04, 0.86±0.04, 0.87±0.06, 0.86±0.02, 0.75±0.04, and 0.88±0.06, respectively.
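A hedged PyTorch sketch of the additive attention gate (AG) idea referenced in this abstract (in the spirit of the Attention U-Net). The channel sizes are illustrative, the gate and skip features are assumed to be at the same resolution, and this is not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Sequential(nn.Conv3d(gate_ch, inter_ch, 1), nn.BatchNorm3d(inter_ch))
        self.w_x = nn.Sequential(nn.Conv3d(skip_ch, inter_ch, 1), nn.BatchNorm3d(inter_ch))
        self.psi = nn.Sequential(nn.Conv3d(inter_ch, 1, 1), nn.BatchNorm3d(1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, gate, skip):
        # gate: coarse decoder feature; skip: encoder feature at the same resolution.
        att = self.psi(self.relu(self.w_g(gate) + self.w_x(skip)))
        return skip * att   # suppress irrelevant regions, emphasize organ boundaries

g = torch.randn(1, 64, 16, 32, 32)   # decoder features
x = torch.randn(1, 32, 16, 32, 32)   # skip-connection features
gated = AttentionGate3D(64, 32, 16)(g, x)
print(gated.shape)                    # torch.Size([1, 32, 16, 32, 32])
```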
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131814 (2020) https://doi.org/10.1117/12.2550146
In the last few years, deep learning (DL) has shown superior performance in different modalities of biomedical image analysis. Several DL architectures have been proposed for classification, segmentation, and detection tasks in medical imaging and computational pathology. In this paper, we propose a new DL architecture, the NABLA-N network (∇N-Net), with better feature fusion techniques in the decoding units for dermoscopic image segmentation tasks. The ∇N-Net has several advantages for segmentation tasks. First, the model ensures better feature representation for semantic segmentation through a combination of low- to high-level feature maps. Second, the network shows better quantitative and qualitative results with the same or fewer network parameters compared to other methods. In addition, the Inception Recurrent Residual Convolutional Neural Network (IRRCNN) model is used for skin cancer classification. The proposed ∇N-Net and IRRCNN models are evaluated for skin cancer segmentation and classification on the benchmark datasets from the International Skin Imaging Collaboration 2018 (ISIC-2018). The experimental results show superior performance on segmentation tasks compared to the Recurrent Residual U-Net (R2U-Net). The classification model shows around 87% testing accuracy for dermoscopic skin cancer classification on the ISIC-2018 dataset.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131815 (2020) https://doi.org/10.1117/12.2548969
The advent of machine learning (ML) is proving extremely beneficial in many healthcare applications. In pediatric oncology, retrospective studies that investigate the relationship between treatment and late adverse effects still rely on simple heuristics. To capture the effects of radiation treatment, treatment plans are typically simulated on virtual surrogates of patient anatomy called phantoms. Currently, phantoms are built to represent categories of patients based on reasonable yet simple criteria, which often results in phantoms that are too generic to accurately represent individual anatomies. We present a novel approach that combines imaging data and ML to build individualized phantoms automatically. We design a pipeline that, given features of patients treated in the pre-3D planning era when only 2D radiographs were available, as well as a database of 3D computed tomography (CT) imaging with organ segmentations, uses ML to predict how to assemble a patient-specific phantom. Using 60 abdominal CTs of pediatric patients between 2 and 6 years of age, we find that our approach delivers significantly more representative phantoms than current phantom-building criteria, in terms of the shape and location of two considered organs (liver and spleen) and the shape of the abdomen. Furthermore, as interpretability is often central to trusting ML models in medical contexts, among other ML algorithms we consider the Gene-pool Optimal Mixing Evolutionary Algorithm for Genetic Programming (GP-GOMEA), which learns readable mathematical expression models. We find that the readability of its output does not compromise prediction performance, as GP-GOMEA delivered the best-performing models.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131816 (2020) https://doi.org/10.1117/12.2549130
The purpose of this study is to assess the feasibility of developing a new case-based computer-aided diagnosis (CAD) scheme for mammograms, based on a tree-based analysis of SSIM characteristics of matched bilateral local areas of the left and right breasts, to predict the likelihood of a case being malignant. We assembled a dataset of screening mammograms acquired from 1000 patients. Among them, 500 cases were positive with cancer detected and verified, while the other 500 cases had benign masses. Both CC and MLO views of the mammograms were used for feature extraction in this study. A CAD scheme was applied to preprocess the bilateral mammograms of the left and right breasts, generate image maps in the spatial domain, compute SSIM-based image features between the matched bilateral mammograms, and apply a support vector machine model to classify between malignant and benign cases. For performance evaluation, the CAD scheme was trained and tested using a 10-fold cross-validation method, and the area under the receiver operating characteristic curve (AUC) was computed as a performance index. Using the pool of 12 extracted SSIM features, the CAD scheme yielded a performance level of AUC = 0.84±0.016, which is significantly higher than using each individual SSIM feature for classification (p < 0.05), and an odds ratio of 19.0 with a 95% confidence interval of [15.3, 29.8]. Thus, this study supports the feasibility of applying an innovative method to develop a new case-based CAD scheme without lesion segmentation and demonstrates the higher performance of the new CAD scheme in classifying between malignant and benign mammographic cases.
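A minimal sketch of the kind of SSIM-based feature plus SVM classification with 10-fold cross-validation the abstract describes, using scikit-image and scikit-learn on synthetic "matched bilateral regions"; the actual 12-feature design is not reproduced here.

```python
import numpy as np
from skimage.metrics import structural_similarity
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def bilateral_ssim_feature(left_region, right_region):
    # SSIM between matched left/right regions; malignant cases tend to be less symmetric.
    return structural_similarity(left_region, right_region, data_range=1.0)

X, y = [], []
for label in (0, 1):                     # 0 = benign, 1 = malignant (synthetic)
    for _ in range(50):
        left = rng.random((64, 64))
        right = np.clip(left + (0.1 + 0.3 * label) * rng.random((64, 64)), 0, 1)
        X.append([bilateral_ssim_feature(left, right)])
        y.append(label)

scores = cross_val_score(SVC(kernel="rbf"), np.array(X), np.array(y),
                         cv=10, scoring="roc_auc")
print("10-fold AUC:", scores.mean())
```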
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131817 (2020) https://doi.org/10.1117/12.2549306
Schizophrenia (SZ) is a chronic and severe mental disorder that affects how a person thinks, feels, and behaves. It is widely acknowledged that SZ is related to disrupted brain connectivity; however, the underlying neuromechanism is not fully understood. In the current literature, various methods have been proposed to estimate the association networks of the brain using functional magnetic resonance imaging (fMRI). Approaches that characterize statistical associations are a good starting point for estimating brain network interactions, and with in-depth research it is natural to shift to causal interactions. Therefore, we use fMRI images from the Mind Clinical Imaging Consortium (MCIC) to study the causal brain networks of SZ patients. Existing methods have focused on estimating a single directed graphical model but have ignored the similarities shared by related classes. We thus design a two-step Bayesian network analysis for this case-control study, in which the brain networks of the two groups are assumed to be distinct but related. We reveal that, compared to healthy people, SZ patients have a diminished ability to combine specialized information from distributed brain regions. In particular, we identified six hub brain regions in the aberrant connectivity network, located in the frontal-parietal lobe (supplementary motor area, middle frontal gyrus, inferior parietal gyrus), the insula, and the putamen of the left hemisphere.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131818 (2020) https://doi.org/10.1117/12.2549462
We have developed a magnetic resonance (MR) image-based radiomic biopsy approach for estimating the malignancy grade of parotid gland cancer (PGC). Preoperative T1- and T2-weighted MR images of 39 PGC patients, 20 with high and 19 with intermediate/low malignancy grades, were employed. High versus intermediate/low malignancy grade was estimated using two MR-radiomic biopsy approaches, i.e., 972 hand-crafted features and transfer learning of five pre-trained deep learning (DL) architectures (AlexNet, GoogLeNet, VGG-16, ResNet-101, DenseNet-201). The 39 patients were divided into 70% for training and 30% for testing. The hand-crafted features were extracted from cancer regions in the T1- and T2-weighted MR images. Three features were selected as a radiomic signature using the least absolute shrinkage and selection operator (LASSO), and their coefficients were used to construct the radiomic score (Rad-score). The two-grade malignancy was estimated using an optimal cut-off value of the Rad-score. For the DL approaches, the last three layers of each architecture were replaced with three new layers for the estimation task; the architectures were fine-tuned on the training datasets and evaluated on the test datasets. The performances of the MR-radiomic biopsy approaches were assessed using the accuracy and the area under the receiver operating characteristic curve (AUC). VGG-16 demonstrated the best performance (accuracy = 85.4%, AUC = 0.906), whereas the other approaches showed worse performances (Rad-score: 83.3%, 0.830; AlexNet: 84.4%, 0.915; GoogLeNet: 84.9%, 0.884; ResNet-101: 84.9%, 0.918; DenseNet-201: 84.4%, 0.869). The VGG-16-based MR-radiomic biopsy could be feasible for malignancy grade estimation in PGC.
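A hedged sketch of the LASSO-based Rad-score construction this abstract outlines: select a few features with an L1-penalized logistic model and form the score from their coefficients. The feature matrix below is synthetic and the regularization strength is an arbitrary choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(39, 972))                 # 39 patients x 972 hand-crafted features
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=39) > 0).astype(int)

Xs = StandardScaler().fit_transform(X)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xs, y)

selected = np.flatnonzero(lasso.coef_[0])
rad_score = Xs[:, selected] @ lasso.coef_[0, selected] + lasso.intercept_[0]
cutoff = np.median(rad_score)                  # one simple choice of cut-off
pred_grade = (rad_score > cutoff).astype(int)
print("selected features:", selected[:10], "AUC:", roc_auc_score(y, rad_score))
```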
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 1131819 (2020) https://doi.org/10.1117/12.2549920
Early detection of glaucoma is important to slow the progression of the disease and to prevent total vision loss. When the retinal nerve is damaged, the thickness of the nerve fiber layer decreases; however, it is difficult to detect such subtle change in early disease stages on retinal fundus photographs. Although optical coherence tomography (OCT) is generally more sensitive and can evaluate the thicknesses of the retinal layers, it is performed as a diagnostic exam rather than a screening exam. Retinal fundus photographs are frequently acquired for diagnosis and follow-up at ophthalmology visits and for general health checkups, so it would be useful if suspicious regions could be detected on them. The purpose of this study is to estimate the regions of nerve defects on retinal photographs using a deep learning model trained with OCT data. The network is based on a fully convolutional network. The region including the optic disc is extracted from the retinal photograph and used as the input data. The OCT image of the same patient is registered to the retinal image based on the blood vessel networks, and the deviation map specifying the regions with decreased nerve layer thickness is used as the teacher data. The proposed method achieved 76% accuracy in assessing the defect and non-defect regions. It could be useful as a screening tool and for visual assistance in glaucoma diagnosis.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113181B (2020) https://doi.org/10.1117/12.2551374
Women who are diagnosed with breast cancer are referred to neoadjuvant chemotherapy treatment (NACT) before surgery when treatment guidelines indicate it. Achieving complete response to this treatment is correlated with improved overall survival compared with a partial or no response. In this paper, we explore multimodal clinical and radiomics metrics, including quantitative features from medical imaging, to assess complete response to NACT in advance. Our dataset consists of a cohort from Institut Curie with 1383 patients, of which 528 patients have mammogram imaging. We analyze the data via image processing, machine learning, and deep learning algorithms to increase the set of discriminating features and create effective models. Our results show the ability to classify the data in this problem setting using the clinical data. We then show the possible improvement achievable by combining clinical and mammogram data, measured by AUC, sensitivity, and specificity. For our cohort, the overall model achieves a sensitivity of 0.954 while keeping a specificity of 0.222. This means that almost all patients who achieved pathologic complete response would also be correctly classified by our model. At the same time, for 22% of the patients, the model could correctly predict in advance that they will not achieve pathologic complete response, enabling them to reassess this treatment in advance. We also describe our system architecture, which includes the Biomedical Framework, a platform to create configurable, reusable pipelines and expose them as micro-services on-premise or in the cloud.
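A minimal sketch of how an operating point such as the reported sensitivity 0.954 / specificity 0.222 can be read off a model's score distribution by sweeping the decision threshold; the model scores and labels below are synthetic stand-ins, not the cohort data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 500)                  # 1 = pathologic complete response
scores = y_true * rng.normal(0.7, 0.5, 500) + (1 - y_true) * rng.normal(0.3, 0.5, 500)

def sens_spec(threshold):
    pred = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, pred).ravel()
    return tp / (tp + fn), tn / (tn + fp)

# Pick the highest threshold whose sensitivity is still >= 0.95.
for thr in np.linspace(scores.min(), scores.max(), 200):
    sens, spec = sens_spec(thr)
    if sens < 0.95:
        break
    chosen = (thr, sens, spec)
print("threshold %.3f -> sensitivity %.3f, specificity %.3f" % chosen)
```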
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113181C (2020) https://doi.org/10.1117/12.2551389
Computer-aided diagnosis plays an important role in clinical image diagnosis. Current clinical image classification tasks usually focus on binary classification, which requires collecting samples for both the positive and negative classes in order to train a binary classifier. However, in many clinical scenarios there may be many more samples in one class than in the other, which results in the problem of data imbalance. Data imbalance is a severe problem that can substantially influence the performance of binary-class machine learning models. To address this issue, one-class classification, which focuses on learning features from the samples of one given class, has been proposed. In this work, we assess the one-class support vector machine (OCSVM) on classification tasks for two highly imbalanced datasets, namely, space-occupying kidney lesion data (including renal cell carcinoma and benign lesions) and breast cancer distant metastasis/non-metastasis imaging data. Experimental results show that the OCSVM exhibits promising performance compared to binary-class and other one-class classification methods.
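A minimal sketch of one-class classification with scikit-learn's OneClassSVM, as assessed in this abstract for highly imbalanced data: the model is fit on the majority class only and flags the minority class as outliers. The data and class sizes are synthetic.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
majority = rng.normal(0.0, 1.0, size=(500, 10))   # e.g. non-metastasis cases
minority = rng.normal(2.5, 1.0, size=(20, 10))    # e.g. metastasis cases

ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(majority)

# predict() returns +1 for inliers (majority-like) and -1 for outliers.
print("inlier rate on majority:", (ocsvm.predict(majority) == 1).mean())
print("outlier rate on minority:", (ocsvm.predict(minority) == -1).mean())
```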
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113181E (2020) https://doi.org/10.1117/12.2549234
CT colonography (CTC) uses abdominal CT scans to examine the colon for cancers and polyps. To visualize the complete colon without obstruction by residual materials inside it, an orally administered contrast agent is used to opacify residual fecal materials on CT images, followed by virtual cleansing of the opacified materials from the images. However, current electronic cleansing (EC) methods can introduce large numbers of residual image artifacts that complicate the interpretation of the virtually cleansed CTC images. Such artifacts can be resolved by use of dual-energy CTC (DE-CTC), which provides more information about the observed materials than does conventional single-energy CTC (SE-CTC). We generalized a 3D generative adversarial network (3D-GAN) model into a self-supervised EC scheme for DE-CTC. The 3D-GAN is used to transform the acquired DE-CTC volumes into a representative cleansed CTC volume by use of an iterative self-supervised method that adapts the scheme to the unique conditions of each case. Our preliminary evaluation with an anthropomorphic phantom indicated that the use of the 3D-GAN EC scheme with DE-CTC features and the self-supervised scheme generates EC images of higher quality than those obtained with SE-CTC or conventional training samples only.
Proceedings Volume Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications, 113181F (2020) https://doi.org/10.1117/12.2551369
We developed a novel survival prediction model for images, called pix2surv, based on a conditional generative adversarial network (cGAN), and evaluated its performance on chest CT images of patients with idiopathic pulmonary fibrosis (IPF). The architecture of the pix2surv model has a time-generator network, which consists of an encoding convolutional network and a fully connected prediction network, and a discriminator network. The fully connected prediction network is trained to generate survival-time images from the chest CT images of each patient. The discriminator network is a patch-based convolutional network trained to differentiate the "fake pair" of a chest CT image and a generated survival-time image from the "true pair" of an input CT image and the observed survival-time image of a patient. For evaluation, we retrospectively collected 75 IPF patients with high-resolution chest CT and pulmonary function tests. The survival predictions of the pix2surv model on these patients were compared with those of an established clinical prognostic biomarker known as the gender, age, and physiology (GAP) index by use of a two-sided t-test with bootstrapping. The concordance index (C-index) and relative absolute error (RAE) were used as measures of prediction performance. Preliminary results showed that the survival prediction by the pix2surv model yielded a more than 15% higher C-index and a more than 10% lower RAE than the GAP index, and the improvement was statistically significant (P < 0.0001). Also, the separation between the survival curves of the low- and high-risk groups was larger with pix2surv than with the GAP index. These results show that the pix2surv model outperforms the GAP index in prediction of survival time and risk stratification of patients with IPF, indicating that the pix2surv model can be an effective predictor of the overall survival of patients with IPF.
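A hedged sketch of the concordance index (C-index) used above to evaluate survival prediction, computed here with the lifelines package on toy data; the pix2surv model itself is not reproduced, and the follow-up times and predictions are made up.

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(4)
true_time = rng.exponential(scale=24, size=75)               # months of follow-up
event_observed = rng.integers(0, 2, size=75)                 # 1 = death observed
predicted_time = true_time * rng.normal(1.0, 0.3, size=75)   # a model's predicted survival times

# concordance_index expects predicted scores that are larger for longer survival.
cindex = concordance_index(true_time, predicted_time, event_observed)
print("C-index:", round(cindex, 3))
```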