Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013801 (2017) https://doi.org/10.1117/12.2277962
This PDF file contains the front matter associated with SPIE Proceedings Volume 10138, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013803 (2017) https://doi.org/10.1117/12.2254029
Thyroid segmentation in tracked 2D ultrasound (US) using active contours has low segmentation accuracy, mainly because smaller structures cannot be efficiently recognized and segmented. To address this issue, we propose a new similarity indicator whose main objective is to tell the active contour algorithm into which regions the contour should continue to expand and where it should stop. First, a preprocessing step attenuates the noise present in the US image and increases its contrast, using histogram equalization and a median filter. In the second step, active contours are used to segment the thyroid in each 2D image of the dataset. After this first segmentation, two similarity indicators (the ratio of mean squared errors (MSE) and the correlation between histograms) are computed at each contour point of the initially segmented thyroid, between rectangles located inside and outside the obtained contour. A threshold on a final indicator, computed from the other two, identifies probable regions for further segmentation with active contours. This process is repeated until no new segmentation region is identified. Finally, all segmented thyroid images are passed through a 3D reconstruction algorithm to obtain a segmented 3D thyroid volume. The results showed that including similarity indicators based on histogram correlation and MSE between regions inside and outside the contour can help to segment difficult areas that active contours alone have trouble segmenting.
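A minimal sketch (not the authors' code) of the per-contour-point indicator described above, for an 8-bit US image: MSE and histogram correlation are computed between an inner and an outer rectangle, then fused. The patch size, MSE normalization, fusion rule, and threshold are all illustrative assumptions.

```python
import numpy as np

def patch(img, cy, cx, half=8):
    """Extract a (2*half)x(2*half) rectangle centred on (cy, cx)."""
    return img[cy - half:cy + half, cx - half:cx + half].astype(np.float64)

def similarity_indicator(img, inside_pt, outside_pt, bins=32):
    """Combine MSE and histogram correlation between an inner and an
    outer rectangle at one contour point."""
    p_in, p_out = patch(img, *inside_pt), patch(img, *outside_pt)
    mse = np.mean((p_in - p_out) ** 2) / 255.0 ** 2   # normalised to [0, 1]
    h_in, _ = np.histogram(p_in, bins=bins, range=(0, 255), density=True)
    h_out, _ = np.histogram(p_out, bins=bins, range=(0, 255), density=True)
    corr = np.corrcoef(h_in, h_out)[0, 1]
    # Assumed fusion rule: similar texture (high corr, low MSE) -> keep expanding.
    return corr / (1.0 + mse)

def should_expand(img, inside_pt, outside_pt, threshold=0.5):
    return similarity_indicator(img, inside_pt, outside_pt) > threshold
```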
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013804 (2017) https://doi.org/10.1117/12.2247085
Advancements in 3D scanning and volumetric imaging methods have motivated researchers to tackle new challenges related to storing, retrieving, and comparing 3D models, especially in the medical domain. Comparing natural rigid shapes and detecting subtle changes in 3D models of brain structures is of great importance. Precision in capturing surface details and insensitivity to shape orientation are highly desirable properties of good shape descriptors. In this paper, we propose a new method, Spherical Harmonics Distance (SHD), which leverages the power of spherical harmonics to provide a more accurate representation of surface details. At the same time, the proposed method incorporates the features of a shape distribution method (D2) and inherits its insensitivity to shape orientation. Comparing SHD to a spherical-harmonics-based method (SPHARM) shows that the performance of the proposed method is less sensitive to rotation, and comparing SHD to D2 shows that the proposed method is more accurate in detecting subtle changes. The performance of the proposed method is verified by calculating the Fisher measure (FM) of the extracted feature vectors: the FM of the vectors generated by SHD is on average 27 times higher than that of D2. Our preliminary results show that SHD successfully combines desired features from two different methods and paves the way towards better detection of subtle dissimilarities among natural rigid shapes (e.g., structures of interest in the human brain). Detecting these subtle changes can be instrumental in more accurate diagnosis, prognosis, and treatment planning.
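For orientation, a hedged sketch of the two ingredients the abstract names: the D2 shape distribution (a histogram of distances between random surface point pairs, rotation-invariant by construction) and the Fisher measure used for evaluation. Sample and bin counts are illustrative choices, not the paper's.

```python
import numpy as np

def d2_descriptor(vertices, n_pairs=100_000, bins=64, rng=None):
    """D2 shape distribution: histogram of pairwise point distances."""
    rng = rng or np.random.default_rng(0)
    i = rng.integers(0, len(vertices), size=n_pairs)
    j = rng.integers(0, len(vertices), size=n_pairs)
    d = np.linalg.norm(vertices[i] - vertices[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0, d.max()), density=True)
    return hist

def fisher_measure(f_class_a, f_class_b):
    """Fisher measure of one feature across two classes:
    (mean difference)^2 / (sum of variances)."""
    m1, m2 = np.mean(f_class_a), np.mean(f_class_b)
    v1, v2 = np.var(f_class_a), np.var(f_class_b)
    return (m1 - m2) ** 2 / (v1 + v2)
```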
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013805 (2017) https://doi.org/10.1117/12.2254094
Aneurysmal subarachnoid hemorrhage (aSAH) is a form of hemorrhagic stroke that affects middle-aged individuals and is associated with significant morbidity and/or mortality, especially in those presenting with higher clinical and radiological grades at the time of admission. Previous studies suggested that the blood extravasated after aneurysmal rupture is a potential prognostic factor, but all such studies used qualitative scales to predict prognosis. The purpose of this study is to develop and test a new interactive computer-aided detection (CAD) tool to detect, segment, and quantify brain hemorrhage and ventricular cerebrospinal fluid on non-contrast brain CT images. First, the CAD scheme segments the skull using a multilayer region-growing algorithm with adaptively adjusted thresholds. Second, it assigns pixels inside the segmented brain region to one of three classes: normal brain tissue, blood, and fluid. Third, to avoid a "black-box" approach and to increase quantification accuracy on CT images with large noise variation across cases, a graphical user interface (GUI) was implemented that allows users to visually examine segmentation results. If a user wants to correct any errors (e.g., deleting clinically irrelevant blood or fluid regions, or filling holes inside relevant ones), he/she can manually define the region and select a corresponding correction function; the CAD tool then automatically performs the correction and updates the computed data. The new CAD tool is now being used in clinical and research settings to estimate various quantitative radiological parameters/markers of the radiological severity of aSAH at presentation, correlate the estimates with various homeostatic/metabolic derangements, and predict clinical outcome.
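An illustrative sketch of the three-class pixel assignment step. The HU ranges below are typical textbook values for acute blood and CSF, not the authors' thresholds (their scheme adapts thresholds per case).

```python
import numpy as np

def classify_brain_pixels(hu, brain_mask):
    """Label pixels as 0 = normal tissue, 1 = blood, 2 = fluid (CSF)."""
    labels = np.zeros_like(hu, dtype=np.uint8)
    blood = (hu > 50) & (hu < 90)     # acute blood is hyperdense (assumed range)
    fluid = (hu > -5) & (hu < 15)     # CSF is near water density (assumed range)
    labels[brain_mask & blood] = 1
    labels[brain_mask & fluid] = 2
    return labels

def volumes_ml(labels, voxel_volume_mm3):
    """Quantify blood and fluid volumes in millilitres."""
    return {name: (labels == v).sum() * voxel_volume_mm3 / 1000.0
            for name, v in (("blood", 1), ("fluid", 2))}
```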
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013806 (2017) https://doi.org/10.1117/12.2254174
Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head (ONH) pathology such as glaucoma; a reliable OD segmentation technique is therefore necessary for automatic screening of ONH abnormalities. The main contribution of this paper is a novel OD segmentation algorithm based on applying a level set method to a localized OD image. To prevent blood vessels from interfering with the level set process, an inpainting technique is applied. For low-quality images, a double level set is applied, in which the first level set serves as a localization of the OD. The algorithm is evaluated using a new retinal fundus image dataset called RIGA (Retinal Images for Glaucoma Analysis). Five hundred and fifty images are used to test the algorithm's accuracy as well as its agreement with manual markings by six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid is 83.9%, and the best agreement between the algorithm and the manual markings is observed in 379 images.
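A minimal sketch of the two-stage idea, assuming an 8-bit grayscale fundus channel and a precomputed vessel mask: inpaint the vessels, then evolve a level set from a circular initialization around the localized disc. Morphological Chan-Vese stands in for the paper's level set; all parameters are illustrative.

```python
import cv2
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_od(gray, vessel_mask, od_center, od_radius=60):
    # Inpaint vessels so they do not attract the evolving contour.
    clean = cv2.inpaint(gray, vessel_mask.astype(np.uint8), 5, cv2.INPAINT_TELEA)

    # Circular initialisation around the localized disc centre.
    yy, xx = np.mgrid[:gray.shape[0], :gray.shape[1]]
    init = ((yy - od_center[0]) ** 2 + (xx - od_center[1]) ** 2
            < od_radius ** 2).astype(np.int8)

    # Morphological Chan-Vese as a stand-in for the paper's level set.
    return morphological_chan_vese(clean.astype(float), 150, init_level_set=init)
```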
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013807 (2017) https://doi.org/10.1117/12.2254368
We present a technique to annotate multiple organs shown in 2-D abdominal/pelvic CT images using content-based image retrieval (CBIR). This annotation task is motivated by our research interests in visual question answering (VQA): we aim to apply results from this effort in Open-i, a multimodal biomedical search engine developed by the National Library of Medicine (NLM). Understanding the visual content of biomedical images is a necessary step for VQA. Although sufficient annotation information about an image may be available in related textual metadata, not all of it may be useful as descriptive tags, particularly for the anatomy shown in the image. In this paper, we develop and evaluate a multi-label image annotation method using CBIR. We evaluate our method on two 2-D CT image datasets we generated from 3-D volumetric data obtained from a multi-organ segmentation challenge hosted at MICCAI 2015. Shape and spatial layout information is used to encode the visual characteristics of the anatomy. We adapt a weighted voting scheme that assigns multiple labels to the query image by combining the labels of the images the method identifies as similar. Key parameters that may affect annotation performance, such as the number of images used in the label voting and the threshold for excluding low-weight labels, are studied. The method uses a coarse-to-fine retrieval strategy that integrates classification with nearest-neighbor search. Results from our evaluation (using the MICCAI CT image datasets as well as figures from Open-i) are presented.
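A hedged sketch of the weighted label-voting step: retrieved images vote for their labels with a similarity-derived weight, and labels whose normalized vote falls below a threshold are dropped. The inverse-distance weighting and the threshold value are assumptions, not the paper's exact parameters.

```python
from collections import defaultdict

def annotate(query_neighbors, tau=0.2):
    """query_neighbors: list of (distance, labels) for the retrieved images.
    Returns the multi-label annotation for the query image."""
    votes = defaultdict(float)
    total = 0.0
    for dist, labels in query_neighbors:
        w = 1.0 / (1.0 + dist)          # closer images vote more strongly
        total += w
        for lab in labels:
            votes[lab] += w
    # Keep labels whose normalised vote clears the threshold tau.
    return sorted(lab for lab, v in votes.items() if v / total >= tau)

# Example: three neighbours voting over abdominal organ labels.
print(annotate([(0.1, {"liver", "spleen"}),
                (0.3, {"liver", "left kidney"}),
                (0.9, {"liver"})]))
```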
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013808 (2017) https://doi.org/10.1117/12.2254716
Convolutional neural networks (CNNs) are the state-of-the-art deep learning architectures used in a range of applications, including computer vision and medical image analysis. They provide a powerful representation learning mechanism that learns features directly from the data. However, common 2D CNNs use only two-dimensional spatial information and do not evaluate the correlation between adjoining slices. In this study, we established a 3D CNN method to discriminate between malignant and benign breast tumors. To this end, 143 patients were enrolled, comprising 66 benign and 77 malignant cases. The MRI images were pre-processed for noise reduction and breast tumor region segmentation. Data augmentation by spatial translation, rotation, and vertical and horizontal flipping was applied to reduce possible over-fitting. A region of interest (ROI) and a volume of interest (VOI) were segmented in 2D and 3D DCE-MRI, respectively. The enhancement ratio for each MR series was calculated for the 2D and 3D images, and the results for the enhancement-ratio images in the two series were integrated for classification. The area under the ROC curve (AUC) was 0.739 and 0.801 for the 2D and 3D methods, respectively. The 3D CNN, which combined 5 slices for each enhancement-ratio image, achieved a high accuracy (Acc), sensitivity (Sens), and specificity (Spec) of 0.781, 0.744, and 0.823, respectively. This study indicates that 3D CNN deep learning methods can be a promising technology for breast tumor classification without manual feature extraction.
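A minimal 3D CNN of the kind described, sketched in PyTorch with a binary benign/malignant output. The 5-slice input depth follows the abstract; the layer sizes and in-plane resolution are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Tumor3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, 2)   # benign vs malignant

    def forward(self, x):                    # x: (batch, 1, depth, height, width)
        h = self.features(x).flatten(1)
        return self.classifier(h)

# One VOI of 5 adjacent enhancement-ratio slices at 64x64 in-plane.
logits = Tumor3DCNN()(torch.randn(1, 1, 5, 64, 64))
```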
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013809 (2017) https://doi.org/10.1117/12.2255609
The analysis of large data sets can help to gain knowledge about specific organs or specific diseases, just as big data analysis does in many non-medical areas. This article aims to gain information from 3D volumes, specifically the visual content of lung CT scans of a large number of patients. For the described data set, only limited annotation is available: the patients were all part of an ongoing screening program, and besides age and gender, no information on the patients or the findings was available for this work. This scenario can happen regularly, as image data sets are produced and become available in increasingly large quantities, while manual annotations are often unavailable and clinical data such as text reports are often harder to share. We extracted a set of visual features from 12,414 CT scans of 9,348 patients who had CT scans of the lung taken in the context of a national lung screening program in Belarus. Lung fields were segmented by two segmentation algorithms, and only cases where both algorithms found the left and right lung and agreed with a Dice coefficient above 0.95 were analyzed. This ensures that only good-quality segmentations were used to extract lung features. Patients ranged in age from 0 to 106 years. Data analysis shows that age can be predicted with fairly high accuracy for persons under 15 years. Relatively good results were also obtained between 30 and 65 years, where a steady trend is seen. For young adults and older people the results are not as good, as variability is very high in these groups. Several visualizations of the data show the evolution patterns of lung texture, size, and density with age. The experiments allow us to learn about the evolution of the lung, and the results show that even with limited metadata we can extract interesting information from large-scale visual data. These age-related changes (for example, of lung volume or of the density histogram of the tissue) can also be taken into account when interpreting new cases. The database includes patients who had suspicious findings on a chest X-ray, so it is not a group of healthy people, and only tendencies, not a model of a healthy lung at a specific age, can be derived.
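A sketch of the quality gate described above: a scan is kept only if the two lung segmentations agree with a Dice coefficient above 0.95.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def keep_scan(mask_algo1, mask_algo2, threshold=0.95):
    return dice(mask_algo1, mask_algo2) > threshold
```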
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380A (2017) https://doi.org/10.1117/12.2252459
Tuberculosis (TB) is a severe comorbidity of HIV, and chest X-ray (CXR) analysis is a necessary step in screening for the infectious disease. Automatic analysis of digital CXR images for detecting pulmonary abnormalities is critical for population screening, especially in medically resource-constrained developing regions. In this article, we describe steps that improve the previously reported performance of NLM's CXR screening algorithms and help advance the state of the art in the field. We propose a local-global classifier fusion method in which two complementary classification systems are combined. The local classifier focuses on subtle and partial presentations of the disease, leveraging information in radiology reports that roughly indicates the locations of abnormalities. The global classifier models the dominant spatial structure in the gestalt image using the GIST descriptor for semantic differentiation. Finally, the two complementary classifiers are combined using linear fusion, in which the weight of each decision is calculated from the confidence probabilities of the two classifiers. We evaluated our method on three datasets in terms of the area under the receiver operating characteristic (ROC) curve, sensitivity, specificity, and accuracy. The evaluation demonstrates the superiority of the proposed local-global fusion method over either single classifier.
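A hedged sketch of confidence-weighted linear fusion of the two classifiers. Defining confidence as distance from the 0.5 decision boundary is an assumption; the paper derives weights from the classifiers' confidence probabilities without this exact form being given in the abstract.

```python
def fuse(p_local, p_global):
    """p_local, p_global: posterior probabilities of 'abnormal' in [0, 1]."""
    c_local = abs(p_local - 0.5)       # assumed confidence measure
    c_global = abs(p_global - 0.5)
    total = c_local + c_global or 1e-9  # guard against both being 0.5
    w_local, w_global = c_local / total, c_global / total
    return w_local * p_local + w_global * p_global

print(fuse(0.9, 0.6))   # a confident local detection dominates the fused score
```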
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380B (2017) https://doi.org/10.1117/12.2254712
The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large-scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data are frequently uploaded/retrieved from the NFS, e.g., with "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university's secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented of when the Hadoop framework is and is not advantageous for medical imaging.
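As an illustration only (the paper's actual models are not reproduced in this listing), a back-of-the-envelope comparison of the two architectures might look like the following: the NFS approach serializes transfer over a shared link, while the co-located approach reads locally on each node. All parameter values are assumptions for exploring the crossover point.

```python
def nfs_wall_clock(n_jobs, n_cores, gb_per_job, net_gbps, compute_s):
    """Shared NFS: all input bytes cross one network link."""
    transfer_s = n_jobs * gb_per_job * 8 / net_gbps      # serialised transfer
    return transfer_s + (n_jobs / n_cores) * compute_s

def colocated_wall_clock(n_jobs, n_cores, gb_per_job, disk_gbps, compute_s):
    """Hadoop/HBase co-location: each job reads from local disk."""
    local_read_s = gb_per_job * 8 / disk_gbps
    return (n_jobs / n_cores) * (compute_s + local_read_s)

# Short jobs on large data favour co-location; long jobs hide transfer cost.
print(nfs_wall_clock(10_000, 209, 0.05, 10, 30))
print(colocated_wall_clock(10_000, 209, 0.05, 3, 30))
```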
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380C (2017) https://doi.org/10.1117/12.2254371
Large-scale image processing demands a standardized way not only of storage but also of job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seeks to solve the storage issues. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with >100,000 assessors (i.e., data processing pipelines in XNAT). To address these concerns, we have developed a new API, which exposes a direct connection to the database rather than REST API calls to accomplish the generation of assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces the load on XNAT by vastly dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors (a sub-cohort of our case project) from 65,040 seconds to 229 seconds (a decrease of over 270 fold). DISKQ, using pyXnat, allows launching 400 jobs in under 10 seconds, which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner.
Edward Kim, Sai Lakshmi Deepika Mente, Andrew Keenan, Vijay Gehlot
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380D (2017) https://doi.org/10.1117/12.2254491
In the field of digital pathology, an explosive amount of imaging data is being generated, so there is an ever-growing need for assistive or automatic methods to analyze collections of images for screening and classification. Machine learning, specifically deep learning algorithms, developed for digital pathology has the potential to assist in this way. Deep learning architectures have demonstrated great success over existing classification models but require massive amounts of labeled training data that either do not exist or are cost- and time-prohibitive to obtain. In this project, we present a framework for representing, collecting, validating, and utilizing cytopathology features for improved neural network classification.
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380E (2017) https://doi.org/10.1117/12.2254239
Breast cancer is one of the most common malignancies among women worldwide. Neoadjuvant chemotherapy (NACT) has gained interest and is increasingly used in the treatment of breast cancer, so it is necessary to find a reliable non-invasive method that can evaluate and predict response to NACT. Recent studies have highlighted the use of MRI for predicting response to NACT. In addition, molecular subtype can effectively identify the breast cancer patients likely to have a better prognosis. In this study, a radiomic analysis was performed by extracting features from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and using immunohistochemistry (IHC) to determine subtypes. A dataset of fifty-seven breast cancer patients was included, all of whom received a preoperative MRI examination. Among them, 47 patients had a complete response (CR) or partial response (PR) and 10 had stable disease (SD) under chemotherapy, based on the RECIST criteria. A total of 216 imaging features, including statistical characteristics, morphology, texture, and dynamic enhancement, were extracted from DCE-MRI. In multivariate analysis, the proposed imaging predictors achieved an AUC of 0.923 (P = 0.0002) in leave-one-out cross-validation. The performance of the classifier increased to 0.960, 0.950, and 0.936 when the status of the HER2, Luminal A, and Luminal B subtypes, respectively, was added to the statistical model. The results of this study demonstrate that IHC-determined molecular status combined with radiomic features from DCE-MRI could be used as a clinical marker associated with response to NACT.
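A sketch of the leave-one-out evaluation reported above, using scikit-learn. The logistic-regression classifier is an assumption; the abstract does not name the statistical model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

def loo_auc(X, y):
    """X: (57, n_features) radiomic (+ subtype) matrix; y: response labels."""
    scores = np.zeros(len(y), dtype=float)
    for train, test in LeaveOneOut().split(X):
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        scores[test] = clf.predict_proba(X[test])[:, 1]
    return roc_auc_score(y, scores)
```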
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380F (2017) https://doi.org/10.1117/12.2254618
We examine imaging and electronic medical records (EMR) of 588 subjects over five major disease groups that affect optic nerve function. An objective evaluation of the role of imaging and EMR data in diagnosis of these conditions would improve understanding of these diseases and help in early intervention. We developed an automated image processing pipeline that identifies the orbital structures within the human eyes from computed tomography (CT) scans, calculates structural size, and performs volume measurements. We customized the EMR-based phenome-wide association study (PheWAS) to derive diagnostic EMR phenotypes that occur at least two years prior to the onset of the conditions of interest from a separate cohort of 28,411 ophthalmology patients. We used random forest classifiers to evaluate the predictive power of image-derived markers, EMR phenotypes, and clinical visual assessments in identifying disease cohorts from a control group of 763 patients without optic nerve disease. Image-derived markers showed more predictive power than clinical visual assessments or EMR phenotypes. However, the addition of EMR phenotypes to the imaging markers improves the classification accuracy against controls: the AUC improved from 0.67 to 0.88 for glaucoma, 0.73 to 0.78 for intrinsic optic nerve disease, 0.72 to 0.76 for optic nerve edema, 0.72 to 0.77 for orbital inflammation, and 0.81 to 0.85 for thyroid eye disease. This study illustrates the importance of diagnostic context for interpretation of image-derived markers and the proposed PheWAS technique provides a flexible approach for learning salient features of patient history and incorporating these data into traditional machine learning analyses.
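A hedged sketch of the feature-combination comparison described: random forests scored by cross-validated AUC on imaging markers alone versus imaging markers concatenated with EMR phenotypes. The cross-validation scheme and forest size are assumptions; data loading is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

def auc_for(features, y):
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    prob = cross_val_predict(clf, features, y, cv=5,
                             method="predict_proba")[:, 1]
    return roc_auc_score(y, prob)

def compare(X_imaging, X_emr, y):
    """AUC with imaging markers alone vs imaging markers + EMR phenotypes."""
    return (auc_for(X_imaging, y),
            auc_for(np.hstack([X_imaging, X_emr]), y))
```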
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380G (2017) https://doi.org/10.1117/12.2254521
To meet particular demands in China and the specific needs of the radiotherapy department, a MOSAIQ Integration Platform CHN (MIP) based on the radiation therapy (RT) workflow has been developed as a supplement to Elekta MOSAIQ. The MIP adopts a client-server (C/S) architecture, and its database is based on the Treatment Planning System (TPS) and MOSAIQ SQL Server 2008, running on the hospital's local network. Five network servers, as the core hardware, supply data storage and network services based on cloud services. The core software, written in C#, was developed on the Microsoft Visual Studio platform. The MIP server offers network services, including data entry, query, statistics, and printing, for about 200 workstations simultaneously. The MIP has been implemented over the past one and a half years, practical patient-oriented functions have been developed, and the MIP now covers almost the whole radiation therapy workflow. There are 15 function modules, such as Notice, Appointment, Billing, Document Management (application/execution), and System Management. By June 2016, the data recorded in the MIP were as follows: 13,546 patients, 13,533 plan applications, 15,475 RT records, 14,656 RT summaries, 567,048 billing records, and 506,612 workload records. The MIP based on the RT workflow has been successfully developed and clinically implemented with real-time performance, data security, and stable operation. It has proven user-friendly and significantly improves the efficiency of the department, and it is key to facilitating information sharing and department management. More functions can be added or modified to further enhance its potential in research and clinical practice.
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380H (2017) https://doi.org/10.1117/12.2254568
This paper proposes an approach to facilitate the individualization of patients from their medical images without compromising the inherent confidentiality of medical data. Identifying a patient from a medical image is not often the goal of security methods applied to image records: usually, any identification data is removed from shared records, and security features are applied to determine ownership. We propose a method for embedding a QR-code containing information that can be used to individualize a patient, done in such a way that the image to be shared does not differ significantly from the original image. The QR-code is distributed across the image by changing several pixels according to a threshold value based on the average value of the adjacent pixels surrounding the point of interest. The results show that the code can be embedded and later fully recovered with minimal changes in the UIQI index: less than 0.1% difference.
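A hedged sketch of one plausible reading of the embedding rule: at each chosen location, the pixel is nudged above or below the mean of its 8 neighbours by a small margin to encode one QR-code bit, and decoding recomputes the neighbourhood mean. The margin value and the choice of embedding locations are assumptions.

```python
import numpy as np

def embed_bit(img, y, x, bit, margin=2):
    """Encode one bit at (y, x) of an 8-bit image relative to its neighbours."""
    nb = img[y - 1:y + 2, x - 1:x + 2].astype(float)
    mean = (nb.sum() - img[y, x]) / 8.0        # mean of the 8 neighbours
    img[y, x] = np.clip(mean + margin if bit else mean - margin, 0, 255)

def extract_bit(img, y, x):
    nb = img[y - 1:y + 2, x - 1:x + 2].astype(float)
    mean = (nb.sum() - img[y, x]) / 8.0
    return int(img[y, x] > mean)
```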
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380I (2017) https://doi.org/10.1117/12.2254163
Accurately assessing the potential benefit of chemotherapy to cancer patients is an important prerequisite to developing precision medicine in cancer treatment. A previous study showed that total psoas area (TPA) measured on preoperative cross-sectional CT images might be a good image marker for predicting the long-term outcome of pancreatic cancer patients after surgery. However, accurate and automated segmentation of TPA from CT images is difficult due to the fuzzy boundary or connection of TPA to other muscle areas. In this study, we developed a new interactive computer-aided detection (ICAD) scheme aiming to segment TPA from abdominal CT images more accurately and to assess the feasibility of using this new quantitative image marker to predict the benefit to ovarian cancer patients of receiving bevacizumab-based chemotherapy. The ICAD scheme identifies a CT image slice of interest located at the level of the L3 vertebra, segments the cross-sections of the right and left psoas using a set of adaptively adjusted boundary conditions, and quantitatively measures TPA. In addition, recent studies have suggested that muscle radiation attenuation, which reflects fat deposition in the tissue, might be a good image feature for predicting the survival rate of cancer patients. The scheme and the TPA measurement task were applied to a large national clinical trial database involving 1,247 ovarian cancer patients. Comparison with manual segmentation results showed that the ICAD scheme yields higher accuracy and consistency for this task. The new ICAD scheme gives clinical researchers a useful tool to more efficiently and accurately extract TPA as well as muscle radiation attenuation as new image markers, and allows them to investigate the markers' discriminatory power for predicting progression-free survival and/or overall survival of cancer patients before and after chemotherapy.
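A sketch of the marker quantification step once the L3-level slice and psoas masks are available; the interactive segmentation itself (adaptively adjusted boundary conditions) is not reproduced here.

```python
import numpy as np

def psoas_markers(hu_slice, left_mask, right_mask, pixel_area_mm2):
    """Return total psoas area (cm^2) and mean muscle attenuation (HU)."""
    both = left_mask | right_mask
    tpa_cm2 = both.sum() * pixel_area_mm2 / 100.0
    mean_hu = hu_slice[both].mean()      # attenuation reflects fat deposition
    return tpa_cm2, mean_hu
```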
Ximing Wang, Bokkyu Kim, Ji Hoon Park, Erik Wang, Sydney Forsyth, Cody Lim, Ragini Ravi, Sarkis Karibyan, Alexander Sanchez, et al.
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380J (2017) https://doi.org/10.1117/12.2256242
Quantitative imaging biomarkers are widely used in clinical trials for tracking and evaluating medical interventions. Previously, we presented a web-based informatics system utilizing quantitative imaging features for predicting outcomes in stroke rehabilitation clinical trials. The system integrates imaging feature extraction tools and a web-based statistical analysis tool. The tools include a generalized linear mixed model (GLMM) that can investigate potential significance and correlation based on features extracted from clinical data and quantitative biomarkers. The imaging feature extraction tools allow the user to collect imaging features, and the GLMM module allows the user to select clinical data and imaging features, such as stroke lesion characteristics, from the database as regressors and regressands. This paper discusses the application scenario and evaluation results of the system in a stroke rehabilitation clinical trial. The system was used to manage clinical data and extract imaging biomarkers including stroke lesion volume, location, and ventricle/brain ratio. The GLMM module was validated, and the efficiency of data analysis was also evaluated.
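A sketch of a GLMM-style analysis of the kind the web module exposes, using statsmodels' linear mixed model as a stand-in. All column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_mixed_model(df: pd.DataFrame):
    """df columns (hypothetical): motor_score (regressand), lesion_volume,
    ventricle_brain_ratio, age (regressors), subject_id (grouping)."""
    model = smf.mixedlm(
        "motor_score ~ lesion_volume + ventricle_brain_ratio + age",
        data=df,
        groups=df["subject_id"],    # random intercept per subject
    )
    return model.fit().summary()
```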
We have designed and developed a multiple sclerosis (MS) eFolder system for patient data storage, image viewing, and automatic lesion quantification, allowing patient tracking. The web-based system aims to be integrated into DICOM-compliant clinical and research environments to aid clinicians in patient treatment and data analysis. The system quantifies lesion volumes and identifies and registers lesion locations to track shifts in the volume and number of lesions in a longitudinal study. We evaluate the two most important features of the system, data mining and longitudinal lesion tracking, to demonstrate the eFolder's capability to improve clinical workflow efficiency and outcome analysis for research. To evaluate the data mining capabilities, we collected radiological and neurological data from 72 patients, 36 Caucasian and 36 Hispanic, matched by gender, disease duration, and age. Data analysis based on ethnicity was performed on those patients, and the results are displayed through the system's web-based user interface. The data mining module successfully separates Hispanic and Caucasian patients and compares their disease profiles. For longitudinal lesion tracking, we collected 4 longitudinal cases and simulated different lesion growths over the following year. The eFolder was able to detect changes in lesion volume and identify the lesions with the most change. The data mining and lesion tracking evaluation results show the high potential of the eFolder's usefulness in patient care and informatics research for multiple sclerosis.
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380L (2017) https://doi.org/10.1117/12.2254706
This study proposes a near-term breast cancer risk assessment model based on local-region bilateral asymmetry features in mammography. The database includes 566 cases that underwent at least two sequential FFDM examinations. The 'prior' examinations in the two series were all interpreted as negative (not recalled). In the 'current' examination, 283 women were diagnosed with cancer and 283 remained negative; the cancer and negative cases were exactly matched by age. The cases were divided into three subgroups by age: 152 cases in the 37-49 age bracket, 220 in the 50-60 bracket, and 194 in the 61-86 bracket. For each image, two types of local regions, strip-based regions and difference-of-Gaussian basic-element regions, were segmented. Structural variation features among pixel values and structural similarity features were then computed for the strip regions, while positional features were extracted for the basic-element regions. The absolute difference was computed between each feature of the left and right local regions. Next, a multi-layer perceptron classifier was implemented to assess the predictive performance of the features, with features selected by stepwise regression analysis. The AUC was 0.72, 0.75, and 0.71 for the three age-based subgroups, respectively, and the maximum adjusted odds ratios were 12.4, 20.56, and 4.91, respectively. This study demonstrates that local-region bilateral asymmetry features extracted from CC-view mammograms could provide useful information for predicting near-term breast cancer risk.
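A sketch of the risk-model core: absolute left-right feature differences fed to a multi-layer perceptron, scored by AUC. Feature extraction itself is outside the sketch; the cross-validation scheme and layer size are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier

def asymmetry_features(left_feats, right_feats):
    """Bilateral asymmetry = |left - right| per feature."""
    return np.abs(left_feats - right_feats)

def risk_auc(left_feats, right_feats, y):
    X = asymmetry_features(left_feats, right_feats)
    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    prob = cross_val_predict(mlp, X, y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, prob)
```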
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380M (2017) https://doi.org/10.1117/12.2257029
The primary goal in radiation therapy is to target the tumor with the maximum possible radiation dose while limiting the radiation exposure of the surrounding healthy tissues. However, to achieve an optimized treatment plan, many constraints, such as gender, age, tumor type, and location, need to be considered. The location of the malignant tumor with respect to the vital organs is another important factor for the treatment planning process, and quantifying it as a feature makes its effects easier to analyze. Incorporating such features into the patient's medical history could provide additional knowledge leading to better treatment outcomes. To show the value of features such as the relative locations of tumors and surrounding organs, the data are first processed to calculate the features and build a feature matrix. These features are then matched against retrospective cases with similar features to give the clinician insight into the doses delivered in similar past cases. This process yields a range of doses that can be delivered to the patient while limiting the radiation exposure of surrounding organs, based on similar retrospective cases. As the number of patients increases, the computation needed for feature extraction increases, as does the physician's workload in finding the right dose. To show how such algorithms can be integrated, we designed and developed a system with a streamlined workflow and interface as a prototype for the clinician to test and explore. Integrating the tumor location feature with the clinician's experience and training could play a vital role in designing new treatment algorithms and achieving better outcomes. Last year, we presented how multi-institutional data are incorporated into a decision support system; this year, the presentation focuses on the interface and a demonstration of the working prototype of the informatics system.
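A sketch of the retrospective-case matching step: find the nearest prior cases in the feature space (tumor type, location relative to organs at risk, age, etc.) and report the range of doses delivered to them. The feature encoding is hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def dose_range(new_case, case_features, case_doses, k=10):
    """case_features: (n_cases, n_features); case_doses: delivered doses (Gy).
    Returns (min, median, max) dose among the k most similar past cases."""
    nn = NearestNeighbors(n_neighbors=k).fit(case_features)
    _, idx = nn.kneighbors(new_case.reshape(1, -1))
    similar = case_doses[idx[0]]
    return similar.min(), np.median(similar), similar.max()
```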
Abigail L. Hong, Benjamin T. Newman, Arbab Khalid, Olivia M. Teter, Elizabeth A. Kobe, Malika Shukurova, Rohit Shinde, Daniel Sipzner, Robert J. Pignolo, et al.
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380O (2017) https://doi.org/10.1117/12.2254475
Current methods of bone graft treatment for critical-size bone defects can lead to several clinical complications, such as limited available bone for autografts, non-matching bone structure, lack of strength (which can compromise a patient's skeletal system), and, in the case of allografts, sterilization processes that can prevent osteogenesis. We intend to overcome these disadvantages by generating a patient-specific 3D printed bone graft guided by high-resolution medical imaging. Our synthetic model allows us to customize the graft for the patient's macro- and microstructure and to correct any structural deficiencies in the re-meshing process. These 3D-printed models can presumptively serve as scaffolding for human mesenchymal stem cell (hMSC) engraftment to facilitate bone growth. We performed high-resolution CT imaging of a cadaveric human proximal femur at 0.030-mm isotropic voxels. We used these images to generate a 3D computer model that mimics bone geometry from the micro to the macro scale, represented in STereoLithography (STL) format, and then reformatted the model into a format that can be interpreted by the 3D printer. To assess how much of the microstructure was replicated, the 3D-printed models were re-imaged using micro-CT at 0.025-mm isotropic voxels and compared, in 32 sub-regions, to the original high-resolution CT images used to generate the 3D model. We found a strong correlation between 3D-printed bone volume and the bone volume in the original images (R2 = 0.97). We expect to further refine our approach with additional testing to create a viable synthetic bone graft with clinical functionality.
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380P (2017) https://doi.org/10.1117/12.2253902
Following new trends in precision medicine, juxtarenal abdominal aortic aneurysm (JAAA) treatment has been enabled by patient-specific fenestrated endovascular grafts. The X-ray guided procedure requires precise orientation of multiple modular endografts within the arteries, confirmed via radiopaque markers. Patient-specific 3D printed phantoms could familiarize physicians with complex procedures and new devices in a risk-free simulation environment, avoiding periprocedural complications and improving training. Using the Vascular Modeling Toolkit (VMTK), 3D data from CTA imaging of a patient scheduled for Fenestrated EndoVascular Aortic Repair (FEVAR) were segmented to isolate the aortic lumen, thrombus, and calcifications. A stereolithographic mesh (STL) was generated and then modified in Autodesk Meshmixer for fabrication on a Stratasys Eden 260 printer in a flexible photopolymer to simulate arterial compliance. A fluoroscopically guided simulation of the patient-specific FEVAR procedure was performed by interventionists using all demonstration endografts and accessory devices. The analysis compared treatment strategy between the planned procedure, the simulation procedure, and the patient procedure using a derived scoring scheme.
Results: With training on the patient-specific 3D printed AAA phantom, the clinical team optimized their procedural strategy. Anatomical landmarks and all devices were visible under X-ray during the simulation, mimicking the clinical environment. The actual patient procedure went without complications.
Conclusions: With advances in 3D printing, fabrication of patient-specific AAA phantoms is possible. Simulation with 3D printed phantoms shows potential to inform clinical interventional procedures in addition to CTA diagnostic imaging.
Zbigniew Starosolski, David S. Ezon, Rajesh Krishnamurthy, Nicholas Dodd, Jeffrey Heinle, Dean E. Mckenzie, Ananth Annapragada
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380Q (2017) https://doi.org/10.1117/12.2253961
We developed a technology that allows a simple desktop 3D printer with a dual extruder to fabricate flexible 3D models of major aortopulmonary collateral arteries (MAPCAs). The study was designed to assess whether the flexible 3D printed models could help during the surgical planning phase. Simple FDM 3D printers are inexpensive, versatile, and easy to maintain, but complications arise when the designed model is complex and has tubular structures with diameters smaller than 2 mm; we use precisely selected materials to overcome these obstacles. The dual extruder allows two different materials to be used while printing, which is especially important for fragile structures like pulmonary vessels and their supporting structures, since the latter should not be removed by hand, to avoid truncating the model. We utilize water-soluble PVA for the supporting structures and Poro-Lay filament for the flexible model of the aortopulmonary collateral arteries. Poro-Lay differs from other flexible, polymer-based filaments: it is rigid while printing, which allows structures of small diameter to be printed, and it becomes flexible after the printed model is washed in water, turning soft to the touch and gelatinous. Using both PVA and Poro-Lay gives the major advantage that washing out the supporting structures and achieving flexibility happen in one washing operation, saving time and avoiding human error in cleaning the model. We evaluated 6 models for the MAPCA surgical planning study. This approach is also cost-effective: the average material cost per print is less than $15, and models are printed in-house without delays. The flexibility of the 3D printed models properly approximates soft tissue, mimicking the aortopulmonary collateral arteries. The models also have educational value for both residents and patients' families. Simplification of the flexible 3D printing process could help with other models of soft-tissue pathologies such as aneurysms, ventricular septal defects, and other vascular anomalies.
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380R (2017) https://doi.org/10.1117/12.2253711
3D printing has been used to create complex arterial phantoms to advance device testing and physiological condition evaluation. Stereolithographic (STL) files of patient-specific cardiovascular anatomy are acquired to build cardiac vasculature through advanced mesh-manipulation techniques. Management of distal branches in the arterial tree is important to make such phantoms practicable. We investigated methods to manage the distal arterial flow resistance and pressure, thus creating physiologically and geometrically accurate phantoms that can be used for simulations of image-guided interventional procedures with new devices. Patient-specific CT data were imported into a Vital Imaging workstation, segmented, and exported as STL files. Using a mesh-manipulation program (Meshmixer), we created flow models of the coronary tree. Distal arteries were connected to a compliance chamber. The phantom was then printed using a Stratasys Connex3 multi-material printer: the vessel in TangoPlus and the fluid flow simulation chamber in Vero. The model was connected to a programmable pump, and pressure sensors measured flow characteristics through the phantoms. Physiological flow simulations for patient-specific vasculature were done for six cardiac models (three different vasculatures comparing two new designs). For the coronary phantom we obtained physiologically relevant pressure waves, oscillating between 80 and 120 mmHg, and a flow rate of ~125 ml/min, within literature-reported values. The pressure wave was similar to those acquired in human patients. We thus demonstrated that 3D printed phantoms can be used not only to reproduce the correct patient anatomy for device testing in image-guided interventions, but also for physiological simulations. This has great potential to advance treatment assessment and diagnosis.
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380S (2017) https://doi.org/10.1117/12.2253889
Purpose: Accurate patient-specific phantoms for device testing or endovascular treatment planning can be 3D printed. We
expand the applicability of this approach for cardiovascular disease, in particular, for CT-geometry derived benchtop
measurements of Fractional Flow Reserve, the reference standard for determination of significant individual coronary
artery atherosclerotic lesions.
Materials and Methods: Coronary CT Angiography (CTA) images during a single heartbeat were acquired with a
320x0.5mm detector row scanner (Toshiba Aquilion ONE). These coronary CTA images were used to create 4 patientspecific
cardiovascular models with various grades of stenosis: severe, <75% (n=1); moderate, 50-70% (n=1); and mild,
<50% (n=2). DICOM volumetric images were segmented using a 3D workstation (Vitrea, Vital Images); the output was
used to generate STL files (using AutoDesk Meshmixer), and further processed to create 3D printable geometries for flow
experiments. Multi-material printed models (Stratasys Connex3) were connected to a programmable pulsatile pump, and
the pressure was measured proximal and distal to the stenosis using pressure transducers. Compliance chambers were used
before and after the model to modulate the pressure wave. A flow sensor was used to ensure flow rates within physiological
reported values.
Results: 3D model based FFR measurements correlated well with stenosis severity. FFR measurements for each stenosis
grade were: 0.8 severe, 0.7 moderate and 0.88 mild.
Conclusions: 3D printed models of patient-specific coronary arteries allows for accurate benchtop diagnosis of FFR.
This approach can be used as a future diagnostic tool or for testing CT image-based FFR methods.
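The FFR readout itself is the ratio of mean distal to mean proximal pressure over whole cardiac cycles; a sketch of the benchtop computation from the two transducer traces (sampling details assumed).

```python
import numpy as np

def ffr(p_proximal, p_distal):
    """p_proximal, p_distal: pressure samples (mmHg) over whole cycles."""
    return np.mean(p_distal) / np.mean(p_proximal)

# e.g. a mean distal pressure of 72 mmHg against a mean proximal pressure of
# 90 mmHg gives FFR = 0.8, matching the severe-stenosis model above.
```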
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380T (2017) https://doi.org/10.1117/12.2256181
With increasing resolution in image acquisition, this project explores the capability of 3D printing to faithfully reflect the detail and features depicted in medical images. To improve the safety and efficiency of orthopedic surgery and spatial conceptualization in training and education, the project focused on generating virtual models of orthopedic anatomy from clinical-quality computed tomography (CT) image datasets and manufacturing life-size physical models of the anatomy using 3D printing tools. Beginning with raw micro-CT data, several image segmentation techniques, including thresholding, edge recognition, and region-growing algorithms available in packages such as ITK-SNAP, MITK, or Mimics, were utilized to separate bone from surrounding soft tissue. After converting the resulting data to a standard 3D printing format, stereolithography (STL), the STL file was edited using Meshlab, Netfabb, and Meshmixer. The editing process was necessary to ensure a fully connected surface (no loose elements), positive volume with manifold geometry (geometry possible in the physical 3D world), and a single, closed shell. The resulting surface was then imported into "slicing" software to scale and orient it for printing on a Flashforge Creator Pro. In printing, the relationships between orientation, print bed volume, model quality, material use and cost, and print time were considered. We generated anatomical models of the hand, elbow, knee, ankle, and foot from low-dose high-resolution cone-beam CT images acquired using the soon-to-be-released scanner developed by Carestream, as well as scaled models of the skeletal anatomy of the arm and leg, together with life-size models of the hand and foot.
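A compact stand-in for the segmentation-to-STL stage of the pipeline: threshold bone, extract a surface with marching cubes, and write an ASCII STL for the slicer. The HU threshold is a typical value for bone, not the authors' setting, and this simple sketch omits the manifold-repair step the abstract describes.

```python
import numpy as np
from skimage import measure

def volume_to_stl(hu_volume, path, threshold=300.0, spacing=(1.0, 1.0, 1.0)):
    """Threshold a CT volume at an assumed bone level and write an ASCII STL."""
    verts, faces, _, _ = measure.marching_cubes(hu_volume, level=threshold,
                                                spacing=spacing)
    with open(path, "w") as f:
        f.write("solid bone\n")
        for tri in faces:
            v0, v1, v2 = verts[tri]
            n = np.cross(v1 - v0, v2 - v0)        # facet normal
            n = n / (np.linalg.norm(n) or 1.0)
            f.write(f"facet normal {n[0]} {n[1]} {n[2]}\n  outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"    vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("  endloop\nendfacet\n")
        f.write("endsolid bone\n")
```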
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380U (2017) https://doi.org/10.1117/12.2254677
This work presents a web system for training students and residents (users) in the detection of breast density in mammography images. The system consists of a database of breast images in which the breast density types have been classified and demarcated by a specialist (tutor), or of images from an online database. Planning was based on ISO/IEC 12207. Through a browser (desktop or notebook), the user visualizes the breast images, marks the density region with a paintbrush-like tool, and classifies it per the BI-RADS protocol. After marking, the demarcation is compared to the gold standard already stored in the image base, and the system informs the user whether the demarcated area is correct. The evaluation was based on ISO/IEC 9126 and 25010:2011 and was carried out by 3 software development specialists and 3 specialists in mammary radiology, who assessed usability, configuration, performance, and system interface through a Likert-scale questionnaire. They agreed totally on usability, configuration, and performance, and partially on the interface. Positive points include: the system can be accessed anywhere and at any time; the hit-or-miss feedback is given in real time; it can be used in education; the limit on the number of images depends only on the computer's memory; the system e-mails the results to the user at the end; it renders on any type of screen; and it can be complemented with other types of breast structures. The negative point is the need for an internet connection.
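The abstract does not name the metric used to compare a trainee's marking to the gold standard; a common, hypothetical stand-in for such overlap scoring is the Dice coefficient, sketched here with invented masks and an invented acceptance threshold:

```python
# Illustrative sketch: scoring a trainee's demarcation against the tutor's
# gold-standard mask via Dice overlap (the system's actual rule is unstated).
import numpy as np

def dice(user_mask: np.ndarray, gold_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(user_mask, gold_mask).sum()
    total = user_mask.sum() + gold_mask.sum()
    return 2.0 * intersection / total if total else 1.0

gold = np.zeros((100, 100), dtype=bool); gold[20:60, 30:70] = True
user = np.zeros((100, 100), dtype=bool); user[25:65, 30:70] = True

score = dice(user, gold)
print("accepted" if score >= 0.7 else "rejected", f"(Dice = {score:.2f})")
```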
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380V (2017) https://doi.org/10.1117/12.2252281
Virtual colonoscopy (VC) allows a physician to virtually navigate within a reconstructed 3D colon model searching
for colorectal polyps. Though VC is widely recognized as a highly sensitive and specific test for identifying
polyps, one limitation is the reading time, which can take over 30 minutes per patient. Large amounts of the
colon are often devoid of polyps, and a way of identifying these polyp-free segments could be valuable in
reducing the required reading time for the interrogating radiologist. To this end, we have tested the ability of
the collective crowd intelligence of non-expert workers to identify polyp candidates and polyp-free regions. We
presented twenty short videos flying through a segment of a virtual colon to each worker, and the crowd was
asked to determine whether or not a possible polyp was observed within that video segment. We evaluated our
framework on Amazon Mechanical Turk and found that the crowd was able to achieve a sensitivity of 80.0% and
specificity of 86.5% in identifying video segments which contained a clinically proven polyp. Since each polyp
appeared in multiple consecutive segments, all polyps were in fact identified. Using the crowd results as a first
pass, 80% of the video segments could in theory be skipped by the radiologist, equating to a significant time
savings and enabling more VC examinations to be performed.
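The abstract describes aggregating many non-expert answers per video segment; one simple, hypothetical aggregation rule is a per-segment majority vote, scored against ground truth as below (all data invented):

```python
# Sketch of crowd aggregation: majority-vote workers' yes/no answers per
# video segment, then compute sensitivity/specificity against ground truth.
from collections import Counter

def majority_vote(answers):          # answers: list of booleans from workers
    return Counter(answers).most_common(1)[0][0]

# segment id -> (worker answers, segment truly contains a polyp)
segments = {
    "seg01": ([True, True, False], True),
    "seg02": ([False, False, False], False),
    "seg03": ([True, False, False], True),
}

tp = fp = tn = fn = 0
for answers, has_polyp in segments.values():
    flagged = majority_vote(answers)
    if flagged and has_polyp:       tp += 1
    elif flagged and not has_polyp: fp += 1
    elif not flagged and has_polyp: fn += 1
    else:                           tn += 1

print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```

Because each polyp spans several consecutive segments, a polyp missed in one segment can still be caught in a neighboring one, which is why the paper reports 100% polyp-level detection despite 80% segment-level sensitivity.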
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380W (2017) https://doi.org/10.1117/12.2254158
Introduction: Medical imaging technology has revolutionized health care over the past 30 years. This is especially true for ultrasound, a modality that an increasing number of medical personnel are starting to use. Purpose: The purpose of this study was to develop and evaluate a platform for improving medical image interpretation skills regardless of time and place, without the need for expensive imaging equipment or a patient to scan. Methods, results and conclusions: A stable web application with the functionality needed for image interpretation training and evaluation has been implemented. The system has been extensively tested internally and was used during an international course in ultrasound-guided neurosurgery. The web application was well received and obtained very good System Usability Scale (SUS) scores.
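The SUS scoring rule referenced above is standard: ten items on a 1-5 scale, odd items contributing (response - 1), even items (5 - response), with the sum scaled by 2.5 to a 0-100 range. A worked example (respondent values invented):

```python
# Standard System Usability Scale (SUS) scoring.
def sus_score(responses):
    """Odd items add (r - 1), even items add (5 - r); total x 2.5 -> 0..100."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # i=0 is item 1 (odd item)
    return total * 2.5

example = [5, 1, 4, 2, 5, 1, 4, 2, 5, 1]  # illustrative respondent
print(sus_score(example))  # 90.0 -> usually read as "excellent"
```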
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380X (2017) https://doi.org/10.1117/12.2254194
Computer-aided diagnosis systems using medical images and three-dimensional models as input data have greatly expanded and developed, but building suitable image databases to assess them remains a challenge. Although some image databases are available for this purpose, they are generally limited to certain types of exams or contain a limited number of medical cases. The objective of this work is to present the concepts behind and the development of a collaborative platform for sharing medical images and three-dimensional models, providing a resource to share and increase the number of images available to researchers. The collaborative cloud platform, called CATALYZER, aims to increase the availability and sharing of graphic objects, including 3D images, and of their reports, which are essential for research related to medical images. A survey conducted with researchers and health professionals indicated that this could be an innovative approach to the creation of medical image databases, providing a wider variety of cases together with a considerable amount of information shared among its users.
Purpose:
A digital hospital generates a large number of electronic imaging diagnostic records (IDRs) year after year, and these IDRs have become a main component of medical big data, bringing great value to healthcare services, professionals, and administration. However, the large volume of IDRs in a hospital also brings new challenges to healthcare professionals and services: there may be so many IDRs for each patient that a doctor cannot review them all in a limited appointment slot. In this presentation, we present an innovative method that uses an anatomical 3D structure object to visually represent and index the historical medical status of each patient, called the Visual Patient (VP), based on the long-term archived electronic IDRs of a hospital, so that a doctor can quickly grasp the patient's historical medical status and quickly locate and retrieve the IDRs of interest within a limited appointment slot.
Method:
The engineering implementation of VP was to build a 3D visual representation and index system, called the VP system (VPS), comprising components for natural language processing (NLP) of Chinese, a Visual Index Creator (VIC), and a 3D visual rendering engine. The implementation involved three steps: (1) an XML-based electronic anatomic structure of the human body was created for each patient and used to visually index all of the abstract information of each of the patient's IDRs; (2) a number of specifically designed IDR parsing processors were developed and used to extract various kinds of abstract information from the IDRs retrieved from hospital information systems; (3) a 3D anatomic rendering object was introduced to visually represent and display the content of the VIO for each patient.
Results:
The VPS was implemented in a simulated clinical environment, including PACS/RIS, to show VP instances to doctors. We set up two evaluation scenarios in a hospital radiology department to evaluate whether radiologists accept the VPS and how VP affects radiologists' efficiency and accuracy in reviewing patients' historical medical records. The statistical results showed that more than 70% of the participating radiologists would like to use the VPS in their radiological imaging services. In a comparison test of using the VPS versus RIS/PACS to review patients' historical medical records, the statistical results showed that the VPS was more efficient than PACS/RIS.
New Technologies and Results to be presented:
This presentation presents an innovative method that uses an anatomical 3D structure object, called VP, to visually represent and index the historical medical records, such as the IDRs, of each patient, so that a doctor can quickly learn the patient's historical medical status through the VPS. The evaluation results showed that the VPS outperforms RIS-integrated PACS in the efficiency of reviewing patients' historical medical records.
Conclusions:
In this presentation, we presented an innovative method called VP that uses an anatomical 3D structure object to visually represent and index the historical IDRs of each patient, and briefly described an engineering implementation of a VPS realizing the major features and functions of VP. We set up two evaluation scenarios in a hospital radiology department to evaluate the VPS, and the evaluation results showed that the VPS outperforms RIS-integrated PACS in the efficiency of reviewing patients' historical medical records.
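Step (1) of the method describes an XML-based anatomic structure indexing IDR abstracts per patient. The paper does not publish its schema; the sketch below shows what such an index might look like, with all element and attribute names hypothetical:

```python
# Hedged sketch of an XML-based anatomic index mapping body regions to IDR
# abstracts (hypothetical schema, illustrative data).
import xml.etree.ElementTree as ET

patient = ET.Element("VisualPatient", id="P0001")
liver = ET.SubElement(patient, "Region", name="liver")
ET.SubElement(liver, "IDR", uid="1.2.840.113619.2.55.1", modality="CT",
              date="2016-03-14").text = "Hypodense lesion, segment VI, 12 mm."
ET.SubElement(liver, "IDR", uid="1.2.840.113619.2.55.2", modality="MR",
              date="2016-09-02").text = "Lesion stable in size."

# A 3D rendering front end could color each <Region> by its IDR count and,
# on click, retrieve the listed records from RIS/PACS.
print(ET.tostring(patient, encoding="unicode"))
```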
Fei Gao, Yanzhe Xu, Anshuman Panda, Min Zhang, James Hanson, Congzhe Su, Teresa Wu, William Pavlicek, Judy R. James
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 101380Z (2017) https://doi.org/10.1117/12.2249712
MRI protocols are instruction sheets that radiology technologists use in routine clinical practice for guidance (e.g., slice position, acquisition parameters, etc.). At Mayo Clinic Arizona (MCA), there are over 900 MR protocols (ranging across neuro, body, cardiac, breast, etc.), which makes maintaining and updating the protocol instructions a labor-intensive effort. The task is even more challenging given the different vendors (Siemens, GE, etc.). This is a universal problem faced by hospitals and medical research institutions. To increase the efficiency of the MR practice, we designed and implemented a web-based platform (eProtocol) to automate the management of MRI protocols. It is built upon a database that automatically extracts protocol information from DICOM-compliant images and provides a user-friendly interface for technologists to create, edit, and update protocols. Advanced operations, such as protocol migration from scanner to scanner and the capability to upload multimedia content, were also implemented. To the best of our knowledge, eProtocol is the first automated MR protocol management tool used clinically. We expect this platform to significantly improve radiology operations efficiency, including better image quality and exam consistency and fewer repeat examinations and acquisition errors. The protocol instructions will be readily available to technologists during scans. In addition, this web-based platform can be extended to other imaging modalities, such as CT, mammography, and interventional radiology, and to different vendors for imaging protocol management.
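The abstract says the database "automatically extracts protocol information from DICOM-compliant images." A hedged pydicom sketch of that extraction step follows; the attribute subset is illustrative, not the eProtocol schema:

```python
# Sketch of header-level protocol extraction with pydicom (illustrative
# attribute subset).
import pydicom

def extract_protocol_info(path):
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # header only
    return {
        "protocol_name": ds.get("ProtocolName", ""),
        "manufacturer": ds.get("Manufacturer", ""),       # e.g. Siemens vs GE
        "series_description": ds.get("SeriesDescription", ""),
        "repetition_time": ds.get("RepetitionTime", None),
        "echo_time": ds.get("EchoTime", None),
        "slice_thickness": ds.get("SliceThickness", None),
    }

# info = extract_protocol_info("series0001.dcm")  # hypothetical file
```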
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013810 (2017) https://doi.org/10.1117/12.2254389
The state-of-the-art method of wound assessment is a manual, imprecise, and time-consuming procedure. Performed by clinicians, it has limited reproducibility and accuracy, high time consumption, and high costs. Novel technologies such as laser scanning microscopy, multi-photon microscopy, optical coherence tomography, and hyperspectral imaging, as well as devices relying on structured-light sensors, make accurate wound assessment possible. However, such methods are limited by high costs and may lack portability and availability. In this paper, we present a low-cost wound assessment system and architecture for fast and accurate cutaneous wound assessment using inexpensive consumer smartphone devices. Computer vision techniques are applied, either on the device or on the server, to reconstruct wounds in 3D as dense models generated from images taken with the built-in single camera of a smartphone. The system architecture includes imaging (smartphone), processing (smartphone or PACS), and storage (PACS) devices. It supports tracking over time by alignment of the 3D models, color correction using a reference color card placed into the scene, and automatic segmentation of wound regions. Using our system, we are able to detect and document quantitative characteristics of chronic wounds, including size, depth, volume, and rate of healing, as well as qualitative characteristics such as color, presence of necrosis, and type of involved tissue.
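The abstract mentions color correction against a reference color card but not the method; a common, hypothetical approach is fitting an affine color transform from observed card patches to their known reference values by least squares. All patch values below are invented:

```python
# Sketch of color-card correction: fit a 4x3 affine transform mapping
# observed RGB patches to known reference values, then apply it image-wide.
import numpy as np

observed = np.array([[200, 180, 170], [60, 50, 45], [150, 30, 40]], float)   # measured in photo
reference = np.array([[255, 255, 255], [52, 52, 52], [180, 30, 45]], float)  # card ground truth

# Augment with a bias column so the fit can shift as well as scale.
A = np.hstack([observed, np.ones((len(observed), 1))])     # (n, 4)
M, *_ = np.linalg.lstsq(A, reference, rcond=None)          # (4, 3)

def correct(image_rgb):
    """Apply the fitted transform to an (h, w, 3) float image."""
    h, w, _ = image_rgb.shape
    flat = np.hstack([image_rgb.reshape(-1, 3), np.ones((h * w, 1))])
    return np.clip(flat @ M, 0, 255).reshape(h, w, 3)

print(correct(observed.reshape(1, 3, 3)).round())  # patches map near reference
```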
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013811 (2017) https://doi.org/10.1117/12.2254657
Mobile Radiologist 360, rolled out as part of the voice dictation system upgrade from Nuance PowerScribe 5.0 to PS360, allows an attending radiologist to edit and sign off a report assigned by a trainee or one that the radiologist started on a workstation. The purpose of this study was to evaluate the adoptability and impact of this application. Report turnaround time data were extracted from the RIS (GE Centricity RIS-IC) for 60 days before (period 1) and 60 days after (period 2) the application's implementation, and then for the 60 days after the end of period 2 (period 3). Adoptability was evaluated using two metrics: first, the number of attending radiologists who signed off reports using the application in periods 2 and 3, and second, the proportion of reports that the top five users of the mobile application signed off through it. Impact was evaluated by comparing the time from initial dictation to final sign-off (time_PF) for the top five users of the mobile application against the time_PF for five radiologists who did not use the application. 41% of radiologists used the mobile application at least once during the study period; the proportion of cases signed off using the mobile application ranged from 1% to 20%. ANOVA revealed no statistically significant effect of the mobile application on time_PF (p=0.842). In conclusion, there was low initial adoptability, and the mobile dictation and reporting system had no impact on reducing the time from initial dictation to final sign-off on a radiology report.
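The test reported above is a one-way ANOVA on sign-off times across groups; a minimal sketch with scipy follows (the data are invented, chosen only to show the mechanics):

```python
# Sketch of the reported statistical test: one-way ANOVA on
# dictation-to-sign-off times (hours) for two radiologist groups.
from scipy.stats import f_oneway

time_pf_mobile_users = [4.2, 5.1, 3.8, 6.0, 4.9]   # top-5 app users (invented)
time_pf_non_users    = [4.5, 5.3, 4.0, 5.8, 5.1]   # comparison group (invented)

f_stat, p_value = f_oneway(time_pf_mobile_users, time_pf_non_users)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 -> no significant effect
```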
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013812 (2017) https://doi.org/10.1117/12.2251115
Content-based medical image retrieval (CBMIR) has been a highly active research area in recent years. The retrieval performance of a CBMIR system crucially depends on the feature representation, which has been extensively studied by researchers for decades. Although a variety of techniques have been proposed, feature representation remains one of the most challenging problems in current CBMIR research, mainly due to the well-known "semantic gap" between the low-level image pixels captured by machines and the high-level semantic concepts perceived by humans [1]. Recent years have witnessed important advances in machine learning. One breakthrough technique is known as "deep learning". Unlike conventional machine learning methods, which often use "shallow" architectures, deep learning mimics the human brain, which is organized in a deep architecture and processes information through multiple stages of transformation and representation. This means that features no longer need to be extracted manually at great effort. In this presentation, we propose a novel framework that uses deep learning for medical image retrieval to improve the accuracy and speed of CBMIR in an integrated RIS/PACS.
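The generic pattern behind deep-learning retrieval, not necessarily the authors' framework, is to embed each image with a CNN and answer a query by similarity ranking over the stored embeddings. A self-contained sketch with random stand-in embeddings:

```python
# Generic deep-feature retrieval sketch: rank database images by cosine
# similarity to a query embedding. Random vectors stand in for CNN features.
import numpy as np

rng = np.random.default_rng(0)
db_features = rng.normal(size=(1000, 512))   # stand-in for CNN embeddings
query = rng.normal(size=512)

def top_k(query_vec, features, k=5):
    """Indices and scores of the k most cosine-similar database images."""
    q = query_vec / np.linalg.norm(query_vec)
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ q
    order = np.argsort(-sims)[:k]
    return order, sims[order]

idx, scores = top_k(query, db_features)
print(idx, scores.round(3))
```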
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013813 (2017) https://doi.org/10.1117/12.2254050
Spectral domain optical coherence tomography (SDOCT) is routinely used in the management and diagnosis of a variety of ocular diseases. This imaging modality also finds widespread use in research, where quantitative measurements obtained from the images are used to track disease progression. In recent years, the number of available scanners and imaging protocols has grown, and there is a distinct absence of a unified tool capable of visualizing, segmenting, and analyzing the data. This is especially noteworthy in longitudinal studies, where data from older scanners and/or protocols may need to be analyzed. Here, we present a graphical user interface (GUI) that allows users to visualize and analyze SDOCT images obtained from two commonly used scanners. The retinal surfaces in the scans can be segmented using a previously described method, and the retinal layer thicknesses can be compared to a normative database. If necessary, the segmented surfaces can also be corrected and the changes applied. The interface also allows users to import and export retinal layer thickness data to an SQL database, thereby allowing the collation of data from a number of collaborating sites.
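The SQL import/export step could look like the following sqlite3 sketch; the schema and engine are hypothetical, since the abstract does not specify them:

```python
# Hedged sketch of exporting/querying layer thickness data via SQL
# (hypothetical schema; values are illustrative, in micrometers).
import sqlite3

conn = sqlite3.connect("oct_thickness.db")
conn.execute("""CREATE TABLE IF NOT EXISTS layer_thickness (
                  subject_id TEXT, scan_date TEXT, scanner TEXT,
                  layer TEXT, mean_thickness_um REAL)""")

rows = [
    ("S001", "2016-05-01", "ScannerA", "RNFL", 94.2),
    ("S001", "2016-05-01", "ScannerA", "GCL",  81.7),
]
conn.executemany("INSERT INTO layer_thickness VALUES (?, ?, ?, ?, ?)", rows)
conn.commit()

for row in conn.execute(
        "SELECT layer, mean_thickness_um FROM layer_thickness "
        "WHERE subject_id = ?", ("S001",)):
    print(row)
conn.close()
```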
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013814 (2017) https://doi.org/10.1117/12.2254137
In this paper, we describe an enhanced DICOM Secondary Capture (SC) that integrates image quantification (IQ) results, regions of interest (ROIs), and time activity curves (TACs) with screenshots by embedding extra medical imaging information into a standard DICOM header. A DICOM IQSC software toolkit has been developed to implement this SC-centered integration of quantitative analysis information for the routine practice of nuclear medicine. Preliminary experiments show that the DICOM IQSC method is simple to implement and seamlessly integrates post-processing workstations with PACS for archiving and retrieving IQ information. Additional DICOM IQSC applications in routine nuclear medicine and clinical research are also discussed.
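One way to embed extra quantification data in a DICOM header is through a private block, sketched here with pydicom; the private group, payload layout, and file names are hypothetical, not the paper's actual encoding:

```python
# Hedged sketch: store ROI statistics and TAC samples in a private block of
# a DICOM Secondary Capture header (hypothetical group and payload layout).
import json
import pydicom

ds = pydicom.dcmread("sc_screenshot.dcm")  # hypothetical SC object

# Reserve a private block in odd group 0x0011 and store a JSON payload.
block = ds.private_block(0x0011, "IQSC DEMO", create=True)
payload = {"roi_mean": 4.37, "roi_max": 7.91,
           "tac": [[0, 0.1], [30, 2.4], [60, 4.0]]}  # (sec, uptake) pairs
block.add_new(0x01, "LT", json.dumps(payload))

ds.save_as("sc_with_iq.dcm")
```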
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013815 (2017) https://doi.org/10.1117/12.2255801
The key step in minimally invasive intracerebral hemorrhage surgery is precisely locating the hematoma in the brain before and during surgery, which can significantly improve the success rate of hematoma puncture. We designed a 3D computerized surgical planning (CSP) workstation to precisely locate brain hematomas based on the multi-planar reconstruction (MPR) visualization technique. We used ten patients' CT/MR studies to verify our CSP intracerebral hemorrhage localization method. Based on the doctors' assessment and a comparison with the results of manual measurements, the output of the CSP workstation for hematoma surgery is more precise and reliable than the manual procedure.
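The basic MPR operation underlying such a workstation is reslicing one volume along orthogonal planes through a chosen point, e.g. the hematoma centroid. A minimal sketch on a toy volume (coordinates invented):

```python
# Minimal MPR sketch: orthogonal reslices of a volume through one point.
import numpy as np

volume = np.random.rand(128, 128, 128)   # (z, y, x) stand-in for a CT study
z, y, x = 64, 50, 70                      # hypothetical hematoma centroid

axial    = volume[z, :, :]   # plane at fixed z
coronal  = volume[:, y, :]   # plane at fixed y
sagittal = volume[:, :, x]   # plane at fixed x

# Cross-referencing the same point in all three views is what lets the
# planner read off a puncture trajectory and depth.
print(axial.shape, coronal.shape, sagittal.shape)
```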
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013816 (2017) https://doi.org/10.1117/12.2254246
This study developed and tested a multi-probe resonance-frequency-based electrical impedance spectroscopy (REIS) system aimed at the detection of breast cancer. The REIS system consists of a specially designed mechanical supporting device that can be easily adjusted to fit women of different heights, a seven-probe sensor cup, and a computer providing software for system control and management. The sensor cup includes one central probe for direct contact with the nipple and six probes uniformly distributed at a distance of 35 mm from the central probe to contact the breast skin surface. The system completes a data acquisition process in about 18 seconds. We used this system for breast cancer examination, collecting a dataset of 289 cases comprising 74 biopsy-verified malignant and 215 benign tumors. From these data, 23 REIS-based features were extracted, including seven frequency features, fifteen magnitude features, and one age feature. To reduce redundancy, we selected 6 features using an evolutionary algorithm for classification. The area under the receiver operating characteristic curve (AUC) was computed to assess classifier performance, and a multivariable logistic regression method was used to detect the tumors. For the 23 REIS features, the AUC, accuracy, sensitivity, and specificity were 0.796, 0.727, 0.731, and 0.726, respectively; for the 6 selected features they were 0.840, 0.80, 0.703, and 0.833, respectively. The frequency-based and magnitude-based REIS features alone yielded AUCs of 0.662 and 0.619, respectively. The performance of the classifier using the 6 selected features was significantly better than using magnitude features alone (p=3.29e-08) or frequency features alone (p=5.61e-07). The SMOTE algorithm was used to oversample the minority class to balance the dataset; after balancing, the AUC increased to 0.846, higher than the classification performance on the original data. The results indicate that the REIS system is a promising tool for breast cancer detection and may be acceptable for clinical implementation.
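A hedged sketch of the evaluation pipeline named above, multivariable logistic regression scored by AUC, using synthetic data in place of the 289-case REIS dataset:

```python
# Sketch of the classification/evaluation pipeline: logistic regression on a
# few selected features, scored by AUC. Data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(289, 6))                      # 6 selected features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=289)) > 0  # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"AUC = {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.3f}")
```

Class balancing as in the paper could be added before fitting, e.g. with the SMOTE implementation in the imbalanced-learn package.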
Jan Egger, Markus Gall, Jürgen Wallner, Pedro de Almeida Germano Boechat, Alexander Hann, Xing Li, Xiaojun Chen, Dieter Schmalstieg
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013817 (2017) https://doi.org/10.1117/12.2263234
Virtual Reality (VR) is an immersive technology that replicates an environment via computer-simulated reality. VR receives a lot of attention in computer games but also has great potential in other areas, such as the medical domain. Examples are planning, simulation, and training of medical interventions, for instance in facial surgeries where an aesthetic outcome is important. However, importing medical data into VR devices is not trivial, especially when a direct connection and visualization from one's own application is needed. Furthermore, most researchers don't build their medical applications from scratch; rather, they use platforms like MeVisLab, Slicer, or MITK. These platforms have in common that they integrate and build upon libraries like ITK and VTK, providing a more convenient graphical interface to them for the user. In this contribution, we demonstrate the use of a VR device for medical data under MeVisLab. To this end, we integrated the OpenVR library into MeVisLab as a custom module. This enables the direct and uncomplicated use of head-mounted displays, like the HTC Vive, under MeVisLab. In summary, medical data from other MeVisLab modules can be connected directly per drag-and-drop to our VR module and rendered inside the HTC Vive for immersive inspection.
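The paper integrates OpenVR in C++ inside MeVisLab; as a rough, hedged analogue of the minimal OpenVR handshake such a module performs, here is a sketch using the pyopenvr binding (requires SteamVR and a connected headset):

```python
# Hedged analogue of the OpenVR startup handshake (pyopenvr; the actual
# module is C++ inside MeVisLab).
import openvr  # pip install openvr

vr_system = openvr.init(openvr.VRApplication_Scene)
try:
    # Per-eye render-target size the HMD expects, e.g. for the HTC Vive.
    width, height = vr_system.getRecommendedRenderTargetSize()
    print(f"render target per eye: {width} x {height}")
    # A render loop would then submit left/right eye textures to the
    # compositor each frame via openvr.VRCompositor().submit(...).
finally:
    openvr.shutdown()
```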
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013818 (2017) https://doi.org/10.1117/12.2263476
Many modalities have been developed as screening tools for breast cancer. A newer screening method, acoustic radiation force impulse (ARFI) imaging, was created to distinguish breast lesions based on localized tissue displacement. This displacement is quantified by virtual touch tissue imaging (VTI). However, VTIs sometimes show results opposite to the intensity information in clinical observation. In this study, a fuzzy-based neural network with principal component analysis (PCA) was proposed to differentiate the texture patterns of malignant from benign breast tumors. Eighty VTIs were retrospectively and randomly collected. Thirty-four patients were classified as BI-RADS category 2 or 3, and the rest as BI-RADS category 4 or 5, by two leading radiologists. Morphological operations and Boolean algebra were applied as image preprocessing to acquire regions of interest (ROIs) on the VTIs. Twenty-four quantitative parameters derived from first-order statistics (FOS), fractal dimension, and the gray-level co-occurrence matrix (GLCM) were used to analyze the texture pattern of breast tumors on VTIs. PCA was employed to reduce the dimensionality of the features, and a fuzzy-based neural network served as the classifier to differentiate malignant from benign breast tumors. An independent-samples test was used to examine the significance of the difference between benign and malignant breast tumors. The area Az under the receiver operating characteristic (ROC) curve, sensitivity, specificity, and accuracy were calculated to evaluate system performance. Almost all texture parameters showed significant differences between malignant and benign tumors (p < 0.05), except the average fractal dimension. For all features classified by the fuzzy-based neural network, the sensitivity, specificity, accuracy, and Az were 95.7%, 97.1%, 95%, and 0.964, respectively. When PCA was applied to reduce the feature dimensionality, these improved to 100%, 97.1%, 98.8%, and 0.985, respectively. The texture patterns of breast tumors on VTIs can be effectively recognized by quantitative texture parameters, and malignant lesions can be differentiated from benign ones by a fuzzy-based neural network with PCA.
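Two of the feature families named above, GLCM texture descriptors followed by PCA, can be sketched with scikit-image and scikit-learn; a random patch stands in for a VTI ROI, and the specific properties chosen are illustrative:

```python
# Sketch of GLCM texture features + PCA dimensionality reduction
# (illustrative feature choices; random data stand in for VTI ROIs).
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greyco...' in old versions
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in ROI

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = np.hstack([graycoprops(glcm, prop).ravel()
                      for prop in ("contrast", "homogeneity",
                                   "energy", "correlation")])

# PCA needs many samples; 80 fake feature vectors mimic the 80 VTIs.
X = rng.normal(size=(80, features.size))
X_reduced = PCA(n_components=3).fit_transform(X)
print(features.shape, X_reduced.shape)
```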
Huiqun Wu, Yufang Wei, Brent J. Liu, Yujuan Shang, Lili Shi, Kui Jiang, Jiancheng Dong
Proceedings Volume Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, 1013819 (2017) https://doi.org/10.1117/12.2264642
Diabetic retinopathy (DR) is one of the serious complications of diabetes and can lead to blindness. Digital fundus cameras are often used to detect retinal changes, but the diagnosis relies heavily on the ophthalmologist's experience. Based on our previously developed algorithms for quantifying retinal vessels and lesions, we developed a computer-aided detection structured report (CAD-SR) template and implemented it in a picture archiving and communication system (PACS). Furthermore, we mapped our CAD-SR into HL7 CDA to integrate the CAD findings into a diabetes patient electronic patient record (ePR) system. Such integration brings more quantitative features from fundus images into the ePR system, which is valuable for further data mining research.
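The CAD-SR to HL7 CDA mapping step could wrap each quantitative finding in a CDA-style observation. The fragment below is a greatly simplified, hypothetical sketch (a real CDA document needs the full header, namespaces, and templateIds; the observation code is invented):

```python
# Greatly simplified sketch of a CDA-style observation for one quantitative
# fundus finding (hypothetical code and value; not a complete CDA document).
import xml.etree.ElementTree as ET

obs = ET.Element("observation", classCode="OBS", moodCode="EVN")
ET.SubElement(obs, "code", code="VESSEL_TORTUOSITY",
              displayName="Retinal vessel tortuosity index")
ET.SubElement(obs, "value", {"xsi:type": "PQ", "value": "1.12", "unit": "1"})

print(ET.tostring(obs, encoding="unicode"))
```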