This PDF file contains the front matter associated with SPIE Proceedings Volume 8674, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
To date, conducting retrospective clinical analyses is rather difficult and time-consuming. Especially in radiation oncology, handling voluminous datasets from various information systems and different documentation styles efficiently is crucial for patient care and research. Using patients with pancreatic cancer treated with radio-chemotherapy as an example, we performed a therapy evaluation with analysis tools connected to a documentation system. A total of 783 patients have been documented in a professional, web-based documentation system, and information about radiation therapy, diagnostic images and dose distributions has been imported. For patients with disease progression after neoadjuvant chemoradiation, we designed and established an analysis workflow. After automatic registration of the radiation plans with the follow-up images, the recurrence volumes are segmented manually. Based on these volumes, the dose-volume histogram (DVH) statistics are calculated, followed by the determination of the dose applied to the region of recurrence. All results are stored in the database and included in statistical calculations. The main goal of using an automatic evaluation system is to reduce the time and effort of conducting clinical analyses, especially with large patient groups. We have shown a first approach that uses some existing tools; however, manual interaction is still necessary, and further steps need to be taken to enhance automation. It has already become apparent that the benefits of digital data management and analysis lie in the central storage of data and the reusability of results. We therefore intend to adapt the evaluation system to other tumor types in radiation oncology.
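The abstract does not detail the DVH computation; as a minimal sketch, a cumulative DVH over the segmented recurrence volume can be derived directly from the registered dose grid (array names and the binning choice are assumptions):

```python
import numpy as np

def cumulative_dvh(dose, recurrence_mask, n_bins=100):
    """Cumulative DVH: fraction of the recurrence volume receiving at least
    each dose level. `dose` (Gy) must already be registered/resampled to the
    grid of the boolean `recurrence_mask`."""
    d = dose[recurrence_mask]
    levels = np.linspace(0.0, d.max(), n_bins)
    volume_fraction = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_fraction

# Example summary statistic: minimum dose covering 95% of the recurrence (D95).
# levels, vf = cumulative_dvh(dose, mask)
# d95 = levels[vf >= 0.95][-1]   # vf is non-increasing in dose
```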
Large volumes of data are common in medical imaging and require special applications that enable fast and reliable processing. High-performance computing enables such processing by running applications in parallel. This work presents a cluster architecture for the parallel processing of DICOM datasets in which the functionality is specified not by the application itself but by a set of plugins. Results show that processing times decrease with each added node while network latency remains stable.
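The plugin mechanism is not specified in the abstract; a minimal single-machine sketch of the idea uses a plugin registry and a worker pool standing in for cluster nodes (the plugin name and feature are hypothetical):

```python
from multiprocessing import Pool
from pathlib import Path
import pydicom

PLUGINS = {}  # plugin name -> callable

def plugin(name):
    def register(fn):
        PLUGINS[name] = fn
        return fn
    return register

@plugin("mean_intensity")
def mean_intensity(path):
    """Example plugin (hypothetical): mean pixel intensity of one DICOM file."""
    ds = pydicom.dcmread(path)
    return path.name, float(ds.pixel_array.mean())

def run(plugin_name, dicom_dir, workers=4):
    files = sorted(Path(dicom_dir).glob("*.dcm"))
    with Pool(workers) as pool:          # workers stand in for cluster nodes
        return pool.map(PLUGINS[plugin_name], files)

# results = run("mean_intensity", "study_dir")  # guard with __main__ on Windows
```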
Anatomical contexts (spatial labels) are critical for the interpretation of medical imaging content. Numerous approaches have been devised for segmentation, query, and retrieval within the Picture Archiving and Communication System (PACS) framework. To date, application-based methods for anatomical localization and tissue classification have yielded the most successful results, but these approaches typically rely upon the availability of standardized imaging sequences. With the ever-expanding scope of PACS archives — including multiple imaging modalities, multiple image types within a modality, and multi-site efforts — it is becoming increasingly burdensome to devise a specific method for each data type. To address the challenge of generalizing segmentations from one modality to another, we consider multi-atlas segmentation to transfer label information from labeled T1-weighted MRI data to unlabeled data collected in a diffusion tensor imaging (DTI) experiment. The label transfer approach is fully automated and enables a generalizable cross-modality segmentation method. Herein, we propose a multi-tier multi-atlas segmentation framework for the segmentation of previously unlabeled imaging modalities (e.g., B0 images for DTI analysis). We show that this approach can be used to construct informed structure-wise noise estimates for fractional anisotropy (FA) measurements of DTI. Although this label transfer methodology is demonstrated in the context of quality control of DTI images, the proposed framework is applicable to any application where the segmentation of unlabeled modalities is limited by the currently available collection of atlases.
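Once labels have been transferred to the DTI space, the structure-wise noise estimates reduce to per-label statistics over the FA map; a minimal NumPy sketch, assuming integer labels with 0 as background:

```python
import numpy as np

def structure_stats(fa, labels):
    """Structure-wise FA summary: mean and standard deviation per
    transferred label.

    fa     : 3D fractional-anisotropy map
    labels : 3D integer label volume from multi-atlas label transfer
    """
    stats = {}
    for lab in np.unique(labels):
        if lab == 0:
            continue                       # skip background
        vals = fa[labels == lab]
        stats[lab] = (vals.mean(), vals.std())  # structure-wise noise estimate
    return stats
```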
Yurui Gao, Scott S. Burns, Carolyn B. Lauzon, Andrew E. Fong, Terry A. James, Joel F. Lubar, Robert W. Thatcher, David A. Twillie, Michael D. Wirt, et al.
Proceedings Volume Medical Imaging 2013: Advanced PACS-based Imaging Informatics and Therapeutic Applications, 867405 (2013) https://doi.org/10.1117/12.2007621
Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and the eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.
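The validation stage is only named in the abstract; as one plausible sketch, incoming DICOM files could be screened for required attributes with pydicom before archiving to XNAT (the attribute list is an assumption, not the authors' rule set):

```python
import pydicom

REQUIRED = ["PatientID", "StudyInstanceUID", "SeriesInstanceUID",
            "SOPInstanceUID", "Modality"]

def validate_dicom(path):
    """Return the list of missing required attributes (empty list = valid)."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # header only, faster
    return [tag for tag in REQUIRED if getattr(ds, tag, None) in (None, "")]
```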
Volumetric medical images contain an enormous amount of visual information that can discourage the exhaustive use of local descriptors for image analysis, comparison and retrieval. The distinctive features and patterns that need to be analyzed for finding diseases are most often local or regional, frequently confined to very small parts of the image. Separating out the large amount of image data that contains little important information is an important task, as it could reduce the current information overload on physicians and make clinical work more efficient. In this paper a novel method for detecting key regions is introduced, extending the concept of keypoints often used in 2D image analysis. This also reduces computation, as important visual features are extracted only from the detected key regions. The region detection method is integrated into a platform-independent, web-based graphical interface for medical image visualization and retrieval in three dimensions. This web-based interface makes it easy to deploy on existing infrastructures in both small and large-scale clinical environments. By including the region detection method in the interface, manual annotation is reduced and time is saved, making it possible to integrate the presented interface and methods into clinical routine and workflows and to analyze image data at a large scale.
We present a dedicated segmentation system for tumor identification and volumetric quantification in dynamic contrast-enhanced brain magnetic resonance (MR) scans. Our goal is to offer a practically useful tool in the hands of clinicians to support volumetric tumor assessment. The system is designed to work in an interactive mode that maximizes the integration of computing capacity and clinical intelligence. We demonstrate the main functions of the system in terms of its functional flow and conduct a preliminary validation using a representative pilot dataset. The system is inexpensive, user-friendly, easy to deploy and integrate with picture archiving and communication systems (PACS), and can potentially be open-source, enabling it to serve as a useful assistant for radiologists and oncologists. We anticipate that the system can be integrated into the clinical workflow and become routinely available, helping clinicians make more objective interpretations of treatment interventions and the natural history of disease to best advocate patient needs.
One of the biggest challenges in dose monitoring is customizing CT dose estimates to the patient, and patient size remains a highly significant variable. One metric that has previously been used for patient size is patient weight, though this is often criticized as inaccurate. In this work, we compare patients' weight to their effective diameters obtained from a CT scan of the chest or the abdomen. CT exams of the chest (N=163) and abdomen/pelvis (N=168) performed on adult patients in July 2012 were randomly selected for analysis. The effective diameter of the patient for each exam was determined at the central slice of the scan region using eXposure™ (Radimetrics, Inc., Toronto, Canada). In some cases, the same patient had both a chest and an abdominopelvic CT, so effective diameters from both regions were analyzed. In this small sample, there appears to be a linear relationship between patient weight and effective diameter measured in the mid-chest and mid-abdomen of adult patients. However, for a given weight, patient effective diameter can vary by 5 cm from the regression line in both the chest and the abdomen. A 5-cm difference corresponds to a difference of approximately 0.2 (chest) and 0.3 (abdomen/pelvis) in the correction factors recommended by the AAPM for size-specific dose estimates. These preliminary data suggest that weight-based CT protocoling may in fact be appropriate for some adults; however, more work is needed to identify those patients for whom weight-based protocoling is not appropriate.
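The effective diameters here were obtained from eXposure; for illustration, the area-based definition (the diameter of the circle whose area matches the patient's cross-sectional area, as in AAPM Report 204) can be computed from the central slice as follows (the HU threshold for the body mask is an assumption):

```python
import numpy as np

def effective_diameter(ct_slice, pixel_spacing_mm, hu_threshold=-250):
    """Effective diameter (cm) of the patient cross-section on one axial slice.

    ct_slice         : 2D array of Hounsfield units
    pixel_spacing_mm : (row, col) pixel spacing from the DICOM header
    """
    body = ct_slice > hu_threshold                 # crude body mask by HU threshold
    area_mm2 = body.sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return 2.0 * np.sqrt(area_mm2 / np.pi) / 10.0  # d = 2*sqrt(A/pi), mm -> cm
```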
Pain is a common complication after spinal cord injury that strongly affects patients' lifestyle and well-being. For better treatment, accurate classification of pain is very important, and it depends directly on the information patients provide to physicians. Currently, with limited knowledge of pain-related information, patients may end up taking medications that are not suitable or required. At the Loma Linda Proton Treatment and Research Center, technical advances are being made to treat functional disorders of the spinal portion of the central nervous system using radiosurgery. This paper presents the overall workflow design for the project. We also introduce a web-based pain classifier tool that allows a patient to select multiple pain locations and group them according to pain properties such as severity, location, and occurrence. This computer-aided pain classifier tool can be integrated with medical imaging, so that physicians who want to compare the pain information provided by patients with imaging data can do both at the same time. The pain classifier application described here will be a major component of patient recruitment in phase 0 of the study.
At last year's SPIE, we presented a multiple sclerosis (MS) eFolder as an integrated imaging-informatics system providing several functionalities to both clinical and research environments. The eFolder system combines patients' clinical data, radiological images, and computer-aided lesion detection and quantification results to aid longitudinal tracking, data mining, decision support, and other clinical and research needs. To demonstrate how this system can be integrated into an existing imaging environment, such as a large-scale multi-site MS clinical trial, we present a system infrastructure that streamlines imaging and clinical data flow with post-processing (CAD) steps. The system stores clinical and imaging data, provides the CAD post-processing algorithm and data storage, and offers a web-based graphical user interface (GUI) to view clinical trial data and monitor workflow. To evaluate the infrastructure, the MS eFolder is set up in a simulated environment with workflow scenarios including DICOM store, query, and retrieve, automatic CAD steps, and data mining based on CAD results. This paper discusses the methodology of setting up the eFolder simulation with a connection to a CAD server component, presents the simulation performance and test results, and discusses the eFolder system deployment results.
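The DICOM store scenario of the simulated workflow could be driven by a test script; a minimal C-STORE sketch with pynetdicom (AE title, host, port and file name are placeholders, not the eFolder configuration):

```python
from pynetdicom import AE
from pynetdicom.sop_class import MRImageStorage
import pydicom

ae = AE(ae_title="EFOLDER_TEST")
ae.add_requested_context(MRImageStorage)

ds = pydicom.dcmread("ms_brain_mr.dcm")      # hypothetical test image
assoc = ae.associate("127.0.0.1", 11112)     # simulated archive node
if assoc.is_established:
    status = assoc.send_c_store(ds)
    print("C-STORE status: 0x%04x" % status.Status)
    assoc.release()
```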
As there are urgent demands to bring medical imaging research and clinical service closer together to solve problems related to disease discovery and medical research, a new imaging informatics infrastructure needs to be developed to allow medical researchers and clinical physicians from multiple disciplines to work together in a secure and efficient cooperative environment. In this presentation, we outline our work on building a Biomedical Imaging Informatics "e-Science" platform with integrated high-performance image sharing, collaboration, and computing to support multi-disciplinary translational biomedical imaging research in multiple affiliated hospitals and academic institutions in Shanghai.
The DICOM standard defines the application layer network protocol used to send and receive medical images. DICOM is defined on top of TCP. DICOM addresses many issues associated with medical image transmission; however, sending and receiving large studies is inefficient because they are transmitted one object at a time. The Multi-Series DICOM (MSD) format has been introduced as a solution to this problem. It can store an entire study in a single object. In addition, the metadata information in the MSD object is free of repetition. In this work, the performance of sending and receiving DICOM studies as MSD objects is investigated. A set of DICOM studies is stored in two formats, traditional Single-Frame DICOM (SFD) and MSD. The times required to send the studies in both formats synchronously and asynchronously are measured. The results show that there is a significant reduction in the time required to synchronously send the studies in the MSD format compared to the SFD format and a small improvement when sending asynchronously. Sending studies synchronously in the SFD format results in a delay waiting for the acknowledgement for each DICOM object sent before sending subsequent ones. With the asynchronous approach, the time reduction is a direct result of the difference in metadata size between the SFD and MSD formats and the lower number of acknowledgements sent back from the receiving application entity to the sender. The results show that it is more efficient to send DICOM studies as MSD objects whether synchronously or asynchronously.
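To make the synchronous penalty concrete, here is a toy timing model: with one acknowledgement round-trip per object, the per-object overhead multiplies with study size, whereas a single MSD object pays it once (all numbers below are illustrative, not from the paper):

```python
# Toy model (illustrative only): synchronous SFD transfer waits one
# acknowledgement round-trip per object; one MSD object pays it once.
def sfd_sync_time(n_objects, bytes_per_object, bandwidth_bps, rtt_s):
    per_object = bytes_per_object * 8 / bandwidth_bps + rtt_s
    return n_objects * per_object

def msd_time(total_bytes, bandwidth_bps, rtt_s):
    return total_bytes * 8 / bandwidth_bps + rtt_s   # one object, one ack

# Hypothetical 500-slice study, 512 KB/slice, 100 Mbit/s LAN, 2 ms RTT:
n, size = 500, 512 * 1024
print(sfd_sync_time(n, size, 100e6, 0.002))  # ~22.0 s
print(msd_time(n * size, 100e6, 0.002))      # ~21.0 s (metadata savings not modeled)
```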
Although image-based measures have become an important surrogate for primary endpoints in controlled clinical trials, electronic data capture (EDC) insufficiently supports image and signal data files. In this paper, we suggest a simple extension of OpenClinica, the world's largest open-source EDC system, to handle image data files, process image and signal data, and fill out the electronic case report forms (eCRF) accordingly. We use the web service server interface that is integrated with OpenClinica. The missing client component is substituted by CRF-embedded JavaScript and a PHP proxy on the server side. JavaScript is also used to display images within the OpenClinica interface. The counterpart system was developed using the Google Web Toolkit (GWT) and the Java application programming interface (API) for eXtensible Markup Language (XML) web services (JAX-WS). Image processing is implemented in Java using ImageJ libraries. We demonstrate the workflow for CRFs of a conjunctival provocation test, where two photographs of a human eye are captured, transferred into the eCRF, segmented and measured. The secure file transfer protocol (SFTP) is used to transfer the data files between the systems, and web services are used to fill the eCRFs, which also integrate the resulting images generated by the analysis process. Both the images and the computed measures are displayed automatically within the OpenClinica eCRFs and can be evaluated by the study nurse after file upload. This allows images to be re-captured in case of evaluation failure and avoids laborious query management. In the future, DICOM-based data transfer will be implemented.
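The SFTP leg of the data flow might look like the following paramiko sketch (host, credentials and paths are placeholders, not the authors' configuration):

```python
import paramiko

# Transfer a captured eye photograph to the processing server
# (all connection details below are placeholders).
transport = paramiko.Transport(("openclinica-proc.example.org", 22))
transport.connect(username="ecrf_upload", password="...")
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put("eye_left.jpg", "/incoming/eye_left.jpg")
sftp.close()
transport.close()
```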
Every year hundreds of thousands of biomedical images are published in journals and conferences. Consequently, finding images relevant to one's interests becomes an ever more daunting task. This vast amount of literature creates a need for intelligent and easy-to-use tools that can help researchers effectively navigate the content corpus and conveniently locate materials of interest. Traditionally, literature search tools allow users to query content using topic keywords. However, manual query composition is often time- and energy-consuming. A better system would be one that can automatically deliver relevant content to a researcher without requiring the end user to manually express search intent and interests via queries. Such computer-aided assistance for information access can be provided by a system that first determines a researcher's interests automatically and then recommends images relevant to those interests accordingly. The technology can greatly improve researchers' ability to stay up to date in their fields of study by allowing them to efficiently browse images and documents matching their needs and interests amid the vast biomedical literature. A prototype implementation of the technology can be accessed via http://www.smartdataware.com.
Quantitative Analysis and Diagnostics, Knowledge, Search and Data Mining
The goal of our work is to lay the foundation for implementing a personal-computer mammography content-based image retrieval (MCBIR) system that can search a small to midsized clinical practice's picture archiving and communication system (PACS). For a system to be relevant to clinicians it must be able to operate over a large dataset: the number of mammograms within a PACS can grow by as many as 8,000 images per month, and the amount of training data available can impact MCBIR retrieval performance. We therefore elected to use the largest publicly available mammography dataset, the Digital Database for Screening Mammography (DDSM). We propose a non-distributed approach to MCBIR and confirm its feasibility by applying it to modernizing the DDSM. Our modernization work includes: encoding the dataset's images in the DICOM-supported PNG lossless compression format; using a combination of an embedded database and compressed files to store textual data; and performing image segmentation to extract the breast regions in the DDSM's 10,411 usable mammograms. Our segmentation algorithm uses a combination of thresholding and seeded region growing, and the resulting image masks are stored in compressed files. We implemented ImageJ plug-ins to support our work. MCBIR work generally employs distributed approaches such as client/server computing or web services; our work demonstrates that approaches using a single personal computer are now feasible due to increases in computing power. Our work on the DDSM has implications for the system requirements of clinical MCBIR systems. We found that the new dataset requires less than 256 GB of storage. We were able to perform rapid automated breast region segmentation with acceptable results in 98.15% of the dataset's 10,411 images; mean processing time for segmentation was 22.1 seconds per image while processing three images concurrently. Due to the DDSM's inaccessibility, researchers often either use a small subset of the available mammograms or abandon the DDSM altogether for a much smaller, but more usable, dataset. Our work makes the entire DDSM accessible. We use standard open-source/public-domain technologies, including ImageJ and the H2 embedded SQL database. We also believe that the approach used for the DDSM will be similar to approaches for MCBIR storage and processing in future clinical PACS.
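The authors implemented the segmentation as ImageJ plug-ins; a comparable thresholding-plus-seeded-region-growing sketch in Python with scikit-image (the seed heuristic and parameters are assumptions, not the paper's algorithm):

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.segmentation import flood
from skimage.morphology import remove_small_objects

def segment_breast(mammogram):
    """Thresholding + seeded region growing, comparable in spirit to the
    paper's plug-in (parameters here are illustrative)."""
    thresh = threshold_otsu(mammogram)
    foreground = mammogram > thresh
    foreground = remove_small_objects(foreground, min_size=5000)
    # Seed the growth at the brightest foreground pixel (simple heuristic).
    seed = np.unravel_index(np.argmax(mammogram * foreground), mammogram.shape)
    mask = flood(mammogram, seed, tolerance=float(mammogram[seed]) - thresh)
    return mask & foreground
```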
When physicians search for articles in the medical literature, the images in the articles can help determine the relevance of an article's content for a specific information need. The visual image representation can be an advantage in effectiveness (the quality of the articles found) and also in efficiency (the speed of determining relevance or irrelevance), as many articles can likely be excluded much more quickly by looking at a few representative images. In domains such as medical information retrieval, being able to determine relevance quickly and accurately is an important criterion. This becomes even more important when small interfaces are used, as is frequently the case on the mobile phones and tablets used to access scientific data whenever information needs arise. Scientific articles contain many figures, and particularly in the biomedical literature only a subset may be relevant for determining the relevance of a specific article to an information need. In many cases clinical images can be seen as more important for visual appearance than graphs or histograms, which require looking at the context for interpretation. To get a clearer idea of image relevance in articles, a user test was performed in which a physician classified the images of biomedical research articles into categories of importance; these categories can subsequently be used to evaluate algorithms that automatically select images as representative examples. The manual sorting by importance of the images of 50 BioMedCentral journal articles, each containing more than 8 figures, also allows several rules to be derived that determine how to choose images and how to develop algorithms for choosing the most representative images of specific texts. This article describes the user tests and can be a first important step toward evaluating automatic tools that select representative images for articles, and potentially for images in other contexts, for example when representing patient records or other medical concepts, such as selecting images to represent RadLex terms in tutorials or interactive interfaces. This can help make the image retrieval process more efficient and effective for physicians.
Journal images represent an important part of the knowledge stored in the medical literature. Figure classification has received much attention, as information on the image types can be used in a variety of contexts to focus image search and filter out unwanted information or "noise", for example non-clinical images. A major problem in figure classification is the fact that many figures in the biomedical literature are compound figures that often contain more than a single figure type. Some journals separate compound figures into several parts but many do not, currently requiring manual separation. In this work, a technique for compound figure separation is proposed and implemented based on the systematic detection and analysis of uniform space gaps. The method discussed in this article is evaluated on a dataset of journal figures from the open-access literature that was created for the ImageCLEF 2012 benchmark and contains about 3,000 compound figures. Automatic tools can easily reach a relatively high accuracy in separating compound figures. To further increase accuracy, efforts are needed to improve the detection process as well as to avoid over-separation with more powerful analysis strategies. The tools of this article have also been applied to a database of approximately 150,000 compound figures from the biomedical literature, making these images available as separate figures for further image analysis and allowing important information to be filtered from them.
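A minimal sketch of uniform-space-gap detection, returning candidate cut bands along one axis (the near-white threshold and minimum band width are assumptions; the paper's exact analysis is not specified in the abstract):

```python
import numpy as np

def gap_bands(figure, axis=0, white=240, min_width=10):
    """Detect uniform whitespace bands along one axis of a compound figure.

    Returns (start, end) index pairs of candidate separation gaps, found
    where every pixel across the other axis is near-white. Bands touching
    the image border are ignored, as they are not internal gaps.
    """
    gray = figure.mean(axis=2) if figure.ndim == 3 else figure
    is_blank = (gray >= white).all(axis=1 - axis)   # blank rows or columns
    bands, start = [], None
    for i, blank in enumerate(is_blank):
        if blank and start is None:
            start = i
        elif not blank and start is not None:
            if i - start >= min_width and start > 0:
                bands.append((start, i))
            start = None
    return bands
```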
A new example-based mass segmentation algorithm is proposed for breast mass images. The training examples used in the new algorithm were prepared by three medical imaging professionals who manually outlined the mass contours of 45 sample breast mass images. These manually segmented mass images are then partitioned into small regular grid cells, which are used as reference samples by the algorithm. Each time the algorithm is applied to segment a previously unseen breast mass image, it first detects grid-cell regions in the image that likely overlap with the underlying mass region. Upon identifying such candidate regions, the algorithm then locates the exact mass contour through an example-based segmentation procedure in which it retrieves, transfers, and re-applies the human expert knowledge about mass segmentation encoded in the reference samples. The key advantage of our approach lies in its adaptability: it can be tailored to the skills and preferences of multiple experts by simply switching to a different corpus of human segmentation samples. To explore the effectiveness of the new approach, we comparatively evaluated the accuracy of the algorithm against segmentation results produced both manually by several medical imaging professionals and automatically by a state-of-the-art level-set-based method. The comparison demonstrates that the new algorithm achieves higher accuracy than the level-set-based peer method with statistical significance.
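The cell features and matching procedure are not specified in the abstract; a minimal nearest-neighbour sketch of the candidate detection step (feature choice, cell size and vote threshold are all assumptions):

```python
import numpy as np

def cell_features(img, cell=32):
    """Split an image into regular grid cells; compute a simple intensity
    histogram per cell (illustrative feature choice)."""
    h, w = img.shape
    feats, coords = [], []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            patch = img[y:y + cell, x:x + cell]
            feats.append(np.histogram(patch, bins=16, range=(0, 255))[0])
            coords.append((y, x))
    return np.array(feats, dtype=float), coords

def candidate_cells(img, ref_feats, ref_is_mass, k=5):
    """Flag a cell as a mass candidate when most of its k nearest reference
    cells (from the expert-segmented corpus) overlap a mass.

    ref_feats   : array of reference-cell feature vectors
    ref_is_mass : boolean array, True where a reference cell overlaps a mass
    """
    feats, coords = cell_features(img)
    out = []
    for f, c in zip(feats, coords):
        nn = np.argsort(np.linalg.norm(ref_feats - f, axis=1))[:k]
        if ref_is_mass[nn].mean() > 0.5:
            out.append(c)
    return out
```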
Image modality classification is an important task toward achieving high performance in biomedical image and article retrieval. An image's modality captures information about its appearance and use; examples include X-ray, MRI, histopathology, and ultrasound. Modality classification reduces the search space in image retrieval. We have developed and evaluated several modality classification methods using visual and textual features extracted from images and text data such as figure captions, article citations, and MeSH®. Our hierarchical classification method using multimodal (mixed textual and visual) and several class-specific features achieved the highest classification accuracy of 63.2%. The performance was among the best in the ImageCLEF 2012 evaluation.
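As an illustration of multimodal fusion (not the authors' exact features or classifier), textual and visual feature vectors can simply be concatenated before training a linear classifier:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy data: caption text plus stand-in visual descriptors.
captions = ["axial T1 MRI of the brain", "chest x-ray, PA view"]
visual = np.random.rand(2, 64)          # placeholder visual feature vectors
labels = ["MRI", "X-ray"]

tfidf = TfidfVectorizer().fit(captions)
X = np.hstack([tfidf.transform(captions).toarray(), visual])  # fused features
clf = LinearSVC().fit(X, labels)
```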
Therapeutic Applications and Extending Imaging Informatics beyond Radiology
In ophthalmology, various modalities and tests are utilized to obtain vital information on the eye's structure and function. For example, optical coherence tomography (OCT) is utilized to diagnose, screen, and aid the treatment of eye diseases like macular degeneration or glaucoma. Such data are complemented by photographic retinal fundus images and functional tests of the visual field. DICOM is not yet widely used in this field, however, and images are frequently encoded in proprietary formats. The eXtensible Neuroimaging Archive Tool (XNAT) is an open-source, NIH-funded framework for research PACS and is in use at the University of Iowa for neurological research applications. Its use for ophthalmology was hence desirable but posed new challenges due to data types not considered thus far and the lack of standardized formats. We developed custom tools for data types not natively recognized by XNAT itself using XNAT's low-level REST API. Vendor-provided tools can be included as necessary to convert proprietary data sets into valid DICOM. Clients can access the data in a standardized format while still retaining the original format if needed by specific analysis tools. With the respective project-specific permissions, results such as segmentations or quantitative evaluations can be stored as additional resources attached to previously uploaded datasets. Applications can use our abstract-level Python or C/C++ API to communicate with the XNAT instance. This paper describes the concepts and details of the designed upload script templates, which can be customized to the needs of specific projects, and the novel client-side communication API, which allows integration into new or existing research applications.
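A minimal sketch of the kind of low-level REST interaction the upload templates wrap (server URL, project, subject/session labels, xsiType and file name are all placeholders, not the Iowa configuration):

```python
import requests

XNAT = "https://xnat.example.edu"
s = requests.Session()
s.auth = ("username", "password")

# Create (or address) an imaging session, then attach a vendor-format file
# as a named resource so the original data are retained alongside DICOM.
exp = f"{XNAT}/data/projects/EYE01/subjects/S0001/experiments/S0001_OCT1"
s.put(exp, params={"xsiType": "xnat:otherDicomSessionData"})

with open("macula_scan.oct", "rb") as f:
    s.put(f"{exp}/resources/RAW/files/macula_scan.oct", data=f)
```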
Super-resolution is the process of constructing an image of better quality from images of low quality. Super-resolution of medical images has been studied for many years, but no software exists to compute it. Super-rivam is a software package based on six different algorithms for computing super-resolution. Two of them use only one image to produce the super-resolved result, and four use two or more images (video). Two of these algorithms were developed by our team of researchers. The algorithms based on a single image are useful for X-ray, CT, tomography, etc.; the algorithms based on two or more images are useful for echography (ultrasound). The paper explains the algorithms and compares them. The results shown summarize more than 200 different tests of the algorithms.
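The abstract does not detail the six algorithms; as a sketch of the multi-frame family, classic shift-and-add super-resolution upsamples each registered low-resolution frame, compensates its sub-pixel shift on the fine grid, and averages (this assumes the shifts are already known from registration):

```python
import numpy as np
from scipy.ndimage import shift, zoom

def shift_and_add(frames, shifts, scale=2):
    """Classic multi-frame super-resolution sketch (shift-and-add).

    frames : list of 2D low-resolution images
    shifts : per-frame (dy, dx) sub-pixel offsets from registration
    """
    aligned = []
    for frame, (dy, dx) in zip(frames, shifts):
        up = zoom(frame.astype(float), scale, order=3)          # bicubic upsample
        aligned.append(shift(up, (-dy * scale, -dx * scale), order=3))
    return np.mean(aligned, axis=0)                             # fuse the frames
```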
Cancer registries are information systems that enable easy and efficient collection, organization and utilization of data related to cancer patients for the purposes of epidemiological research, evidence-based medicine and planning of public health policies. Our research focuses on developing a web-based system that incorporates aspects of both cancer registry information systems and medical imaging informatics in order to provide decision support and quality control in external beam radiation therapy. Integrated within this system is a knowledge base composed of the retrospective treatment plan data sets of 42 patients, organized in a systematic fashion to aid query, retrieval and data mining. A major cornerstone of our system is the use of DICOM RT data sets as the building blocks of the database. This offers enormous practical advantages, since it establishes a framework that can assimilate data from different treatment planning systems and across institutions by making use of a widely used standard: DICOM. Our system will help clinicians assess their dose-volume constraints for prospective patients. This is done by comparing the anatomical configuration of an incoming patient's tumor and surrounding organs to those of retrospective patients in the knowledge base. Treatment plans of previous patients with similar anatomical features are retrieved automatically for review by the clinician. The system helps the clinician decide whether the dose/volume constraints for the prospective patient are optimal, based on the constraints of the matched retrospective plans. Preliminary results indicate that this small-scale cancer registry could be a powerful decision support tool in intensity-modulated radiation therapy (IMRT) treatment planning.
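Because the knowledge base is built from DICOM RT objects, the anatomy-level index can be read directly from the RT Structure Set; a minimal pydicom sketch:

```python
import pydicom

def structure_names(rtstruct_path):
    """List the ROI names stored in a DICOM RT Structure Set, e.g. the
    tumor and organ-at-risk contours a knowledge base can be indexed on."""
    ds = pydicom.dcmread(rtstruct_path)
    return [roi.ROIName for roi in ds.StructureSetROISequence]
```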
With the rapid development of science and technology, large-scale rehabilitation centers and clinical rehabilitation trials usually involve significant volumes of multimedia data. Due to the global aging crisis, millions of new patients with age-related chronic diseases will produce huge amounts of data and contribute to the soaring costs of medical care. Hence, a solution for effective data management and decision support will significantly reduce expenditure and ultimately improve patients' quality of life. Inspired by the concept of the electronic patient record (ePR), we developed a prototype system for the field of rehabilitation engineering. The system is subject- or patient-oriented and customized for specific projects. The system components include data entry modules, multimedia data presentation and data retrieval. To process the multimedia data, the system includes a DICOM viewer with annotation tools and a video/audio player. The system also serves as a platform for integrating decision-support and data mining tools. Based on the prototype system design, we developed two specific applications: 1) DOSE, a phase 1 randomized clinical trial to determine the optimal dose of therapy for rehabilitation of the arm and hand after stroke; and 2) the NEXUS project from the NIDRR-funded Rehabilitation Engineering Research Center (RERC). Currently, the system is being evaluated in the context of the DOSE trial with a projected enrollment of 60 participants over 5 years, and it will be evaluated in the NEXUS project with 30 subjects. By applying the ePR concept, we developed a system to improve the current research workflow, reduce the cost of managing data, and provide a platform for the rapid development of future decision-support tools.
As technology evolves, analog mammography systems are being replaced by digital systems. Digital systems use video monitors to display mammographic images instead of the screen-film and negatoscope previously used for analog images. This change in the way mammographic images are visualized may require a different approach to training health care professionals in diagnosing breast cancer with digital mammography. This paper therefore presents a computational approach to training health care professionals that provides a smooth transition between analog and digital technology, including training in the use of digital image processing tools to diagnose breast cancer. The approach consists of software in which it is possible to open, process and diagnose a full mammogram case from a database containing the digital images of each mammographic view. The software communicates with a gold-standard database of digital mammogram cases. This database contains the digital images in Tagged Image File Format (TIFF) and the respective diagnoses according to BI-RADS™; these files are read by the software and shown to the user as needed. Digital image processing tools are also available to provide better visualization of each image. The software was built on a minimalist, user-friendly interface concept intended to support this smooth transition. It also has an interface for inputting diagnoses from the professional being trained, providing feedback on the results. The system has been completed but has not yet been applied to professional training.
Keynote and Digital Operating Room and Knowledge Integration in the OR: Joint Session Conferences 8671 and 8674
Based on current research and development activities, a timeline with five stages of maturity levels for the development of the Digital Operating Room (DOR) during the first quarter of the twenty-first century will be outlined. In particular, there are several areas of technology development for the DOR, such as (1) Devices, including signal detection and recording, robotics, navigation systems and simulation technologies, which allow more precision in the delivery of personalized interventional therapy; (2) Information and Communication Technology (ICT) infrastructure and standards, including Digital Imaging and Communications in Medicine (DICOM), Integrating the Healthcare Enterprise (IHE), the electronic medical record (EMR), and the Therapy Imaging and Model Management System (TIMMS) infrastructure for the storage, integration, processing and transmission of patient-specific data in and outside the operating room; and (3) Functionalities, including patient-specific modeling for selected interventional processes, optimization of surgical workflow, and TIMMS engines and repositories for improving the overall quality of surgical interventions. Patient-specific modeling, workflow management and standards are key aspects for the development of DOR technologies. They will be the prerequisite for intelligent infrastructures and processes in the digital operating room of the future. Architectural aspects of an intelligent infrastructure, specifically a Therapy Imaging and Model Management System (TIMMS) and the Patient-Specific Model (PSM), as well as standards and integration mechanisms, are therefore briefly discussed in this paper.
In 2012, we lost three pioneers: Robert S. Ledley in biomedical imaging, Moses Greenfield in medical physics, and Hooshang Kangarloo in PACS and informatics. Each had his own background, interests, and contributions to science and technology, which cemented certain cornerstones of today's biomedical imaging informatics. Among their many accomplishments, this memorial focuses on their contributions to medical imaging, medical physics, PACS and informatics. The evolution of medical imaging informatics can be traced through the footprints of these three pioneers along the timeline shown in the last figure of the paper. Their dedication to this field will be remembered by their students, fellows and colleagues, who now continue to lead the growth of this field of science and technology.
The Monte Carlo (MC) technique has been widely used as the gold standard for simulating the interaction of radiation with matter in the fields of medical physics, radiation therapy, and nuclear medicine. However, MC simulation is time-consuming and requires substantial computational resources. Generally, a dedicated high-performance computing cluster is used to improve efficiency, but it is costly and lacks the ability to run routine errands in healthcare facilities. In this study, we proposed a method for rapid deployment of a computing platform for MC simulation in the PACS environment using review workstations as computing nodes. The workstations were booted from the network and initialized a RAM disk as the boot sector. The simplified Linux operating system and the Monte Carlo N-Particle Transport Code Version 5 (MCNP5) were transferred automatically from the DRBL (Diskless Remote Boot in Linux) server to each node. The cluster computing environment can be established within four minutes. We compared a commercially available dedicated cluster with the DRBL cluster. The results showed that the commercial cluster had a slightly higher acceleration factor than the DRBL cluster. The simulation times of the commercial and DRBL clusters for 2×10^8 particle histories were 37,151 and 40,021 s, respectively. When the number of rendezvous increased to 20, the maximum time differences between the two clusters were 95 and 85 s for the megabit and gigabit switches, respectively. We conclude that the DRBL cluster can be quickly deployed to otherwise idle review workstations in the PACS. Thus, the MC technique could be broadly used to enhance the research capability of the radiological sciences in healthcare facilities.
In this work we present the development of an electronic teaching file (ETF) system. The proposed system is accessible through a standard web browser, allowing users to perform a variety of tasks including data entry, dataset quality control, and image visualization. In addition, it is a modular and extensible web-based clinical information management system that supports multimodality and multi-center studies. This integration was achieved using RSNA MIRC schemas and application programming interfaces. The system provides a secure and user-friendly platform to archive and use a comprehensive set of medical images and related information.
The advancements of the last 30 years have made the picture archiving and communication system (PACS) an indispensable technology for improving the delivery and management of clinical imaging services. Similarly, the maturation of algorithms and computer-aided detection (CAD) systems has enhanced the interpretation and diagnosis of radiographic images. However, the lack of integration between the two systems inhibits the rate at which these innovations reach the clinical users of PACS. We aim to enhance the clinical efficiency of CAD systems by developing an accessible, fully automated, user-friendly, and integrated linkage of CAD and PACS systems. This is the first integration initiative to take advantage of the DICOMDIR file and its ability to index DICOM files, allowing images outside of PACS to be viewed within PACS. In this demonstration, the CAD system evaluates CT chest exams to detect lesions in the ribs and produces whole-rib map images, screenshots, and a detection report. A script executes the rib CAD system and creates a DICOMDIR file using DCMTK, an open-source DICOM toolkit. We evaluated our system on thirty 5-mm and thirty 2-mm slice thickness studies and demonstrated time savings of 93±14 s and 221±17 s per exam, respectively, compared to the current non-integrated workflow of using CAD systems. The advantages of this system are that it is easy to implement, requires no additional workstation or training, and allows CAD results to be viewed in PACS without disrupting the radiology workflow, while maintaining the independence of both technologies.
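The authors build the DICOMDIR with DCMTK; a comparable sketch using pydicom's FileSet API (directory names are placeholders):

```python
from pathlib import Path
import pydicom
from pydicom.fileset import FileSet

# Build a DICOMDIR-indexed file set from CAD output images so they can be
# browsed inside PACS (input and output paths are placeholders).
fs = FileSet()
for path in Path("cad_output").glob("*.dcm"):
    fs.add(pydicom.dcmread(path))
fs.write("cad_media")   # writes the DICOMDIR plus the referenced files
```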
To enable medical researchers, clinical physicians and biomedical engineers from multiple disciplines to work together in a secure, efficient, and transparent cooperative environment, we designed an e-Science platform for biomedical imaging research and applications across multiple academic institutions and hospitals in Shanghai, using a grid-based or cloud-based distributed architecture, and presented this work at the SPIE Medical Imaging conference held in San Diego in 2012. However, as the platform integrates more and more nodes over different networks, the first challenge is how to monitor and maintain all the hosts and services operating across the participating institutions and hospitals, such as DICOM and web-based image communication services, messaging services, and XDS ITI transaction services. In this presentation, we present the design and implementation of an intelligent monitoring and management system that collects the resource status of every node in real time, raises alerts when a node or service failure occurs, and thereby improves the robustness, reliability and service continuity of the e-Science platform.
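A minimal sketch of the kind of node/service health checks described above (the inventory, hostnames, ports and alerting are hypothetical):

```python
import socket
import requests

SERVICES = [                                 # hypothetical node inventory
    ("dicom", "node1.hospital-a.example", 104),
    ("xds",   "registry.hospital-b.example", 8080),
]

def check(kind, host, port, timeout=3):
    """True if the service port accepts a TCP connection; HTTP-based
    services are additionally probed with a GET request."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
        if kind == "xds":
            return requests.get(f"http://{host}:{port}/", timeout=timeout).ok
        return True
    except OSError:                          # covers requests' IOError subclasses
        return False

for kind, host, port in SERVICES:
    if not check(kind, host, port):
        print(f"ALERT: {kind} service on {host}:{port} is down")
```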
We have developed a teleradiology network system with a new information security solution, provided together with a web-based medical image conference system. In a teleradiology network, the security of the information network is a very important subject. We are studying secret sharing schemes and tokenization as methods to safely store and transmit the confidential medical information used in the teleradiology network system. A secret sharing scheme is a method of dividing confidential medical information into two or more tallies. Our method includes automatic backup: if a single tally fails, redundant data has already been copied to another tally. The confidential information is preserved at individual data centers connected through the Internet, because the medical information cannot be decoded from any single tally. Therefore, even if one of the data centers is struck and its information damaged in a large-area disaster, the confidential medical information can still be decoded using the tallies preserved at the data centers that escaped damage. Moreover, tokenization prevents the history of how the confidential information was divided into tallies from lying scattered, by replacing the history information with another character string. As a result, information is available only to those with rightful access to it, and both the sender of a message and the message itself are verified at the receiving point. We propose a new information transmission method and a new information storage method based on this security solution.
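The paper's tally scheme is not specified beyond splitting into two or more parts; a minimal 2-of-2 XOR secret sharing sketch illustrates the principle that a single tally reveals nothing:

```python
import secrets

def split(secret: bytes):
    """2-of-2 XOR secret sharing: either tally alone is indistinguishable
    from random; XOR of both recovers the original data."""
    tally1 = secrets.token_bytes(len(secret))
    tally2 = bytes(a ^ b for a, b in zip(secret, tally1))
    return tally1, tally2

def combine(tally1: bytes, tally2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(tally1, tally2))

report = b"confidential medical report"
t1, t2 = split(report)            # store t1 and t2 at separate data centers
assert combine(t1, t2) == report
```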
Pulmonary nodules and ground-glass opacities are highly significant findings in high-resolution computed tomography (HRCT) of patients with pulmonary lesions. The appearance of pulmonary nodules and ground-glass opacities is related to different lung diseases. Segmentation methods and quantitative analysis pertinent to the characteristics of each lesion type are helpful for controlling and treating diseases at an earlier and potentially more curable stage. Currently, most studies have focused on two-dimensional quantitative analysis of these kinds of lesions. Compared to two-dimensional images, three-dimensional quantitative analysis can take full advantage of the isotropic image data acquired with thin-slice HRCT and has better quantitative precision for clinical diagnosis. This paper presents a computer-aided diagnosis component that segments the 3D areas of nodules and ground-glass opacities in lung CT images and uses AIML (Annotation and Image Markup Language) to annotate the segmented 3D pulmonary lesions with quantitative measurement information, which may provide more features and information to radiologists in clinical diagnosis.
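Once a 3D lesion mask exists, the core quantitative measurement is its volume; a minimal sketch (the mask and spacing names are assumptions):

```python
import numpy as np

def lesion_volume_ml(mask, spacing_mm):
    """Volume of a segmented 3D lesion in millilitres.

    mask       : boolean 3D array from the nodule/GGO segmentation
    spacing_mm : (z, y, x) voxel spacing in mm from the CT header
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0    # mm^3 -> mL
```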