This PDF file contains the front matter associated with SPIE Proceedings Volume 7628, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Recently, there has been an increasing need to share medical images for research purposes. To respect and preserve patient privacy, most medical images are de-identified to remove protected health information (PHI) before they are shared for research. Because manual de-identification is time-consuming and tedious, an automatic de-identification system is necessary to help clinicians remove text from medical images. Many papers have described algorithms for text detection and extraction; however, little of this work has been applied to the de-identification of medical images. Since a de-identification system is designed for end users, it should be effective, accurate, and fast. This paper proposes an automatic system to detect and extract text from medical images for de-identification purposes while keeping the anatomic structures intact. First, considering that text has a marked contrast with the background, a region-variance-based algorithm is used to detect text regions. In post-processing, geometric constraints are applied to the detected text regions to eliminate over-segmentation, e.g., lines and anatomic structures. After that, a region-based level set method is used to extract text from the detected text regions. A GUI for the prototype text detection and extraction application has been implemented and shows that the method can detect most of the text in the images. Experimental results validate that the method can detect and extract text in medical images with a 99% recall rate. Future work on this system includes algorithm improvement, performance evaluation, and computation optimization.
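To make the detection step concrete, the sketch below illustrates a region-variance text detector with simple geometric post-filtering in Python. It is not the authors' implementation; the block size, variance threshold, and aspect-ratio limit are assumed values, and the level-set extraction stage is omitted.

```python
# Illustrative sketch of region-variance text detection (not the authors' code).
# Assumed parameters: 16x16 analysis blocks and a hand-picked variance threshold.
import numpy as np
from scipy import ndimage

def detect_text_regions(image, block=16, var_thresh=400.0):
    """Return a boolean mask of blocks whose local variance suggests burned-in text."""
    h, w = image.shape
    mask = np.zeros_like(image, dtype=bool)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block].astype(float)
            if patch.var() > var_thresh:          # text has strong contrast with background
                mask[y:y + block, x:x + block] = True

    # Geometric post-processing: clear components that are too large or too elongated,
    # which are more likely anatomy or reference lines than text.
    labels, _ = ndimage.label(mask)
    for region in ndimage.find_objects(labels):
        height = region[0].stop - region[0].start
        width = region[1].stop - region[1].start
        if height * width > 0.25 * h * w or max(height, width) / min(height, width) > 20:
            mask[region] = False
    return mask
```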
The creation of an integrated biomedical information database requires diverse and flexible schemas. Although relational
database systems seem to be an obvious choice for storage, traditional designs of relational schemas cannot support integrated biomedical information in the most effective ways. Therefore, new models for managing diverse and flexible schemas in relational databases are required for such systems. This paper proposes several schema models for integrated biomedical information using relational tables, and presents an experimental evaluation of their efficiency.
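The abstract does not enumerate the proposed schema models, so the sketch below shows only one widely used flexible layout, entity-attribute-value (EAV), as a hedged illustration of how heterogeneous biomedical attributes can be held in fixed relational tables. The table and column names are assumptions, not the paper's design.

```python
# One commonly used flexible layout, entity-attribute-value (EAV), sketched with SQLite.
# This is an assumed illustration; the paper's actual schema models are not shown here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entity (
    entity_id   INTEGER PRIMARY KEY,
    entity_type TEXT NOT NULL            -- e.g. 'patient', 'specimen', 'image_series'
);
CREATE TABLE attribute_value (
    entity_id   INTEGER REFERENCES entity(entity_id),
    attribute   TEXT NOT NULL,           -- attribute name, free to vary per entity type
    value       TEXT,                    -- stored as text; typed columns are an alternative
    PRIMARY KEY (entity_id, attribute)
);
""")
conn.execute("INSERT INTO entity VALUES (1, 'patient')")
conn.execute("INSERT INTO attribute_value VALUES (1, 'diagnosis', 'glioblastoma')")
conn.execute("INSERT INTO attribute_value VALUES (1, 'age_at_scan', '54')")
print(conn.execute(
    "SELECT attribute, value FROM attribute_value WHERE entity_id = 1").fetchall())
```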
Proprietary approaches for representing annotations and image markup are serious barriers to researchers sharing image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standards-based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of the AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach and on supporting complex image and annotation queries through a native extension of the XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid.
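As a rough illustration of querying hierarchical annotation data stored as XML, the snippet below filters annotations by anatomic entity using Python's ElementTree. The element and attribute names are hypothetical placeholders rather than the actual AIM schema, and a production system would use a native XML database with XQuery as described above.

```python
# Hedged sketch of querying annotation XML; element names are placeholders, not AIM.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<annotations>
  <annotation id="a1">
    <anatomicEntity codeMeaning="liver"/>
    <imagingObservation codeMeaning="mass"/>
  </annotation>
  <annotation id="a2">
    <anatomicEntity codeMeaning="lung"/>
    <imagingObservation codeMeaning="nodule"/>
  </annotation>
</annotations>
""")

# Find annotations whose anatomic entity is the liver.
for ann in doc.findall("annotation"):
    entity = ann.find("anatomicEntity")
    if entity is not None and entity.get("codeMeaning") == "liver":
        print(ann.get("id"))
```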
Data mining of existing radiology and pathology reports within an enterprise health system can be used for clinical decision support, research, education, and operational analyses. In our health system, the databases of radiology and pathology reports exceed 13 million entries combined. We are building a web-based tool to allow search and data analysis of these combined databases using freely available and open-source tools. This presentation compares the performance of an open-source full-text indexing tool with MySQL's built-in full-text indexing and searching, and describes the implementation procedures needed to incorporate these capabilities into a radiology-pathology search engine.
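As a minimal stand-in for the full-text indexing being compared, the sketch below indexes a few invented report snippets with SQLite's FTS5 (assuming the bundled SQLite has FTS5 enabled) rather than the Lucene-style tool or MySQL FULLTEXT evaluated in the work; it only shows the kind of query such an index supports.

```python
# Minimal stand-in for report full-text indexing using SQLite FTS5; report text is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE reports USING fts5(accession, modality, body)")
conn.executemany(
    "INSERT INTO reports VALUES (?, ?, ?)",
    [("RAD001", "CT", "No acute intracranial hemorrhage identified."),
     ("PATH002", "Pathology", "Invasive ductal carcinoma, grade 2."),
     ("RAD003", "CR", "Right lower lobe pneumonia; follow-up radiograph recommended.")],
)
# Full-text query across the combined radiology/pathology reports.
for row in conn.execute(
        "SELECT accession FROM reports WHERE reports MATCH 'carcinoma OR pneumonia'"):
    print(row[0])
```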
A major challenge in biomedical Content-Based Image Retrieval (CBIR) is to achieve meaningful mappings
that minimize the semantic gap between the high-level biomedical semantic concepts and the low-level visual
features in images. This paper presents a comprehensive learning-based scheme toward meeting this challenge
and improving retrieval quality. The article presents two algorithms: a learning-based feature selection and
fusion algorithm and the Ranking Support Vector Machine (Ranking SVM) algorithm. The feature selection
algorithm aims to select 'good' features and fuse them using different similarity measurements to provide a better
representation of the high-level concepts with the low-level image features. Ranking SVM is applied to learn
the retrieval rank function and associate the selected low-level features with query concepts, given the ground-truth
ranking of the training samples. The proposed scheme addresses four major issues in CBIR to improve the
retrieval accuracy: image feature extraction, selection and fusion, similarity measurements, the association of the
low-level features with high-level concepts, and the generation of the rank function to support high-level semantic image retrieval. It models the relationship between semantic concepts and image features, and enables retrieval at the semantic level. We apply it to the problem of vertebra shape retrieval from a digitized spine x-ray image set collected by the second National Health and Nutrition Examination Survey (NHANES II). The experimental results show an improvement of up to 41.92% in the mean average precision (MAP) over conventional image similarity computation methods.
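The sketch below shows the standard pairwise reduction that underlies Ranking SVM: items with different ground-truth relevance form difference vectors, a linear SVM is trained on them, and its weight vector scores new items. The toy features and relevance labels are invented, and this is not the authors' exact training setup.

```python
# Sketch of the pairwise reduction behind Ranking SVM on invented data.
import numpy as np
from sklearn.svm import LinearSVC

X = np.random.rand(40, 10)               # low-level shape features of 40 vertebrae (toy data)
relevance = np.random.randint(0, 3, 40)  # ground-truth relevance to a query concept

pairs, labels = [], []
for i in range(len(X)):
    for j in range(len(X)):
        if relevance[i] != relevance[j]:
            pairs.append(X[i] - X[j])                       # difference vector of the pair
            labels.append(1 if relevance[i] > relevance[j] else -1)

rank_svm = LinearSVC(C=1.0).fit(np.array(pairs), np.array(labels))
scores = X @ rank_svm.coef_.ravel()      # higher score = ranked closer to the query concept
print(np.argsort(-scores)[:5])           # indices of the top-5 retrieved items
```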
Diagnosis and treatment planning for patients can be significantly improved by comparing with clinical images of
other patients with similar anatomical and pathological characteristics. This requires the images to be annotated
using common vocabulary from clinical ontologies. Current approaches to such annotation are typically manual,
consuming extensive clinician time, and cannot be scaled to large amounts of imaging data in hospitals. On the
other hand, automated image analysis, while highly scalable, does not leverage standardized semantics and thus cannot be reused across specific applications. In our work, we describe an automated and context-sensitive workflow based on an image parsing system complemented by an ontology-based, context-sensitive annotation tool. A unique characteristic of our framework is that it brings together the diverse paradigms of machine-learning-based image analysis and ontology-based modeling for accurate and scalable semantic image annotation.
As medical imaging rapidly expands, there is an increasing need to structure and organize image data for
efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based
image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring
information to bridge the "semantic gap", a disparity between machine and human image understanding. An
additional consideration in medical images is the organization and integration of clinical diagnostic information.
As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using
an XML based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and
clinical information into an extensible "layer" that can be stored in a SVG document and efficiently searched. Any
feature extracted from the raw image, including color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high-level descriptions or classifications. Our representation can also natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, as a World Wide Web Consortium (W3C) standard, SVG can be displayed by most web browsers, manipulated through ECMAScript (the standardized scripting language, e.g., JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open technologies enables straightforward integration into existing systems. Our results show that the flexibility and extensibility of our abstraction facilitate effective storage and retrieval of medical images.
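A hedged sketch of what such an abstraction layer could look like follows: a segmented region stored as an SVG shape whose attributes carry extracted features and a high-level label. The attribute names are illustrative, not the paper's actual encoding.

```python
# Hedged sketch of an SVG "abstraction layer"; data-* attribute names are illustrative.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

svg = ET.Element(f"{{{SVG_NS}}}svg", width="512", height="512")
layer = ET.SubElement(svg, f"{{{SVG_NS}}}g", id="segmentation-level-1")
region = ET.SubElement(
    layer, f"{{{SVG_NS}}}polygon",
    points="120,80 200,90 210,170 130,160",
)
region.set("data-mean-intensity", "143.2")    # feature extracted from the raw image
region.set("data-texture-entropy", "4.7")
region.set("data-label", "lesion-candidate")  # high-level description/classification

print(ET.tostring(svg, encoding="unicode"))
```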
One fundamental problem remains in the area of medical image analysis and retrieval: how to measure a radiologist's perception of similarity between two images. This paper develops a similarity function that is learned from medical annotations and built upon extracted medical features in order to capture the perceived similarity between images containing cancer. The technique first extracts high-level medical features from the images to determine local contextual similarity, but these features are unordered and unregistered from one image to the next. Second, the feature sets of the two images are fed into the learned similarity function to determine the overall similarity for retrieval. This technique avoids arbitrary spatial constraints and is robust in the presence of noise, outliers, and imaging artifacts. We demonstrate that utilizing unordered and noisy higher-level cancer detection features is both possible and productive in measuring image similarity and developing CBIR techniques.
The large and continuously growing amount of medical image data demands access methods based on content rather than simple text-based queries. The potential benefits of content-based image retrieval (CBIR) systems for computer-aided diagnosis (CAD) are evident and have been demonstrated. Still, CBIR is not a well-established part of the daily routine of radiologists. We have already presented a concept for CBIR integration into the radiology workflow in accordance with the Integrating the Healthcare Enterprise (IHE) framework. The retrieval result is composed as a Digital Imaging and Communications in Medicine (DICOM) Structured Reporting (SR) document. The use of DICOM SR provides interoperability with the PACS archive and image viewers and offers the possibility of further data mining and automatic interpretation of CBIR results. However, existing standard templates do not address the domain of CBIR. We present the design of an SR template customized for CBIR. Our approach is based on the DICOM standard templates and makes use of the mammography and chest CAD SR templates. Reuse of approved SR sub-trees promises a reliable design that is then adapted to the CBIR domain. We analyze the specific CBIR requirements and integrate the new concept of similar images into our template. Our approach also includes the new concept of a set of selected images for defining the images processed by CBIR. A commonly accepted, pre-defined template for the presentation and exchange of results in a standardized format promotes the widespread application of CBIR in routine radiology.
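The fragment below sketches, with pydicom, how one numeric content item (for example, a similarity score for a retrieved image) might appear inside an SR container. It is a simplified illustration only: the codes are placeholders, and a real CBIR SR document would follow the full template and IOD requirements described in the paper.

```python
# Simplified SR content-item sketch with pydicom; not a complete or valid SR IOD.
from pydicom.dataset import Dataset
from pydicom.sequence import Sequence

def code(value, scheme, meaning):
    item = Dataset()
    item.CodeValue = value
    item.CodingSchemeDesignator = scheme
    item.CodeMeaning = meaning
    return item

score_item = Dataset()
score_item.RelationshipType = "CONTAINS"
score_item.ValueType = "NUM"
score_item.ConceptNameCodeSequence = Sequence([code("SIM01", "99LOCAL", "Similarity score")])
mv = Dataset()
mv.NumericValue = "0.87"
mv.MeasurementUnitsCodeSequence = Sequence([code("1", "UCUM", "no units")])
score_item.MeasuredValueSequence = Sequence([mv])

root = Dataset()
root.ValueType = "CONTAINER"
root.ConceptNameCodeSequence = Sequence([code("CBIR01", "99LOCAL", "CBIR result")])
root.ContentSequence = Sequence([score_item])
print(root)
```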
Surgical Process Modeling (SPM) is a powerful method for acquiring data about the evolution of surgical procedures.
Surgical Process Models are used in a variety of use cases including evaluation studies, requirements analysis and
procedure optimization, surgical education, and workflow management scheme design.
This work proposes the use of adaptive, situation-aware user interfaces for observation support software for SPM. We
developed a method to support the observer's modeling task by using an ontological knowledge base, which drives the graphical user interface and restricts the terminology search space depending on the current situation.
The evaluation study shows that the workload of the observer was decreased significantly by using adaptive user interfaces. Fifty-four SPM observation protocols were analyzed using the NASA Task Load Index, and the adaptive user interface was shown to significantly reduce the observer's workload in the criteria of effort, mental demand, and temporal demand, helping the observer concentrate on the essential task of modeling the surgical process.
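A toy sketch of the situation-aware restriction idea follows: given the current surgical phase, the interface offers only the activities the knowledge base permits for that phase. The phases and activities are invented examples, not the authors' ontology.

```python
# Toy situation-aware term restriction; phases and activities are invented examples.
ALLOWED_ACTIVITIES = {
    "approach":  ["incise", "retract", "coagulate"],
    "resection": ["dissect", "cut", "coagulate", "irrigate"],
    "closure":   ["suture", "staple", "apply dressing"],
}

def activities_for(phase: str) -> list[str]:
    """Restrict the terminology search space to the current situation."""
    fallback = sorted({a for acts in ALLOWED_ACTIVITIES.values() for a in acts})
    return ALLOWED_ACTIVITIES.get(phase, fallback)

print(activities_for("resection"))   # ['dissect', 'cut', 'coagulate', 'irrigate']
```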
In the CAS literature, one finds numerous examples of the use of directly measured surfaces. These surfaces are usually measured using so-called "surface scanners," which employ structured light (pattern projection or laser) to measure the
surface. From an integration standpoint, it would be beneficial for many applications to have all patient data in a
common repository and since in many cases radiology images are involved as well, a PACS is a natural option for
storage of this data. DICOM, the major standard used for storage and transmission of data within a PACS, has recently been extended with the option to store surface meshes using a newly introduced data structure. This new Surface Mesh Module can serve as a basis for storing data generated by an optical surface scanner. Nonetheless, a new Information Object Definition for this kind of data has to be introduced to reflect the specific needs: device-specific parameters have to be stored and, in addition to the Surface Mesh Module, it must be possible to store textures as well. This paper gives an overview of the specific requirements and an outline of a Work Item leading to an Optical Surface Scan Information Object Definition (IOD).
Last year we presented a paper describing the design and clinical implementation of an ePR (electronic patient record) system for Image-Assisted Minimally Invasive Spinal Surgery (IA-MISS). The goal of this ePR is to improve workflow efficiency by providing all the necessary data for a surgical procedure, from the preparation stage through the recovery stage. The ePR has been implemented and installed clinically and has been in use for more than 16 months. In this paper, we describe the migration process from a prototype version of the system to a more stable and easier-to-replicate alpha version.
An electronic patient record (ePR) has been developed for prostate cancer patients treated with proton therapy. The ePR has functionality to accept digital input of patient data, perform outcome analysis and patient and physician profiling, provide clinical decision support and suggest courses of treatment, and distribute information across different platforms and health information systems. In previous years, we presented the infrastructure of a medical imaging informatics-based ePR for proton therapy (PT) with functionality to accept digital patient information and distribute it across geographical locations using Internet protocols. In this paper, we present the ePR decision support tools, which utilize the image processing tools and data collected in the ePR. Two decision support tools, a treatment plan navigator and a radiation toxicity tool, are presented for evaluating prostate cancer treatment, improving proton therapy operations, and improving treatment outcomes analysis.
Multiple sclerosis (MS) is a demyelinating disease of the central nervous system. The chronic nature of MS necessitates
multiple MRI studies to track disease progression. Currently, MRI assessment of multiple sclerosis requires manual
lesion measurement and yields an estimate of lesion volume and change that is highly variable and user-dependent. In
the setting of a longitudinal study, disease trends and changes become difficult to extrapolate from the lesions. In
addition, it is difficult to establish a correlation between these imaged lesions and clinical factors such as treatment
course. To address these clinical needs, an MS specific e-Folder for decision support in the evaluation and assessment of
MS has been developed. An e-Folder is a disease-centric electronic medical record in contrast to a patient-centric electronic health record. Along with an MS lesion computer aided detection (CAD) package for lesion load, location,
and volume, clinical parameters such as patient demographics, disease history, clinical course, and treatment history are
incorporated to make the e-Folder comprehensive. By integrating MRI studies with related clinical data and informatics tools designed for monitoring multiple sclerosis, the e-Folder provides a platform to improve the detection of treatment response in patients with MS. The design and deployment of the MS e-Folder aim to standardize MS lesion data and disease progression tracking to aid decision making and MS-related research.
System Integration and Visualization I: Decision Support
Timely detection of Acute Intra-cranial Hemorrhage (AIH) in an emergency environment is essential for the triage of
patients suffering from Traumatic Brain Injury. Moreover, the small size of lesions and lack of experience on the
reader's part could lead to difficulties in the detection of AIH. A CT-based CAD algorithm for the detection of AIH has been developed to improve upon the current standard of identification and treatment of AIH. A retrospective analysis of the algorithm has already been carried out with 135 AIH CT studies and 135 matched normal head CT studies from the Los Angeles County General Hospital/University of Southern California Hospital System (LAC/USC). In the next step, AIH studies have been collected from Walter Reed Army Medical Center and are currently being processed with the AIH CAD system as part of a multi-site assessment and evaluation of the algorithm's performance. The sensitivity and specificity figures from the Walter Reed study will be compared with those from the LAC/USC study to determine whether there are differences in presentation and detection due to the difference in the nature of trauma between the two sites. Simultaneously, a stand-alone system with a user-friendly GUI has been developed to facilitate implementation in a clinical setting.
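For reference, the sensitivity and specificity figures compared between sites are derived from simple counts of detected and missed cases, as in the helper below; the counts shown are placeholders, not study results.

```python
# Small helper showing how sensitivity/specificity are derived from CAD output on
# positive (AIH) and matched normal studies; the counts are placeholders.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)   # fraction of AIH studies flagged by the CAD
    specificity = tn / (tn + fp)   # fraction of normal studies correctly passed
    return sensitivity, specificity

# e.g., 121 of 135 AIH studies detected, 128 of 135 normals passed (invented numbers)
print(sensitivity_specificity(tp=121, fn=14, tn=128, fp=7))
```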
Bone Age Assessment (BAA) of children is a clinical procedure frequently performed in pediatric radiology to evaluate
the stage of skeletal maturation based on a left-hand x-ray radiograph. The current BAA standard in the US is the Greulich & Pyle (G&P) Hand Atlas, which was developed fifty years ago and was based only on a Caucasian population from the Midwestern US. To bring the BAA procedure up to date with today's population, a Digital Hand Atlas (DHA) consisting of 1,400 hand images of normal children of different ethnicities, ages, and genders has been assembled. Based on the DHA, and to address inter- and intra-observer reading discrepancies, an automatic computer-aided bone age assessment system has been developed and tested in clinical environments. The algorithm utilizes features extracted from three regions of interest: phalanges, carpal bones, and radius. The features are aggregated in a fuzzy logic system, which outputs the calculated bone age. The previous BAA system used features only from the phalanges and carpal bones, so BAA results for children over the age of 15 were less accurate. In this project, the new radius features are incorporated into the overall BAA system. The bone age results calculated by the new fuzzy logic system are compared against radiologists' readings based on the G&P atlas and exhibit improved reading accuracy for older children.
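As a hedged illustration of the aggregation idea, the snippet below combines per-region bone-age estimates weighted by their membership values into a single defuzzified age. The weights and estimates are invented, and the paper's actual fuzzy rules are not reproduced.

```python
# Hedged sketch of fuzzy-style aggregation of regional bone-age estimates.
def aggregate_bone_age(estimates: dict[str, tuple[float, float]]) -> float:
    """estimates maps ROI name -> (estimated age in years, membership/confidence 0..1)."""
    weighted = sum(age * weight for age, weight in estimates.values())
    total = sum(weight for _, weight in estimates.values())
    return weighted / total

roi_estimates = {
    "phalanges": (15.5, 0.6),
    "carpal":    (14.0, 0.2),   # carpal features saturate in older children
    "radius":    (16.5, 0.9),   # radius features added for the >15-year range
}
print(round(aggregate_bone_age(roi_estimates), 1))
```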
Diffusion Tensor Imaging (DTI) has become an important MRI procedure for investigating the integrity of white matter in the brain in vivo. DTI is estimated from a series of acquired Diffusion Weighted Imaging (DWI) volumes. DWI data suffer from inherently low SNR and the long overall scanning time required for multiple directional encodings, with a correspondingly high risk of encountering several kinds of artifacts. These artifacts can be too severe for a correct and stable estimation of the diffusion tensor. Thus, a quality control (QC) procedure is absolutely necessary for DTI studies. Currently, routine DTI QC procedures are conducted manually by visually checking the DWI data set in a gradient-by-gradient and slice-by-slice manner. The results often suffer from low consistency across data sets, lack of agreement among different experts, and the difficulty of judging motion artifacts by qualitative inspection. Additionally, considerable manpower is needed for this step because of the large number of images to check, which is common in group-comparison and longitudinal studies, especially as the number of diffusion gradient directions increases. We present a framework for automatic DWI QC. We developed a fully open-source tool called DTIPrep that pipelines the QC steps with detailed protocol and reporting facilities. This framework/tool has been successfully applied to several DTI studies with several hundred DWIs in our lab as well as in collaborating labs in Utah and Iowa. In our studies, the tool provides a crucial component of robust DTI analysis in brain white matter studies.
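In the spirit of such automated checks, the sketch below flags slices whose correlation with neighboring slices drops sharply within one diffusion-weighted volume, a crude indicator of dropout or motion. The threshold is an assumed example, and this is not DTIPrep's actual algorithm.

```python
# Illustrative slice-wise QC check; threshold and test data are invented.
import numpy as np

def flag_artifact_slices(dwi_volume: np.ndarray, thresh: float = 0.85) -> list[int]:
    """dwi_volume has shape (slices, rows, cols); returns indices of suspicious slices."""
    flagged = []
    for z in range(1, dwi_volume.shape[0]):
        a = dwi_volume[z - 1].ravel().astype(float)
        b = dwi_volume[z].ravel().astype(float)
        if np.corrcoef(a, b)[0, 1] < thresh:   # low neighbor correlation -> possible artifact
            flagged.append(z)
    return flagged

base = np.tile(np.linspace(0, 1, 96), (96, 1))                      # smooth in-plane pattern
volume = np.stack([base + 0.01 * np.random.rand(96, 96) for _ in range(40)])
volume[17] = np.random.rand(96, 96)                                  # simulate a corrupted slice
print(flag_artifact_slices(volume))   # flags the corrupted slice and its neighbor pair
```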
Multiple Sclerosis (MS) is a common neurological disease affecting the central nervous system characterized by
pathologic changes including demyelination and axonal injury. MR imaging has become the most important tool to
evaluate the disease progression of MS, which is characterized by the occurrence of white matter lesions. Currently, radiologists evaluate and assess multiple sclerosis lesions manually by estimating the lesion volume and the number of lesions. This process is extremely time-consuming and sensitive to intra- and inter-observer variability. Therefore, there is a need for automatic segmentation of MS lesions followed by lesion quantification. We have developed a fully automatic segmentation algorithm to identify MS lesions. The segmentation algorithm is accelerated by parallel computing on graphics processing units (GPUs) for practical implementation in a clinical environment. Subsequently, quantification of the lesions is performed. The quantification results, which include lesion volume and number of lesions, are stored in a structured report together with the lesion locations in the brain to establish a standardized representation of the patient's disease progression. This structured report, developed in collaboration with radiologists to facilitate outcome analysis and treatment assessment, is standardized based on DICOM-SR, so the results can be distributed to other DICOM-compliant clinical systems that support DICOM-SR, such as PACS. In addition, the implementation of a fully automatic segmentation and quantification system, together with a method for storing, distributing, and visualizing key imaging and informatics data in DICOM-SR, improves the clinical workflow of radiologists and provides 3-D insight into the distribution of lesions in the brain.
Diagnostic tools supported by digital medical images have increasingly become an
essential aid to medical decisions. However, despite their growing importance, Picture Archiving and Communication Systems (PACS) are typically oriented toward supporting a single healthcare
institution, and the sharing of medical data across institutions is still a difficult process. This
paper describes a proposal to publish and control Digital Imaging Communications in Medicine
(DICOM) services in a wide domain composed of several healthcare institutions. The system
creates virtual bridges between intranets, enabling the exchange, search, and storage of medical data within the wide domain. The service provider publishes the DICOM services following a token-based strategy. The token advertisements are public and known to all system users, but access to each DICOM service is controlled through a role association between an access key and the service. Furthermore, in medical diagnosis, time is a crucial factor; our system is therefore a turnkey solution, capable of exchanging medical data across firewalls and Network Address Translation (NAT) and avoiding bureaucratic issues with local network security. Security is also an important concern: in any transmission across different domains, data are encrypted with Transport Layer Security (TLS).
The standard format for medical imaging storage and transmission is Digital Imaging and Communications in Medicine
(DICOM). Nowadays, and specifically with large amounts of medical images acquired by modern modalities, the need
for fast data transmission between DICOM application entities is evident. In some applications, particularly those aiming
to provide real-time services, this demand is critical. This paper introduces a method which provides a fast and simple
way of image transmission utilizing the DICOM protocol. Current implementations of the DICOM protocol focus mainly on connecting DICOM application entities. In the process of connecting two DICOM application entities, the format of the transmission (the Transfer Syntax) is agreed upon. In this crucial step, the two entities choose an encoding that is supported by both; if one entity does not support compression, the other cannot use that option. In the
proposed method, we use a pair of interfaces to deal with this issue and provide a fast method for medical data
transmission between any two DICOM application entities. These interfaces use both compression and multi-threading
techniques to transfer the images. The interfaces can be used without any change to the current DICOM application
entities. In fact, the interfaces listen to the incoming messages from the DICOM application entities, intercept the
messages, and carry out the data transmission. The experimental results show about 22% speed-up in Local Area
Networks (LANs) and about 13-14 times speed-up in Wide Area Networks (WANs).
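A conceptual sketch of the two interface ideas, compression plus multi-threading, is shown below: image chunks are compressed concurrently before transfer. The DICOM association and socket handling are omitted, and the chunk size and compression level are assumed values.

```python
# Conceptual sketch of compressing image chunks concurrently before transfer.
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunk(chunk: bytes) -> bytes:
    return zlib.compress(chunk, level=6)

def prepare_for_transfer(pixel_data: bytes, chunk_size: int = 1 << 20) -> list[bytes]:
    """Split pixel data into chunks and compress them concurrently."""
    chunks = [pixel_data[i:i + chunk_size] for i in range(0, len(pixel_data), chunk_size)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(compress_chunk, chunks))

raw = bytes(8 * 1024 * 1024)                      # stand-in for one image's pixel data
compressed = prepare_for_transfer(raw)
print(len(raw), sum(len(c) for c in compressed))  # bytes before vs. after compression
```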
System Integration and Visualization II: Translational Research and Large-scale Collaborations
Traumatic Brain Injury (TBI) is a problem of major medical and socioeconomic significance, although the pathogenesis
of its sequelae is not completely understood. As part of a large, multi-center project to study mild and moderate TBI, a
database and informatics system to integrate a wide range of clinical, biological, and imaging data is being developed.
This database constitutes a systems-based approach to TBI with the goals of developing and validating biomarker panels that might be used to diagnose brain injury, predict clinical outcome, and eventually develop improved therapeutics. This paper presents the architecture for an informatics system that stores the disparate data types and permits easy access to the data for analysis.
Research images and findings reports generated during imaging-based small animal experiments are typically kept by imaging facilities on workstations or by investigators on burned DVDs. These data usually lack structure and organization, and users are limited to directory and file names when trying to find their data files. A study-centric database design is a fundamental step toward imaging systems integration and toward a research data grid infrastructure for multi-institution collaboration and translational research. This paper presents a novel relational database model to maintain experimental metadata for studies, raw imaging files, post-processed images, and quantitative findings, all generated during most imaging-based animal-model studies. The integration of experimental metadata into a single database can reduce investigators' current dependency on hand-written records for current and previous experimental data. Furthermore, imaging workstations and systems that are integrated with this database can streamline their data workflow with automated query services. This novel database model is being implemented in a molecular imaging data grid for evaluation with animal-model imaging studies provided by the Molecular Imaging Center at USC.
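A possible study-centric layout is sketched below: one study row ties together acquisitions and quantitative findings so that a single query can answer what was measured for a given study. The table and column names are illustrative, not the paper's actual schema.

```python
# Sketch of a study-centric layout for small-animal imaging metadata; names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE study (
    study_id     INTEGER PRIMARY KEY,
    investigator TEXT,
    protocol     TEXT,
    start_date   TEXT
);
CREATE TABLE acquisition (
    acq_id    INTEGER PRIMARY KEY,
    study_id  INTEGER REFERENCES study(study_id),
    modality  TEXT,          -- e.g. microPET, microCT, optical
    file_path TEXT
);
CREATE TABLE finding (
    finding_id INTEGER PRIMARY KEY,
    acq_id     INTEGER REFERENCES acquisition(acq_id),
    name       TEXT,         -- e.g. 'tumor uptake (SUV)'
    value      REAL
);
""")
conn.execute("INSERT INTO study VALUES (1, 'Lee', 'xenograft-imaging', '2009-11-02')")
conn.execute("INSERT INTO acquisition VALUES (1, 1, 'microPET', '/data/s1/pet_001.img')")
conn.execute("INSERT INTO finding VALUES (1, 1, 'tumor uptake (SUV)', 2.4)")
print(conn.execute("""
    SELECT a.modality, f.name, f.value
    FROM study s JOIN acquisition a ON a.study_id = s.study_id
                 JOIN finding f ON f.acq_id = a.acq_id
    WHERE s.study_id = 1""").fetchall())
```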
The utilization of breast MRI is increasing in the diagnostic evaluation of suspicious breast findings. As more imaging
centers implement dedicated breast MR, the need to manage data on a large scale, nationally and sometimes even internationally, has become more apparent. Our design proposal is to utilize a data grid for managing the storage of the medical images and an ePR that provides the interface for managing the health data of breast cancer patients. In this paper, we present the data grid for DICOM images and DICOM-SR data and the image-intensive, web-based ePR system, using a simulation of three dedicated breast MR outpatient centers as the clinical application. Implementing the two technologies together, the ePR system and the Breast Imaging Data Grid (BIDG), can provide a global, portable solution for the breast cancer patient, including aggregation of mammography, ultrasound, MR, CAD, BI-RADS, and clinically related report data to form a powerful platform for data mining and outcomes research.
In the Region Vastra Gotaland (VGR), Sweden, data from four PACS systems have been shared through the Radiology Information Infrastructure deployed in 2007; during 2008 and 2009 it also incorporated the information obtained from three different RIS systems installed in the region. The RIS information stored in the Radiology Information Infrastructure consists of Structured Report (SR) objects derived from the regional information model. In practice, the enterprise solution now offers new ways of social collaboration through information sharing within the region.
Interoperability was developed according to the IHE mission, i.e., applying standards such as Digital Imaging and Communications in Medicine (DICOM) and Health Level 7 (HL7) to address specific clinical communication needs and support optimal patient care.
Applying standards and information has proven suitable for interoperability, but not for implementing social collaboration such as first and second opinion, since there are no user services related to the standards. The need for social interaction leads to a commonly negotiated interface and, in contrast to interoperability, the approach will be a commonly defined semantic model.
Radiology informatics is the glue between the technical standards, information models, semantics, social rule frameworks, and regulations used within radiology and its customers to share information and services.
In 2003, the Society for Imaging Informatics in Medicine (SIIM) recognized the problem of the rapidly increasing number of images in a radiology study, as well as the growing number of studies per patient and the increasing number of patients. This created a growing problem for radiologists: there was simply no way to efficiently manage the number of images produced per day with the available tools. SIIM members organized to help encourage research and development in areas that would transform the radiological interpretation process and trademarked the initiative TRIP™. Since the initiative was started, technology and development have advanced, but has the problem been solved? This paper reviews the literature published in the Journal of Digital Imaging from 2003 until now and analyzes the advances that have been made and what still needs to be done.
Radiologists often recommend further imaging, laboratory, or clinical follow-up as part of a study interpretation, but rarely receive feedback on the results of these additional tests. In most cases, the radiologist has to actively pursue this information by searching through the multiple electronic medical records at our institution. In this work, we seek to determine whether it would be possible to automate the feedback process by analyzing how radiologists phrase recommendations for clinical, laboratory, or radiologic follow-up. We surveyed a dozen attending radiologists to create a set of phrases conventionally used to indicate the need for follow-up. Next, we mined dictated reports over a 1-year period to quantify the appearance of each of these phrases. We were able to isolate five phrases that appear in over 21,000 studies performed during the 1-year period and classify them by modality. We also validated the query by evaluating one day's worth of reports for follow-up recommendations and assessing the comparative performance of the follow-up query. By automatically mining imaging reports for these key phrases and tracking these patients' electronic medical records for additional imaging or pathology, we can begin to provide radiologists with automated feedback regarding studies they have interpreted. Furthermore, we can analyze how often these recommendations lead to a definitive diagnosis, enabling radiologists to adjust their practice and decision making accordingly and ultimately improve patient care.
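A toy version of the phrase-mining step is shown below; the phrase patterns and report texts are invented stand-ins for the surveyed phrases and the one-year report corpus.

```python
# Toy sketch of mining dictated reports for follow-up recommendation phrases.
import re

FOLLOW_UP_PHRASES = [
    r"follow[- ]?up (CT|MRI|ultrasound|imaging)",
    r"recommend(ed)? (clinical|laboratory) correlation",
    r"repeat (study|examination) in \d+ (weeks?|months?)",
]
pattern = re.compile("|".join(FOLLOW_UP_PHRASES), re.IGNORECASE)

reports = {
    "RAD100": "Indeterminate 6 mm nodule. Recommend follow-up CT in 3 months.",
    "RAD101": "No acute abnormality.",
}
flagged = [acc for acc, text in reports.items() if pattern.search(text)]
print(flagged)   # studies whose reports contain a follow-up recommendation
```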
When a patient is accepted in the emergency room suspected of stroke, time is of the utmost importance. The infarct
brain area suffers irreparable damage as soon as three hours after the onset of stroke symptoms. A CT scan is one of
the standard first-line imaging investigations and is crucial for identifying and properly triaging stroke cases. Ensuring the availability of an expert radiologist in the emergency environment to diagnose the stroke patient in a timely manner only adds to the challenges within the clinical workflow. Therefore, a truly zero-footprint, web-based system with powerful advanced visualization tools for volumetric imaging, including 2D, MIP/MPR, and 3D display, can greatly facilitate this dynamic clinical workflow for stroke patients. Together with mobile technology, the proper visualization tools can be delivered at the point of decision, anywhere and anytime. We present a small pilot project that evaluates the use of mobile devices such as the iPhone in assessing stroke patients. The results of the evaluation, as well as the challenges in setting up the system, are also discussed.
Despite its high significance, the clinical utilization of image registration remains limited because of its lengthy
execution time and a lack of easy access. The focus of this work was twofold. First, we accelerated our coarse-to-fine,
volume subdivision-based image registration algorithm by a novel parallel implementation that maintains the accuracy of
our uniprocessor implementation. Second, we developed a thin-client computing model with a user-friendly interface to
perform rigid and nonrigid image registration. Our novel parallel computing model uses the message passing interface
model on a 32-core cluster. The results show that, compared with the uniprocessor implementation, the parallel
implementation of our image registration algorithm is approximately 5 times faster for rigid image registration and
approximately 9 times faster for nonrigid registration for the images used. To test the viability of such systems for
clinical use, we developed a thin client in the form of a plug-in in OsiriX, a well-known open source PACS workstation
and DICOM viewer, and used it for two applications. The first application registered the baseline and follow-up MR brain images, whose subtraction was used to track progression of multiple sclerosis. The second application registered pretreatment PET and intratreatment CT of radiofrequency ablation patients to demonstrate a new capability of multimodality imaging guidance. The registration acceleration coupled with the remote implementation using a thin client should ultimately increase accuracy, speed, and access of image registration-based interpretations in a number of diagnostic and interventional applications.
Multiple Sclerosis (MS) is a progressive neurological disease affecting myelin pathways in the brain. Multiple lesions in
the white matter can cause paralysis and severe motor disabilities of the affected patient. To solve the issue of
inconsistency and user-dependency in manual lesion measurement of MRI, we have proposed a 3-D automated lesion
quantification algorithm to enable objective and efficient lesion volume tracking. The computer-aided detection (CAD)
of MS, written in MATLAB, utilizes the k-nearest neighbors (KNN) method to compute the probability of lesion on a per-voxel basis. Despite the highly optimized image processing used in the CAD development, integrating and evaluating the MS CAD in the clinical workflow is technically challenging because of the high computation rate and memory bandwidth required by the recursive nature of the algorithm. In this paper, we present the development and evaluation of a computing engine on the graphics processing unit (GPU) with MATLAB for segmentation of MS lesions. The paper investigates the utilization of a high-end GPU for parallel computation of KNN in the MATLAB environment to improve algorithm performance. The integration is accomplished using NVIDIA's CUDA development toolkit for MATLAB. The results of this study will validate the practicality and effectiveness of the prototype MS CAD in a clinical setting. The GPU method may allow the MS CAD to integrate rapidly into an electronic patient record or any disease-centric health care system.
Medical images provide a wealth of information for physicians, but the huge amount of data produced by imaging equipment in a modern health institution is not yet explored to its full potential. Nowadays medical images are used in hospitals mostly as part of routine activities, while their intrinsic value for research is underestimated. Medical images can be used for the development of new visualization techniques, new algorithms for patient care, and new image processing techniques. These research areas usually require huge volumes of data to obtain significant results, along with enormous computing capabilities. Such qualities are characteristic of grid computing systems such as the EELA-2 infrastructure. Grid technologies allow large-scale data sharing in a safe, integrated environment and offer high computing capacity.
In this paper we describe the DicomGrid, which stores and retrieves medical images, properly anonymized, that can be used by researchers to test new processing techniques using the computational power offered by grid technology. A prototype of the DicomGrid is under evaluation; it permits the submission of jobs into the EELA-2 grid infrastructure while offering a simple interface that requires minimal understanding of grid operation.
One way to improve the accuracy of diagnosis and provide better medical treatment to patients is to recall or find records of previous patients with similar disease features from healthcare information systems that already contain confirmed diagnostic results. In most situations, disease features may be described by other kinds of information or data types, such as numerical reports or simple or complex structured reports (SR) generated by an ultrasound information system (USIS), computer-assisted detection (CAD) components, or a laboratory information system (LIS). In this paper, we describe a new approach to search and retrieve numerical reports based on the contents of their parameters from a large database of numerical reports. We have tested this approach using numerical data from a USIS and obtained the desired results in both accuracy and performance. The system can be wrapped as a web service and is being integrated into a USIS and EMR for clinical evaluation without interrupting the normal operation of the USIS/RIS/PACS. We present the design architecture and implementation strategy of this novel framework to provide feature-based case retrieval capability in an integrated healthcare information system.
Uterine cervix image analysis is of great importance to the study of uterine cervix cancer, which is among the
leading cancers affecting women worldwide. In this paper, we describe our proof-of-concept, Web-accessible
system for automated segmentation of significant tissue regions in uterine cervix images, which also demonstrates
our research efforts toward promoting collaboration between engineers and physicians for medical image analysis
projects. Our design and implementation unify the merits of two commonly used languages, MATLAB and Java. They circumvent the heavy workload of recoding the sophisticated segmentation algorithms, originally developed in MATLAB, into Java, while allowing remote users who are not experienced programmers or algorithm developers to apply those processing methods to their own cervicographic images and evaluate the algorithms. Several other practical issues of the system are also discussed, such as the compression of images and the format of the segmentation results.
The average workload per full-time-equivalent (FTE) radiologist increased by 70% from 1991-1992 to 2006-2007. The increase is mainly due to the growth (34%) in the number of procedures, particularly 3D imaging procedures. New technologies such as picture archiving and communication systems (PACS) and their embedded viewing capabilities were credited with an improved workflow environment leading to the increased productivity. However, further workflow improvement is still needed as the number of procedures continues to grow. Advanced and streamlined viewing capabilities in the PACS environment could potentially reduce reading time, further increasing productivity. Despite the increasing number of 3D imaging procedures, radiographic procedures (excluding mammography) have retained their critical roles in the screening and diagnosis of various diseases. Although radiographic procedures decreased in share from 70% to 49.5%, their total number remained the same from 1991-1992 to 2006-2007. Inconsistency in image quality for radiographic images has been identified as an area of concern. It affects the ability of clinicians to interpret images effectively and efficiently in areas where diagnosis, for example in screening mammography and portable chest radiography, requires comparison of current images with priors, which can have different image quality. Variations in image acquisition technique (x-ray exposure), patient and apparatus positioning, and image processing are the factors attributed to this inconsistency. Inconsistency in image quality, for example in contrast, may require manual manipulation (i.e., windowing and leveling) of images to achieve an optimal comparison and detect subtle changes. We developed a tone-scale image rendering technique that improves the consistency of chest images across time and modality. The rendering controls both the global and local contrast for a consistent appearance. We expect this improvement to reduce the window and level manipulation time required for an optimal comparison of prior and current images, thereby improving both the efficiency and effectiveness of image interpretation. This paper presents a technique for improving the consistency of portable chest radiographic images, based on regions of interest (ROIs) to control both local and global contrast consistency.
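As a simplified illustration of the consistency goal, the snippet below maps each image to a common display range using robust percentiles so that current and prior images share a similar global tone scale; the ROI-based local-contrast control of the actual technique is not shown.

```python
# Simplified global tone-scale normalization via robust percentiles; data are stand-ins.
import numpy as np

def normalize_tone_scale(image: np.ndarray, lo_pct: float = 2.0, hi_pct: float = 98.0) -> np.ndarray:
    lo, hi = np.percentile(image, [lo_pct, hi_pct])
    scaled = np.clip((image.astype(float) - lo) / (hi - lo), 0.0, 1.0)
    return (scaled * 4095).astype(np.uint16)   # 12-bit display range (assumed)

current = np.random.normal(2000, 300, (2048, 2048))   # stand-ins for portable chest images
prior = np.random.normal(1500, 500, (2048, 2048))     # acquired with a different technique
print(normalize_tone_scale(current).mean(), normalize_tone_scale(prior).mean())
```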
In the Vastra Gotaland region (VGR) we use a Radiology Information Infrastructure containing all information produced within the radiology departments (1,2,3). All information is stored as DICOM objects (4), which means that request and report information is stored as Structured Report (SR) objects (5) together with the images if they exist.
At Sahlgrenska University Hospital (SU) in Gothenburg, Sweden, we have radiological workstations that cannot both display the contents of the SR objects and maintain a working RIS integration at the same time.
We have developed software in conjunction with the dcmtk software package (6), developed by Oldenburg University, to make it possible to display information from SR objects on the radiological workstations.
The workstations support web functionality, so the solution is based on web technology.
The following happens when a request is made to display the SR information:
1. The workstation calls a CGI script that checks whether the archive has any SR reports for the given study.
2. A C-MOVE request is sent to the archive to send the SR objects (reports) to a DICOM receiver on the web server.
3. The DICOM receiver (storescp) creates HTML files with the help of a modified version of dsr2html.
4. The CGI script reads the names of the created HTML files and returns them in a JavaScript array.
5. The report is displayed on the workstation.
By developing a few pieces of software and using open-source software, we have built a well-functioning solution for displaying SR reports stored in a central DICOM archive on workstations that cannot show SR information by themselves.
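A rough sketch of steps 3 and 4 on the server side follows, assuming dsr2html accepts an input SR file and an output HTML path; the paths are invented, error handling is omitted, and option names may differ between dcmtk versions.

```python
# Rough sketch: convert received SR objects to HTML with dcmtk's dsr2html and return the
# file names for the workstation-side JavaScript to pick up.
import glob
import json
import subprocess
from pathlib import Path

INCOMING = Path("/var/dicom/incoming")   # where storescp writes the retrieved SR objects
HTMLDIR = Path("/var/www/sr-html")

def convert_sr_reports(study_uid: str) -> str:
    html_files = []
    for sr_file in glob.glob(str(INCOMING / f"{study_uid}*.dcm")):
        out = HTMLDIR / (Path(sr_file).stem + ".html")
        subprocess.run(["dsr2html", sr_file, str(out)], check=True)
        html_files.append(out.name)
    return json.dumps(html_files)    # serialized for the workstation-side JavaScript array

print(convert_sr_reports("1.2.840.113619.2.55.3"))
```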
The Medical Imaging Informatics Data Grid project is an enterprise infrastructure solution developed at the University
of Southern California for archiving digital medical images and structured reports. Migration methodology and policies
are needed to maintain continuous data availability as data volumes are being copied and/or moved within a data grid's
multi-site storage devices. In the event a storage device is unavailable, a copy of its contents should be available at a live
secondary storage device within the data grid to provide continuous data availability. In the event a storage device within the data grid is running out of space, select data volumes should be moved seamlessly to a tier-2 storage device for long-term storage, without interruption to front-end users. Thus the database and file migration processes involved must not disrupt the existing workflows in the data grid model. This paper discusses the challenges, policies, and protocols required to provide data persistence through data migration in the Medical Imaging Informatics Data Grid.
Diagnostic MDCT imaging requires a considerable number of images to be read, and there is a shortage of doctors in Japan who interpret medical images. Against this background, we have provided diagnostic assistance to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for the quantitative evaluation of osteoporosis. We have also developed a teleradiology network system using a web-based medical image conference system. In a teleradiology network system, the security of the information network is a very important subject. Our teleradiology network system can hold web medical image conferences among medical institutions at remote sites. We completed a basic proof-of-concept experiment of the web medical image conference system with an information security solution. The screen of the web medical image conference system can be shared by two or more conference terminals at the same time, and opinions can be exchanged using a camera and a microphone connected to the workstation, which has the diagnostic assistance methods built in. Biometric face authentication used at the teleradiology site enables file encryption and secure login. The privacy and information security technology of our solution ensures compliance with Japanese regulations, so patients' private information is protected. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new teleradiology network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, using a computer-aided diagnosis workstation, together with our teleradiology network system, can increase diagnostic speed and accuracy and improve the security of medical information.