This PDF file contains the front matter associated with SPIE
Proceedings Volume 7264, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Biomedical database systems must not only manage complex data but also provide data security and access control. This includes not only system-level security but also instance-level access control, such as access to documents, schemas, or aggregations of information. The latter is becoming more important as multiple users share a single scientific data management system to conduct their research, while data must be protected before publication or while under intellectual-property restrictions. The problem is challenging because users' security needs vary dramatically from one application to another in terms of whom to share with, which resources are shared, and at what access level. We develop a comprehensive data access framework for SciPort, a biomedical data management system. SciPort provides fine-grained, multi-level, space-based access control of resources at both the object level (documents and schemas) and the space level (sets of resources aggregated hierarchically). Furthermore, to simplify the management of users and privileges, a customizable role-based user model is developed. Access control is implemented efficiently by integrating access privileges into the backend XML database, so that queries remain efficient. This secure access approach makes it possible for multiple users to share the same biomedical data management system with flexible access management and high data security.
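The abstract does not expose SciPort's internal interfaces, so the following is only a minimal, hypothetical sketch of how space-hierarchy and role-based checks could be combined; all names (Space, User, can_access) are invented for illustration and are not SciPort's API.

```python
# Hypothetical sketch of space-based, role-based access checks.
# Names (Space, User, can_access) are illustrative, not SciPort's API.
from dataclasses import dataclass, field

@dataclass
class Space:
    name: str
    parent: "Space | None" = None
    # role -> set of actions permitted on this space and its descendants
    acl: dict = field(default_factory=dict)

@dataclass
class User:
    name: str
    roles: set

def can_access(user: User, space: Space, action: str) -> bool:
    """Walk up the space hierarchy; a privilege granted on an ancestor
    space applies to all resources aggregated below it."""
    node = space
    while node is not None:
        for role in user.roles:
            if action in node.acl.get(role, set()):
                return True
        node = node.parent
    return False

root = Space("lab", acl={"admin": {"read", "write"}})
trial = Space("trial-A", parent=root, acl={"collaborator": {"read"}})
alice = User("alice", {"collaborator"})
print(can_access(alice, trial, "read"))   # True
print(can_access(alice, trial, "write"))  # False
```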
In order to validate CT imaging as a biomarker, it is important to ascertain the variability and artifacts associated with
various forms of advanced visualization and quantification software. The purpose of the paper is to describe the
rationale behind the creation of a free, public resource that contains phantom datasets for CT designed to facilitate
testing, development and standardization of advanced visualization and quantification software. For our research, three
phantoms were scanned at multiple kVp and mAs settings utilizing a 64-channel MDCT scanner at a collimation of 0.75
mm. Images were reconstructed at a slice thickness of 0.75 mm and archived in DICOM format. The phantoms
consisted of precision spheres, balls of different materials and sizes, and slabs of Last-A-Foam(R) at varying densities.
The database of scans is stored in an archive utilizing software developed for the National Cancer Imaging Archive and
is publicly available. The scans were completed successfully and the datasets are available for free and unrestricted download. The CT images can be accessed in DICOM format via HTTP, FTP, or caGRID. A DICOM database
of phantom data was successfully created and made available to the public. We anticipate that this database will be
useful as a reference for physicists for quality control purposes, for developers of advanced visualization and
quantification software, and for others who need to test the performance of their systems against a known "gold"
standard. We plan to add more phantom images in the future and expand to other imaging modalities.
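As a quick illustration of working with such archived scans, the sketch below reads one downloaded DICOM slice and inspects the acquisition parameters mentioned above; the file path is hypothetical, and pydicom is assumed to be available.

```python
# Read one downloaded phantom slice and inspect acquisition parameters.
# The path is hypothetical; pydicom is assumed to be installed.
import pydicom

ds = pydicom.dcmread("phantom/series1/slice_0001.dcm")

print("kVp:            ", ds.get("KVP"))
print("Tube current:   ", ds.get("XRayTubeCurrent"), "mA")
print("Slice thickness:", ds.get("SliceThickness"), "mm")

pixels = ds.pixel_array                                  # raw stored values
hu = pixels * ds.RescaleSlope + ds.RescaleIntercept      # convert to Hounsfield units
print("HU range:", hu.min(), "to", hu.max())
```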
The increasing use of digital image processing leads to an enormous amount of imaging data. Access to picture archiving and communication systems (PACS), however, is text-based only, leading to sparse retrieval results because of ambiguous or missing image descriptions. Content-based image retrieval (CBIR) systems can improve the clinical diagnostic outcome significantly. However, current CBIR systems are not able to integrate their results with the clinical workflow and PACS. Existing communication standards such as DICOM and HL7 leave many options for implementation and do not ensure full interoperability. We present a concept for the standardized integration of a CBIR system into the radiology workflow in accordance with the Integrating the Healthcare Enterprise (IHE) framework. It is based on the IHE integration profile 'Post-Processing Workflow' (PPW), which defines responsibilities as well as standardized communication, and it utilizes the DICOM Structured Report (DICOM SR). Because most PACS and RIS installations are not yet fully IHE-compliant with PPW, we also suggest an intermediate approach based on the concepts of the CAD-PACS Toolkit. The integration is independent of the particular PACS and RIS, and therefore supports the widespread application of CBIR in routine radiology. The approach is demonstrated by applying it to the Image Retrieval in Medical Applications (IRMA) framework.
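The abstract does not show how retrieval results would be encoded; purely as a loose illustration, the sketch below builds a single TEXT content item in a pydicom Dataset, using an invented private coding scheme ("99CBIR") rather than any codes defined by IHE, DICOM, or the paper.

```python
# Illustrative only: wrap one CBIR result as a DICOM SR TEXT content item.
# The coding scheme '99CBIR' and its code values are invented for this sketch.
from pydicom.dataset import Dataset

def cbir_text_item(meaning: str, text: str) -> Dataset:
    concept = Dataset()
    concept.CodeValue = "0001"
    concept.CodingSchemeDesignator = "99CBIR"   # private, hypothetical scheme
    concept.CodeMeaning = meaning

    item = Dataset()
    item.ValueType = "TEXT"
    item.RelationshipType = "CONTAINS"
    item.ConceptNameCodeSequence = [concept]
    item.TextValue = text
    return item

item = cbir_text_item("Most similar reference case",
                      "IRMA case 4711, similarity score 0.92")
print(item)
```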
Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant
burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information
retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful
comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals.
This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently, visual retrieval
alone does not achieve the performance necessary for real-world clinical applications. Most of the common visual retrieval techniques have a MAP (Mean Average Precision) of around 2-3%, which is much lower than that achieved using textual retrieval (MAP=29%). Advanced machine learning techniques, together with good training data, have been shown to improve the performance of visual retrieval systems in the past. Multimodal retrieval (basing retrieval on both visual and textual information) can achieve better results than purely visual,
but only when carefully applied. In many cases, multimodal retrieval systems performed even worse than purely textual retrieval systems. On the other hand, some multimodal retrieval systems demonstrated significantly increased early precision, which has been shown to be a desirable behavior in real-world systems.
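For readers unfamiliar with the metric, the sketch below computes average precision per query and MAP over queries in the standard way; the ranked lists shown are made-up toy data, not ImageCLEF results.

```python
# Mean average precision (MAP) over a set of queries.
# The ranked lists below are toy data, not ImageCLEF 2008 results.
def average_precision(ranked_ids, relevant_ids):
    relevant_ids = set(relevant_ids)
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precision_sum += hits / rank      # precision at this recall point
    return precision_sum / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(runs):
    # runs: list of (ranked_ids, relevant_ids) pairs, one per query
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

runs = [(["a", "b", "c", "d"], ["a", "d"]),
        (["x", "y", "z"], ["z"])]
print(mean_average_precision(runs))   # ~0.5417 for this toy example
```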
One fundamental problem remains in the area of image analysis and retrieval: how to measure perceptual similarity between two images. Most researchers employ a Minkowski-type metric, which does not reliably find similarities in objects that are obviously alike. This paper develops a similarity function that is learned in
order to capture the perception of similarity. The technique first extracts high-level landmarks in the images
to determine a local contextual similarity, but these are unordered and unregistered. Second, the point sets of
the two images are fed into the learned similarity function to determine the overall similarity. This technique
avoids arbitrary spatial constraints and is robust in the presence of noise, outliers, and imaging artifacts.
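The learned function itself is not described in the abstract; for contrast, here is a simple unlearned baseline over unordered, unregistered point sets (symmetric mean nearest-neighbor distance), i.e. the kind of fixed metric such an approach aims to improve on.

```python
# Unlearned baseline over unordered point sets: symmetric mean
# nearest-neighbor distance. Shown only as a contrast to the learned
# similarity function described in the paper.
import numpy as np

def nn_distances(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # For each point in a, distance to its nearest neighbor in b.
    diffs = a[:, None, :] - b[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)

def point_set_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 0.5 * (nn_distances(a, b).mean() + nn_distances(b, a).mean())

landmarks_1 = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])
landmarks_2 = np.array([[11.0, 13.0], [39.0, 9.0], [26.0, 31.0], [5.0, 5.0]])
print(point_set_distance(landmarks_1, landmarks_2))
```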
System Infrastructure I: Integration and Standards
Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image
processing programs for a variety of applications. However, each program requires a human operator to select a specific
set of images and execute the program, as well as store the results appropriately for later use. To improve efficiency, we
design a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes
DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the
execution of applications, and collects results from the applications. The engine consists of four modules: a listener, a
router, a job manager, and a data manager. A template filter in XML format specifies the image requirements of each application. A MySQL database stores and manages the incoming DICOM images and application results. The engine achieves two important goals: it reduces the time and manpower required to process medical images, and it shortens the turnaround time for returning results. We tested our engine on three different applications with 12
datasets and demonstrated that the engine improved the efficiency dramatically.
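The exact filter schema is not given in the abstract; the sketch below shows one plausible form of such an XML template and how incoming DICOM headers could be matched against it, with the tag names and application name invented for illustration.

```python
# Hypothetical XML template filter: route an incoming DICOM header to an
# application when all listed attributes match. Names are illustrative only.
import xml.etree.ElementTree as ET

TEMPLATE = """
<application name="lung-nodule-cad">
  <match attribute="Modality" value="CT"/>
  <match attribute="BodyPartExamined" value="CHEST"/>
</application>
"""

def matches(template_xml: str, header: dict) -> bool:
    root = ET.fromstring(template_xml)
    return all(header.get(m.get("attribute")) == m.get("value")
               for m in root.findall("match"))

incoming = {"Modality": "CT", "BodyPartExamined": "CHEST", "SliceThickness": "1.0"}
if matches(TEMPLATE, incoming):
    print("route study to lung-nodule-cad")
```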
In Japan, lung cancer ranks first among cancer deaths in men and third in women. Lung cancer deaths are increasing yearly; thus, early detection and treatment are needed. For this reason, CT screening for lung cancer has been introduced. CT screening services are roughly divided into three sections: office, radiology, and diagnosis. These operations have been performed on paper or with a combination of paper and an existing electronic health record system. This paper describes an operating support system that makes lung cancer CT screening services more efficient. The operating support system is developed on the basis of 1) analysis of operating processes, 2) digitization of operating information, and 3) visualization of operating information. The utilization of the system is evaluated through an actual application and a user questionnaire obtained from CT screening centers.
There is today a lack of interoperability in healthcare, although the need for it is obvious. A new healthcare enterprise environment has been deployed for secure healthcare interoperability in the Western Region of Sweden (WRS). This paper is an empirical overview of the new enterprise environment supporting regionally shared and transparent radiology domain information in the WRS.
The enterprise environment comprises 17 radiology departments serving 1.5 million inhabitants, using different RIS and PACS in a joint work-oriented network, together with additional cardiology, dentistry, and clinical physiology departments. More than 160 terabytes of information are stored in the enterprise repository. Interoperability is developed according to the IHE mission, i.e., applying standards such as Digital Imaging and Communications in Medicine (DICOM) and Health Level 7 (HL7) to address specific clinical communication needs and support optimal patient care. The entire enterprise environment is implemented and used daily in the WRS.
The central prerequisites in the development of the enterprise environment in the Western Region of Sweden were: 1) information harmonization, 2) reuse of standardized messages, e.g., HL7 v2.x and v3.x, 3) development of a holistic information domain including both text and images, and 4) creation of continuous and dynamic update functionality. The central challenges in this project were: 1) the many different vendors acting in the region and the negotiations with them to apply communication roles/profiles such as HL7 (CDA, CCR), DICOM, and XML, 2) the question of who owns the data, and 3) incomplete technical standards.
This study concludes that to create a workflow that runs within an enterprise environment, a number of central prerequisites and challenges need to be addressed. This calls for negotiations at international, national, and regional levels with standardization organizations, vendors, health management, and health personnel.
Imaging and Informatics-based Therapeutic Applications I
Recent developments in medical imaging informatics have improved clinical workflow in the radiology enterprise, but gaps remain in the clinical workflow from diagnosis through surgical treatment to post-operative follow-up. One solution to bridge this gap is the development of an electronic patient record (ePR) that integrates key imaging and informatics data during the pre-, intra-, and post-operative phases of the clinical workflow. We present an ePR system based on standards and tailored to the clinical application of image-guided minimally invasive spinal surgery (MISS). The ePR system has been implemented in a clinical environment for half a year.
One of the key technical challenges in developing an
extensible image-guided navigation system is that of interfacing with external proprietary hardware. The technical challenges arise from the constraints placed on the navigation system's hardware and software. Extending a navigation system's functionality by interfacing with an external hardware device may require modifications to internal hardware components. In some cases, it would
also require porting the complete code base to a different operating system that is compatible with the manufacturer-supplied application programming interface libraries and drivers. In this paper we describe our experience extending a multi-platform navigation system, implemented using the image-guided surgery toolkit IGSTK, to
support real-time acquisition of 2-D ultrasound (US) images acquired with the Terason portable US system. We describe the required hardware and software modifications imposed by the proposed extension and how the OpenIGTLink network communication protocol enabled us to minimize the changes to the system's hardware and software. The resulting navigation system retains its platform independence with the added capability for real-time image acquisition independent of the image source.
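OpenIGTLink itself ships client libraries (primarily C++); purely as an illustration of why the protocol keeps coupling loose, the sketch below reads the fixed 58-byte OpenIGTLink v1 message header from a socket and reports the message type, leaving body parsing and CRC checking aside. The port and host are assumptions.

```python
# Minimal illustration of the OpenIGTLink v1 header (58 bytes, big-endian):
# version(2) | type(12) | device name(20) | timestamp(8) | body size(8) | CRC(8).
# Real systems would use an OpenIGTLink client library; CRC checking is omitted.
import socket
import struct

HEADER_FORMAT = ">H12s20sQQQ"          # big-endian, 58 bytes total
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed")
        buf += chunk
    return buf

def read_message(sock: socket.socket):
    version, mtype, device, timestamp, body_size, crc = struct.unpack(
        HEADER_FORMAT, recv_exact(sock, HEADER_SIZE))
    body = recv_exact(sock, body_size)
    return mtype.rstrip(b"\0").decode(), device.rstrip(b"\0").decode(), body

# Usage (assuming a streaming bridge is listening on localhost:18944):
# sock = socket.create_connection(("localhost", 18944))
# mtype, device, body = read_message(sock)   # e.g. mtype == "IMAGE"
```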
Imaging and Informatics-based Therapeutic Applications II
The current trend towards systems medicine will rely heavily on computational and bioinformatics capabilities to collect,
integrate, and analyze massive amounts of data from disparate sources. The objective is to use this information to make
medical decisions that improve patient care. At Georgetown University Medical Center, we are developing an
informatics capability to integrate several research and clinical databases. Our long-term goal is to provide researchers at Georgetown's Lombardi Comprehensive Cancer Center with better access to aggregated molecular and clinical information, facilitating the investigation of new hypotheses that impact patient care. We also recognize the need for data mining tools and intelligent agents to help researchers in these efforts.
This paper describes our initial work to create a flexible platform for researchers and physicians that provides access to information sources including clinical records, medical images, and genomic, epigenomic, proteomic, and metabolomic data. It describes the data sources selected for this pilot project and possible approaches to integrating these databases, presents the different database integration models that we considered, and concludes by outlining the proposed Information Model for the project.
Introduction: The International Society for the Advancement of Cytometry (ISAC) Data Standards Task Force (DSTF) is developing a new Advanced Cytometry Specification (ACS). DICOM has developed a pathology extension and is continuing to extend it. The work of the two groups is complementary, with some overlap; interoperation would benefit both and permit each to draw on the other's expertise. Methods: The design and implementation of the CytometryML version of the ACS schemas have been based on each schema describing one object (modularity), iterative (spiral) development, inheritance, and reuse of data types and their definitions from DICOM, the Flow Cytometry Standard, and other standards.
Results: These schemas have been validated with two tools, and XML pages were generated from the highest-level schemas. Binary image data and their associated metadata are stored together in a ZIP-based container. A schema for a table of contents, which is one of the metadata files of this container, has recently been developed and reported upon. The binary image data are placed in one file in the container and the metadata associated with an image in another. The schema for the image metadata file includes elements that are based on the DICOM design. This image schema includes descriptions of the acquisition context, image (including information on compression), specimen, slide, transmission medium, major optical parts, optical elements in one or more optical channels, detectors, and pixel format. The image schema describes both conventional camera systems and scanning or confocal systems.
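The exact container layout is defined by the ACS specification and is not reproduced in the abstract; the sketch below merely illustrates the general pattern of a ZIP container holding a binary image file, a per-image XML metadata file, and a table of contents, with all file names and XML elements invented.

```python
# Generic illustration of a ZIP-based container: binary image data in one
# file, per-image XML metadata in another, plus a table of contents.
# File names and XML elements are invented, not taken from the ACS spec.
import zipfile

toc = """<?xml version="1.0"?>
<tableOfContents>
  <entry data="images/image0001.bin" metadata="metadata/image0001.xml"/>
</tableOfContents>
"""

image_metadata = """<?xml version="1.0"?>
<imageMetadata>
  <pixelFormat bitsPerPixel="16"/>
  <detector type="camera"/>
</imageMetadata>
"""

with zipfile.ZipFile("acs_container_example.zip", "w") as z:
    z.writestr("tableOfContents.xml", toc)
    z.writestr("metadata/image0001.xml", image_metadata)
    z.writestr("images/image0001.bin", bytes(512 * 512 * 2))  # dummy 16-bit frame
```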
In this paper we present a novel method to detect abnormal regions in capsule endoscopy images. Wireless Capsule Endoscopy (WCE) is a recent technology in which a capsule with an embedded camera is swallowed by the patient to visualize the gastrointestinal tract. One challenge is that a single diagnostic procedure produces over 50,000 images, making the physician's review process expensive. The review involves identifying images containing abnormal regions (tumors, bleeding, etc.) within this long image sequence. In this paper we construct a novel framework for robust, real-time abnormal region detection in large collections of capsule endoscopy images. The detected potential abnormal regions can be labeled automatically for further physician review, thereby shortening the overall review process. The framework has the following advantages: 1) Trainable: users can define and label any type of abnormal region they want to find; abnormal regions such as tumors and bleeding can be pre-defined and labeled using the graphical user interface tool we provide. 2) Efficient: because of the large volume of image data, detection speed is very important; our system detects efficiently at different scales thanks to the integral image features we use. 3) Robust: after feature selection, we use a cascade of classifiers to further improve detection accuracy.
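As background for the efficiency claim, the sketch below shows the standard integral-image construction and the constant-time box sums it enables; this is the generic technique, not the paper's specific features.

```python
# Standard integral image: any axis-aligned box sum is computed in O(1)
# from four corner look-ups, which is what makes multi-scale feature
# evaluation cheap. Generic technique, not the paper's specific features.
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    # Zero-padded cumulative sum so box_sum works at the image border.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii: np.ndarray, top: int, left: int, bottom: int, right: int) -> int:
    # Sum of img[top:bottom, left:right] using four corner values.
    return int(ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left])

img = np.arange(25, dtype=np.int64).reshape(5, 5)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 4, 4) == img[1:4, 1:4].sum()
```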
Multiple sclerosis (MS) is a demyelinating disease of the central nervous system that affects approximately
2.5 million people worldwide. Magnetic resonance imaging (MRI) is an established tool for the assessment
of disease activity, progression and response to treatment. The progression of the disease is variable and
requires routine follow-up imaging studies. Currently, MRI quantification of multiple sclerosis requires a
manual approach to lesion measurement and yields an estimate of lesion volume and interval change. In the
setting of several prior studies and a long treatment history, trends related to treatment change quickly
become difficult to extrapolate. Our efforts seek to develop an imaging-informatics-based MS lesion computer-aided detection (CAD) package to quantify and track MS lesions, including lesion load, volume, and location. Together with select clinical parameters, these data will be incorporated into an MS-specific e-Folder to provide decision support for evaluating and assessing MS treatment options in a manner tailored to the individual, based on trends in MS presentation and progression.
Radiation therapy (RT) is an important procedure in the treatment of cancer in the thorax and abdomen. However, its efficacy can be severely limited by breathing induced tumor motion. Tumor motion causes uncertainty in the tumor's location and consequently limits the radiation dosage (for fear of damaging normal tissue). This paper describes a novel signal model for tumor motion tracking/prediction that can potentially improve RT results. Using CT and breathing sensor data, it provides a more accurate characterization of the breathing and tumor motion than previous work and is non-invasive. The efficacy of our model is demonstrated on patient data.
Last year, we presented the infrastructure for a medical imaging informatics DICOM-RT based ePR system for patients
treated with Proton Therapy (PT). The ePR has functionality to integrate patients' imaging and informatics data and
perform outcomes analysis with patient and physician profiling in order to provide clinical decision support and suggest
courses of treatment. In this paper, we present the development of a prototype for image-guided outcomes analysis for prostate cancer patients based on the DICOM-RT ePR. This ePR system, using DICOM-RT and DICOM-ION objects as
well as clinical and biological parameters, provides tools to evaluate treatment plans and assess the outcomes of the
patient's treatment; hence, it promotes more successful treatment planning for new prostate cancer patients treated with
proton therapy.
System Infrastructure II: Integration and Visualization
CAVASS (Computer Aided Visualization and Analysis Software System) is an open-source software system that is
being developed by the Medical Image Processing Group (MIPG) at the University of Pennsylvania. It includes
extremely efficient (and often parallel) implementations of the most commonly used image display, manipulation, and
processing operations. CAVASS seamlessly interfaces with ITK and provides a user interface for it as well. It can easily interface with a PACS by directly reading and writing DICOM images, and it can also read and write other common image formats. We describe the general software architecture of CAVASS so that one may quickly use it as the basis for one's own applications.
We describe BrainIACS, a web-based medical image processing system that enables algorithm developers to quickly create extensible user interfaces for their algorithms. Designed to address the challenges algorithm developers face in providing user-friendly graphical interfaces, BrainIACS is implemented entirely with freely available, open-source software. The system, which is based on a client-server architecture, utilizes an AJAX front-end written using the Google Web Toolkit (GWT) and, as its back-end, Java Servlets running on Apache Tomcat. To enable developers to quickly and simply create user interfaces for configuring their algorithms, the interfaces are described in XML and parsed by our system to create the corresponding user interface elements. Most commonly used elements, such as check boxes, drop-down lists, input boxes, radio buttons, tab panels, and group boxes, are supported. Some elements, such as the input box, support input validation. Changes to the user interface, such as adding or deleting elements, are made by editing the XML file or by using the system's user interface creator. In addition to user interface generation, the system provides its own interfaces for data transfer, previewing of input and output files, and algorithm queuing. As the system is programmed in Java (and ultimately JavaScript after compilation of the front-end code), it is platform independent, the only requirements being that a Servlet implementation is available and that the processing algorithms can execute on the server platform.
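The XML dialect used by BrainIACS is not given in the abstract; the sketch below illustrates the general idea of parsing a UI description into widget specifications, with invented element and attribute names, and it uses Python for brevity even though BrainIACS itself is built on GWT/Java.

```python
# Hypothetical sketch: a UI description in XML is parsed into widget
# specifications. Element and attribute names are invented; Python is used
# for brevity, whereas BrainIACS is implemented with GWT/Java.
import xml.etree.ElementTree as ET

UI_XML = """
<interface algorithm="skull-strip">
  <checkbox id="bias_correct" label="Apply bias correction" default="true"/>
  <input id="iterations" label="Iterations" type="int" min="1" max="50"/>
  <dropdown id="atlas" label="Atlas">
    <option>adult</option>
    <option>pediatric</option>
  </dropdown>
</interface>
"""

def parse_ui(xml_text: str):
    widgets = []
    for elem in ET.fromstring(xml_text):
        spec = {"kind": elem.tag, **elem.attrib}
        if elem.tag == "dropdown":
            spec["options"] = [o.text for o in elem.findall("option")]
        widgets.append(spec)
    return widgets

for w in parse_ui(UI_XML):
    print(w)
```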
This paper describes a remote, real-time, PACS-based telemedicine platform for clinical and diagnostic services delivered at different care settings attended by physicians, specialists, and scientists. The platform aims to provide a PACS-based telemedicine framework for different medical image services such as segmentation, registration, and, in particular, high-quality 3D visualization. The proposed approach offers services that are not only widely accessible and real-time but also secure and cost-effective. In addition, the proposed platform can provide a real-time, ubiquitous, collaborative, interactive meeting environment supporting 3D visualization for consultations, which has not been well addressed by current PACS-based applications. Using this capability, physicians and specialists can consult with each other from separate places, which is especially helpful for settings where there is no specialist or where the number of specialists is insufficient to handle all the available cases. Furthermore, the proposed platform can be used as a rich resource for clinical research studies as well as for academic purposes.
Digital medical images are rapidly growing in size and volume. A typical study includes multiple image "slices." These
images have a special format and a communication protocol referred to as DICOM (Digital Imaging and Communications in Medicine). Storing, retrieving, and viewing these images are handled by DICOM-enabled systems. DICOM images are stored in central repository servers called PACS (Picture Archiving and Communication Systems). Remote viewing
stations are DICOM-enabled applications that can query the PACS servers and retrieve the DICOM images for viewing.
Modern medical images are quite large, reaching as much as 1 GB per file. When the viewing station is connected to the
PACS server via a high-bandwidth local area network, downloading the images is relatively efficient and does not waste significant physician time. Problems arise when the viewing station is located in a remote facility that has a
low-bandwidth link to the PACS server. If the link between the PACS and remote facility is in the range of 1 Mbit/sec,
downloading medical images is very slow. To overcome this problem, medical images are compressed to reduce the size
for transmission. This paper describes a method of compression that maintains diagnostic quality of images while
significantly reducing the volume to be transmitted, without any change to the existing PACS servers and viewer
software, and without requiring any change in the way doctors retrieve and view images today.
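To make the bandwidth problem concrete, the sketch below works out nominal transfer times for a 1 GB study over the link speeds mentioned above, ignoring protocol overhead; the 10:1 compression ratio is an illustrative assumption, not a figure from the paper.

```python
# Nominal transfer time for a 1 GB study, ignoring protocol overhead.
# The 10:1 compression ratio is an illustrative assumption only.
def transfer_seconds(size_bytes: float, link_bits_per_s: float) -> float:
    return size_bytes * 8 / link_bits_per_s

study = 1e9                       # ~1 GB study
for link in (1e6, 100e6):         # 1 Mbit/s WAN vs. 100 Mbit/s LAN
    raw = transfer_seconds(study, link)
    compressed = transfer_seconds(study / 10, link)
    print(f"{link/1e6:>5.0f} Mbit/s: {raw/60:6.1f} min raw, "
          f"{compressed/60:5.1f} min at 10:1 compression")
```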
Medical imaging modalities generate a huge number of medical images daily, and there is urgent demand to search large-scale image databases in an RIS-integrated PACS environment to support medical research and diagnosis by using image visual content to find visually similar images. However, most current content-based image retrieval (CBIR) systems require distance computations to perform query by image content. Distance computations can be time-consuming when the image database grows large, which limits the usability of such systems. Furthermore, there is still a semantic gap between the automatically extracted low-level visual features and the high-level concepts that users normally search for. To address these problems, we propose a novel framework that combines text retrieval and CBIR techniques in order to support searching a large-scale medical image database when an integrated RIS/PACS is in place. A prototype CBIR system has been implemented that can retrieve similar medical images both by their visual content and by relevant semantic descriptions (symptoms and/or possible diagnoses). It can also be used as a decision support tool for radiology diagnosis and as a learning tool for education.
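How the text and visual result lists are combined is not detailed in the abstract; as a generic illustration, the sketch below fuses normalized text and visual similarity scores with a weighted sum, where the weight and the toy scores are assumptions, not the paper's method or data.

```python
# Generic late-fusion illustration: combine normalized text-retrieval and
# visual-similarity scores with a weighted sum. Weight and scores are toy
# assumptions, not the paper's method or data.
def fuse(text_scores: dict, visual_scores: dict, w_text: float = 0.7) -> list:
    def normalize(scores):
        hi = max(scores.values()) or 1.0
        return {k: v / hi for k, v in scores.items()}
    t, v = normalize(text_scores), normalize(visual_scores)
    images = set(t) | set(v)
    fused = {i: w_text * t.get(i, 0.0) + (1 - w_text) * v.get(i, 0.0) for i in images}
    return sorted(fused, key=fused.get, reverse=True)

text_scores = {"img_12": 8.1, "img_07": 5.5, "img_30": 1.2}
visual_scores = {"img_07": 0.91, "img_30": 0.88, "img_44": 0.65}
print(fuse(text_scores, visual_scores))   # ranked image IDs
```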
The PERFORMS self-assessment scheme measures individuals' skills in identifying key mammographic features on sets of known cases. One aspect of this is that it allows radiologists' skills to be trained, based on their data from the scheme. Consequently, a new strategy is introduced to provide revision training based on the mammographic features that the radiologist has had difficulty with in these sets. Doing this requires a large pool of cases to provide dynamic, unique, and up-to-date training modules for each individual. We propose GIMI (Generic Infrastructure in Medical Informatics) middleware as the solution to harvest cases from distributed grid servers. The GIMI middleware enables existing and legacy data to support healthcare delivery, research, and training. It is technology-agnostic and data-agnostic, and it has a security policy.
The trainee examines each case, indicating the location of regions of interest, and completes an evaluation form recording mammographic feature labelling, diagnosis, and decisions. For feedback, the trainee can choose immediate feedback after examining each case or batch feedback after examining a number of cases. All of the trainee's results are recorded in a database, which also contains the trainee's profile. A full report can be prepared for trainees after they have completed their training. This project demonstrates the practicality of a grid-based individualised training strategy and its efficacy in generating dynamic training modules within the coverage of the GIMI middleware. The advantages and limitations of the approach are discussed together with future plans.
Shanghai is piloting the development of an EHR system to solve the problem of medical document sharing for collaborative healthcare. The solution follows the IHE XDS (cross-enterprise document sharing) and XCA (cross-community access) technical profiles, combined with grid storage for images. The first phase of the project targets the sharing of text and image documents across four local domains or communities, each of which consists of multiple hospitals. The prototype system was designed and developed with a service-oriented architecture (SOA) and an event-driven architecture (EDA), based on the IHE XDS.b and XCA profiles, and consists of components on four levels: one central city registry; multiple domain registries, one per local domain or community; multiple repositories corresponding to the local domain registries; and multiple document source agents, one located in each hospital to provide patient healthcare information. The system was developed and tested for performance, including data publication, user query, and image retrieval. The results are extremely positive and demonstrate that the designed EHR solution, based on SOA with grid concepts, can scale effectively to support cross-domain and cross-community medical document sharing in a large city.
While screening mammography is accepted as the most adequate technique for the early detection of breast cancer, its low positive predictive value leads to many breast biopsies performed on benign lesions. We have therefore previously developed a knowledge-based system for computer-aided diagnosis (CADx) of mammographic lesions, which supports the radiologist in discriminating benign from malignant lesions. So far, our approach operates at the lesion level and employs the paradigm of content-based image retrieval (CBIR): similar lesions with known diagnoses are retrieved automatically from a library of references. However, radiologists base their diagnostic decisions on additional resources, such as related mammographic projections, other modalities (e.g., ultrasound, MRI), and clinical data, yet most CADx systems disregard the relation between the craniocaudal (CC) and mediolateral-oblique (MLO) views of conventional mammography. We therefore extend our approach to the full case level: (i) multi-frame features are developed that jointly describe a lesion in different mammographic views; taking into account the geometric relation between different images, these features can also be extracted from multi-modal data; (ii) the CADx system architecture is extended appropriately; (iii) the CADx system is integrated into the radiology information system (RIS) and the picture archiving and communication system (PACS). Here, the framework for image retrieval in medical applications (IRMA) is used to support access to the patient's health care record. Of particular interest is the application of the proposed CADx system to digital breast tomosynthesis (DBT), which has the potential to succeed digital mammography as the standard technique for breast cancer screening. The proposed system is a natural extension of CADx approaches that integrate only two modalities. However, we are still collecting a sufficiently large database of breast lesions with images from multiple modalities to evaluate the benefits of the proposed approach.
In order to enhance the readability, extensibility, and sharing of DICOM files, we previously introduced XML into the DICOM file system (SPIE Volume 5748)[1] and a multilayer tree structure into DICOM (SPIE Volume 6145)[2]. In this paper, we propose mapping DICOM to ODF (OpenDocument Format), since ODF is also based on XML. As a result, the new format separates content (including text and images) from display style. Meanwhile, since OpenDocument files take the form of a ZIP-compressed archive, the new kind of DICOM file can benefit from ZIP's lossless compression to reduce file size. Moreover, this open format can also guarantee long-term access to data without legal or technical barriers, making medical images accessible to various fields.
Automated distinction of medical images is an important preprocessing step in computer-aided diagnosis (CAD) systems. CAD systems have been developed using medical image sets with specific scan conditions and body parts; however, a wide variety of examinations are performed at clinical sites. The specification of the examination is contained in the DICOM textual meta-information. Most DICOM textual meta-information can be considered reliable; however, the body part information cannot always be. In this paper, we describe automated distinction of DICOM images as a preprocessing step for a lung cancer CAD system. Our approach uses DICOM textual meta-information and low-cost image processing. First, textual meta-information such as the scan conditions of the DICOM image is examined. Second, the body part shown in the DICOM image is identified by image processing. The identification of body parts is based on anatomical structure, represented by features of three regions: body tissue, bone, and air. The method is effective for the practical use of lung cancer CAD systems at clinical sites.
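As a rough illustration of combining header checks with low-cost image processing, the sketch below reads the relevant DICOM attributes and estimates air, soft-tissue, and bone fractions by Hounsfield-unit thresholding; the thresholds and the file path are assumptions, not the parameters used in the paper.

```python
# Rough illustration: check DICOM meta-information, then estimate air /
# soft-tissue / bone fractions by HU thresholding. Thresholds and the file
# path are assumptions, not the parameters used in the paper.
import numpy as np
import pydicom

ds = pydicom.dcmread("incoming/slice.dcm")           # hypothetical path
print("Modality:", ds.get("Modality"),
      "| BodyPartExamined:", ds.get("BodyPartExamined"))

hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
total = hu.size
air    = np.count_nonzero(hu < -500) / total          # air / lung
tissue = np.count_nonzero((hu >= -500) & (hu < 300)) / total
bone   = np.count_nonzero(hu >= 300) / total
print(f"air {air:.2f}  tissue {tissue:.2f}  bone {bone:.2f}")
```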
Mass screening based on multi-helical CT images requires a considerable number of images to be read. It is this time-consuming step that makes the use of helical CT for mass screening impractical at present. Moreover, there is a shortage of doctors in Japan who can diagnose such medical images. To overcome these problems, we have provided diagnostic assistance methods to medical screening specialists by developing a lung cancer screening algorithm that automatically detects suspected lung cancers in helical CT images, a coronary artery calcification screening algorithm that automatically detects suspected coronary artery calcification, and a vertebral body analysis algorithm for quantitative evaluation of osteoporosis likelihood, all using the helical CT scanner employed for lung cancer mass screening. Functions for observing suspicious shadows in detail are provided in a computer-aided diagnosis workstation together with these screening algorithms. We have also developed a telemedicine network using a web medical image conference system with improved security for image transmission, a biometric fingerprint authentication system, and a biometric face authentication system. Biometric face authentication used at the telemedicine site enables file encryption and successful login; as a result, patients' private information is protected. The screen of the web medical image conference system can be shared by two or more web conference terminals at the same time, and opinions can be exchanged using a camera and a microphone connected to the workstation. Based on these diagnostic assistance methods, we have developed a new computer-aided workstation and a new telemedicine network that can display suspected lesions three-dimensionally in a short time. The results of this study indicate that our filmless radiological information system, built on the computer-aided diagnosis workstation, and our telemedicine network system can increase diagnostic speed and diagnostic accuracy and improve the security of medical information.
The task of analyzing and comparing different algorithms, methods, and applications through sound testing is an essential part of algorithm design. A great limitation of image processing algorithm development lies in the difficulty of assessing the software exhaustively, or at least with a large and diverse number of cases for comparison. This work addresses a distributed and multicentric approach to a medical image database environment and its implementation. The purpose is to design and make available a free, online, multipurpose, and multimodality medical image database system. This database should include comprehensive, "hard", and unusual cases and be regularly updated with new imaging protocols and data from new modalities across various clinical applications. These sets of peer-reviewed data can be used in assessment tasks.
In this paper, we present MIDAS, a web-based digital archiving system that processes large collections of data. Medical imaging research often involves interdisciplinary teams, each performing a separate task, from acquiring datasets to analyzing the processing results. Moreover, the number and size of the datasets continue to increase every year due to recent advancements in acquisition technology. As a result, many research laboratories centralize their data and rely on distributed computing power. We created a web-based digital archiving repository based on open standards. The MIDAS repository is specifically tuned for medical and scientific datasets and provides a flexible data management facility, a search engine, and an online image viewer. MIDAS enables users to run a set of extensible image processing algorithms from the web on selected datasets and to add new algorithms to the MIDAS system, facilitating the dissemination of users' work to different research partners. The MIDAS system is currently running in several research laboratories and has demonstrated its ability to streamline the full image processing workflow from data acquisition to image analysis and reporting.
The Digital Imaging and Communications in Medicine (DICOM) standard was developed by the National Electrical Manufacturers Association (NEMA) and the American College of Radiology (ACR) for medical image archiving and retrieval. An extension of the standard, named DICOM-RT, was implemented for use in radiation oncology. There are currently seven radiotherapy-specific DICOM objects: RT Structure Set, RT Plan, RT Dose, RT Image, RT Beams Treatment Record, RT Brachy Treatment Record, and RT Treatment Summary Record. The data associated with DICOM-RT include (1) radiation treatment planning datasets (CT, MRI, PET) with radiation treatment plans showing beam arrangements, isodose distributions, and dose-volume histograms of targets and normal tissues, and (2) image-guided radiation modalities such as Siemens MVision mega-voltage cone beam CT (MV-CBCT). With the advent of such advancing technologies, there has been an exponential increase in the image data collected for each patient, and the need for reliable and accessible image storage has become critical. A potential solution is a radiation-oncology-specific picture archiving and communication system (PACS) that would allow data storage from multiple vendor devices and support the storage and retrieval needs not only of a single site but of a large, multi-facility network of radiation oncology clinics. This PACS system must be reliable, expandable, and cost-effective to operate while protecting sensitive patient image information in a Health Insurance Portability and Accountability Act (HIPAA) compliant environment. This paper emphasizes the expanding DICOM-RT storage requirements across our network of 8 radiation oncology clinics and the initiatives we undertook to address the increased volume of data by using the ImageGrid (CANDELiS Inc, Irvine CA) server and the IGViewer license (CANDELiS Inc, Irvine CA) to create a DICOM-RT compatible PACS system.
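As a small illustration of handling mixed RT objects, the sketch below sorts incoming files into per-object folders by their DICOM Modality value (RTPLAN, RTDOSE, RTSTRUCT, RTIMAGE, RTRECORD); the directory layout is invented for this sketch and does not describe the ImageGrid server's behavior.

```python
# Sort incoming DICOM-RT files into folders by Modality (RTPLAN, RTDOSE,
# RTSTRUCT, RTIMAGE, RTRECORD). Directory layout is invented for this sketch
# and does not describe the ImageGrid server.
from pathlib import Path
import shutil
import pydicom

RT_MODALITIES = {"RTPLAN", "RTDOSE", "RTSTRUCT", "RTIMAGE", "RTRECORD"}

def sort_rt_objects(inbox: Path, archive: Path) -> None:
    for path in inbox.glob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        modality = str(ds.get("Modality", "UNKNOWN"))
        bucket = modality if modality in RT_MODALITIES else "OTHER"
        target = archive / bucket / str(ds.get("PatientID", "UNKNOWN"))
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy2(path, target / path.name)

# sort_rt_objects(Path("inbox"), Path("rt_archive"))   # hypothetical folders
```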
This paper describes data modeling for the unstructured data of diffusion tensor imaging (DTI). Data modeling is an essential first step for data preparation in any data management and data mining procedure. Conventional entity-relationship (E-R) data modeling is lossy, irreproducible, and time-consuming, especially when dealing with unstructured image data associated with complex systems like the human brain. We propose a methodological framework for more objective E-R data modeling with unlimited query support by eliminating the structured, content-dependent metadata associated with the unstructured data. The proposed method is applied to DTI data, and a minimal system is implemented accordingly. Supported with navigation, data fusion, and feature extraction modules, the proposed system provides a content-based support environment (C-BASE). Such an environment facilitates unlimited query support with a reproducible and efficient database schema. By switching between different modalities of data while confining the feature extractors to the object(s) of interest, we supply anatomically specific query results. The price of such a scheme is relatively large storage and, in some cases, high computational cost. The data modeling and its mathematical framework, the query execution behind the scenes, and the user interface of the system are presented in this paper.
Detection of acute intracranial hemorrhage (AIH) is a primary task in the interpretation of computed tomography (CT)
brain scans of patients suffering from acute neurological disturbances or after head trauma. Interpretation can be difficult
especially when the lesion is inconspicuous or the reader is inexperienced. We have previously developed a computer-aided detection (CAD) algorithm to detect small AIH. One hundred and thirty-five small AIH CT studies from the Los Angeles County (LAC) + USC Hospital were identified and matched by age and sex with one hundred and thirty-five normal studies. These cases were then processed using our AIH CAD system to evaluate the efficacy and constraints of
the algorithm.
The animal-to-researcher workflow in many of today's small animal imaging centers is burdened with proprietary data limitations, inaccessible back-up methods, and imaging results that are not easily viewable across campus. Such challenges decrease the number of scans performed per day at the center and require researchers to wait longer for their images and quantified results. Furthermore, data mining at the small animal imaging center is often limited to researcher names and date-labelled archiving hard drives. To gain efficiency and reliable access to small animal imaging data, such a center needs to move towards an integrated workflow with file format normalization services, metadata databases, expandable archiving infrastructure, and comprehensive user interfaces for query/retrieval tools, achieving all of this in a cost-effective manner.
This poster presentation demonstrates how grid technology can support such a molecular imaging and small animal imaging research community by bridging the needs of imaging modalities and clinical researchers. Existing projects have utilized the Data Grid in PACS tier 2 backup solutions, where fault tolerance is a high priority, as well as in imaging-based clinical trials, where data security and auditing are primary concerns. Issues to be addressed include, but are not limited to, novel database designs, file format standards, virtual archiving and distribution workflows, and potential grid computing for 3-D reconstruction, co-registration, and post-processing analysis.
Bone age assessment is a radiological procedure to evaluate a child's bone age based on his or her left-hand x-ray image. The current standard is to match the patient's hand with the Greulich & Pyle hand atlas, which is more than 50 years old and used subjects from only one region and one ethnicity. To improve bone age assessment accuracy for today's children, an automated race- and gender-specific bone age assessment (BAA) system has been developed at IPILab. A total of 1,390 normal left-hand x-ray images have been collected at Children's Hospital of Los Angeles (CHLA) to form the digital hand atlas (DHA). The DHA includes both male and female children aged one to eighteen from four ethnic groups: African American, Asian American, Caucasian, and Hispanic. In order to apply the DHA and BAA CAD in a clinical environment, a web-based BAA CAD system and graphical user interface (GUI) have been implemented in the Women and Children's Hospital at Los Angeles County (WCH-LAC). A CAD server has been integrated into WCH's PACS environment, and a clinical validation workflow has been designed for radiologists, who compare CAD readings with G&P readings and determine which reading is better suited to a given case. Readings are logged in a database and analyzed to assess BAA CAD performance in a clinical setting. The result is a successful installation of a web-based BAA CAD system in a clinical setting.
In this paper we address the problem of privacy protection and trust enhancement in a distributed healthcare ecosystem. Increased trust in the other parties of the ecosystem encourages medical entities to share data. This increases the availability of data and consequently improves the general quality of healthcare. We present two different solutions to the problem described above, both developed using the DICOM (Digital Imaging and Communications in Medicine) standard. The first approach, which partially relies on legislation, uses sticky policies and commitment protocols to enhance trust. We propose to attach the access control policies to the data in the DICOM files. Furthermore, the disclosing source ensures that the destination commits to enforcing the policies by obtaining a signature on the policies, thus providing proof of the destination's commitment. The second approach aims at increasing trust through technical enforcement. For this purpose, digital rights management (DRM) technology is used. We demonstrate that it is possible to create a DICOM DRM container using the tools provided by the standard, thereby still guaranteeing backward compatibility.
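The paper's exact commitment protocol is not given in the abstract; purely as an illustration of the signed sticky-policy idea, the sketch below attaches a policy to study metadata and has the recipient return a keyed commitment over it. The HMAC with a pre-shared key is a stand-in for a real digital signature, and all field names and the key handling are invented for this example.

```python
# Illustration of the signed sticky-policy idea: a policy travels with the
# data, and the recipient returns a keyed commitment over it. The HMAC with
# a pre-shared key stands in for a real digital signature; all field names
# are invented for this sketch.
import hashlib
import hmac
import json

policy = {
    "study_uid": "1.2.826.0.1.3680043.9999.1",   # made-up UID
    "allowed_use": ["diagnosis"],
    "retention_days": 30,
}
policy_bytes = json.dumps(policy, sort_keys=True).encode()

# Destination commits to enforcing the policy before receiving pixel data.
shared_key = b"pre-shared-demo-key"
commitment = hmac.new(shared_key, policy_bytes, hashlib.sha256).hexdigest()

# Source verifies the commitment before disclosure.
expected = hmac.new(shared_key, policy_bytes, hashlib.sha256).hexdigest()
assert hmac.compare_digest(commitment, expected)
print("destination committed to policy:", commitment[:16], "...")
```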