The current diagnostic process at hospitals is largely based on reviewing and comparing images from multiple time points and modalities to monitor disease progression over time. For ambiguous cases, however, the radiologist relies heavily on reference literature or a second opinion. Although vast amounts of acquired images are stored in PACS systems and could be reused for decision support, these data sets suffer from weak search capabilities. We therefore present a search methodology that enables the physician to carry out intelligent search scenarios on medical image databases by combining ontology-based semantic search with appearance-based similarity search. In our evaluation, it eliminated 12% of the top-ten hits that would have been returned without taking the semantic context into account.
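A minimal sketch of how such a combined search could be organized: a semantic step restricts the candidate set via the ontology, and an appearance step ranks what remains. The ontology interface (`subclasses_of`), the image attributes, and the distance measure below are assumptions for illustration, not the implementation described above.

```python
import numpy as np

def semantic_filter(candidates, query_concept, ontology):
    """Keep only images whose annotation matches the query concept or
    one of its subclasses (ontology.subclasses_of is a hypothetical
    interface to the clinical ontology)."""
    admissible = set(ontology.subclasses_of(query_concept)) | {query_concept}
    return [img for img in candidates if img.concept in admissible]

def similarity_rank(candidates, query_features, top_k=10):
    """Rank the semantically admissible images by appearance, here
    simply by Euclidean distance between precomputed feature vectors."""
    scored = sorted(candidates,
                    key=lambda img: np.linalg.norm(img.features - query_features))
    return scored[:top_k]

def search(database, query_image, query_concept, ontology):
    # Step 1: ontology-based semantic search restricts the search space.
    admissible = semantic_filter(database, query_concept, ontology)
    # Step 2: appearance-based similarity search ranks the remainder.
    return similarity_rank(admissible, query_image.features)
```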
Diagnosis and treatment planning can be significantly improved by comparing a patient's images with clinical images of other patients who have similar anatomical and pathological characteristics. This requires the images to be annotated with a common vocabulary from clinical ontologies. Current approaches to such annotation are typically manual, consume extensive clinician time, and cannot scale to the large amounts of imaging data in hospitals. Automated image analysis, on the other hand, is highly scalable but does not leverage standardized semantics and therefore cannot be reused across applications. In our work, we describe an automated and context-sensitive workflow based on an image parsing system complemented by an ontology-based, context-sensitive annotation tool. A unique characteristic of our framework is that it brings together the diverse paradigms of machine-learning-based image analysis and ontology-based modeling for accurate and scalable semantic image annotation.
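As a rough illustration of how parser outputs could be bound to ontology concepts in such a workflow (the mapping table, concept identifiers, and detection record layout below are placeholders for the sketch, not the paper's actual tooling):

```python
# Placeholder mapping from an image parser's internal labels to concept
# identifiers in a clinical ontology; the identifiers are illustrative.
PARSER_TO_ONTOLOGY = {
    "liver":        "fma:Liver",
    "left_kidney":  "fma:Left_kidney",
    "right_kidney": "fma:Right_kidney",
}

def annotate(parser_detections):
    """Turn raw parser detections (label, region, confidence) into
    semantic annotations keyed by ontology concepts, so that the
    results become searchable across applications."""
    annotations = []
    for det in parser_detections:
        concept = PARSER_TO_ONTOLOGY.get(det["label"])
        if concept is None:
            continue  # no standardized concept for this internal label
        annotations.append({"concept": concept,
                            "region": det["region"],
                            "confidence": det["score"]})
    return annotations
```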
Being able to automatically determine which portion of the human body a CT volume shows offers various possibilities, such as automatically labeling images or initializing subsequent image analysis algorithms. This paper presents a method that takes a CT volume as input and outputs the vertical body coordinates of its top and bottom slices in a normalized coordinate system whose origin and unit length are determined by anatomical landmarks. Each slice of a volume is described by a histogram of visual words: feature vectors consisting of an intensity histogram and a SURF descriptor are first computed on a regular grid and then assigned to the closest visual words to form a histogram. The vocabulary of visual words is a quantization of the feature space, obtained offline by clustering a large number of feature vectors from prototype volumes into visual words (cluster centers) via the k-means algorithm. For a set of prototype volumes whose body coordinates are known, the slice descriptors are computed in advance. The body coordinates of a test volume are then computed by a 1D rigid registration of the test volume with the prototype volumes along the axial direction, where the similarity of two slices is measured by comparing their histograms of visual words. Cross-validation on a dataset of 44 volumes demonstrated the robustness of the results: even for test volumes of only about 20 cm height, the average error was 15.8 mm.
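The pipeline can be sketched as follows, assuming a feature-extraction step (intensity histogram plus SURF on a regular grid) that yields one feature matrix per slice. The histogram-intersection similarity is a stand-in, since the abstract does not name the exact comparison measure:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(training_features, n_words=200):
    """Offline step: quantize the feature space into visual words
    (cluster centers) with k-means."""
    return KMeans(n_clusters=n_words, n_init=10).fit(training_features)

def slice_descriptor(slice_features, vocabulary):
    """Describe one slice by its normalized histogram of visual words."""
    words = vocabulary.predict(slice_features)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def slice_similarity(h1, h2):
    """Histogram intersection; a stand-in for the paper's measure."""
    return float(np.minimum(h1, h2).sum())

def register_1d(test_hists, proto_hists):
    """1D rigid registration along the axial direction: find the slice
    offset of the test stack within the prototype stack that maximizes
    the summed slice-to-slice similarity."""
    best_offset, best_score = 0, float("-inf")
    for offset in range(len(proto_hists) - len(test_hists) + 1):
        score = sum(slice_similarity(t, p)
                    for t, p in zip(test_hists, proto_hists[offset:]))
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset
```

Given the known body coordinates of the prototype slices, the best offset translates directly into normalized coordinates for the top and bottom slices of the test volume.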
Whole-body CT scanning is a common diagnostic technique for discovering early signs of metastasis or for differential diagnosis. Automatic parsing and segmentation of multiple organs, together with semantic navigation inside the body, can help the clinician obtain an accurate diagnosis efficiently. However, dealing with the large amount of data in a full-body scan is challenging, and techniques are needed for the fast detection and segmentation of organs (e.g., heart, liver, kidneys, bladder, prostate, and spleen) and body landmarks (e.g., bronchial bifurcation, coccyx tip, sternum, and lung tips). The problem becomes even more challenging if partial body scans are used, in which not all organs are present. We propose a new approach to this problem, in which a network of 1D and 3D landmarks is trained to quickly parse the 3D CT data and estimate which organs and landmarks are present, as well as their most probable locations and boundaries. Using this approach, the segmentation of seven organs and the detection of 19 body landmarks are obtained in about 20 seconds with state-of-the-art accuracy; the approach has been validated on 80 full or partial body CT scans.
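A highly simplified sketch of the landmark-then-organ idea: detected landmarks constrain where each organ is searched for, and organs whose reference landmarks are missing (as in a partial scan) are reported as absent. The detector interface and spatial priors below are assumptions for illustration, not the trained network from the paper:

```python
import numpy as np

def detect_landmarks(volume, detectors, min_confidence=0.5):
    """Run pretrained landmark detectors over the volume; each detector
    is assumed to return a 3D position and a confidence score."""
    found = {}
    for name, detector in detectors.items():
        position, confidence = detector(volume)
        if confidence >= min_confidence:
            found[name] = np.asarray(position, dtype=float)
    return found

def predict_organ_regions(landmarks, spatial_priors):
    """Estimate a search box per organ from the detected landmarks via
    learned mean offsets and box sizes, a crude stand-in for the trained
    network of spatial dependencies described above. Organs whose
    reference landmark was not found are simply left out."""
    regions = {}
    for organ, (ref_landmark, offset, size) in spatial_priors.items():
        if ref_landmark in landmarks:
            center = landmarks[ref_landmark] + np.asarray(offset)
            half = np.asarray(size) / 2.0
            regions[organ] = (center - half, center + half)
    return regions
```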