A novel technology for estimating both the pose and the joint flexion from a single musculoskeletal X-ray image is presented for automatic quality assessment of patient positioning. The method is based on convolutional neural networks and does not require pose or flexion labels of the X-ray images for the training phase. The task is split into two steps: (i) detection of relevant bone contours in the X-ray by a feature-detection network and (ii) regression of the pose and flexion parameters by a pose-estimation network based on the detected contours. This separation enables the pose-estimation network to be trained using synthetic contours, which are generated via projections of an articulated 3D model of the target anatomy. It is demonstrated that the use of data-augmentation techniques during training of the pose-estimation network significantly contributes to the robustness of the algorithm. Feasibility of the approach is illustrated using lateral ankle X-ray exams. Validation was performed using X-rays of an anthropomorphic phantom of the foot-ankle joint, imaged in various controlled positions. Reference pose parameters were established by an expert using an interactive tool to align the articulated 3D joint model with the phantom image. Errors in pose estimation are in the range of 2 degrees per pose angle, which is at the level of expert performance. Because the foot phantom is rigid, the flexion parameter was constant, but the overall results indicate that this parameter is also estimated accurately.
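As a hedged illustration of step (ii) only, the following PyTorch sketch shows how a small CNN could regress pose and flexion parameters from a contour image and be trained on synthetic contour/label pairs; the layer sizes, the number of output parameters, and the random stand-in data are assumptions for illustration, not the authors' architecture.

import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    def __init__(self, n_params=4):  # e.g. three pose angles plus one flexion angle (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_params)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PoseRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One (dummy) training step; in the described approach the inputs would be contour images
# rendered from an articulated 3D ankle model with known pose and flexion labels.
contours = torch.rand(8, 1, 128, 128)  # stand-in for synthetic contour images
labels = torch.rand(8, 4)              # stand-in for pose + flexion parameters
optimizer.zero_grad()
loss = loss_fn(model(contours), labels)
loss.backward()
optimizer.step()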
The quality of chest radiographs is a practical issue because deviations from quality standards cost radiologists time, may lead to misdiagnosis, and carry legal risks. Automatic and reproducible assessment of the most important quality figures on every acquisition can enable a radiology department to measure, maintain, and improve quality rates on an everyday basis. A method is proposed here to automatically quantify the quality of a chest PA radiograph with respect to (i) collimation, (ii) patient rotation, and (iii) inhalation state by localizing a number of anatomical features and calculating quality figures in accordance with international standards. The anatomical features related to these quality aspects are robustly detected by a combination of three convolutional neural networks and two probabilistic anatomical atlases. An error analysis demonstrates the accuracy and robustness of the method. The implementation proposed here runs in real time (less than a second) on a CPU without any GPU support.
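One such quality figure could, for example, be derived from detected landmark coordinates as in the toy sketch below; the choice of landmarks (medial clavicle ends and the spinous-process midline) and the asymmetry measure are assumptions for illustration, not necessarily the figures used in the paper.

import numpy as np

def rotation_asymmetry(clavicle_left, clavicle_right, midline_x):
    # Relative left/right asymmetry of the medial clavicle ends about the midline (0 = symmetric).
    d_left = abs(clavicle_left[0] - midline_x)
    d_right = abs(clavicle_right[0] - midline_x)
    return abs(d_left - d_right) / max(d_left + d_right, 1e-6)

# Example with (x, y) pixel coordinates as they might be returned by the landmark detectors:
print(rotation_asymmetry(np.array([240.0, 310.0]), np.array([268.0, 312.0]), midline_x=250.0))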
Automatic instance segmentation of individual vertebrae from 3D CT is essential for various applications in orthopedics, neurology, and oncology. If model-based segmentation (MBS) is to be used to generate a mesh-based representation of the spine, a good initialization of MBS is crucial to avoid wrong vertebra labels due to the similar appearance of adjacent vertebrae. Here, we propose to use deep learning (DL) for MBS initialization and for robustly guiding MBS during segmentation, generating instance segmentations for each of the 24 vertebrae. We propose a four-step approach: In step 1, we apply a first single-class U-Net to coarsely segment the spine. In step 2, we sample image patches along the coarse segmentation of step 1 and apply a second multi-class U-Net to generate a fine segmentation including individual labels for some key vertebrae and vertebral body landmarks. In step 3, we detect and label landmark coordinates from the classes estimated in step 2. In step 4, we initialize all MBS vertebra models using the landmarks from step 3 and adapt the model to the joint vertebra probability map from step 2. We validated our method on segmentation results from 147 patient images. We computed surface distances between segmentation and ground truth meshes and achieved a root mean squared distance of RMSDist = 0.90 mm over all cases and vertebrae.
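A minimal sketch of step 3, under the assumption that each landmark class from step 2 is converted to a single coordinate by taking the centroid of its thresholded probability map (scipy's center_of_mass is used here as one straightforward choice, not necessarily the paper's implementation):

import numpy as np
from scipy import ndimage

def landmarks_from_probabilities(prob_maps, threshold=0.5):
    # prob_maps: array of shape (n_classes, z, y, x); returns one (z, y, x) point per detected class.
    landmarks = {}
    for c, prob in enumerate(prob_maps):
        mask = prob > threshold
        if mask.any():
            landmarks[c] = ndimage.center_of_mass(prob * mask)
    return landmarks

# Example with random probability maps for three landmark classes:
print(landmarks_from_probabilities(np.random.rand(3, 32, 64, 64)))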
A CT rib-cage unfolding method is proposed that does not require rib centerlines to be determined; instead, it determines the visceral cavity surface by model-based segmentation. Image intensities are sampled on this surface, which is then flattened using a model-based 3D thin-plate-spline registration. An average rib centerline model projected onto this surface serves as a reference system for the registration. The flattening registration is designed so that ribs similar to the centerline model are mapped onto parallel lines, preserving their relative length. Ribs deviating from this model accordingly appear as deviations from straight parallel lines in the unfolded view. Because the mapping is continuous, details in the intercostal space and adjacent to the ribs are also rendered well. The most beneficial application area is trauma CT, where fast detection of rib fractures is a crucial task. Specifically in trauma, automatic rib centerline detection may fail due to fractures and dislocations. Visual assessment on the large public LIDC database of lung CT demonstrated the general feasibility of this early work.
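A rough sketch of the flattening idea, assuming the visceral cavity surface has already been segmented and sampled: a thin-plate-spline mapping is fitted from 3D reference points (the average rib centerline model on the surface) to prescribed positions on straight parallel lines in the 2D unfolded view, and is then applied to all surface sample points. The point counts and coordinates are synthetic; this is not the authors' implementation.

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
model_points_3d = rng.normal(size=(60, 3)) * 50.0                 # centerline model samples on the surface
flat_targets_2d = np.stack([np.repeat(np.arange(12), 5) * 10.0,   # one horizontal line per rib
                            np.tile(np.linspace(0, 200, 5), 12)], axis=1)

flatten = RBFInterpolator(model_points_3d, flat_targets_2d, kernel='thin_plate_spline')

surface_points_3d = rng.normal(size=(500, 3)) * 50.0              # dense samples of the cavity surface
unfolded_2d = flatten(surface_points_3d)                          # 2D coordinates in the unfolded view
print(unfolded_2d.shape)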
KEYWORDS: Data modeling, 3D modeling, Hough transforms, Radiography, Bone, 3D image processing, Image processing, Chest, Performance modeling, Medical imaging
In this work, we present a new type of model for object localization, which is well suited for anatomical objects
exhibiting large variability in size, shape and posture, for usage in the discriminative generalized Hough transform
(DGHT). The DGHT combines the generalized Hough transform (GHT) with a discriminative training approach
to automatically obtain robust and efficient models. It has been shown to be a strong tool for object localization
capable of handling a rather large amount of shape variability. For some tasks, however, the variability exhibited
by different occurrences of the target object becomes too large to be represented by a standard DGHT model. To
be able to capture such highly variable objects, several sub-models, representing the modes of variability as seen by
the DGHT, are created automatically and are arranged in a higher dimensional model. The modes of variability
are identified on-the-fly during training in an unsupervised manner. Following the concept of the DGHT, the
sub-models are jointly trained with respect to a minimal localization error employing the discriminative training
approach. The procedure is tested on a dataset of thorax radiographs with the target to localize the clavicles.
Due to different arm positions, the posture and arrangement of the target and surrounding bones differs strongly,
which hampers the training of a good localization model. Employing the new model approach the localization
rate improves by 13% on unseen test data compared to the standard model.
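For illustration, the plain (non-discriminative) GHT voting step that underlies the DGHT can be sketched as follows; the discriminatively trained point weights and the sub-model arrangement described above are omitted, and all data here are synthetic.

import numpy as np

def ght_vote(edge_points, model_offsets, image_shape, weights=None):
    # edge_points: (n, 2) pixel coordinates; model_offsets: (m, 2) offsets to the reference point.
    if weights is None:
        weights = np.ones(len(model_offsets))
    accumulator = np.zeros(image_shape)
    for point in edge_points:
        candidates = point + model_offsets            # candidate reference-point positions
        for (y, x), w in zip(candidates, weights):
            if 0 <= y < image_shape[0] and 0 <= x < image_shape[1]:
                accumulator[y, x] += w
    return np.unravel_index(np.argmax(accumulator), image_shape)

edges = np.array([[30, 40], [32, 41], [35, 45]])
offsets = np.array([[0, 0], [-2, -1], [-5, -5]])
print(ght_vote(edges, offsets, (64, 64)))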
Temporal subtraction techniques using 2D image registration improve the detectability of interval changes in chest radiographs. Although such methods have been known for some time, they are not widely used in radiologic practice. One reason is the occurrence of strong pose differences between two acquisitions separated by months to years. Such strong perspective differences occur in a considerable number of cases. They cannot be compensated for by available image registration methods and thus render interval changes undetectable. In this paper, a method is proposed to estimate a 3D pose difference by adapting a 3D rib cage model to both projections. The pose difference is then compensated for, producing a subtraction image with virtually no change in pose. The method generally assumes that no 3D image data is available for the patient. The accuracy of pose estimation is validated with chest phantom images acquired under controlled geometric conditions. A subtle interval change, simulated by a piece of plastic foam attached to the phantom, becomes visible in subtraction images generated with this technique even at strong angular pose differences such as an anterior-posterior inclination of 13 degrees.
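The pose-estimation idea can be sketched, under strong simplifications, as fitting a single inclination angle so that the projected 3D rib cage model matches rib points extracted from the radiograph; the simple perspective model, the single free angle, and all data below are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize_scalar

def project(points_3d, inclination_deg, source_distance=1800.0):
    a = np.deg2rad(inclination_deg)
    rot = np.array([[1, 0, 0],
                    [0, np.cos(a), -np.sin(a)],
                    [0, np.sin(a),  np.cos(a)]])
    p = points_3d @ rot.T
    scale = source_distance / (source_distance - p[:, 2])   # simple perspective projection
    return p[:, :2] * scale[:, None]

model_points = np.random.default_rng(1).normal(size=(200, 3)) * 100.0
observed_2d = project(model_points, inclination_deg=13.0)        # stand-in for rib points in the image

def cost(angle):
    return np.mean(np.sum((project(model_points, angle) - observed_2d) ** 2, axis=1))

result = minimize_scalar(cost, bounds=(-30, 30), method='bounded')
print(result.x)   # recovers about 13 degrees in this synthetic setup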
Cardiac CT image reconstruction suffers from artifacts due to heart motion during acquisition. In order to mitigate these effects, it is common practice to choose a protocol with minimal gating window and fast gantry rotation. In addition, it is possible to estimate heart motion retrospectively and to incorporate the information
in a motion-compensated reconstruction (MCR). If shape tracking algorithms are used for generation of the heart motion-vector field (MVF), the number and positions of the motion vectors will not coincide with the number and positions of the voxels in the reconstruction grid. In this case, data interpolation is necessary for
MCR algorithms that require one motion vector at each voxel location. This work examines different data interpolation approaches for the MVF interpolation problem and their effects on the MCR results.
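As one simple baseline among such interpolation approaches, scattered motion vectors can be interpolated component-wise onto the reconstruction grid with linear scattered-data interpolation and a nearest-neighbour fallback outside the convex hull; this sketch is illustrative and not necessarily the variant preferred by the study.

import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
vector_positions = rng.uniform(0, 31, size=(300, 3))   # motion vector positions from shape tracking
motion_vectors = rng.normal(size=(300, 3))             # one 3D motion vector per position

zz, yy, xx = np.mgrid[0:32, 0:32, 0:32]
voxel_grid = np.stack([zz, yy, xx], axis=-1).reshape(-1, 3).astype(float)

components = []
for k in range(3):                                     # interpolate each vector component separately
    linear = griddata(vector_positions, motion_vectors[:, k], voxel_grid, method='linear')
    nearest = griddata(vector_positions, motion_vectors[:, k], voxel_grid, method='nearest')
    components.append(np.where(np.isnan(linear), nearest, linear))
mvf = np.stack(components, axis=-1).reshape(32, 32, 32, 3)
print(mvf.shape)   # one motion vector per voxel of the reconstruction grid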
An automated segmentation of lung lobes in thoracic CT images is of interest for various diagnostic purposes such as the quantification of emphysema or the localization of tumors within the lung. Although the separating lung fissures are visible in images from modern multi-slice CT scanners, their contrast in the CT image often does not separate the lobes completely. This makes it impossible to build a reliable segmentation algorithm without additional information. Our approach uses general anatomical knowledge, represented in a geometrical mesh model, to construct a robust lobe segmentation that gives reasonable estimates of lobe volumes even if fissures are not visible at all. The paper describes the generation of the lung model mesh, including lobes, from an average volume model; its adaptation to individual patient data using a special fissure feature image; and a performance evaluation on a test data set showing an average segmentation accuracy of 1 to 3 mm.
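A toy sketch of a fissure feature image is given below, assuming a Hessian-based response to thin bright sheet structures; the actual filter used in the paper may differ.

import numpy as np
from scipy import ndimage

def fissure_plateness(volume, sigma=1.0):
    # Hessian-based response to thin bright sheet structures (candidate fissures).
    hessian = np.zeros(volume.shape + (3, 3))
    for a in range(3):
        for b in range(a, 3):
            order = [0, 0, 0]
            order[a] += 1
            order[b] += 1
            deriv = ndimage.gaussian_filter(volume.astype(float), sigma, order=order)
            hessian[..., a, b] = deriv
            hessian[..., b, a] = deriv
    eigvals = np.linalg.eigvalsh(hessian)              # eigenvalues in ascending order per voxel
    l1, l2, l3 = eigvals[..., 0], eigvals[..., 1], eigvals[..., 2]
    # A bright sheet has one strongly negative eigenvalue and two near-zero eigenvalues.
    return np.clip(-l1 - np.abs(l2) - np.abs(l3), 0, None)

print(fissure_plateness(np.random.rand(32, 32, 32)).shape)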
Segmentation of organs in medical images can be successfully performed with shape-constrained deformable
models. A surface mesh is attracted to detected image boundaries by an external energy, while an internal
energy keeps the mesh similar to expected shapes. Complex organs like the heart with its four chambers can be
automatically segmented using a suitable shape variability model based on piecewise affine degrees of freedom.
In this paper, we extend the approach to also segment highly variable vascular structures. We introduce a
dedicated framework to adapt an extended mesh model to freely bending vessels. This is achieved by subdividing
each vessel into (short) tube-shaped segments ("tubelets"). Each tubelet is assigned an individual similarity transformation for local orientation and scaling. Proper adaptation is achieved by progressively adapting distal vessel parts to the image only after the proximal neighbor tubelets have converged. In addition, each newly activated tubelet inherits the local orientation and scale of the preceding one. To arrive at a joint segmentation of
chambers and vasculature, we extended a previous model comprising endocardial surfaces of the four chambers,
the left ventricular epicardium, and a pulmonary artery trunk. Newly added are the aorta (ascending and descending
plus arch), superior and inferior vena cava, coronary sinus, and four pulmonary veins. These vessels are
organized as stacks of triangulated rings. This mesh configuration is most suitable to define tubelet segments.
On 36 CT data sets reconstructed at several cardiac phases from 17 patients, segmentation accuracies of
0.61-0.80 mm are obtained for the cardiac chambers. For the visible parts of the newly added great vessels, surface accuracies of 0.47-1.17 mm are obtained (larger errors are associated with faintly contrasted venous structures).
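For illustration, the per-tubelet similarity transformation (scale s, rotation R, translation t) aligning a tubelet's reference ring points with their adapted positions could be estimated by a standard least-squares (Umeyama-style) fit, as sketched below with synthetic data; this is one reasonable choice, not necessarily the adaptation machinery of the framework.

import numpy as np

def similarity_transform(src, dst):
    # Least-squares s, R, t such that dst ≈ s * R @ src + t; src and dst are (n, 3) point sets.
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                                   # avoid reflections
    R = U @ S @ Vt
    scale = np.trace(np.diag(D) @ S) / src_c.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

rng = np.random.default_rng(3)
ring = rng.normal(size=(20, 3))                        # reference ring points of one tubelet
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(true_R) < 0:
    true_R[:, 0] *= -1
adapted = 1.3 * ring @ true_R.T + np.array([5.0, -2.0, 1.0])
s, R, t = similarity_transform(ring, adapted)
print(round(s, 3))                                     # recovers the scale of 1.3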
We have compared and validated image registration methods with respect to the clinically relevant use-case
of lung CT max-inhale to max-exhale registration. Four fundamentally different algorithms representing main
approaches for image registration were compared using clinical images. Each algorithm was assigned to a different
person with extensive working knowledge of its usage. Quantitative and qualitative evaluations were performed. Although the methods achieve similar target registration errors, closer analysis of the displacement fields reveals characteristic differences.
Respiratory motion is a complicating factor in radiation therapy, tumor ablation, and other treatments of the
thorax and upper abdomen. In most cases, the treatment requires precise knowledge of the location of the organ under investigation. One approach to reduce the uncertainty of organ motion caused by breathing is
to use prior knowledge of the breathing motion. In this work, we extract lung motion fields of seven patients
in 4DCT inhale-exhale images using an iterative shape-constrained deformable model approach. Since data was
acquired for radiotherapy planning, images of the same patient over different weeks of treatment were available.
Although respiratory motion is repetitive in character, it is well known that a patient's variability in breathing pattern impedes motion estimation. A detailed motion field analysis is performed in order to investigate the reproducibility of breathing motion over the weeks of treatment. For that purpose, parameters characterizing the breathing motion are derived. The analysis of the extracted motion fields provides a basis for subsequent breathing motion prediction. Patient-specific motion models are derived by averaging the extracted motion fields of each individual patient. The obtained motion models are adapted to each patient in a leave-one-out test in order to simulate motion estimation on unseen data. Using patient-specific mean motion models, 60% of the breathing motion can be captured on average.
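A toy sketch of the evaluation idea: a mean motion model is compared with a held-out motion field, and the captured fraction of motion is reported. Here the model is simply the average of the remaining fields and all data are random; this is a simplification of the patient-specific models and adaptation described above.

import numpy as np

rng = np.random.default_rng(4)
fields = rng.normal(size=(7, 16, 16, 16, 3)) + np.array([0.0, 0.0, 3.0])   # toy motion fields, mostly cranio-caudal

def captured_fraction(true_field, predicted_field):
    residual = np.linalg.norm(true_field - predicted_field, axis=-1).mean()
    original = np.linalg.norm(true_field, axis=-1).mean()
    return 1.0 - residual / original

for leave_out in range(len(fields)):
    mean_model = np.delete(fields, leave_out, axis=0).mean(axis=0)
    print(f"case {leave_out}: {100 * captured_fraction(fields[leave_out], mean_model):.0f}% of the motion captured")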
Accurate image registration is a necessary prerequisite for many diagnostic and therapy planning procedures
where complementary information from different images has to be combined. The design of robust and reliable
non-parametric registration schemes is currently a very active research area. Modern approaches combine
the pure registration scheme with other image processing routines such that both ingredients may benefit from
each other. One of the new approaches is the combination of segmentation and registration ("segistration").
Here, the segmentation part guides the registration to its desired configuration, whereas on the other hand
the registration leads to an automatic segmentation. By joining these image processing methods it is possible
to overcome some of the pitfalls of the individual methods. Here, we focus on the benefits for the registration task.
In the current work, we present a novel unified framework for non-parametric registration combined with energy-based
segmentation through active contours. In the literature, one may find various ways to combine these image processing routines; here, we present the most promising approaches within a general framework based on a single variational formulation of both the registration and the segmentation parts. The performance tests
are carried out for magnetic resonance (MR) images of the brain, and they demonstrate the potential of the
proposed methods.
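For illustration, a joint variational formulation of this kind can be written generically as follows (illustrative notation, not necessarily the exact functional of the paper), with deformation u, active-contour function \phi, reference image R, and template image T:

\begin{equation*}
\mathcal{J}[u,\phi] \;=\;
\underbrace{\mathcal{D}\bigl(R,\;T\circ(\mathrm{id}+u)\bigr)}_{\text{image distance}}
\;+\;\alpha\,\underbrace{\mathcal{S}[u]}_{\text{regularizer}}
\;+\;\beta\,\underbrace{\mathcal{E}_{\mathrm{seg}}\bigl(\phi,\;T\circ(\mathrm{id}+u)\bigr)}_{\text{active-contour energy}}
\;+\;\gamma\,\underbrace{\mathcal{C}(u,\phi)}_{\text{coupling term}},
\end{equation*}

where joint minimization over u and \phi simultaneously drives the registration and the segmentation.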
During medical imaging and therapeutic interventions, pulmonary structures are in general subject to cardiac
and respiratory motion. This motion potentially leads to artefacts and blurring in the resulting images and to uncertainties during interventions. This paper presents a new automatic approach for surface-based
motion tracking of pulmonary structures and reports on the results for cardiac and respiratory induced motion.
The method applies an active shape approach to ad-hoc generated surface representations of the pulmonary structures for phase-to-phase surface tracking. The input to the method is multi-phase CT data, either cardiac or respiratory gated. The iso-surface representing the transition from air or lung parenchyma to soft tissue is triangulated for a selected phase p0. An active shape procedure is initialised in the image of phase p1 using the generated surface from p0. The internal energy term penalizes shape deformation relative to p0.
The process is iterated for all phases pi to pi+1 of the complete cycle. Since the mesh topology is the same for
all phases, the vertices of the triangular mesh can be treated as pseudo-landmarks defining tissue trajectories.
A dense motion field is interpolated. The motion field was designed especially to estimate the error margins for radiotherapy. In the case of respiratory motion extraction, a validation on ten biphasic thorax CT images (2.5 mm slice distance) was performed with expert landmarks placed at vessel bifurcations. The mean error in landmark position was below 2.6 mm. We further applied the method to ECG-gated images and estimated the
influence of the heart beat on lung tissue displacement.
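A minimal sketch of the dense-field step, under the assumption that per-vertex displacements between phases are interpolated to the voxel grid with a smoothed radial basis function interpolator; the interpolation scheme actually used may differ, and all data here are synthetic.

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(5)
vertices_p0 = rng.uniform(0, 63, size=(400, 3))          # mesh vertices (pseudo-landmarks) at phase p0
displacements = rng.normal(scale=2.0, size=(400, 3))     # tracked displacements p0 -> p1

interp = RBFInterpolator(vertices_p0, displacements, neighbors=50, smoothing=1.0)

zz, yy, xx = np.mgrid[0:64:8, 0:64:8, 0:64:8]            # coarse grid for this example
grid = np.stack([zz, yy, xx], axis=-1).reshape(-1, 3).astype(float)
dense_field = interp(grid).reshape(zz.shape + (3,))
print(dense_field.shape)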
Deformable models have already been successfully applied to the semi-automatic segmentation of organs from
medical images. We present an approach which enables the fully automatic segmentation of the heart from multi-slice
computed tomography images. In contrast to other approaches, we address the complete segmentation chain, comprising both model initialization and adaptation.
A multi-compartment mesh describing both atria, both ventricles, the myocardium around the left ventricle
and the trunks of the great vessels is adapted to an image volume. The adaptation is performed in a coarse-to-
fine manner by progressively relaxing constraints on the degrees of freedom of the allowed deformations. First,
the mesh is translated to a rough estimate of the heart's center of mass. Then, the mesh is deformed under the
action of image forces. We first constrain the space of deformations to parametric transformations, compensating
for global misalignment of the model chambers. Finally, a deformable adaptation is performed to account for
more local and subtle variations of the patient's anatomy.
The whole heart segmentation was quantitatively evaluated on 25 volume images and qualitatively validated
on 42 clinical cases. Our approach was found to work fully automatically in 90% of cases with a mean surface-
to-surface error clearly below 1.0 mm. Qualitatively, expert reviewers rated the overall segmentation quality as
4.2±0.7 on a 5-point scale.
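A much simplified sketch of the very first initialization step, assuming the rough center-of-mass estimate is obtained from the centroid of bright (contrast-filled) voxels above a threshold; the actual localization procedure of the pipeline is more elaborate.

import numpy as np
from scipy import ndimage

def initialize_mesh(mesh_vertices, volume, threshold=200):
    # Translate the mean mesh so its centroid coincides with the bright-tissue centroid (voxel coordinates).
    target_center = np.array(ndimage.center_of_mass(volume > threshold))
    return mesh_vertices + (target_center - mesh_vertices.mean(axis=0))

volume = np.random.default_rng(6).integers(-1000, 1200, size=(64, 64, 64))   # stand-in CT volume (HU)
mean_mesh = np.random.default_rng(7).normal(size=(500, 3)) * 10.0            # stand-in mean heart mesh
print(initialize_mesh(mean_mesh, volume).mean(axis=0))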
Assessment of cardiac ventricular function requires time-consuming manual interaction. Some automated methods have been presented, predominantly using cardiac magnetic resonance images. Here, an
automatic shape tracking approach is followed to estimate left ventricular blood volume from multi-slice computed
tomography image series acquired with retrospective ECG-gating. A deformable surface model method
was chosen that utilized both shape and local appearance priors to determine the endocardial surface and to
follow its motion through the cardiac cycle. Functional parameters like the ejection fraction could be calculated
from the estimated shape deformation. A clinical validation was performed in a porcine model with 60 examinations
on eight subjects. The functional parameters showed a good correlation with those determined by clinical
experts using a commercially available semi-automatic short-axis delineation tool. The correlation coefficient for the ejection fraction (EF) was 0.89. One quarter of the acquisitions were done with a low-dose protocol. All of these degraded images could be processed well; their correlation decreases only slightly compared to the normal-dose cases (EF: 0.87 versus 0.88).
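The functional parameters can be derived from the tracked surfaces as sketched below: the volume enclosed by each closed, consistently oriented endocardial triangle mesh is computed per phase via the divergence theorem, and the ejection fraction follows from the extreme volumes. The tetrahedron used as a stand-in mesh is purely illustrative.

import numpy as np

def mesh_volume(vertices, triangles):
    # Volume enclosed by a closed triangle mesh (sum of signed tetrahedron volumes).
    v0, v1, v2 = (vertices[triangles[:, k]] for k in range(3))
    return abs(np.einsum('ij,ij->i', v0, np.cross(v1, v2)).sum()) / 6.0

def ejection_fraction(phase_volumes):
    edv, esv = max(phase_volumes), min(phase_volumes)
    return (edv - esv) / edv

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tris = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
volumes = [mesh_volume(verts * s, tris) for s in (1.0, 0.9, 0.75, 0.9)]   # per-phase volumes over the cycle
print(f"EF = {ejection_fraction(volumes):.2f}")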
Segmentation and labelling of the left atrium from pre-operative images could be a valuable source of information for the planning of electrophysiology procedures to cure atrial fibrillation. A method is presented that uses, for this purpose, multi-slice computed tomography (MSCT) images initially acquired for coronary assessment. The method combines the power of active shape models (robustness through the use of prior anatomical knowledge) with the advantages of purely data-driven segmentation methods (accuracy). A triangular shape model was built for the human left atrium and its pulmonary vein trunks. It was automatically adapted to the MSCT images, labelling these structures and segmenting them coarsely. In addition, a segmentation of the blood pool by a Hounsfield threshold was applied to the images. The enclosed volumes were triangulated to obtain a fine surface representation, which, however, still includes many distracting objects (the artery tree, coronaries, adjacent chambers, and bones). A correspondence between surface triangles of the coarse but anatomically labelled model surface and those of the fine iso-surface was established by a similarity criterion on position and orientation. This allows the model-based segmentation to be refined, showing more anatomical detail, by selecting the corresponding parts of the iso-surface. Conversely, the correspondence can be used to assign anatomical labels to each iso-surface patch.
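A sketch of the correspondence step, assuming each fine iso-surface triangle inherits the label of the nearest coarse model triangle when centroid distance and normal orientation agree; the thresholds and the nearest-neighbour scheme are illustrative choices, not the paper's exact criterion.

import numpy as np
from scipy.spatial import cKDTree

def triangle_data(vertices, triangles):
    pts = vertices[triangles]                                        # (n_triangles, 3, 3)
    centroids = pts.mean(axis=1)
    normals = np.cross(pts[:, 1] - pts[:, 0], pts[:, 2] - pts[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    return centroids, normals

def transfer_labels(model_centroids, model_normals, model_labels,
                    iso_centroids, iso_normals, max_dist=5.0, min_cos=0.5):
    tree = cKDTree(model_centroids)
    dist, idx = tree.query(iso_centroids)
    agree = np.einsum('ij,ij->i', iso_normals, model_normals[idx]) > min_cos
    return np.where((dist < max_dist) & agree, model_labels[idx], -1)   # -1: unlabelled / distracting object

rng = np.random.default_rng(8)
verts = rng.normal(size=(30, 3))
tris = rng.integers(0, 30, size=(40, 3))
c, n = triangle_data(verts, tris)
print(transfer_labels(c, n, rng.integers(0, 4, size=40), c, n)[:10])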
KEYWORDS: Image compression, Medical imaging, Visualization, Principal component analysis, Image segmentation, Image processing, Computed tomography, Heart, Data modeling, Computing systems
The amount of per-patient image data generated by medical imaging modalities such as MRI and multi-slice CT scanners increases rapidly. This is due on the one hand to increasing spatial image resolution and on the other hand to the expanding use of multi-phase or cine studies. A cardiac multi-phase CT scan, acquired in about 20 seconds of scan time, easily produces about 2 GB of image data. The visualization and further processing of such data are at the limit of the capabilities of current computers. We therefore present a principal component analysis (PCA) based compression algorithm, which exploits the spatial and temporal coherence of medical multi-phase image data and allows the images to be retrieved with a selectable amount of information loss. The main focus of this work is to reduce the required amount of system memory, not the required amount of disk space, i.e. at any time only the decomposed image resides in system memory. If an intensity value for a position (x,y,z,t) is required, it is calculated on demand. This is possible since the intensity values are expressed as quickly computable weighted sums. The method has been applied to cardiac multi-phase CT datasets. It could be shown that a compression ratio of 3:1 still keeps the compression-induced losses (mainly blurring) at the noise level of the original data (about 5 Hounsfield units). Compression ratios of 5:1 and more can be achieved while keeping an undisturbed visual impression of the dataset. The influence of the image compression on an automated cardiac segmentation procedure has been studied. Compression ratios up to 8:1 lead to results that deviate only marginally from those of the uncompressed image.
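A simplified sketch of the compression and on-demand retrieval: each phase is treated as one sample and the voxel intensities as features, a truncated SVD gives the principal components, and any single intensity is recomputed as a short weighted sum without decompressing the whole image. The rank and the random data here are illustrative; the reconstruction is lossy by design.

import numpy as np

rng = np.random.default_rng(9)
n_phases, shape = 10, (32, 32, 32)
phases = rng.normal(size=(n_phases, np.prod(shape)))          # stand-in for a multi-phase CT data set

mean = phases.mean(axis=0)
U, S, Vt = np.linalg.svd(phases - mean, full_matrices=False)
rank = 3                                                      # retained components (controls loss and ratio)
weights = U[:, :rank] * S[:rank]                              # (n_phases, rank)
components = Vt[:rank]                                        # (rank, n_voxels)

def intensity(t, z, y, x):
    # Recompute a single intensity value on demand from the decomposed representation.
    voxel = np.ravel_multi_index((z, y, x), shape)
    return mean[voxel] + weights[t] @ components[:, voxel]

print(intensity(2, 10, 11, 12), phases[2, np.ravel_multi_index((10, 11, 12), shape)])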
KEYWORDS: Image segmentation, Data modeling, 3D modeling, Magnetic resonance imaging, Cardiovascular magnetic resonance imaging, Natural surfaces, Statistical modeling, Medical imaging, Eye models, Binary data
Cardiac MRI has improved the diagnosis of cardiovascular diseases by enabling the quantitative assessment of functional parameters. This requires an accurate identification of the myocardium of the left ventricle. This paper describes a novel segmentation technique for automated delineation of the myocardium. We propose to use prior knowledge by integrating a statistical shape model and a spatially varying feature model into a deformable mesh adaptation framework. Our shape model consists of a coupled, layered triangular mesh of the epi- and endocardium. It is adapted to the image by iteratively carrying out (i) a surface detection and (ii) a mesh reconfiguration by energy minimization. For surface detection, a feature search is performed to find the point with the best feature combination. To accommodate the different tissue types, the triangles of the mesh are labeled, resulting in a spatially varying feature model. The energy function consists of two terms: an external energy term, which attracts the triangles towards the features, and an internal energy term, which preserves the shape of the mesh. We applied our method to 40 cardiac MRI data sets (FFE-EPI) and compared the results to manual segmentations. A mean distance to the manual segmentations of about 3 mm with a standard deviation of 2 mm was achieved.
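As a hedged illustration only (the exact terms in the paper may differ), such an energy is often written in shape-constrained deformable model form as

\begin{equation*}
E(\mathbf{x}) = E_{\mathrm{ext}}(\mathbf{x}) + \alpha\, E_{\mathrm{int}}(\mathbf{x}),\qquad
E_{\mathrm{ext}} = \sum_i w_i \bigl(\mathbf{n}_i^{\top}(\mathbf{c}_i - \tilde{\mathbf{x}}_i)\bigr)^2,\qquad
E_{\mathrm{int}} = \sum_{(j,k)} \bigl\|(\mathbf{x}_j - \mathbf{x}_k) - s\,R\,(\mathbf{m}_j - \mathbf{m}_k)\bigr\|^2,
\end{equation*}

where \tilde{\mathbf{x}}_i and \mathbf{n}_i are the center and normal of triangle i, \mathbf{c}_i the detected feature point with reliability weight w_i, \mathbf{m} the reference shape vertices, s, R a global scaling and rotation, and the internal sum runs over mesh edges (j,k).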
KEYWORDS: Image segmentation, Bone, Head, 3D modeling, Feature extraction, Medical imaging, Modeling, Databases, Systems modeling, Magnetic resonance imaging
The model of image features is critical to the robustness and accuracy of deformable models. Usually, an edge detector is used for this purpose, because the object boundary is expected to correspond to a strong directed gradient in the image. Two methods are presented to make a feature model more specific and suitable for a given object class for which this assumption is too weak. One aims at better conformance of the model with the image features through a spatially varying parameterisation of clustered features that is learnt from a training set. The other discriminates the object surface from adjacent false attractors that have similar gradient properties by means of additional grey value properties. The clustered feature model was successfully applied in left ventricle segmentation to delineate the epicardium in cardiac MR images, in which the image gradient reverses sign along the surface. The discriminating feature approach successfully prevented false attraction to strong edges within other nearby bones in CT bone segmentation (shown for the femur head). In this case, the grey value beyond the candidate gradient position discriminated the desired bone surface edges well from these false edges.
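A toy sketch of the discriminating-feature idea: along a search profile, an edge candidate is only accepted if the grey value just beyond the gradient position lies in the range expected for the target surface, which suppresses similarly strong edges of nearby structures. The profile values, thresholds, and offset are illustrative assumptions.

import numpy as np

def best_boundary(profile, grey_min, grey_max, beyond_offset=2):
    # Return the index of the strongest gradient whose 'beyond' grey value is plausible.
    gradient = np.gradient(profile.astype(float))
    for idx in np.argsort(-np.abs(gradient)):               # strongest gradients first
        beyond = min(idx + beyond_offset, len(profile) - 1)
        if grey_min <= profile[beyond] <= grey_max:
            return int(idx)
    return None

profile = np.array([40, 45, 300, 600, 620, 610, 200, 80, 1100, 1150])   # grey values along a search ray
print(best_boundary(profile, grey_min=500, grey_max=800))               # picks index 2, not the stronger far edge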
KEYWORDS: Process modeling, Systems modeling, Picture Archiving and Communication System, Radiology, Medicine, Imaging systems, Modeling, Information technology, System integration, Data modeling
For the next generation of integrated information systems for health care applications, more emphasis has to be put on systems which, by design, support the reduction of cost, the increase in efficiency, and the improvement of the quality of services. A substantial contribution to this will be the modeling, optimization, automation, and enactment of processes in health care institutions. One of the perceived key success factors for the system integration of processes will be the application of workflow management, with workflow management systems as key technology components. In this paper we address workflow management in radiology. We focus on an important aspect of workflow management, the generation and handling of worklists, which automatically provide workflow participants with work items that reflect tasks to be performed. The display of worklists and the functions associated with work items are the visible part, for the end-users, of an information system using a workflow management approach. Appropriate worklist design and implementation will influence the user-friendliness of a system and will largely determine work efficiency. Technically, in current imaging department information system environments (modality-PACS-RIS installations), a data-driven approach has been taken: worklists -- if present at all -- are generated from filtered views on application databases. In a future workflow-based approach, worklists will be generated by autonomous workflow services based on explicit process models and organizational models. This process-oriented approach will provide an integral view of entire health care processes or sub-processes. The paper describes the basic mechanisms of this approach and summarizes its benefits.