The quality of chest radiographs is a practical concern because deviations from quality standards cost radiologists' time, can lead to misdiagnosis, and carry legal risk. Automatic, reproducible assessment of the most important quality figures on every acquisition can enable a radiology department to measure, maintain, and improve quality rates on an everyday basis. A method is proposed here to automatically quantify the quality of a chest PA radiograph with respect to (i) collimation, (ii) patient rotation, and (iii) inhalation state, by localizing a set of anatomical features and computing quality figures in accordance with international standards. The anatomical features related to these quality aspects are robustly detected by a combination of three convolutional neural networks and two probabilistic anatomical atlases. An error analysis demonstrates the accuracy and robustness of the method. The proposed implementation runs in real time (under one second) on a CPU without GPU support.
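As an illustration of how such a quality figure can be computed from detected landmarks, the sketch below estimates patient rotation from the symmetry of the medial clavicle ends about the spinal midline. The landmark names, the choice of landmarks, and the normalisation are assumptions for this example, not the paper's exact formulation.

```python
import numpy as np

def rotation_figure(clav_left, clav_right, spine_top, spine_bottom):
    """Quantify patient rotation on a chest PA radiograph as the
    asymmetry of the medial clavicle ends about the spinal midline.
    All arguments are (x, y) pixel coordinates of detected landmarks
    (hypothetical names; any landmark detector could supply them).
    Returns a signed ratio in [-1, 1]; 0 means no rotation."""
    p, q = np.asarray(spine_top, float), np.asarray(spine_bottom, float)
    axis = (q - p) / np.linalg.norm(q - p)      # unit vector along the spine
    normal = np.array([-axis[1], axis[0]])      # in-plane normal to the midline

    # Signed distances of each medial clavicle end to the midline.
    d_left = float(np.dot(np.asarray(clav_left, float) - p, normal))
    d_right = float(np.dot(np.asarray(clav_right, float) - p, normal))

    # A symmetric acquisition gives |d_left| == |d_right|, i.e. a figure of 0.
    return (abs(d_left) - abs(d_right)) / (abs(d_left) + abs(d_right))
```

With a vertical spine at x = 150 and clavicle ends at x = 80 and x = 220, the distances are equal and the figure is 0; shifting one clavicle end outward yields a positive or negative value indicating the rotation direction.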
The purpose of this paper is to investigate the automatic evaluation of patient positioning and field of view (FoV) in head CT scans. Studies have shown an elevated risk of radiation-induced cataract in patients undergoing head CT examinations. The American Association of Physicists in Medicine (AAPM) published a protocol for head CT scans that includes requirements linking the optimal scan angle to anatomical landmarks on the skull. To help sensitize staff to the need for correct patient positioning, a software-based tool was developed that detects non-optimal patient positioning. Our experiments were conducted on 209 head CT exams acquired at the University Medical Center Hamburg-Eppendorf (UKE), all performed on the same Philips iCT scanner. Each exam contains a 3D volume with an in-plane voxel spacing of 0.44 mm × 0.44 mm and a slice distance of 1 mm. As ground truth, anatomical landmarks on the skull were annotated independently by three different readers. We applied an atlas registration technique to map CT scans to a probabilistic anatomical atlas. For a new CT scan, previously defined model landmarks are mapped back to the CT volume when it is registered to the atlas, thus labelling new head CT scans. From the locations of the detected landmarks we derive the deviation of the actual head angulation and scan length from their optimal values. Furthermore, the presence of the eye lenses in the FoV is predicted. The median error of the estimated landmark positions, measured as the distance to the plane generated from the ground-truth landmark positions, is below 1 mm and comparable to the inter-observer variability. A classifier predicting the presence of the eye lenses in the FoV from the estimated landmark locations achieves a κ value of 0.74. Furthermore, there is moderate agreement between the estimated deviations from optimal head tilt and scan length and an expert's rating.
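A minimal sketch of how a head-tilt deviation can be derived from detected landmark positions: fit a plane to the landmarks and measure its angle against the axial scan plane. The landmark interface is hypothetical, and the paper's exact quality figures may differ.

```python
import numpy as np

def tilt_deviation_deg(landmarks):
    """Estimate the head-tilt deviation, in degrees, as the angle between
    the least-squares plane through the detected skull landmarks and the
    axial (z = const) scan plane. `landmarks` is an (N, 3) array of
    (x, y, z) positions in mm (hypothetical input; N >= 3)."""
    pts = np.asarray(landmarks, float)
    centred = pts - pts.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(centred)
    normal = vt[-1] / np.linalg.norm(vt[-1])
    # Angle between the landmark-plane normal and the scan axis (0, 0, 1).
    cos_a = abs(float(np.dot(normal, [0.0, 0.0, 1.0])))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```

For landmarks lying in an axial plane the deviation is 0°; tilting the landmark plane by 10° about the y-axis yields a deviation of 10°.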
Image registration and segmentation are two important tasks in medical image analysis. However, the validation of algorithms, for non-linear registration in particular, often poses significant challenges:1,2 anatomical labelling of scans for the validation of segmentation algorithms is often not available and is tedious to obtain. One possibility for obtaining suitable ground truth is to use anatomically labelled atlas images. Such atlas images are, however, generally limited to single subjects, and the displacement field of the registration between the template and an arbitrary data set is unknown. The precise registration error therefore cannot be determined, and surrogate performance measures such as the consistency error must be adopted instead. Thus, validation requires that some form of ground truth be available.
In this work, an approach to generating a synthetic ground-truth database for the validation of image registration and segmentation is proposed. Its application is illustrated with the validation of a registration procedure on 50 magnetic resonance images from different patients and two atlases. Three different non-linear image registration methods were tested to obtain a synthetic validation database consisting of 50 anatomically labelled brain scans.
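One way to make the idea concrete: apply a *known* synthetic displacement field to a labelled atlas, so that the true deformation, and hence the error of any registration run on the resulting pair, is available by construction. The 2-D toy interface below is illustrative, not the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_known_field(labels, disp):
    """Deform a labelled atlas with a *known* displacement field so the
    ground-truth correspondence of every voxel is available by construction.
    labels : (H, W) integer label image (toy 2-D stand-in for a brain atlas)
    disp   : (2, H, W) displacement field in voxels (dy, dx) -- the known truth
    Nearest-neighbour interpolation (order=0) keeps the labels integral."""
    grid = np.indices(labels.shape).astype(float)   # identity coordinates
    coords = grid + disp                            # pull-back sampling positions
    return map_coordinates(labels, coords, order=0, mode="nearest")

# A registration algorithm run between `labels` and the warped image can then
# be scored voxel-wise against `disp`, the exact deformation that produced it.
```
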
FDG-PET is increasingly used for the evaluation of dementia patients, as major neurodegenerative disorders, such as
Alzheimer's disease (AD), Lewy body dementia (LBD), and Frontotemporal dementia (FTD), have been shown to
induce specific patterns of regional hypo-metabolism. However, the interpretation of FDG-PET images of patients with
suspected dementia is not straightforward, since patients are imaged at different stages of progression of
neurodegenerative disease, and the indications of reduced metabolism due to neurodegenerative disease appear slowly
over time. Furthermore, different diseases can cause rather similar patterns of hypo-metabolism. Therefore, classification
of FDG-PET images of patients with suspected dementia may lead to misdiagnosis. This work aims to find an optimal
subset of features for automated classification, in order to improve classification accuracy of FDG-PET images in
patients with suspected dementia. A novel feature selection method is proposed, and performance is compared to
existing methods. The proposed approach adopts a combination of balanced class distributions and feature selection methods. This is demonstrated to provide high classification accuracy for FDG-PET brain images of normal controls and dementia patients, comparable with alternative approaches, while yielding a compact set of features.
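The general idea of combining balanced class distributions with feature selection can be sketched as follows, using scikit-learn's univariate `SelectKBest`/`f_classif` as a stand-in selector and simple random undersampling for balancing; the paper's actual method is not reproduced here.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

def balanced_select(X, y, k, seed=0):
    """Illustrative stand-in for 'balanced class distributions + feature
    selection': undersample the majority classes to the size of the
    smallest class, then rank features on the balanced subset.
    Returns the (sorted) indices of the k highest-scoring features."""
    X, y = np.asarray(X, float), np.asarray(y)
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n = counts.min()
    # Draw n samples per class so every class contributes equally.
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n, replace=False)
        for c in classes
    ])
    selector = SelectKBest(f_classif, k=k).fit(X[keep], y[keep])
    return np.sort(selector.get_support(indices=True))
```

Balancing before scoring prevents the majority class from dominating the univariate statistics, which matters when, as here, controls typically outnumber patients of any single dementia type.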
A novel and robust method for automatic scan planning of MRI examinations of knee joints is presented. Clinical
knee examinations require acquisition of a 'scout' image, in which the operator manually specifies the scan volume
orientations (off-centres, angulations, field-of-view) for the subsequent diagnostic scans. This planning task is
time-consuming and requires skilled operators. The proposed automated planning system determines orientations
for the diagnostic scan by using a set of anatomical landmarks derived by adapting active shape models of the
femur, patella and tibia to the acquired scout images. The expert knowledge required to position scan geometries
is learned from previous manually planned scans, allowing individual preferences to be taken into account. The
system is able to automatically discriminate between left and right knees. This makes it possible to use and merge training data from both left and right knees, and to automatically transform all learned scan geometries to the side for which a plan is required, providing convenient integration of the automated scan planning system into the clinical routine. Assessment of the method on 88 images from 31 different individuals, exhibiting strong anatomical and positional variability, demonstrates the success, robustness and efficiency of all parts of the proposed approach, which thus has the potential to significantly improve the clinical workflow.
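The transfer of a learned scan geometry to a new exam can be sketched as a rigid (Kabsch) alignment of corresponding landmarks, with left knees mirrored to a right-knee convention first. The function names, landmark interface and mirroring convention are assumptions for this example, not the paper's implementation.

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch: least-squares rotation R and translation t with dst ~ src @ R.T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # forbid reflections
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

def transfer_plan(train_lm, train_plan_pts, new_lm, new_is_left=False):
    """Map scan-geometry points learned on a (right-knee) training exam onto a
    new exam via landmark alignment; left knees are mirrored in x first."""
    new_lm = np.asarray(new_lm, float)
    if new_is_left:                          # mirror to the right-knee convention
        new_lm = new_lm * np.array([-1.0, 1.0, 1.0])
    R, t = rigid_align(train_lm, new_lm)
    out = np.asarray(train_plan_pts, float) @ R.T + t
    if new_is_left:                          # mirror the transferred plan back
        out = out * np.array([-1.0, 1.0, 1.0])
    return out
```
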
In clinical MRI examinations, the geometry of diagnostic scans is defined in an initial planning phase. The operator plans the scan volumes (off-centre, angulation, field-of-view) with respect to patient anatomy in 'scout' images. Often multiple plans are required within a single examination, distracting attention from the patient waiting in the scanner. A novel and robust method is described for automated planning of neurological MRI scans, capable of handling strong shape deviations from healthy anatomy. The expert knowledge required to position scan geometries is learned from previous example plans, allowing site-specific styles to be readily taken into account. The proposed method first fits an anatomical model to the scout data, and then new scan geometries are positioned with respect to extracted landmarks. The accuracy of landmark extraction was measured to be comparable to the inter-observer variability, and automated plans are shown to be highly consistent with those created by expert operators using clinical data. The results of the presented evaluation demonstrate the robustness and applicability of the proposed approach, which has the potential to significantly improve clinical workflow.
A new approach is described for 3D vessel centreline extraction from multiple, ECG-gated, calibrated X-ray angiographic projections of the coronary arteries. The proposed method extracts 3D vessel centrelines directly, without requiring either prior 2D centreline estimates or a complete volume reconstruction. A front-propagation algorithm, initialised with one or more 3D seed points, is used to explore a volume of interest centred on the isocentre of the projection geometry. The expansion of a 3D region is controlled by forward projecting boundary points into all projection images to compute vessel response measurements, which are combined into a 3D propagation speed so that the front expands rapidly where all projection images yield high vessel responses. Vessel centrelines are obtained by reconstructing the paths of fastest propagation. Based on these centrelines, a volume model of the coronaries can be constructed by forward projecting centreline points into the 2D images, where the vessel borders are detected. The accuracy of the method was demonstrated by comparing automatically extracted centrelines with 3D centrelines derived from manually segmented projection data.
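The front-propagation idea can be sketched on a toy 2-D grid with Dijkstra's algorithm standing in for the fast-marching front, using a travel cost of 1/speed per step; the speed image is assumed to already combine the per-projection vessel responses (e.g. their minimum). This is an illustration, not the paper's implementation.

```python
import heapq
import numpy as np

def propagate(speed, seed):
    """Dijkstra stand-in for front propagation: minimal arrival time from
    `seed` over a grid, with travel cost 1/speed per step. `speed` is assumed
    to combine the vessel responses from all projections (e.g. their minimum),
    so the front advances quickly only where every projection supports a vessel."""
    arrival = np.full(speed.shape, np.inf)
    arrival[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (y, x) = heapq.heappop(heap)
        if t > arrival[y, x]:
            continue  # stale entry, already settled with a lower time
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < speed.shape[0] and 0 <= nx < speed.shape[1]:
                nt = t + 1.0 / max(speed[ny, nx], 1e-9)
                if nt < arrival[ny, nx]:
                    arrival[ny, nx] = nt
                    heapq.heappush(heap, (nt, (ny, nx)))
    return arrival

# Centreline extraction would then backtrack the path of fastest propagation
# (steepest descent of `arrival`) from a distal point to the seed.
```
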