The spinal column is one of the most important anatomical structures in the human body, and its centerline, that is, the centerline of the vertebral bodies, is an important feature used by many applications in medical image processing. In the past, several approaches have been proposed to extract the centerline of the spinal column using edge or region information of the vertebral bodies. However, those approaches may suffer from difficulties in edge detection or region segmentation of the vertebral bodies when vertebral diseases such as osteoporosis or compression fractures are present. In this paper, we propose a novel approach based on machine learning to robustly extract the centerline of the spinal column from three-dimensional CT data. Our approach first applies a machine learning algorithm, AdaBoost, to detect spinal cord regions, which have an S-shape similar and close to that of the spinal column but can be detected more stably. A centerline of the detected spinal cord regions is then obtained by fitting a spline curve to their central points, using the associated AdaBoost scores as weights. Finally, the obtained spinal cord centerline is linearly deformed and translated in the sagittal direction to fit the top and bottom boundaries of the vertebral bodies, yielding a centerline of the spinal column. Experimental results on a large CT data set show the effectiveness of our approach.
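The score-weighted curve fit in the second step can be sketched as follows. This is a minimal illustration using NumPy's weighted polynomial fit as a stand-in for the spline described above; all data, scores, and parameter values here are synthetic assumptions, not the paper's:

```python
import numpy as np

def fit_weighted_centerline(z, y, scores, degree=5):
    """Fit a smooth curve y(z) to detected center points, weighting each
    point by its detection score so unreliable detections count less."""
    coeffs = np.polyfit(z, y, deg=degree, w=np.asarray(scores))
    return np.poly1d(coeffs)

# Synthetic S-shaped centerline with two low-confidence outlier detections.
rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 50)
true_y = 0.3 * np.sin(2 * np.pi * z)            # idealized S-curve
noisy_y = true_y + rng.normal(0.0, 0.02, z.size)
noisy_y[10] += 0.3                               # spurious detections...
noisy_y[30] -= 0.3
scores = np.ones_like(z)
scores[[10, 30]] = 0.05                          # ...given low scores
curve = fit_weighted_centerline(z, noisy_y, scores)
```

Because the outliers carry small weights, they barely pull the fitted curve away from the true centerline.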
Body part recognition based on CT slice images is very important for many applications in PACS and CAD systems. In this paper, we propose a novel approach that can robustly recognize which body part a slice image belongs to. We focus on how to effectively express and use the unique statistical correlation between the CT values and the position information of each body part, and we apply the machine learning method AdaBoost to do so. Our approach consists of a training process and a recognition process. In the training process, we first define the whole body using a set of specific classes to ensure that training images in the same class have high similarity, and prepare a training image set (positive and negative samples) for each class. Second, the training images in each class are normalized to a fixed size and rotation. Third, features are calculated for each normalized training image. Finally, AdaBoosted histogram classifiers are trained, so that after training each class has its own classifiers. In the recognition process, given a series of CT images, the scores of all classes for each slice image are calculated with the trained classifiers. Then, based on the scores of each slice and a simple model of body part sequence continuity, we use dynamic programming (DP) to eliminate false recognition results. Experimental results on 440 unknown series, including ones with lesions, show that our approach has a high recognition rate.
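The continuity-plus-DP step can be sketched as follows. We assume, for illustration only, a continuity model in which class indices follow the body-axis order and may only stay the same or advance from one slice to the next; the score table is synthetic:

```python
import numpy as np

def dp_label_slices(scores):
    """Assign one class per slice so class indices are non-decreasing
    along the scan (a simple body-part continuity model), maximizing
    total classifier score via dynamic programming."""
    scores = np.asarray(scores, dtype=float)   # shape (n_slices, n_classes)
    n, k = scores.shape
    best = np.full((n, k), -np.inf)
    back = np.zeros((n, k), dtype=int)
    best[0] = scores[0]
    for i in range(1, n):
        run, arg = -np.inf, 0
        for c in range(k):                     # running prefix max over previous row
            if best[i - 1, c] > run:
                run, arg = best[i - 1, c], c
            best[i, c] = run + scores[i, c]
            back[i, c] = arg
    labels = [int(np.argmax(best[-1]))]
    for i in range(n - 1, 0, -1):
        labels.append(int(back[i, labels[-1]]))
    return labels[::-1]

# Per-slice class scores (rows: slices, cols: classes in body-axis order).
scores = [[0.9, 0.1, 0.0],
          [0.2, 0.7, 0.1],
          [0.6, 0.3, 0.1],   # noisy slice: raw argmax would break the order
          [0.1, 0.8, 0.1],
          [0.0, 0.2, 0.8]]
labels = dp_label_slices(scores)
```

The per-slice argmax would label the third slice with class 0, violating the anatomical order; the DP overrides this false recognition.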
In this paper, we propose a novel machine learning approach for interactive lesion segmentation on CT and MRI images. Our approach consists of a training process and a segmenting process. In the training process, we train AdaBoosted histogram classifiers to distinguish true boundary positions from false ones on 1-D intensity profiles of lesion regions. In the segmenting process, given a marker indicating the rough location of a lesion, the proposed solution segments its region automatically using the trained AdaBoosted histogram classifiers. If the segmented result is imperfect, the solution redoes the segmentation based on one correct boundary location designated by the user and produces a new, satisfactory result. There are two novelties in our approach. The first is that we use AdaBoost in the training process to learn the diverse intensity distributions of lesion regions, and successfully utilize the trained classifiers in the segmenting process. The second is that we provide a reliable and user-friendly way to rectify the segmented result interactively in the segmenting process; dynamic programming is used to find a new optimal boundary path. Experimental results show that our approach can segment lesion regions successfully despite the diverse intensity distributions of the lesion regions and the variability of marker locations and lesion shapes. Our framework is also generic and can be applied to blob-like target segmentation with diverse intensity distributions in other applications.
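The optimal-path search can be sketched as follows, assuming (as an illustration, not the paper's exact formulation) that the classifier yields a boundary score for each radius along each of several profiles around the marker, and that the boundary radius changes only gradually between neighboring profiles. The scores below are synthetic:

```python
import numpy as np

def dp_boundary_path(score, max_step=1):
    """Pick one boundary radius per profile, maximizing total boundary
    score while limiting radius jumps between neighboring profiles
    (a smoothness constraint), via dynamic programming."""
    score = np.asarray(score, float)     # shape (n_profiles, n_radii)
    n, m = score.shape
    best = np.full((n, m), -np.inf)
    back = np.zeros((n, m), dtype=int)
    best[0] = score[0]
    for a in range(1, n):
        for r in range(m):
            lo, hi = max(0, r - max_step), min(m, r + max_step + 1)
            prev = int(np.argmax(best[a - 1, lo:hi])) + lo
            best[a, r] = best[a - 1, prev] + score[a, r]
            back[a, r] = prev
    path = [int(np.argmax(best[-1]))]
    for a in range(n - 1, 0, -1):
        path.append(int(back[a, path[-1]]))
    return path[::-1]

# Boundary scores for 4 profiles over 5 candidate radii.
score = [[0.0, 0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 0.5, 0.0, 0.9],   # spurious off-boundary peak
         [0.0, 0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0, 0.0]]
path = dp_boundary_path(score)
```

A greedy per-profile argmax would jump to the spurious peak in the second profile; the DP keeps the smooth path.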
The purpose of this research is to develop a method for recognizing the shapes of ribs in chest X-rays, which can serve as intelligent diagnostic assistance, decreasing false positives (FPs) due to ribs in chest CAD and automatically generating a schema in reports. Shapes of ribs are manually extracted from several CR images to create a rib shape model using a point distribution model (PDM), in which the shapes of anterior/posterior ribs are represented as sets of coordinates and an arbitrary rib shape is expressed using only the principal components with a high contribution ratio to shape variation. Shapes of ribs in a chest X-ray image are identified as follows: (a) Identify the lung field. (b) Find an allowable range of principal-component weights within which the shape model aligns to an edge of the lung field from (a). (c) Create several shape model images by applying different principal-component weights. (d) Apply a six-direction Gabor filter to the X-ray image and to each shape model image to create images containing only rib elements. (e) From the images created in (d), search for the shape model image with the highest correlation coefficient to the X-ray image. We applied the rib shape model to 100 test images while changing the principal-component weights, and were able to identify the positions of ribs and their anatomical rib numbers with an average error of no more than two-fifths of a rib, and no more than half a rib in the case of anterior ribs.
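The PDM construction can be illustrated as follows: a minimal NumPy sketch on synthetic rib-like arcs whose only mode of variation is their vertical scale. All data here are made up for illustration:

```python
import numpy as np

def build_shape_model(shapes, n_modes=1):
    """Point distribution model: mean shape plus principal components
    of shape variation, obtained from an SVD of the centered shapes."""
    X = np.asarray(shapes, float)              # (n_shapes, 2 * n_points)
    mean = X.mean(axis=0)
    _, sv, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes], sv[:n_modes]

def reconstruct(mean, modes, weights):
    """Express an arbitrary shape as the mean plus weighted modes."""
    return mean + np.asarray(weights) @ modes

# Synthetic training arcs (x, y coordinates interleaved), varying in height.
t = np.linspace(0.0, np.pi, 20)
shapes = [np.column_stack([np.cos(t), a * np.sin(t)]).ravel()
          for a in (0.8, 0.9, 1.0, 1.1, 1.2)]
mean, modes, sv = build_shape_model(shapes, n_modes=1)
w = (shapes[0] - mean) @ modes.T               # project a shape onto the mode
recon = reconstruct(mean, modes, w)
```

Because the synthetic variation is one-dimensional, a single mode with a high contribution ratio reconstructs the training shapes essentially exactly; real rib shapes would need a few more modes.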
We formulated a new dynamic range compression (DRC) processing algorithm that can be applied to chest CT images, based on an existing DRC algorithm. The new algorithm, which we named "Generalized DRC processing," is categorized as shift-variant image processing and can explicitly utilize the results of anatomical region recognition; moreover, its application is not restricted to DRC. Owing to its shift-variant characteristics, the method can enhance high-frequency signals only in the lung, yielding higher image quality than conventional unsharp masking (USM). When Generalized DRC processing is applied to chest CT images, the representation of soft tissues is improved, by roughly recognizing the lung region, without affecting the density and contrast of the lung region. Unlike the conventional double-gamma method, our method significantly reduces artifacts. In recent years, the reading volume of chest CT images has been increasing greatly; in view of this, we propose this method to reduce the number of windowing operations on a viewer. We believe this will improve overall reading efficiency and, in particular, allow more efficient lung cancer CT screening.
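The shift-variant idea, enhancing high-frequency detail only inside a recognized region, can be sketched as follows. This is a toy illustration with a box blur and a hand-made mask, not the authors' actual algorithm:

```python
import numpy as np

def box_blur(img, k=5):
    """Simple separable box blur (a stand-in for any low-pass filter)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def shift_variant_enhance(img, mask, gain=1.5):
    """Enhance high-frequency detail only where mask is 1 (e.g. a roughly
    recognized lung region); leave all other regions untouched."""
    high = img - box_blur(img)
    return img + gain * mask * high

# Synthetic textured image; enhance only the left half.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, (32, 32))
mask = np.zeros((32, 32))
mask[:, :16] = 1.0
out = shift_variant_enhance(img, mask)
```

A spatially varying gain map (rather than a hard 0/1 mask) would avoid a visible seam at the region boundary in practice.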
Striped patterns are superimposed on radiographic images exposed with a stationary grid. When those images are displayed on a monitor, the scaling process causes low-frequency moiré patterns to overlap the object shadow. To prevent these moiré patterns, it is necessary to remove the grid patterns before the scaling process. One-dimensional filtering can remove the grid pattern, but it removes some diagnostic information as well. We developed two different grid pattern removal processes using 2-dimensional techniques. A 2-dimensional technique can localize information two-dimensionally in the frequency domain, so that the localized information includes the grid information; it can therefore remove the grid pattern with minimal loss of diagnostic information. The quality of images processed by the two 2-dimensional methods and by the conventional 1-dimensional filtering method was evaluated. No grid patterns were observed in the images processed by any of the three methods. However, compared with the 1-dimensional filtered images, the images processed by the 2-dimensional methods were much sharper and contained more detailed information.
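A minimal sketch of the frequency-domain idea: locate the grid's spectral peak and notch it (and its conjugate) out of the 2-D spectrum, leaving the rest of the spectrum, and hence most diagnostic detail, intact. This is a simplified stand-in for the two methods described above, on synthetic data:

```python
import numpy as np

def remove_grid_2d(img, notch_halfwidth=2):
    """Suppress a stationary-grid stripe by zeroing a small neighborhood
    around its peak (and the conjugate peak) in the 2-D Fourier domain."""
    F = np.fft.fft2(img)
    mag = np.abs(F).copy()
    mag[0, 0] = 0.0                                  # ignore the DC term
    peak = np.unravel_index(np.argmax(mag), mag.shape)
    conj = (-peak[0] % img.shape[0], -peak[1] % img.shape[1])
    for y, x in (peak, conj):
        for dy in range(-notch_halfwidth, notch_halfwidth + 1):
            for dx in range(-notch_halfwidth, notch_halfwidth + 1):
                F[(y + dy) % img.shape[0], (x + dx) % img.shape[1]] = 0.0
    return np.fft.ifft2(F).real

# Synthetic object (a smooth blob) plus a vertical grid stripe.
x = np.arange(64)
stripe = 2.0 * np.sin(2 * np.pi * 8 * x / 64)[None, :] * np.ones((64, 1))
yy, xx = np.mgrid[:64, :64]
blob = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 8.0 ** 2))
img = blob + stripe
clean = remove_grid_2d(img)
```

Because the grid energy is localized at one spectral peak while the object occupies low frequencies, the notch removes the stripe almost without touching the object.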
To selectively enhance microcalcifications without enhancing noise, PEM (Pattern Enhancement processing for Mammography) has been developed, utilizing not only frequency information but also the structural information of the specified objects. PEM processing uses two structural characteristics: steep edge structures and low-density isolated-point structures. A visual evaluation of PEM processing was performed using CR mammography images at two different resolutions. The image enhanced by PEM processing was compared with the unenhanced image and with a conventional unsharp-mask processed image. In the PEM processed image, the increase in noise due to enhancement was suppressed compared with the conventional unsharp-mask processed image. An evaluation using the CDMAM phantom showed that PEM processing improved the detection performance for minute circular patterns. By combining PEM processing with low- and medium-frequency enhancement processing, both mammary glands and microcalcifications are clearly enhanced.
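The isolated-point characteristic can be illustrated with a toy detector that flags pixels standing well above their ring neighborhood and boosts only those, so flat noise is left alone. This is our simplification for illustration, not the actual PEM algorithm:

```python
import numpy as np

def isolated_point_map(img, thresh=0.5):
    """Crude isolated-point detector: flag pixels whose value exceeds
    the mean of their 8-neighbor ring by more than `thresh`."""
    pad = np.pad(img, 1, mode="edge")
    ring = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy, dx in [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]:
        ring += pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return (img - ring / 8.0) > thresh

def enhance_points(img, gain=1.0, thresh=0.5):
    """Boost only pixels flagged as isolated points, so background noise
    is not amplified along with the microcalcification-like structures."""
    m = isolated_point_map(img, thresh)
    return img + gain * m * img

# Noisy background with two bright isolated points.
rng = np.random.default_rng(2)
img = rng.normal(0.0, 0.05, (32, 32))
img[5, 5] += 2.0
img[20, 20] += 2.0
out = enhance_points(img)
```

Unsharp masking would amplify every high-frequency fluctuation here; the structural test boosts only the two point-like structures.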
The appearance of an image is closely related to the luminance dependence of human visual characteristics. Radiographic images are displayed on CRTs with various luminance levels as well as on high-luminance light boxes. We studied a tone scale that can improve consistency of appearance among devices with different luminance. Radiologists likely diagnose images based on the relation between the brightness of a region of interest and that of its surrounding area. Lightness is defined as the brightness of a region of interest relative to the maximum luminance level of the image; we believe this lightness index can be applied to realize appearance matching of radiographic images. Lightness matching can be realized by displaying images with a tone scale whose gradient, on the plane of log output luminance versus input data level, agrees among display systems; in this paper we call this a 'lightness-equivalent' characteristic. We evaluated the appearance consistency of images displayed with the log-luminance linear tone scale, which realizes the lightness-equivalent characteristic, compared with images displayed with a perceptually linear tone scale. In the evaluation, the log-luminance linear tone scale gave almost the same appearance among devices with different luminance. On the other hand, the perceptually linear tone scale gave lower visual contrast for images on the lower-luminance device than on the higher-luminance device, which might have led observers to perceive different appearances.
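The lightness-equivalent tone scale can be sketched as follows: log luminance is made linear in the input data with the same gradient on every device, so relative lightness (luminance divided by the device's peak) is identical across devices. The peak luminance values and slope below are arbitrary illustrative assumptions:

```python
import numpy as np

def log_linear_tone_scale(data, l_max, slope=2.0):
    """Map normalized input data in [0, 1] to display luminance so that
    log10(luminance) is linear in the data with a fixed gradient
    (contrast); devices then differ only in their peak luminance."""
    return l_max * 10.0 ** (-slope * (1.0 - np.asarray(data, dtype=float)))

d = np.linspace(0.0, 1.0, 11)
bright = log_linear_tone_scale(d, l_max=500.0)   # e.g. a light box
dim = log_linear_tone_scale(d, l_max=150.0)      # e.g. a dimmer CRT
```

With the same slope on both devices, every input level maps to the same fraction of peak luminance, which is exactly the lightness-matching property described above.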
Dual-energy X-ray absorptiometry (DXA) is a bone densitometry technique for diagnosing osteoporosis, and has gradually been gaining popularity due to its high precision. However, DXA examinations are time-consuming because of the pencil-beam scan, and the equipment is expensive. In this study, we examined a new bone densitometry technique (CR-DXA) utilizing an X-ray imaging system and Computed Radiography (CR) as used in medical X-ray image diagnosis. A high level of measurement precision and accuracy could be achieved by optimizing the X-ray tube voltage and filter and by applying various nonuniformity corrections based on simulation and experiment. A phantom study using a bone mineral block showed a precision of 0.83% c.v. (coefficient of variation) and an accuracy of 0.01 g/cm2, suggesting that a degree of measurement precision and accuracy practically equivalent to that of DXA is achieved. CR-DXA is thus considered to enable simple, quick, and precise bone mineral density measurement.
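At its core, any DXA-style measurement reduces to solving two attenuation equations (one per energy) for the bone and soft-tissue areal densities. A minimal sketch with made-up attenuation coefficients, not calibrated values:

```python
import numpy as np

def dxa_thickness(m_low, m_high, mu):
    """Solve the two-energy log-attenuation equations for bone and
    soft-tissue areal densities. mu is a 2x2 matrix of mass attenuation
    coefficients: rows = (low, high) energy, cols = (bone, soft tissue)."""
    return np.linalg.solve(np.asarray(mu, dtype=float),
                           np.array([m_low, m_high], dtype=float))

# Illustrative (made-up) coefficients in cm^2/g.
mu = [[0.60, 0.25],
      [0.30, 0.20]]
true_bone, true_soft = 1.2, 18.0               # areal densities in g/cm^2
m_low = 0.60 * true_bone + 0.25 * true_soft    # simulated measurements
m_high = 0.30 * true_bone + 0.20 * true_soft
bone, soft = dxa_thickness(m_low, m_high, mu)
```

The system is solvable because bone and soft tissue attenuate the two energies in different ratios; the nonuniformity corrections mentioned above are what make the measured log attenuations trustworthy inputs in practice.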
It has been reported that the dual-energy subtraction method enhances the ability to detect abnormal shadows. However, because the subtracted image is significantly inferior to the original in signal-to-noise ratio (SNR), the X-ray dose normally used for chest radiographs has not yielded subtracted images with adequate SNRs. Under these circumstances, we have focused on the fact that there is a correlation between the noise contents of the bone and soft-tissue subtracted images, although there is no correlation between their signal contents. We propose an algorithm that improves the SNR of subtraction images by reducing only the noise.
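The exploited fact, correlated noise but uncorrelated signal, suggests a least-squares sketch like the following: estimate the shared noise from the high-frequency content of the companion subtraction image and subtract a scaled copy. This is our illustration rather than the authors' algorithm, and the smooth synthetic signals keep the high-pass bands noise-dominated:

```python
import numpy as np

def reduce_correlated_noise(target, other, k=5):
    """Reduce noise in `target` using `other`, assuming the two images
    share correlated noise but uncorrelated (smooth) signal: fit the
    shared high-frequency noise by least squares and subtract it."""
    def highpass(img):
        pad = k // 2
        p = np.pad(img, pad, mode="edge")
        low = sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                  for dy in range(k) for dx in range(k)) / (k * k)
        return img - low
    ht, ho = highpass(target), highpass(other)
    alpha = np.sum(ht * ho) / np.sum(ho * ho)   # LS scale of shared noise
    return target - alpha * ho

# Two subtraction-like images: different smooth signals, shared noise.
rng = np.random.default_rng(3)
yy, xx = np.mgrid[:64, :64] / 64.0
sig_t = np.sin(2 * np.pi * yy)        # e.g. soft-tissue image signal
sig_o = np.cos(2 * np.pi * xx)        # e.g. bone image signal
noise = rng.normal(0.0, 0.2, (64, 64))
target = sig_t + noise
other = sig_o + 1.5 * noise           # correlated noise, scaled
clean = reduce_correlated_noise(target, other)
```

Because the fitted scale captures the shared noise while the uncorrelated signals average out of the fit, the residual error drops well below that of the raw subtraction image.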