X-ray computed tomography (CT) reconstructs cross-sectional images from projection data. However, the ionizing X-ray radiation associated with CT scanning may induce cancer and genetic damage, raising public concern, so reducing radiation dose has attracted major attention. Few-view CT image reconstruction is an important approach to dose reduction, and data-driven algorithms have recently shown great potential for solving the few-view CT problem. In this paper, we develop a dual network architecture (DNA) for reconstructing images directly from sinograms. In the proposed DNA method, a point-wise fully-connected layer learns the backprojection process, requiring significantly less memory than the prior art and only O(C×N×Nc) parameters, where N and Nc denote the dimension of the reconstructed images and the number of projections, respectively, and C is an adjustable parameter that can be set as low as 1. Our experimental results demonstrate that DNA achieves competitive performance relative to other state-of-the-art methods. Interestingly, natural images can be used to pre-train DNA to avoid overfitting when the number of real patient images is limited.
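As a rough illustration of the parameter budget, the following PyTorch sketch shows one way a point-wise fully-connected backprojection layer with O(C×N×Nc) learnable weights could be organized. The re-sampling of the filtered sinogram onto the image grid, the weight shape, and the class name are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn

class PointWiseBackprojection(nn.Module):
    """Hypothetical learned backprojection layer with O(C*N*Nc) parameters.

    Assumes the filtered sinogram has already been re-sampled onto the
    image grid for every view, giving a tensor of shape (B, Nc, N, N).
    The layer learns C weight maps of shape (Nc, N) that scale each view's
    contribution per image row before summing over views, which keeps the
    parameter count at C*N*Nc instead of that of a dense
    sinogram-to-image fully-connected layer.
    """

    def __init__(self, n_views: int, img_size: int, channels: int = 1):
        super().__init__()
        # (C, Nc, N, 1): one weight per (channel, view, image row)
        self.weights = nn.Parameter(
            torch.full((channels, n_views, img_size, 1), 1.0 / n_views)
        )

    def forward(self, resampled_sino: torch.Tensor) -> torch.Tensor:
        # resampled_sino: (B, Nc, N, N) -> output: (B, C, N, N)
        x = resampled_sino.unsqueeze(1)        # (B, 1, Nc, N, N)
        x = x * self.weights.unsqueeze(0)      # broadcast learned weights
        return x.sum(dim=2)                    # sum over the Nc views


# Example: a 64-view sinogram re-sampled onto a 256x256 grid
layer = PointWiseBackprojection(n_views=64, img_size=256, channels=1)
recon = layer(torch.randn(2, 64, 256, 256))   # -> (2, 1, 256, 256)
```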
Cone-beam breast computed tomography (CT) provides true 3D breast images with isotropic resolution and high-contrast information, detecting calcifications as small as a few hundred microns and revealing subtle tissue differences. However, the breast is highly sensitive to x-ray radiation, so reducing radiation dose is critically important. Few-view cone-beam CT uses only a fraction of the x-ray projections acquired by a standard cone-beam breast CT scan, enabling a significant reduction in radiation dose; however, the insufficient sampling causes severe streak artifacts in images reconstructed with conventional methods. We propose a deep-learning-based image reconstruction method that trains a residual neural network to produce high-quality breast CT images from few-view data. In this study, we evaluate breast image reconstruction from one third and one quarter of the projection views of standard cone-beam breast CT. Using a clinical breast imaging dataset, we train the neural network in a supervised fashion to map few-view CT images to the corresponding full-view CT images. Experimental results show that the deep-learning-based reconstruction allows few-view breast CT to achieve a radiation dose below 6 mGy per cone-beam CT scan, the threshold set by the FDA for mammographic screening.
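A minimal sketch of the kind of residual mapping described above, assuming few-view and full-view reconstructed slices are available as training pairs; the layer counts, filter sizes, and class name are illustrative, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class FewViewResidualCNN(nn.Module):
    """Hypothetical residual CNN for few-view artifact removal.

    Input: a streaky few-view reconstruction slice (B, 1, H, W).
    The network predicts the artifact (residual) image, which is subtracted
    from the input, so the model only has to learn the streak pattern.
    """

    def __init__(self, n_filters: int = 64, n_layers: int = 8):
        super().__init__()
        layers = [nn.Conv2d(1, n_filters, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(n_layers - 2):
            layers += [nn.Conv2d(n_filters, n_filters, 3, padding=1),
                       nn.BatchNorm2d(n_filters),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(n_filters, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, few_view_img: torch.Tensor) -> torch.Tensor:
        return few_view_img - self.body(few_view_img)   # residual learning


# Supervised training pairs: (few-view slice, full-view slice)
model = FewViewResidualCNN()
loss_fn = nn.MSELoss()
pred = model(torch.randn(4, 1, 256, 256))
loss = loss_fn(pred, torch.randn(4, 1, 256, 256))
loss.backward()
```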
KEYWORDS: Breast, Tissues, Skin, Image segmentation, Mammography, 3D image processing, Breast imaging, Image processing, Computed tomography, Breast cancer
Cone Beam Breast CT (CBBCT) is a three-dimensional breast imaging modality with high contrast resolution and no tissue overlap. These advantages make it possible to measure volumetric breast density accurately and quantitatively from CBBCT 3D images. Three major breast components need to be segmented: skin, fat, and glandular tissue. In this research, a modified morphological processing method is applied to the CBBCT images to detect and remove the breast skin. After the skin is removed, a two-step fuzzy clustering scheme adaptively clusters the remaining voxels into fat and glandular tissue based on voxel intensity. The CBBCT breast volume is thus divided into three categories: skin, fat, and glandular tissue. Clinical data are used, and the quantitative CBBCT breast density results are compared with the mammogram-based BI-RADS breast density categories.
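For illustration, here is a minimal NumPy sketch of fuzzy c-means clustering on voxel intensities, the building block such a two-step scheme relies on; the single-pass function, parameter values, and commented usage lines are assumptions, not the authors' exact scheme.

```python
import numpy as np

def fuzzy_cmeans_1d(intensities, n_clusters=2, m=2.0, n_iter=100, tol=1e-5):
    """Plain fuzzy c-means on voxel intensities (illustrative sketch only).

    intensities: 1-D array of skin-stripped breast voxel values.
    Returns cluster centers and the membership matrix (n_voxels, n_clusters).
    """
    x = intensities.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(0)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                   # memberships sum to 1

    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]  # weighted means
        dist = np.abs(x - centers.T) + 1e-12            # |voxel - center|
        u_new = 1.0 / (dist ** (2 / (m - 1)))
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers.ravel(), u

# Hypothetical usage on a skin-removed breast volume:
# voxels = volume[breast_mask]               # intensities inside the breast
# centers, u = fuzzy_cmeans_1d(voxels)       # fat vs. glandular split
# labels = u.argmax(axis=1)                  # gland = cluster with higher center
# density = (labels == centers.argmax()).mean()
```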
In Cone Beam Breast CT (CBBCT), breast calcifications have higher intensities than the surrounding tissues, and without the superposition of breast structures, their three-dimensional distribution can be revealed. In this research, exploiting the higher contrast of calcifications, local thresholding and histogram thresholding were used to select candidate calcification areas. Six features were extracted from each candidate: average foreground CT number, foreground CT number standard deviation, average background CT number, background CT number standard deviation, foreground-background contrast, and average edge gradient. To reduce false-positive candidates, a feed-forward back-propagation artificial neural network was designed, trained on radiologist-confirmed calcifications, and used as the classifier in the automatic calcification detection task. In preliminary experiments, 90% of the calcifications in the testing data sets were detected correctly, with an average of 10 false positives per data set.
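A hedged sketch of how the six features and the feed-forward classifier could be assembled, using a generic scikit-learn MLP as a stand-in for the custom back-propagation network; the function name, masks, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from sklearn.neural_network import MLPClassifier

def calcification_features(volume, candidate_mask, background_mask):
    """Six features per candidate (a sketch of the features named above).

    volume: 3-D CT-number array; candidate_mask/background_mask: boolean masks
    for one candidate's foreground voxels and its surrounding background shell.
    """
    fg = volume[candidate_mask]
    bg = volume[background_mask]
    gz, gy, gx = np.gradient(volume.astype(float))
    grad_mag = np.sqrt(gz**2 + gy**2 + gx**2)
    edge = candidate_mask & ~binary_erosion(candidate_mask)  # boundary voxels
    return np.array([
        fg.mean(),                  # average foreground CT number
        fg.std(),                   # foreground CT number std. dev.
        bg.mean(),                  # average background CT number
        bg.std(),                   # background CT number std. dev.
        fg.mean() - bg.mean(),      # foreground-background contrast
        grad_mag[edge].mean(),      # average edge gradient
    ])

# Feed-forward network trained on radiologist-confirmed labels
# (hypothetical feature matrix X and labels y: 1 = true calcification).
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
# clf.fit(X_train, y_train)
# keep = clf.predict(X_candidates) == 1    # reject false positives
```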
Flat-panel detector-based cone-beam CT usually employs the FDK algorithm for reconstruction. Traditionally, the row-wise ramp filtering is regularized by noise-suppression windows, such as the Shepp-Logan or Hamming window, before backprojection in order to obtain a reconstructed 3-D volume with acceptable SNR. Although noise is reduced, windowing the ramp filter can degrade spatial resolution and thereby reduce the sharpness of structure boundaries in the breast image, particularly impeding the detection of small calcifications and other very small abnormalities that may indicate early breast cancer. Furthermore, the reconstructed images are still characterized by smudges. To address these shortcomings, a wavelet regularization method is applied to the projection data before the row-wise ramp filtering inherent in FDK.
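The following sketch shows the idea of denoising a projection row with wavelet soft-thresholding before applying an unwindowed ramp filter; the wavelet, threshold value, and function name are illustrative assumptions rather than the paper's exact regularization.

```python
import numpy as np
import pywt

def wavelet_then_ramp(projection_row, threshold=0.05, wavelet="db4"):
    """Sketch: soft-threshold wavelet regularization of one detector row,
    followed by a plain (unwindowed) ramp filter, as an alternative to
    apodizing the ramp filter itself. Threshold and wavelet are illustrative.
    """
    # 1) Wavelet shrinkage of the raw projection row
    coeffs = pywt.wavedec(projection_row, wavelet)
    coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                            for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, wavelet)[: len(projection_row)]

    # 2) Row-wise ramp filtering in the frequency domain
    n = len(denoised)
    freqs = np.fft.fftfreq(n)
    ramp = np.abs(freqs)                 # |f|, no Hamming/Shepp-Logan window
    return np.real(np.fft.ifft(np.fft.fft(denoised) * ramp))

# filtered = np.apply_along_axis(wavelet_then_ramp, axis=-1, arr=projections)
# ...then feed `filtered` to the FDK backprojection step.
```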
Cone Beam Breast CT (CBBCT) acquires 3D breast images without breast compression, revealing more detailed and accurate information about breast lesions. In our research, based on the observation that tumor masses are more concentrated than the surrounding tissues, we designed a weighted average filter and a three-dimensional Iris filter that operate on the 3D images. The basic process is as follows: after weighted average filtering and Iris filtering, thresholding is applied to extract suspicious regions; after morphological processing, the suspicious regions are sorted by their average Iris filter responses, and the top 10 candidates are selected as detection results. These detections are marked and provided to radiologists as the CAD system output. In our experiment, the method detected 12 of the mass locations in 14 pathology-proven malignant clinical cases.
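Below is a simplified sketch of a 3-D convergence-index filter in the spirit of the Iris filter, which responds strongly where gradient vectors converge on a voxel; the neighborhood radius, border handling, and thresholding line are illustrative assumptions, not the authors' exact filter.

```python
import numpy as np
from itertools import product

def iris_like_convergence(volume, radius=3, eps=1e-8):
    """Simplified 3-D convergence-index ("Iris-like") filter sketch.

    For each voxel, averages the cosine between the gradient at neighboring
    voxels and the direction pointing back toward the center voxel; gradients
    converging on a voxel (as around a compact mass) give a high response.
    Uses np.roll for shifting, so border voxels wrap around (fine for a sketch).
    """
    gz, gy, gx = np.gradient(volume.astype(float))
    gmag = np.sqrt(gz**2 + gy**2 + gx**2) + eps

    response = np.zeros_like(volume, dtype=float)
    count = 0
    for dz, dy, dx in product(range(-radius, radius + 1), repeat=3):
        if (dz, dy, dx) == (0, 0, 0) or dz*dz + dy*dy + dx*dx > radius*radius:
            continue
        norm = np.sqrt(dz*dz + dy*dy + dx*dx)
        uz, uy, ux = -dz / norm, -dy / norm, -dx / norm   # toward the center
        shift = (-dz, -dy, -dx)
        dot = (np.roll(gz, shift, (0, 1, 2)) * uz +
               np.roll(gy, shift, (0, 1, 2)) * uy +
               np.roll(gx, shift, (0, 1, 2)) * ux)
        response += dot / np.roll(gmag, shift, (0, 1, 2))
        count += 1
    return response / count

# Hypothetical pipeline: smooth, filter, threshold, then rank the top 10 regions.
# smoothed = weighted_average_filter(volume)         # e.g. a small smoothing kernel
# resp = iris_like_convergence(smoothed)
# candidates = resp > resp.mean() + 3 * resp.std()   # illustrative threshold
```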