Based on the X-ray physics of computed tomography (CT) imaging, the linear attenuation coefficient (LAC) of each human tissue is described as a function of the X-ray photon energy. Different tissue types (i.e., muscle, fat, bone, and lung tissue) have their own energy responses, which carry additional tissue-contrast information along the energy axis; we call this the tissue-energy response (TER). In this study, we propose to use TER to generate virtual monoenergetic images (VMIs) from conventional CT for computer-aided diagnosis (CADx) of lesions. Specifically, for a conventional CT image, each tissue fraction can be identified from the TER curve at the effective energy of the set tube voltage. Based on this, a series of VMIs can be generated by multiplying the tissue fractions by the corresponding TER values. Moreover, a machine learning (ML) model, based on the data-driven deep learning (DL) convolutional neural network (CNN) method, is developed to exploit the energy-enhanced tissue material features for differentiating malignant from benign lesions. Experimental results on three sets of pathologically proven lesions demonstrated that DL-CADx models with the proposed method achieve better classification performance than the conventional CT-based CADx method.
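As a concrete sketch of how a VMI follows from tissue fractions and TER curves, the following uses hypothetical TER values and toy fraction maps (illustrative numbers, not the study's measured curves):

```python
import numpy as np

# Hypothetical TER curves: LAC (1/cm) of each basis tissue at a few photon
# energies (keV). Values are illustrative, not measured data.
energies = np.array([40.0, 70.0, 100.0])
ter = {
    "muscle": np.array([0.27, 0.20, 0.17]),
    "fat":    np.array([0.22, 0.18, 0.16]),
    "bone":   np.array([1.30, 0.55, 0.40]),
}

def virtual_monoenergetic(fractions, energy_idx):
    """Synthesize a VMI at one energy: sum the per-tissue fraction maps,
    each weighted by that tissue's TER value at the chosen energy."""
    vmi = np.zeros_like(next(iter(fractions.values())), dtype=float)
    for tissue, frac_map in fractions.items():
        vmi += frac_map * ter[tissue][energy_idx]
    return vmi

# Toy 2x2 fraction maps (fractions sum to 1 in each voxel).
fractions = {
    "muscle": np.array([[1.0, 0.0], [0.5, 0.0]]),
    "fat":    np.array([[0.0, 1.0], [0.5, 0.0]]),
    "bone":   np.array([[0.0, 0.0], [0.0, 1.0]]),
}
vmi_40 = virtual_monoenergetic(fractions, 0)  # VMI at 40 keV
```

Sweeping `energy_idx` over the sampled energies yields the series of VMIs described above.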
Based on well-established X-ray physics in computed tomography (CT) imaging, the spectral responses of the different materials contained in lesions differ, which provides richer contrast information across energy bins. Hence, obtaining the material decomposition of different tissue types and exploiting its spectral information for lesion diagnosis is extremely valuable. The lungs are housed within the torso, which consists of three natural materials, i.e., soft tissue, bone, and lung tissue. To benefit lung nodule differentiation, this study innovatively proposes to use lung tissue as a basis material along with soft tissue and bone. This set of basis materials yields a more accurate composition analysis of lung nodules and benefits the subsequent differentiation. Moreover, a corresponding machine learning (ML)-based computer-aided diagnosis framework for lung nodule classification is also proposed and used for evaluation. Experimental results show the advantages of the virtual monoenergetic images (VMIs) generated with the lung tissue material over VMIs generated without it and over conventional CT images in differentiating malignant from benign lung nodules. The gain of 9.63% in area under the receiver operating characteristic curve (AUC) score indicates that the energy-enhanced tissue features from lung tissue have great potential to improve lung nodule diagnosis.
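The three-material idea can be sketched as a per-voxel least-squares decomposition; the attenuation values below are illustrative placeholders, not measured LACs:

```python
import numpy as np

# Illustrative LACs (1/cm) of the basis materials at three energy bins;
# columns = (soft tissue, bone, lung tissue). Not measured values.
M = np.array([[0.23, 0.60, 0.060],
              [0.20, 0.45, 0.050],
              [0.18, 0.38, 0.045]])

def decompose(mu):
    """Solve M @ f = mu in the least-squares sense for the per-voxel
    basis fractions f of (soft tissue, bone, lung tissue)."""
    f, *_ = np.linalg.lstsq(M, mu, rcond=None)
    return f

mu = M @ np.array([0.5, 0.0, 0.5])  # voxel: half soft tissue, half lung
f = decompose(mu)
```

Including lung tissue as a third basis column is what lets voxels of low physical density decompose cleanly instead of being forced onto the soft-tissue/bone pair.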
The tissue-specific MRF-type texture prior (MRFt) proposed in our previous work has been demonstrated to be advantageous in various clinical tasks. However, this MRFt model requires a previous full-dose CT (FdCT) scan of the same patient to extract the texture information for LdCT reconstruction, a requirement that may not be met in practice. To alleviate this limitation, we propose to build an MRFt generator by internalizing a database of paired FdCT and LdCT scans using a (conditional) encoder-decoder model, which we denote MRFtG-ConED. This generation model depends only on physiological features and is thus robust for ultra-low-dose CT scans (i.e., dosage < 10 mAs). When the dosage is not extremely low (i.e., dosage > 10 mAs), texture information from LdCT images reconstructed by filtered back-projection (FBP) can also be used to provide extra information.
Purpose: In sequential imaging studies, there exists rich information from past studies that can be used in prior-image-based reconstruction (PIBR) as a form of improved regularization to yield higher-quality images in subsequent studies. PIBR methods, such as reconstruction of difference (RoD), have demonstrated great improvements in the image quality of subsequent anatomy reconstruction even when CT data are acquired at very low-exposure settings.
Approach: However, to effectively use information from past studies, two major elements are required: (1) registration, usually deformable, must be applied between the current and prior scans. Such registration is greatly complicated by potential ambiguity between patient motion and anatomical change, which is often the target of the follow-up study. (2) One must select regularization parameters for reliable and robust reconstruction of features.
Results: We address these two major issues and apply a modified RoD framework to the clinical problem of lung nodule surveillance. Specifically, we develop a modified deformable registration approach that enforces a locally smooth/rigid registration around the change region and extend previous analytic expressions relating reconstructed contrast to the regularization parameter and other system dependencies for reliable representation of image features. We demonstrate the efficacy of this approach using a combination of realistic digital phantoms and clinical projection data. Performance is characterized as a function of the size of the locally smooth registration region of interest as well as x-ray exposure.
Conclusions: This modified framework is effectively able to separate patient motion and anatomical change to directly highlight anatomical change in lung nodule surveillance.
The Markov random field (MRF) model has been widely used to incorporate a priori knowledge as a penalty for regional smoothing in ultralow-dose computed tomography (ULdCT) image reconstruction, but regional smoothing does not explicitly consider tissue-specific textures. Our previous work showed that tissue-specific textures can be enhanced by extracting a tissue-specific MRF from the to-be-reconstructed subject’s previous full-dose CT (FdCT) scans. However, the same subject’s FdCT scans might not be available in some applications. To address this limitation, we have also investigated the feasibility of extracting the tissue-specific textures from an existing FdCT database instead of from the to-be-reconstructed subject. This study aims to implement a machine learning strategy to realize that feasibility. Specifically, we trained a Random Forest (RF) model to learn the intrinsic relationship between tissue textures and subjects’ physiological features. By learning this intrinsic correlation, the model can identify an MRF candidate from the database to serve as prior knowledge for any subject’s current ULdCT image reconstruction. Besides the conventional physiological factors (body mass index (BMI), gender, age), we further introduced two features, LungMark and BodyAngle, to account for the scanning position and angle. The experimental results showed that BMI and LungMark are the two most important features for the classification. Our trained model achieves a precision of 0.99 at a recall rate of 2%, meaning that for each subject there will be 3,390 × 0.02 ≈ 68 valid MRF candidates in the database, where 3,390 is the total number of candidates. Moreover, introducing the ULdCT texture prior into the RF model increased the recall rate by 3% while the precision remained 0.99.
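The precision-at-recall figure can be reproduced in miniature by ranking database candidates with a matching score; the helper below is a generic sketch, not the study's evaluation code:

```python
import numpy as np

def precision_at_recall(scores, labels, target_recall):
    """Rank candidates by score, keep just enough of them to reach the
    target recall, and report the precision at that cutoff."""
    order = np.argsort(scores)[::-1]            # best candidates first
    labels = np.asarray(labels, dtype=float)[order]
    tp = np.cumsum(labels)                      # true positives so far
    recall = tp / labels.sum()
    k = np.searchsorted(recall, target_recall)  # first cutoff reaching it
    return tp[k] / (k + 1)
```

At a precision of 0.99 nearly every retained candidate is a valid MRF prior, so even a 2% recall leaves dozens of usable candidates per subject.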
Purpose: Prior-image-based reconstruction (PIBR) is a powerful tool for low-dose CT; however, the nonlinear behavior of such approaches is generally difficult to predict and control. Similarly, traditional image quality metrics do not capture potential biases exhibited in PIBR images. In this work, we identify a new bias metric and construct an analytical framework for prospectively predicting and controlling the relationship between prior-image regularization strength and this bias in a reliable and quantitative fashion. Methods: Bias associated with prior-image regularization in PIBR can be described as the fraction of the actual contrast change (between the prior image and current anatomy) that appears in the reconstruction. Using a local approximation of the nonlinear PIBR objective, we develop an analytical relationship between local regularization, fractional contrast reconstructed, and true contrast change. This analytic tool allows prediction of bias properties in a reconstructed PIBR image and includes the dependencies on the data acquisition, patient anatomy and change, and reconstruction parameters. Predictions are leveraged to provide reliable and repeatable image properties under varying data fidelity in simulation and physical cadaver experiments. Results: The proposed analytical approach permits accurate prediction of reconstructed contrast relative to a gold standard derived from exhaustive search over numerous iterative reconstructions. The framework is used to control regularization parameters to enforce consistent change reconstructions over varying fluence levels and varying numbers of projection angles, enabling bias properties that are less location- and acquisition-dependent. Conclusions: While PIBR methods have demonstrated a substantial ability for dose reduction, the image properties associated with those images have been difficult to express and quantify using traditional metrics.
The novel framework presented in this work not only quantifies this bias in an intuitive fashion, but also gives a way to predict and control it. Reliable and predictable reconstruction methods are a requirement for clinical imaging systems, and the proposed framework is an important step toward translating PIBR methods to clinical application.
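The bias metric itself is compact enough to state in a few lines; the function below is a minimal restatement of the definition, not the full analytical framework:

```python
def fractional_contrast(prior, truth, recon):
    """Bias metric: the fraction of the actual contrast change
    (truth - prior) that appears in the PIBR reconstruction. A value of
    1.0 means the change is fully recovered; smaller values mean the
    prior-image penalty pulled the result back toward the prior."""
    return (recon - prior) / (truth - prior)
```

Controlling regularization so this quantity stays constant across fluence levels and view counts is what makes the bias acquisition-independent.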
Tremendous research efforts have been devoted to lowering the X-ray radiation exposure to the patient in order to expand the utility of computed tomography (CT), particularly in pediatric imaging and population-based screening. When the exposure dosage goes down, both the X-ray quanta fluctuation and the system electronic background noise become significant factors affecting image quality. Conventional edge-preserving noise smoothing would sacrifice tissue textures and compromise clinical tasks. To address these challenges, this work models the noise by pre-log shifted Poisson statistics and extracts tissue textures from previous normal-dose CT scans as prior knowledge for texture-preserving Bayesian reconstruction of current ultralow-dose CT images. The pre-log shifted Poisson model accurately considers both the X-ray quanta fluctuation and the system electronic noise, while the prior knowledge of tissue textures removes the limitation of conventional edge-preserving noise smoothing. The Bayesian reconstruction was tested by experimental studies. One patient chest scan was selected from a database of 133 patients’ scans at the 100 mAs/120 kVp normal-dose level. From the selected patient scan, ultralow-dose data were simulated at the 5 mAs/120 kVp level. The other 132 normal-dose scans were grouped according to how close their lung tissue texture patterns are to that of the selected patient scan. The tissue textures of each group were used to reconstruct the ultralow-dose scan by the Bayesian algorithm. The group closest to the selected patient produced results almost identical to the reconstruction using the tissue textures of the selected patient’s own normal-dose scan, indicating the feasibility of extracting tissue textures from a previous normal-dose database to reconstruct any current ultralow-dose CT image.
Since the Bayesian reconstruction can be time consuming, this work further investigates a strategy to efficiently store the projection matrix rather than computing the line integrals on the fly. This strategy accelerated the computation by more than 18 times.
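A minimal sketch of the storage strategy, assuming a toy 3×3 system matrix and SciPy's CSR format (the abstract does not specify the actual storage layout used):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy system matrix: entry A[i, j] is the length of ray i inside pixel j.
# Storing A once (here in CSR form) replaces the repeated on-the-fly
# line-integral computation in every forward/backprojection.
dense = np.array([[0.0, 1.4, 0.0],
                  [0.7, 0.0, 0.7],
                  [0.0, 0.0, 1.4]])
A = csr_matrix(dense)        # precompute and store once
image = np.array([1.0, 2.0, 3.0])
proj = A @ image             # reused forward projection
back = A.T @ proj            # reused backprojection
```

Because most rays intersect only a small fraction of the voxels, a sparse layout keeps the memory cost manageable while every iteration pays only a sparse matrix-vector product.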
Purpose: There are many clinical situations where diagnostic CT is used for an initial diagnosis or treatment planning,
followed by one or more CBCT scans that are part of an image-guided intervention. Because the high-quality diagnostic
CT scan is a rich source of patient-specific anatomical knowledge, this provides an opportunity to incorporate the prior
CT image into subsequent CBCT reconstruction for improved image quality. We propose a penalized-likelihood method
called reconstruction of difference (RoD) to directly reconstruct differences between the CBCT scan and the CT prior.
In this work, we demonstrate the efficacy of RoD with clinical patient datasets.
Methods: We introduce a data processing workflow using the RoD framework to reconstruct anatomical changes
between the prior CT and current CBCT. This workflow includes processing steps to account for non-anatomical
differences between the two scans, including: 1) scatter correction for CBCT datasets, due to their increased scatter fractions; 2) histogram matching for attenuation variations between CT and CBCT; and 3) registration for different
patient positioning. CBCT projection data and CT planning volumes for two radiotherapy patients – one abdominal study
and one head-and-neck study – were investigated.
Results: In comparisons between the proposed RoD framework and more traditional FDK and penalized-likelihood
reconstructions, we find a significant improvement in image quality when prior CT information is incorporated into the
reconstruction. RoD is able to provide additional low-contrast details while correctly incorporating actual physical
changes in patient anatomy.
Conclusions: The proposed framework provides an opportunity to either improve image quality or relax data fidelity
constraints for CBCT imaging when prior CT studies of the same patient are available. Possible clinical targets include
CBCT image-guided radiotherapy and CBCT image-guided surgeries.
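Step 2 of the workflow, histogram matching, can be sketched with a standard CDF-matching routine (a generic implementation, not the exact processing chain used in the study):

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities so their empirical CDF matches the
    reference CDF, compensating global attenuation offsets between
    the CBCT and the prior CT."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)  # invert the reference CDF
    return matched[s_idx].reshape(source.shape)
```

Applied after scatter correction and before registration, this removes the global CT-vs-CBCT attenuation offset so that the difference reconstruction reflects anatomy rather than calibration.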
The Markov random field (MRF) model has been widely used in Bayesian image reconstruction to reconstruct piecewise smooth images in the presence of noise, such as in low-dose X-ray computed tomography (LdCT). While it can preserve edge sharpness via an edge-preserving potential function, its regional smoothing may sacrifice tissue image textures, which have been recognized as useful imaging biomarkers, and thus compromise clinical tasks such as differentiating malignant vs. benign lesions, e.g., lung nodules or colon polyps. This study aims to shift the edge-preserving regional noise smoothing paradigm to a texture-preserving framework for LdCT image reconstruction while retaining the advantage of the MRF neighborhood system for edge preservation. Specifically, we adapted the MRF model to incorporate the image textures of lung, bone, fat, muscle, etc. from a previous full-dose CT scan as a priori knowledge for texture-preserving Bayesian reconstruction of current LdCT images. To show the feasibility of the proposed reconstruction framework, experiments using clinical patient scans (with lung nodules or colon polyps) were conducted. The experimental outcomes showed a noticeable gain from the a priori knowledge for LdCT image reconstruction, as measured by the well-known Haralick texture features. Thus, it is conjectured that texture-preserving LdCT reconstruction has advantages over the edge-preserving regional smoothing paradigm for texture-specific clinical applications.
Signal sparsity in the computed tomography (CT) image reconstruction field is routinely interpreted as sparse angular sampling around the patient body whose image is to be reconstructed. In CT clinical applications, while normal tissues may be known and treated as sparse signals, the abnormalities inside the body are usually unknown signals and may not be treated as sparse. Furthermore, the locations and structures of abnormalities are also usually unknown, and this uncertainty adds further challenges to interpreting signal sparsity for clinical applications. In this exploratory experimental study, we assume that once the projection data around the continuous body are discretized, regardless of the sampling rate, image reconstruction of the continuous body from the discretized data becomes a signal-sparse problem. We hypothesize that a dense prior model describing the continuous body is a desirable choice for achieving an optimal solution for a given clinical task. We tested this hypothesis by adapting the total variation-Stokes (TVS) model to describe the continuous body signals and showing its gain over the classic filtered back-projection (FBP) across a wide range of angular sampling rates. For the given clinical task of detecting lung nodules of size 5 mm and larger, a consistent improvement of TVS over FBP in nodule detection was observed by an experienced radiologist from low to high sampling rates. This experimental outcome concurs with the expectation of the TVS model. Further investigation into theoretical insights and task-dependent evaluations is needed.
In this paper, we proposed a low-dose computed tomography (LdCT) image reconstruction method aided by prior knowledge learned from previous high-quality or normal-dose CT (NdCT) scans. The well-established statistical
penalized weighted least squares (PWLS) algorithm was adopted for image reconstruction, where the penalty term was
formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan was firstly segmented into
different tissue types by a feature vector quantization (FVQ) approach. Then for each tissue type, a set of tissue-specific
coefficients for the gMRF penalty was statistically learnt from the NdCT image via multiple-linear regression analysis.
We also proposed a scheme to adaptively select the order of the gMRF model for coefficient prediction. The tissue-specific
gMRF patterns learnt from the NdCT image were finally used to form an adaptive MRF penalty for the PWLS
reconstruction of LdCT image. The proposed texture-adaptive PWLS image reconstruction algorithm was shown to be
more effective to preserve image textures than the conventional PWLS image reconstruction algorithm, and we further
demonstrated the gain of high-order MRF modeling for texture-preserved LdCT PWLS image reconstruction.
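The regression step can be sketched in one dimension: fit each sample as a linear combination of its neighbors and keep the least-squares weights as gMRF coefficients. This is a simplified order-1, 1-D illustration of the idea, not the paper's multi-order 2-D, per-tissue model:

```python
import numpy as np

# Learn MRF prediction coefficients by regressing each sample on its
# neighbors (1-D, order-1 neighborhood for brevity).
rng = np.random.default_rng(0)
signal = np.convolve(rng.standard_normal(500), np.ones(5) / 5, mode="same")

center = signal[1:-1]
neighbors = np.stack([signal[:-2], signal[2:]], axis=1)  # left, right
coeffs, *_ = np.linalg.lstsq(neighbors, center, rcond=None)
residual = center - neighbors @ coeffs  # this misfit drives the penalty
```

In the PWLS penalty, a voxel is charged for deviating from the neighbor-weighted prediction rather than from a uniform local mean, which is what preserves the learned texture.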
One hundred “normal-dose” computed tomography (CT) studies of the chest (i.e., 1,160 projection views, 120 kVp, 100 mAs) were acquired from patients scheduled for lung biopsy at Stony Brook University Hospital, under informed consent approved by our Institutional Review Board. To mimic a low-dose (i.e., sparse-view) CT scan, sparse projection views were evenly extracted from the 1,160 projections of each patient, with the total radiation dose reduced in proportion to the number of views selected. A standard filtered back-projection (FBP) algorithm was applied to the full 1,160 projections to produce reference images for comparison. In the low-dose scenario, both the FBP and total variation-Stokes (TVS) algorithms were applied to reconstruct the corresponding low-dose images. The reconstructed images were evaluated by an experienced thoracic radiologist against the reference images. Both the low-dose reconstructions and the reference images were displayed on a 4-megapixel monitor in soft tissue and lung windows. The images were graded on a five-point scale from 0 to 4 (0, nondiagnostic; 1, severe artifact with low confidence; 2, moderate artifact or moderate diagnostic confidence; 3, mild artifact or high confidence; 4, well depicted without artifacts). Quantitative evaluation measurements, such as standard deviations for different tissue types and the universal quality index, were also studied and reported. The evaluation concluded that TVS can reduce the view number from 1,160 to 580 with scores only slightly lower than the reference, resulting in a dose reduction of close to 50%.
To reduce radiation dose in X-ray computed tomography (CT) imaging, one common strategy is to lower the milliampere-second (mAs) setting during projection data acquisition. However, this strategy inevitably increases the projection data noise, and the resulting image from the filtered back-projection (FBP) method may suffer from excessive noise and streak artifacts. Edge-preserving nonlocal means (NLM) filtering can help reduce the noise-induced artifacts in the FBP-reconstructed image, but it sometimes cannot completely eliminate them, especially under very low-dose circumstances when the image is severely degraded. To deal with this situation, we proposed a statistical image reconstruction scheme using an NLM-based regularization, which can suppress the noise and streak artifacts more effectively. However, we noticed that a uniform filtering parameter in the NLM-based regularization was rarely optimal for the entire image. Therefore, in this study, we further developed a novel approach for designing adaptive filtering parameters by considering local characteristics of the image; the resulting regularization is referred to as adaptive NLM-based regularization. Experimental results with a physical phantom and clinical patient data validated the superiority of the proposed adaptive NLM-regularized statistical image reconstruction method for low-dose X-ray CT, in terms of noise/streak-artifact suppression and edge/detail/contrast/texture preservation.
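The role of the filtering parameter in NLM-based regularization is visible directly in the weight formula; below is a generic NLM weight sketch, where the adaptive variant is obtained by letting `h` vary with local image statistics:

```python
import numpy as np

def nlm_weights(patches, ref_patch, h):
    """NLM similarity weights: Gaussian in the squared patch distance,
    with filtering parameter h controlling the decay. The adaptive
    variant lets h depend on local image statistics."""
    d2 = ((patches - ref_patch) ** 2).sum(axis=1)
    w = np.exp(-d2 / (h * h))
    return w / w.sum()
```

A small `h` keeps only near-identical patches (sharp edges, little smoothing); a large `h` averages broadly (strong smoothing), which is why one global value cannot suit the whole image.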
Cone-beam computed tomography (CBCT) has attracted growing research interest in image reconstruction. In practical applications of CBCT, the mAs level of the X-ray tube current is lowered in order to reduce dose. Lowering the X-ray tube current, however, degrades image quality. Thus, low-dose CBCT image reconstruction is in effect a noise problem. To acquire clinically acceptable image quality while keeping the X-ray tube current as low as achievable, several penalized weighted least-squares (PWLS)-based image reconstruction algorithms have been developed. One representative strategy in previous work is to model the prior
information for solution regularization using an anisotropic penalty term. To enhance edge preservation and noise suppression at a finer scale, a novel algorithm combining the local binary pattern (LBP) with penalized weighted least-squares (PWLS), called the LBP-PWLS-based image reconstruction algorithm, is proposed in this work. After the LBP is used to classify the region around each voxel as spot, flat, or edge, the proposed algorithm adaptively encourages strong diffusion in local spot/flat regions and less diffusion in edge/corner regions by adjusting the penalty in the cost function. The LBP-PWLS-based reconstruction algorithm was evaluated using the
sinogram data acquired by a clinical CT scanner from the CatPhan® 600 phantom. Experimental results on the noise-resolution tradeoff and other quantitative measurements demonstrated its feasibility and effectiveness in edge preservation and noise suppression in comparison with a previous PWLS reconstruction algorithm.
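A minimal 8-neighbor LBP, as one plausible reading of the region-classification step (the paper's exact encoding may differ):

```python
import numpy as np

def lbp_code(patch):
    """8-neighbor local binary pattern of a 3x3 patch: each neighbor
    >= center sets one bit, clockwise from the top-left corner."""
    c = patch[1, 1]
    ring = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= c) << i for i, v in enumerate(ring))
```

Flat regions produce the all-ones code, while edges and corners produce characteristic bit runs, which is what lets the penalty strength be chosen per voxel class.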
Statistical iterative reconstruction (SIR) methods have shown remarkable gains over the conventional filtered
backprojection (FBP) method in improving image quality for low-dose computed tomography (CT). They reconstruct
the CT images by maximizing/minimizing a cost function in a statistical sense, where the cost function usually consists
of two terms: the data-fidelity term modeling the statistics of the measured data, and the regularization term reflecting a priori information. The regularization term in SIR plays a critical role in successful image reconstruction, and an
established family of regularizations is based on the Markov random field (MRF) model. Inspired by the success of the nonlocal means (NLM) algorithm in image processing applications, we proposed, in this work, a family of generic and edge-preserving NLM-based regularizations for SIR. We evaluated one of them, in which the potential function takes the
quadratic-form. Experimental results with both digital and physical phantoms clearly demonstrated that SIR with the
proposed regularization can achieve more significant gains than SIR with the widely-used Gaussian MRF regularization
and the conventional FBP method, in terms of image noise reduction and resolution preservation.
Previous studies have reported that the volume-weighting technique has advantages over the linear interpolation
technique for cone-beam computed tomography (CBCT) image reconstruction. However, directly calculating the
intersecting volume between the pencil beam X-ray and the object is a challenge due to the computational complexity.
Inspired by previous work on the area-simulating volume (ASV) technique for 3D positron emission tomography, we proposed an improved ASV (IASV) technique, which can quickly calculate the geometric probability of the intersection between the pencil beam and the object. To show the improvement of using the IASV technique in the volume-weighting-based Feldkamp–Davis–Kress (VW-FDK) algorithm over the conventional linear-interpolation-based FDK algorithm (LI-FDK), the variance images from both theoretical prediction and empirical determination are derived, based on the assumption of uncorrelated and stationary noise for each detector bin. In
the digital phantom study, the theoretically predicted and empirically determined variance images concurred, demonstrating that the VW-FDK algorithm results in uniformly distributed noise across the
FOV. In the physical phantom study, the performance enhancements by the VW-FDK algorithm were quantitatively
evaluated by the contrast-noise-ratio (CNR) merit. The CNR values from the VW-FDK result were about 40% higher
than the conventional LI-FDK result. Therefore, it can be concluded that the VW-FDK algorithm can efficiently address the noise non-uniformity and suppress the noise level of the reconstructed images.
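The CNR merit used above is straightforward to compute; this is a common definition, assumed here to match the one used in the study:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: absolute mean difference between a
    target ROI and the background, over the background standard
    deviation."""
    return abs(roi.mean() - background.mean()) / background.std()
```

A roughly 40% higher CNR at matched dose means the VW-FDK noise reduction translates directly into better low-contrast detectability.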
Computer-aided detection (CADe) of pulmonary nodules from computed tomography (CT) scans is critical for assisting radiologists in identifying lung lesions at an early stage. In this paper, we propose a novel approach for CADe of lung nodules using a two-stage vector quantization (VQ) scheme. The first-stage VQ extracts the lung from the chest volume, while the second-stage VQ extracts initial nodule candidates (INCs) within the lung volume. Rule-based expert filtering is then employed to prune obvious false positives (FPs) from the INCs, and the commonly used support vector machine (SVM) classifier is adopted to further reduce the FPs. The proposed system was validated on 100 CT scans randomly selected from the 262 scans with at least one juxta-pleural nodule annotation in the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. The two-stage VQ missed only 2 of the 207 nodules at agreement level 1, and INC detection took about 30 seconds per scan on average. Expert filtering reduced FPs by more than 18 times while maintaining a sensitivity of 93.24%. As it is trivial to distinguish INCs attached to the pleural wall from those that are not, we investigated the feasibility of training separate SVM classifiers to further reduce FPs from these two kinds of INCs. Experimental results favored SVM classification over the entire set of INCs, with the optimal operating point of our CADe system achieving a sensitivity of 89.4% at a specificity of 86.8%.
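One VQ stage can be sketched as 1-D k-means (k = 2) on voxel intensities; this toy version illustrates the idea, not the paper's full codebook design:

```python
import numpy as np

def two_level_vq(values, iters=20):
    """One VQ stage as 1-D k-means (k=2): split intensities into a
    low-density and a high-density codebook, the same idea the two-stage
    scheme applies first to the chest volume, then within the lung."""
    codebook = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(values[:, None] - codebook[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                codebook[k] = values[labels == k].mean()
    return labels, codebook
```

On HU-like intensities the low-density cluster isolates the air/lung voxels in stage one, and re-running the quantization inside the lung mask surfaces the denser INC voxels in stage two.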
Reducing X-ray exposure to the patients is one of the major research efforts in the computed tomography (CT) field, and
one of the common strategies to achieve it is to lower the mAs setting (by lowering the X-ray tube current and/or
shortening the exposure time) in currently available CT scanners. However, the image quality from low mAs acquisition
is severely degraded due to excessive quantum noise, if no adequate noise control is applied during image
reconstruction. Different from filter-based algorithms, statistical reconstruction algorithms model the statistical property
of the noise using a cost function and minimize the cost function for an optimal solution in a statistical sense. These algorithms have been shown to be feasible and effective in both the sinogram and image domains. In our previous research, we
proposed penalized reweighted least-squares (PRWLS) approaches to sinogram noise reduction and image
reconstruction for low-dose CT imaging, which are in this statistical category. This work is a continuation of the
research along this direction and aims to compare the reconstruction quality of two different PRWLS implementations
for low-dose cone-beam CT reconstruction: (1) PRWLS sinogram restoration followed by analytical Feldkamp-Davis-
Kress reconstruction, (2) fully iterative PRWLS image reconstruction. Inspired by our recent study on the variance of
low-mAs projection data in the presence of an electronic noise background, a more accurate weight was adopted in the weighted
least-squares term. An anisotropic quadratic form penalty was utilized in both PRWLS implementations to preserve
edges during noise reduction. Experiments using the CatPhan® 600 phantom and an anthropomorphic head phantom were carried out to study the relative performance of these two implementations on image reconstruction. The results revealed that implementation (2) can outperform implementation (1) in terms of the noise-resolution tradeoff and the analysis of reconstructed small objects, owing to its edge-preserving penalty matched to the image domain.
However, those gains are offset by the cost of increased computational time. Thus, further examination of real patient
data is necessary to show the clinical significance of the iterative PRWLS image reconstruction over the PRWLS
sinogram restoration.
This paper introduces a new strategy to reconstruct computed tomography (CT) images from sparse-view projection data
based on total variation stokes (TVS) strategy. Previous works have shown that CT images can be reconstructed from
sparse-view data by solving a constrained TV problem. Considering the incompressibility of the voxels along the tangent direction of isophote lines, the newly proposed algorithm incorporates a tangent vector for normal-vector estimation. A minimization problem based on this estimated normal vector is then formulated and solved numerically. The to-be-estimated image is obtained by executing this two-step framework iteratively with projection
data fidelity constraints. By introducing this normal vector estimation, the edge information of the image is well
preserved and the artifacts are efficiently inhibited. In addition, the newly proposed algorithm can mitigate the staircase effect often observed in results of the conventional constrained TV method. In this study, the TVS method was evaluated with patient brain raw data acquired on a Siemens SOMATOM Sensation 16-slice CT scanner. The results suggest that the proposed TVS strategy can accurately reconstruct the brain images and produce
results comparable to the TV-projection onto convex sets (TV-POCS) method and its general case, the adaptive-weighted TV-POCS (AwTV-POCS) method, from 232 and 116 projection views. In addition, an improvement over the AwTV/TV-POCS methods was observed when using only 77 views with the TVS method. In the quantitative evaluation, the TVS method showed an adequate noise-resolution property and the highest universal quality index value.
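The TV term underlying both the constrained-TV and TVS formulations, in its simplest anisotropic finite-difference form:

```python
import numpy as np

def total_variation(img):
    """Anisotropic TV of a 2-D image: sum of absolute finite differences
    along both axes; the regularizer that constrained-TV reconstruction
    minimizes subject to data fidelity."""
    return (np.abs(np.diff(img, axis=1)).sum()
            + np.abs(np.diff(img, axis=0)).sum())
```

Because piecewise-constant images score low under this measure, plain TV minimization favors flat patches, which is the origin of the staircase effect that the normal-vector estimation in TVS is designed to mitigate.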
In single photon emission computed tomography (SPECT), the non-stationary Poisson noise in the projection data is one of the major degrading factors that jeopardize the quality of reconstructed images. In our previous research on low-dose CT reconstruction, based on the noise properties of the log-transformed projection data, a penalized weighted least-squares (PWLS) cost function was constructed, and the ideal projection data (i.e., line integrals) were then estimated by minimizing the PWLS cost function. The experimental results showed the method could effectively suppress the noise without noticeable sacrifice of spatial resolution for both fan- and cone-beam low-dose CT reconstruction. In this work, we extended the PWLS projection restoration method to SPECT by redefining the weight term in the PWLS cost function, because the weight is proportional to the measured photon counts for transmission tomography (i.e., CT) but inversely proportional to the measured photon counts for emission tomography (i.e., SPECT and PET). The iterative Gauss-Seidel algorithm was then used to minimize the cost function, and since the weight term is updated in each iteration, we refer to our implementation as the penalized reweighted least-squares (PRWLS) approach. The restored projection data were then reconstructed by an analytical cone-beam SPECT reconstruction algorithm with compensation for non-uniform attenuation. Both high- and low-level Poisson noise were simulated in the cone-beam SPECT projection data, and the reconstruction results showed the feasibility and efficacy of our proposed method for SPECT.
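The weight redefinition can be stated directly; the proportionalities below follow the transmission-vs-emission argument in the abstract, with constants omitted:

```python
import numpy as np

counts = np.array([1.0e3, 1.0e4, 1.0e5])  # measured photon counts per bin

# Transmission CT: the log-transformed datum has variance ~ 1/counts,
# so the PWLS weight (an inverse variance) grows with the counts.
w_ct = counts

# Emission SPECT/PET: the datum itself is Poisson with variance ~ counts,
# so the weight is inversely proportional to the counts.
w_spect = 1.0 / counts
```

Because the weight depends on the estimate being refined, it is recomputed at each Gauss-Seidel sweep, which is what makes the scheme a reweighted least-squares (PRWLS) approach.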
Orally administered tagging agents are usually used in CT colonography (CTC) to differentiate residual bowel content
from native colonic structure. However, the high-density contrast agents tend to introduce the scatter effect on
neighboring soft tissues and elevate their observed CT attenuation values toward that of the tagged materials (TMs),
which may result in an excessive electronic colon cleansing (ECC) where pseudo-enhanced soft tissues are incorrectly
identified as TMs. To address this issue, we integrated a scale-based scatter correction as a preprocessing procedure into
our previous ECC pipeline based on the maximum a posteriori expectation-maximization (MAP-EM) partial volume
segmentation. The newly proposed ECC scheme takes into account both the scatter effect and the partial volume effect that commonly appear in CTC images. We evaluated the new method with 10 patient CTC studies and found improved
performance. Our results suggest that the proposed strategy is effective with potentially significant benefits for both
clinical CTC examinations and automatic computer-aided detection (CAD) of colon polyps.