KEYWORDS: Signal attenuation, Monte Carlo methods, Sensors, Photons, Bone, Skin, Systems modeling, Single photon emission computed tomography, Radioisotopes, Lung
In small animal SPECT, which uses low-energy photons of I-125 and approaches microscopic resolution
levels, imaging parameters such as pinhole edge penetration, detector blur, geometric response, detector
and pinhole misalignment, and gamma photon attenuation and scatter can have increasingly noticeable
and/or adverse effects on reconstructed image quality. Iterative reconstruction algorithms, the widely accepted
standard for emission tomography, allow modeling of such parameters through a system
matrix. For this Monte Carlo simulation study, non-uniform attenuation correction was added to the
existing system model. The model was constructed using ray-tracing and further included corrections
for edge penetration, detector blur, and geometric aperture response. For each ray passing through
different aperture locations, this method attenuates a voxel's contribution to a detector element along
the photon path, which is then weighted according to a pinhole penetration model. To lower the
computational and memory expenses, symmetry along the detector axes and an incremental storage
scheme for the system model were used. To evaluate the non-uniform attenuation correction method,
three phantoms were designed, and their projection images were simulated using Monte Carlo methods. The
first phantom was used to examine skin artifacts, the second to simulate attenuation by bone, and the
third to generate artifacts of an air-filled space surrounded by soft tissue. In reconstructions without
attenuation correction, artifacts were observed with up to a 40% difference in activity. These could
be corrected using the implemented method, although in one case overcorrection occurred. Overall,
attenuation correction improved reconstruction accuracy of the radioisotope distribution in the presence
of structural differences.
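As a rough illustration of the attenuation step described above, the sketch below computes the survival factor exp(-∫μ dl) for a single voxel-to-detector ray through a voxelized attenuation map. It is a minimal, hypothetical implementation rather than the authors' ray tracer; the step count, voxel size, and attenuation value for the I-125 energy range are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): attenuating a voxel's contribution
# along the photon path toward a pinhole, assuming a voxelized attenuation
# map `mu` (per-mm linear attenuation coefficients) and uniform step sampling.
import numpy as np

def attenuation_factor(mu, voxel_size_mm, start, end, n_steps=200):
    """Approximate exp(-integral of mu) along the segment start->end (in mm)."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    length = np.linalg.norm(end - start)
    ts = (np.arange(n_steps) + 0.5) / n_steps            # midpoint sampling
    pts = start[None, :] + ts[:, None] * (end - start)   # points along the ray
    idx = np.clip((pts / voxel_size_mm).astype(int), 0,
                  np.array(mu.shape) - 1)                # nearest-voxel lookup
    line_integral = mu[idx[:, 0], idx[:, 1], idx[:, 2]].sum() * (length / n_steps)
    return np.exp(-line_integral)

# Example: a ray through 20 mm of water-like tissue (illustrative mu ~ 0.05/mm)
mu = np.full((40, 40, 40), 0.05)
print(attenuation_factor(mu, 1.0, (20, 20, 0), (20, 20, 20)))  # ~exp(-1)
```

In a full system model, this factor would multiply the geometric and penetration weights of the corresponding system-matrix element before storage.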
Most partial volume correction (PVC) methods are ROI-based and assume uniform activity within each ROI. Here, we extended a PVC method developed by Rousset et al (JNM, 1998), the geometric transfer matrix (GTM), to a voxel-based approach called v-GTM that accounts for non-uniform activity within each ROI. The v-GTM method was evaluated using simulated data (perfectly co-registered MRIs). We investigated the influence of noise, the effect of compensating for detector response during iterative reconstruction, and the effect of non-uniform activity. For simulated data, noise did not seriously affect the accuracy of the v-GTM method. When detector response compensation was applied in iterative reconstruction, neither PVC method improved the recovery values. In the non-uniform experiment, v-GTM had slightly better recovery values and less bias than GTM. Conclusion: v-GTM resulted in better recovery values and might be useful for PVC in small regions of interest.
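The GTM principle referenced above can be illustrated with a small one-dimensional toy: each ROI mask is blurred with the system PSF to measure inter-region spill-over fractions, and the resulting matrix is inverted to recover true regional means. This is a hedged sketch of the general idea, not Rousset et al's implementation; the Gaussian PSF, ROI layout, and activity values are assumptions.

```python
# Toy 1-D sketch of ROI-based GTM partial volume correction.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rois = [np.arange(0, 30), np.arange(30, 50)]            # two adjacent ROIs
true_means = np.array([10.0, 2.0])
psf_sigma = 3.0

image = np.zeros(50)
for r, m in zip(rois, true_means):
    image[r] = m
observed = gaussian_filter1d(image, psf_sigma)           # PVE-degraded image
obs_means = np.array([observed[r].mean() for r in rois])

G = np.zeros((2, 2))
for j, r_src in enumerate(rois):                         # blur each ROI mask
    mask = np.zeros(50); mask[r_src] = 1.0
    spread = gaussian_filter1d(mask, psf_sigma)
    for i, r_tgt in enumerate(rois):
        G[i, j] = spread[r_tgt].mean()                   # spill-over fraction

corrected = np.linalg.solve(G, obs_means)                # GTM-corrected means
print(obs_means, corrected)                              # corrected ~ [10, 2]
```

A voxel-based variant such as v-GTM would replace the uniform-activity ROI masks with voxel-wise activity estimates before computing the spill-over terms.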
KEYWORDS: Point spread functions, Sensors, Reconstruction algorithms, Single photon emission computed tomography, Monte Carlo methods, Animal model studies, Image resolution, Expectation maximization algorithms, Data modeling, Detection and tracking algorithms
Reconstruction methodologies for data sets with reduced angular sampling (RAS) are essential for efficient dynamic or static preclinical animal imaging research using single photon emission computed tomography (SPECT). Modern iterative reconstruction methods can obtain 3D radiotracer distributions of the highest possible quality and resolution. Essential to these algorithms is an accurate model of the physical imaging process. We developed a new point-spread function (PSF) model for the pinhole geometry and compared it to a Gaussian model in a RAS setting. The new model incorporates the geometric response of the pinhole and the detector response of the camera by simulating the system PSF using the error function. Reconstruction of simulated data was done with OS-EM and with COS-EM, a new convergent OS-EM-based algorithm. Reconstruction of projection data from a simulated point source using the new PSF model showed improved FWHM values compared to a standard Gaussian model. COS-EM delivers improved results for RAS data, although it converges more slowly than OS-EM. Reconstruction of Monte Carlo simulated projection data from a resolution phantom shows that as few as 40 projections are sufficient to reconstruct an image with a resolution of approximately 4 mm. The new pinhole model applied to iterative reconstruction methods can reduce imaging time in small animal experiments by a factor of three or reduce the number of cameras needed to perform dynamic SPECT.
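A minimal sketch of an error-function PSF of the kind described above: the geometric projection of the pinhole aperture (reduced here to a 1-D rect of half-width r) convolved with Gaussian detector blur of width sigma yields an erf profile. The parameterization and values below are illustrative assumptions, not the paper's model.

```python
# Erf-based pinhole PSF sketch: rect (geometric aperture image) * Gaussian blur.
import numpy as np
from scipy.special import erf

def pinhole_psf_1d(x, r, sigma):
    """Rect of half-width r convolved with a Gaussian of std sigma."""
    return 0.5 * (erf((x + r) / (np.sqrt(2) * sigma))
                  - erf((x - r) / (np.sqrt(2) * sigma)))

x = np.linspace(-5, 5, 201)          # detector coordinate in mm
psf = pinhole_psf_1d(x, r=1.0, sigma=0.5)
psf /= psf.sum()                     # normalize to unit area
```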
We investigate a new, provably convergent OSEM-like (ordered-subsets expectation-maximization) reconstruction algorithm for emission tomography. The new algorithm, which we term C-OSEM (complete-data OSEM), can be shown to monotonically increase the log-likelihood at each iteration. The familiar ML-EM reconstruction algorithm for emission tomography can be derived in a novel way: one may write a single objective function in terms of the complete data, the incomplete data, and the reconstruction variables, as in the EM approach. But in the objective-function approach there is no E-step. Instead, a suitable alternating descent on the complete data and then on the reconstruction variables results in two update equations that can be shown to be equivalent to the familiar EM algorithm; hence, minimizing this objective becomes equivalent to maximizing the likelihood. We derive our C-OSEM algorithm by modifying the above approach to update the complete data only along ordered subsets. The resulting update equation is quite different from OSEM, but still retains the speed-enhancing feature of the updates due to the limited backprojection facilitated by the ordered subsets. Despite this modification, we are able to show that the objective function decreases at each iteration and, given a few more mild assumptions regarding the number of fixed points, conclude that the C-OSEM algorithm provides monotonic convergence toward the maximum-likelihood solution. We simulated noisy and noiseless emission projection data and reconstructed them using ML-EM and the proposed C-OSEM with 4 subsets; we also reconstructed the data using the OSEM method. Anecdotal results show that the C-OSEM algorithm is much faster than ML-EM, though slower than OSEM.
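For reference, the standard ML-EM and OSEM updates discussed above can be sketched as follows; this is the textbook form of both algorithms, not the C-OSEM complete-data update itself, and the dense system matrix A, data y, and subset splitting are illustrative assumptions.

```python
# ML-EM: x <- x / (A^T 1) * A^T (y / A x); OSEM cycles this update over subsets.
import numpy as np

def mlem(A, y, n_iter=50):
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])                  # sensitivity image
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

def osem(A, y, n_subsets=4, n_iter=10):
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:                             # one sub-iteration per subset
            As, ys = A[s], y[s]
            sens = As.T @ np.ones(len(s))
            ratio = ys / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```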
By analyzing the noise properties of calibrated low-dose computed tomography (CT) projection data, it can be seen that the data are approximately Gaussian distributed with a nonlinear, signal-dependent variance. Based on this observation, the penalized weighted least-squares (PWLS) smoothing framework is a natural choice for an optimal solution. It utilizes the prior variance-mean relationship to construct the weight matrix and uses two-dimensional (2D) spatial information as the penalty or regularization operator. Furthermore, a K-L transform is applied along the z (slice) axis to account for the correlation among different sinograms, resulting in a PWLS smoothing in the K-L domain. As a tool for feature extraction and de-correlation, the K-L transform maximizes the data variance represented by each component and simplifies the task of 3D filtering into a 2D spatial process performed slice by slice. Therefore, by selecting an appropriate number of neighboring slices, the K-L domain PWLS smoothing fully utilizes the prior statistical knowledge and 3D spatial information for an accurate restoration of the noisy low-dose CT projections in an analytical manner. Experimental results demonstrate that the proposed method with appropriate control parameters improves noise reduction without loss of resolution.
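A toy sketch of the K-L-domain PWLS idea, under several simplifying assumptions: the slice direction is de-correlated with a K-L (PCA) transform, the noise variances are propagated into the K-L domain, and each component sinogram is smoothed by penalized weighted least squares with a quadratic Laplacian penalty solved by gradient descent rather than the paper's analytical solution. Function and parameter names are hypothetical.

```python
# K-L transform across slices + 2-D PWLS smoothing per K-L component (sketch).
import numpy as np
from scipy.ndimage import laplace

def kl_pwls(sinograms, variances, beta=0.1, n_iter=200, step=1.0):
    """sinograms, variances: arrays of shape (n_slices, nrows, ncols)."""
    n, r, c = sinograms.shape
    flat = sinograms.reshape(n, -1)
    cov = np.cov(flat)                                   # slice-to-slice covariance
    _, vecs = np.linalg.eigh(cov)
    kl = (vecs.T @ flat).reshape(n, r, c)                # K-L components
    var_kl = ((vecs.T ** 2) @ variances.reshape(n, -1)).reshape(n, r, c)
    w = 1.0 / np.maximum(var_kl, 1e-8)                   # weights in the K-L domain
    out = np.empty_like(kl)
    for k in range(n):                                   # 2-D PWLS per component
        x = kl[k].copy()
        lr = step / (w[k].max() + 64 * beta)             # conservative step size
        for _ in range(n_iter):
            x -= lr * (w[k] * (x - kl[k]) + beta * laplace(laplace(x)))
        out[k] = x
    return (vecs @ out.reshape(n, -1)).reshape(n, r, c)  # inverse K-L transform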
We previously introduced a new Bayesian reconstruction method for transmission tomographic reconstruction that is useful for attenuation correction in SPECT and PET. To make it practical, we apply a deterministic annealing algorithm to the method in order to avoid the dependence of the MAP estimate on the initial conditions. The Bayesian reconstruction method used a novel pointwise prior in the form of a mixture of gamma distributions. The prior models the object as comprising voxels whose values (attenuation coefficients) cluster into a few classes (e.g. soft tissue, lung, bone). This model is particularly applicable to transmission tomography since the attenuation map is usually well clustered and the approximate values of the attenuation coefficients in each region are known. The algorithm is implemented as two alternating procedures: a regularized likelihood reconstruction and a mixture parameter estimation. The Bayesian reconstruction algorithm can be effective, but it is sensitive to initial conditions since the overall objective is non-convex. To make it more practical, it is important to avoid such dependence on initial conditions. Here, we implement a deterministic annealing (DA) procedure within the alternating algorithm. We present Bayesian reconstructions with and without DA and show that, with DA, the results are independent of the initial conditions.
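The deterministic-annealing idea on a mixture prior can be sketched as follows: voxel-wise class responsibilities under a gamma-mixture model are flattened at a high temperature T and sharpen as T is lowered toward 1, so early iterations do not commit to an initial classification. The class parameters below are illustrative stand-ins for air/lung, soft tissue, and bone, not values or code from the paper.

```python
# Annealed class responsibilities under a gamma-mixture pointwise prior (sketch).
import numpy as np
from scipy.stats import gamma as gamma_dist

def annealed_responsibilities(mu_values, shapes, scales, weights, T):
    """Soft class memberships p(class | mu) at temperature T >= 1."""
    like = np.stack([w * gamma_dist.pdf(mu_values, a, scale=s)
                     for a, s, w in zip(shapes, scales, weights)])
    like = np.maximum(like, 1e-300) ** (1.0 / T)      # annealing: flatten at high T
    return like / like.sum(axis=0)

mu = np.array([0.02, 0.10, 0.16])                     # hypothetical attenuation values
shapes, scales = [4, 100, 80], [0.005, 0.001, 0.002]  # illustrative gamma parameters
resp_hot = annealed_responsibilities(mu, shapes, scales, [1/3] * 3, T=10.0)
resp_cold = annealed_responsibilities(mu, shapes, scales, [1/3] * 3, T=1.0)
print(resp_hot, resp_cold)                            # cold responsibilities are sharper
```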
We seek to optimize a SPECT brain-imaging system for the task of detecting a small tumor located at random in the brain. To do so, we have created a computer model. The model includes three-dimensional, digital brain phantoms which can be quickly modified to simulate multiple patients. The phantoms are then projected geometrically through multiple pinholes. Our figure of merit is the Hotelling trace, a measure of detectability by the ideal linear observer. The Hotelling trace allows us to quantitatively measure a system's ability to perform a specific task. Because the Hotelling trace requires a large number of samples, we reduce the dimensionality of our images using Laguerre-Gauss functions as channels. To illustrate our method, we compare a system built from small high-resolution cameras to one utilizing larger, low-resolution cameras.
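A hedged sketch of the channelized computation described above: projection images are reduced to a few rotationally symmetric Laguerre-Gauss channel outputs, and detectability is computed from the channel-space mean difference and covariance (for a two-class detection task the Hotelling figure of merit reduces to the linear-observer SNR squared). The channel width, channel count, and sample handling are assumptions, not the authors' settings.

```python
# Laguerre-Gauss channels + channelized Hotelling detectability (sketch).
import numpy as np
from numpy.polynomial.laguerre import Laguerre

def lg_channels(size, a, n_channels):
    """2-D rotationally symmetric Laguerre-Gauss channel profiles."""
    y, x = np.indices((size, size)) - (size - 1) / 2.0
    r2 = 2 * np.pi * (x**2 + y**2) / a**2
    chans = []
    for n in range(n_channels):
        Ln = Laguerre.basis(n)                        # Laguerre polynomial L_n
        chans.append(np.sqrt(2) / a * np.exp(-r2 / 2) * Ln(r2))
    return np.stack([c.ravel() for c in chans])       # (n_channels, size*size)

def hotelling_detectability(imgs_signal, imgs_noise, U):
    """imgs_*: (n_samples, size*size); U: channel matrix from lg_channels."""
    v_s, v_n = imgs_signal @ U.T, imgs_noise @ U.T    # channel outputs
    dmean = v_s.mean(0) - v_n.mean(0)
    K = 0.5 * (np.cov(v_s.T) + np.cov(v_n.T))         # intra-class covariance
    return float(dmean @ np.linalg.solve(K, dmean))   # linear-observer SNR^2
```

Because the channel matrix has only a handful of rows, the covariance estimate stays well conditioned even with modest sample sizes, which is the motivation for the dimensionality reduction mentioned above.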
Maximum a posteriori (MAP) approaches in the context of a Bayesian framework have played an important role in SPECT reconstruction. The major advantages of these approaches include not only the capability of modeling the character of the data in a natural way but also the ability to incorporate a priori information. Here, we show that a simple modification of the conventional smoothing prior, such as the membrane prior, to one less sensitive to variations in first spatial derivatives - the thin plate (TP) prior - yields improved reconstructions in the sense of low bias at little change in variance. Although nonquadratic priors, such as the weak membrane and the weak plate, can exhibit good performance, they pose difficulties in optimization and hyperparameter estimation. The thin plate, on the other hand, is a quadratic prior and leads to easier optimization and hyperparameter estimation. In this work, we evaluate and compare the quantitative performance of MM, TP, and FBP algorithms in an ensemble sense to validate the advantages of the thin plate model. We also observe and characterize the behavior of the associated hyperparameters of the prior distributions in a systematic way. To incorporate our new prior in a MAP approach, we model the prior as a Gibbs distribution and embed the optimization within a generalized expectation-maximization algorithm. For optimization of the corresponding M-step objective function, we use a version of iterated conditional modes. We show that the use of second derivatives yields robustness in both bias and variance by demonstrating that TP leads to very low bias error over a large range of the smoothing parameter, while keeping a reasonable variance.
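The membrane versus thin-plate distinction can be made concrete with a short sketch: the membrane penalty sums squared first differences, while the thin-plate penalty sums squared second differences, so a linear ramp incurs no thin-plate energy. This is a toy illustration of the two quadratic penalties, not the paper's Gibbs-prior implementation.

```python
# Membrane (first-difference) vs thin-plate (second-difference) roughness penalties.
import numpy as np

def membrane_energy(img):
    dx = np.diff(img, axis=0)
    dy = np.diff(img, axis=1)
    return (dx**2).sum() + (dy**2).sum()

def thin_plate_energy(img):
    dxx = np.diff(img, n=2, axis=0)
    dyy = np.diff(img, n=2, axis=1)
    dxy = np.diff(np.diff(img, axis=0), axis=1)
    return (dxx**2).sum() + (dyy**2).sum() + 2 * (dxy**2).sum()

ramp = np.add.outer(np.arange(8.0), np.arange(8.0))     # linear gradient image
print(membrane_energy(ramp), thin_plate_energy(ramp))   # TP energy is zero for a ramp
```

The zero thin-plate energy on the ramp is exactly the insensitivity to first-derivative variations that the abstract credits for the reduced bias.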
KEYWORDS: Brain, Autoregressive models, Single photon emission computed tomography, Tissues, Capillaries, Data modeling, Neuroimaging, Monte Carlo methods, Imaging systems, Radioisotopes
In the development of reconstruction algorithms in emission computed tomography (ECT), digital phantoms designed to mimic the presumed spatial distribution of radionuclide activity in a human are extensively used. Given the low spatial resolution of ECT, it is usually presumed that a crude phantom, typically with a constant activity level within an anatomically derived region, is sufficiently realistic for testing. Here, we propose that phantoms may be improved by assigning biologically realistic patterns of activity to more precisely delineated regions. Animal autoradiography is proposed as a source of realistic activity and anatomy. We review the basics of radiopharmaceutical autoradiography and discuss aspects of using such data for a brain phantom. A few crude simulations with brain phantoms derived from animal data are shown.