Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography
Abstract
One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can impose a prohibitively high computational burden on IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term comparing a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data.

1. Introduction

Iterative image reconstruction (IIR) for x-ray computed tomography (CT) has been heavily investigated in recent years.1 It is well known that IIR involves a substantially greater computational effort than direct reconstruction algorithms such as filtered back-projection (FBP). For CT, this issue is particularly acute because it is a high-resolution imaging modality, requiring the use of small voxels to achieve the inherent resolution of the CT data.2 Because IIR is based on implicit estimation of the image, a complete representation of the subject within the scanner view domain is also necessary. Objects appearing in the x-ray projections that have no relevance to the imaging task, such as the patient couch, must also be accounted for in the reconstruction volume. Such a complete representation is only necessary for implicit methods and not for direct reconstruction algorithms such as FBP, where the inversion equation need only be evaluated at points within an ROI. The combination of IIR requiring the representation of large volumes and the use of small voxels exacerbates the computational burden of CT image reconstruction. To address this problem, we propose a method for direct ROI IIR, which allows for the use of smaller reconstruction volumes focused on ROIs relevant to a given imaging task.

The main tool employed to realize ROI IIR appears in our conference record (CR) publication for the 2014 SPIE medical imaging conference.3 In the CR, we presented an offshoot of our adaptive steepest descent–projection onto convex sets algorithm for edge-enhanced image reconstruction applied to clinical digital breast tomosynthesis (DBT) projection data. To reach the desired DBT image quality goals, we proposed to compare the estimated and measured DBT projection data after convolving both with a derivative kernel. The resulting IIR algorithm was seen to behave analogously to a class of analytic image reconstruction methods called Λ or local tomography.4–7 The resulting DBT images accentuated small structures relative to IIR employing the standard data fidelity term. Furthermore, the proposed derivative data fidelity yielded an IIR algorithm robust against heavily truncated projection data, allowing for direct image reconstruction of a DBT ROI. It is this latter feature of the modified data fidelity term that we exploit for ROI IIR in CT.

The algorithm developed in Ref. 3 was designed to obtain a useful image after only a few iterations, but it was not run to convergence of the optimization problem upon which the algorithm was based. As a result, the obtained images depend on parameters characterizing the derivative filter in addition to other optimization and control parameters of the algorithm. In this article, we aim to characterize the solution of the proposed optimization problem and accordingly we employ an accurate solver developed by Chambolle and Pock (CP).8,9 In this way, we narrow down the IIR parameter space to only those needed in specifying the optimization problem. With the understanding gained by thoroughly investigating the solution of the proposed optimization problem, design of IIR with severely truncated iteration is facilitated.

In Sec. 2, we present the proposed optimization problem and the CP algorithm derived to solve it. In Sec. 3, we characterize the solution of the optimization problem with ideal and noisy simulated data for sparse-view and standard CT acquisition. In Sec. 4, we focus on ROI image reconstruction, comparing the use of the proposed derivative weighted data fidelity term with that of a standard Euclidean data fidelity with no additional weighting. Finally, in Sec. 5, we demonstrate ROI reconstruction on actual CT scanner data.

2. Optimization for CT IIR Using a Derivative Weighted Data Fidelity Term

We introduce an optimization problem designed to allow for direct ROI image reconstruction in CT, and present the algorithm used to solve the optimization problem. The CT data model is presented in Sec. 2.1; the particular data fidelity term of interest is introduced in Sec. 2.2, where the link with Λ-tomography is also explained; incorporation of this fidelity into an optimization for CT IIR is discussed in Sec. 2.3; and finally, a pseudocode for the solver is given in Sec. 2.4.

2.1. Data Model

We employ the standard x-ray transform as the data model for developing CT image reconstruction algorithms

Eq. (1)

$g(\vec{r}_0, \hat{\theta}) = \int_0^\infty \mathrm{d}t \, f(\vec{r}_0 + t\hat{\theta}),$
where the data function g is a line integral of the object function f along a ray originating at the source location r0 and pointing in the direction θ̂. For x-ray transmission imaging, samples of this data function are usually estimated by taking the negative logarithm of the transmitted x-ray intensity divided by the incident x-ray intensity. Analytic image reconstruction algorithms such as FBP or fan-beam FBP are devised by seeking direct inversion equations for this model.

For IIR, the most common approach, and the approach taken here, is to represent f with a finite size expansion set such as pixels and relate the expansion coefficients directly to the discrete samples of the x-ray projection data

Eq. (2)

g=Xf,
where g is a vector denoting the projection data samples; X is the discrete form of the x-ray transform; and f is a vector denoting the expansion coefficients. For the present work, f is a vector denoting pixel coefficients and X is computed by the line-intersection method.
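To make the discrete model concrete, the following sketch approximates the action of X by dense sampling along each ray; it is a simplified stand-in for the exact line-intersection computation used in this work, and the geometry conventions (pixel coordinates, detector list, sampling density) are our illustrative assumptions.

```python
import numpy as np

def forward_project(f, src, det_points, n_samples=2048):
    """Approximate g = Xf for a 2-D pixel image by dense sampling along each ray.

    The system matrix in this work is computed by the exact line-intersection
    method; this sketch substitutes a sampled quadrature with nearest-pixel
    lookup, which is easier to read but less accurate.
    f          : (N, N) array of attenuation coefficients
    src        : (2,) array, source position r0 in pixel coordinates (x, y)
    det_points : (M, 2) array of detector bin-center positions in pixel coordinates
    """
    N = f.shape[0]
    g = np.zeros(len(det_points))
    for i, det in enumerate(det_points):
        ts = np.linspace(0.0, 1.0, n_samples)
        pts = src[None, :] + ts[:, None] * (det - src)[None, :]  # points on the ray
        ix = np.clip(np.floor(pts[:, 0]).astype(int), 0, N - 1)
        iy = np.clip(np.floor(pts[:, 1]).astype(int), 0, N - 1)
        inside = ((pts[:, 0] >= 0) & (pts[:, 0] < N) &
                  (pts[:, 1] >= 0) & (pts[:, 1] < N))
        dl = np.linalg.norm(det - src) / (n_samples - 1)          # sample spacing
        g[i] = np.sum(f[iy, ix] * inside) * dl
    return g

# In practice, the data samples g are estimated from measured intensities I and
# incident intensities I0 by g = -np.log(I / I0), as described above.
```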

2.2. Derivative Weighted Data Fidelity and its Relation to Λ-Tomography

The majority of IIR algorithms that use Eq. (2) employ some form of optimization because the data contain physical factors that can make Eq. (2) inconsistent. This data model may also not specify a unique image if there is insufficient sampling. An example of a simple optimization problem for IIR is to minimize the square of the Euclidean distance between estimated and available projection data

Eq. (3)

$f^{\star} = \underset{f}{\operatorname{argmin}}\ \phi_{\text{Eucl}}(f),$
where

Eq. (4)

$\phi_{\text{Eucl}}(f) = \frac{1}{2}\left\| Xf - g \right\|_2^2.$
Although there may be no solution to Eq. (2), there will always be a minimizer for Eq. (3). Furthermore, this optimization problem can easily be extended to incorporate prior knowledge on the image f, such as a smoothness penalty or physical constraints on pixel values.

In Ref. 3, we presented and motivated an alternative data fidelity function

Eq. (5)

$\phi_u(f) = \frac{1}{2}\left\| D_u (Xf - g) \right\|_2^2,$
where the Euclidean distance is modified by performing numerical differentiation on the estimated and available projections. The subscript u refers to the detector coordinate along which each projection is differentiated. In two-dimensional (2-D) CT, the direction of this differentiation is unambiguous, up to a sign (the sign plays no role because of the norm in the data fidelity), and the coordinate u indicates position on the detector. In three-dimensional (3-D) CT, the coordinate u is taken to represent the location on the detector in the direction parallel with the x-ray source trajectory. The symbol Du represents a matrix that encapsulates the action of numerical differentiation in the u-direction. As discussed in Ref. 3, the use of this data fidelity can lead to IIR algorithms robust against low-frequency data inconsistency, and it can facilitate ROI IIR.

In Ref. 3, we also consider the data fidelity modification which combines ϕEucl(f) and ϕu(f) in order to provide control over the absolute gray level of images obtained by the use of ϕu(f). We define

Eq. (6)

$\phi_{\text{combo}}(f) = \frac{1}{2}\left\| D_u(Xf - g) \right\|_2^2 + \frac{1}{2} c^2 \left\| Xf - g \right\|_2^2 = \frac{1}{2}\left\| (D_u + cI)(Xf - g) \right\|_2^2.$
The second equality states that the sum of the quadratic terms in ϕcombo(f) is equivalent to modifying the derivative filter on the data by adding c times the identity matrix. The last form is convenient for developing the solver, and the equality holds only if Du is antisymmetric. Thus, the particular form of Du can vary, but we require it to be an antisymmetric matrix. The finite-differencing weighting of Du de-emphasizes the low-frequency components of the data, which, in turn, can impact the overall gray level of the reconstructed images. Adding back the standard Euclidean data fidelity with a small constant c2 addresses this issue.
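For reference, the combined fidelity of Eq. (6) can be evaluated numerically as in the sketch below, where Du is applied as a one-dimensional convolution along the detector coordinate of each projection. The boundary handling is a simplifying assumption; a finite 'same'-mode convolution matrix is antisymmetric only away from the detector edges, where the single-filter form of Eq. (6) then agrees with the two-term form.

```python
import numpy as np

def phi_combo(residual, kernel, c):
    """Evaluate phi_combo = 0.5*||D_u(Xf - g)||_2^2 + 0.5*c^2*||Xf - g||_2^2.

    residual : (n_views, n_bins) array holding Xf - g, one row per projection
    kernel   : 1-D antisymmetric kernel giving the action of D_u (see Fig. 1)
    c        : combination parameter that restores low-frequency information
    """
    assert np.allclose(kernel, -kernel[::-1]), "D_u must be antisymmetric"
    # Apply D_u along the detector coordinate u, i.e., along each row.
    d_res = np.apply_along_axis(np.convolve, 1, residual, kernel, 'same')
    return 0.5 * np.sum(d_res ** 2) + 0.5 * c ** 2 * np.sum(residual ** 2)
```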

The use of differentiation in the data fidelity term has a motivation similar to that of the local filtering employed in Λ-tomography, and a precise connection between the two approaches can be made. In Λ-tomography, see for example Ref. 7, the 2-D parallel-beam FBP algorithm is modified by replacing the ramp filter with a second-derivative filter applied to the projection data

Eq. (7)

$4\pi\, R^{*}\!\left(-\frac{\partial^2}{\partial u^2}\right) g_{\text{para}}(\theta, u) = \Lambda_r f(\vec{r}),$
where R is the 2-D Radon transform (forward projection for 2-D parallel-beam CT) and R* is the adjoint operation, back-projection; the second derivative with respect to the detector coordinate u is performed on the continuous 2-D parallel beam sinogram gpara(θ,u) with θ indicating view angle; and the operator Λr performs 2-D cone filtering on the continuous image function f(r). The 2-D cone filter takes the form |k|, where k is the spatial frequency vector counterpart to the spatial vector r. The advantage of Λ-tomography is that the filter is local, and it can thus be performed on heavily truncated projection data. The price to be paid is that Λrf(r) is reconstructed instead of f(r). Yet, the image Λrf(r) can have utility in that the cone filtering yields an image with enhanced edges highlighting the discontinuous boundary between different tissue types.

To see the connection between Λ-tomography and use of the data fidelity ϕu(f), we consider the optimization problem

Eq. (8)

$f^{\star} = \underset{f}{\operatorname{argmin}}\ \phi_u(f).$
The solution set to this optimization problem can be identified by taking the gradient of ϕu(f) and setting it equal to zero

Eq. (9)

$X^T D_u^T D_u X f = X^T D_u^T D_u g, \qquad X^T(-D_u^2) X f = X^T(-D_u^2) g,$
where the antisymmetry of Du is used to arrive at the second equation. If the scanning configuration for X is a 2-D parallel-beam configuration with complete angular coverage, the following associations can be made in the continuous limit:
$X \rightarrow R, \qquad X^T \rightarrow R^{*}, \qquad D_u^2 \rightarrow \frac{\partial^2}{\partial u^2}, \qquad f \rightarrow f(\vec{r}), \qquad g \rightarrow g_{\text{para}}(\theta, u).$

In the limit of this special case, the normal equation in Eq. (9) becomes

Eq. (10)

$R^{*}\!\left(-\frac{\partial^2}{\partial u^2}\right) R f(\vec{r}) = \frac{1}{4\pi}\, \Lambda_r f(\vec{r}) = R^{*}\!\left(-\frac{\partial^2}{\partial u^2}\right) g_{\text{para}}(\theta, u).$
As a result, for parallel-beam data the operations on the projection data yield the cone-filtered image, and the minimization of ϕu can be interpreted as a deconvolution of the cone filtering. Another important implication of this link is that the first step of gradient descent on ϕu is the Λ-tomography image, provided that gradient descent is initialized with the zero image. This fact can be important for interpreting the image properties of IIR algorithms exploiting ϕu and run with a truncated iteration. A similar connection is available between ϕcombo and modified Λ-tomography, see Eq. (13) of Ref. 7. We remind the reader that ϕu can be utilized for any CT configuration even though the connection with Λ-tomography occurs under the special circumstances of parallel-ray imaging.
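To spell out the gradient-descent statement, the following display restates it in the notation of Eq. (9); the step size α is a generic gradient-descent parameter, not a quantity defined in this article:

```latex
% Gradient of phi_u and the first gradient-descent step from a zero image:
\nabla \phi_u(f) = X^T D_u^T D_u (Xf - g) = X^T(-D_u^2)(Xf - g),
\qquad
f_1 = 0 - \alpha\, \nabla \phi_u(0) = \alpha\, X^T(-D_u^2)\, g .
```

With the parallel-beam associations above, XT(−Du2)g goes over to R*(−∂2/∂u2)gpara, which is proportional to the Λ-tomography reconstruction of Eq. (7).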

2.3. Optimization Problem for CT IIR

The optimization equation we investigate here is

Eq. (11)

$f^{\star} = \underset{f}{\operatorname{argmin}}\ \frac{1}{2}\left\| (D_u + cI)(Xf - g) \right\|_2^2 \quad \text{such that} \quad \left\| \left( |\nabla f| \right) \right\|_1 \leq \gamma.$
The operator ∇ is a matrix representing a finite-differencing approximation to the image gradient. The number of pixels/voxels in f is taken to be N, and the spatial gradient operator ∇ yields a vector-valued image, e.g., z⃗, of size 2N or 3N depending on whether the image is 2-D or 3-D, respectively. The vector arrow is used to indicate a spatial vector-valued image, and the spatial magnitude operator, |·|, converts a spatial vector-valued image to a scalar image by taking the vector magnitude at each pixel/voxel. The quantity ∇f is the spatial gradient of f; |∇f| is the gradient magnitude image (GMI); and ‖(|∇f|)‖1, the sum over the GMI, is the image total variation (TV). The optimization problem in Eq. (11) seeks the image with the lowest combined derivative and Euclidean data discrepancy among all images with TV less than or equal to γ.

This particular optimization problem is one variation of many that combine some form of data fidelity with a term involving the image TV. The TV term is useful for exploiting sparsity in the GMI for image reconstruction problems with limited data. We use it for exactly this reason; direct ROI reconstruction is a form of limited-data acquisition, because it is equivalent to the projection truncation problem for IIR. The reason for this is that in the implicit solution of Eq. (2), only transmission rays that pass through the ROI are used. This contrasts with direct reconstruction through FBP, where all projection data are used in filtering before back-projecting to the ROI. The form of the gradient-sparsity-exploiting optimization problem used here, where the constraint is placed on the image TV, is particularly convenient because the data fidelity metric is actually the sum of two fidelity metrics. Furthermore, for computer-simulation studies, as performed in this article, the phantom TV provides a good reference for selecting the parameter γ.

To completely specify Eq. (11) for the following CT simulations, we select a particular form of Du. As required, Du is chosen to be purely antisymmetric and it represents convolution with a 21-point kernel. This kernel is generated by smoothing an antisymmetric differencing kernel with a Gaussian of standard deviation ω measured in detector bin units. The smoothing of the finite differencing kernel is useful for controlling noise texture in the reconstructed images. The kernel is shown in Fig. 1 for different values of ω.
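One way to generate such a kernel is sketched below; the central-difference stencil and its normalization are illustrative choices on our part, since the construction constants are not spelled out beyond the kernel width and the smoothing width ω.

```python
import numpy as np

def derivative_kernel(omega, half_width=10):
    """Build a 21-point antisymmetric kernel defining D_u.

    A central-difference kernel is smoothed with a Gaussian of standard
    deviation omega (in detector-bin units); omega = 0 leaves the plain
    central difference.
    """
    x = np.arange(-half_width, half_width + 1)               # 21 sample points
    diff = np.zeros_like(x, dtype=float)
    diff[half_width - 1], diff[half_width + 1] = 0.5, -0.5   # antisymmetric stencil
    if omega == 0:
        return diff
    gauss = np.exp(-0.5 * (x / omega) ** 2)
    gauss /= gauss.sum()
    kernel = np.convolve(diff, gauss, mode='same')
    # Convolving an antisymmetric kernel with a symmetric one stays antisymmetric.
    assert np.allclose(kernel, -kernel[::-1])
    return kernel
```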

Fig. 1

Kernel used in specifying the derivative filter Du for different values of ω, the standard deviation of the Gaussian smoothing.


Aside from the usual scanning geometry parameters, there are three parameters—γ, c, and ω—involved in the optimization problem under study in Eq. (11). The goal of this article is to demonstrate the effect of each of these parameters on the solution of Eq. (11) in order to gain intuition about this optimization problem for various CT imaging configurations and, in particular, for direct ROI IIR. For real data applications, such as the DBT reconstructions presented in Ref. 3, an IIR algorithm with a truncated iteration may be used. In this case, we do not expect to achieve the solution to Eq. (11), and as a result, the attained reconstructed volume depends on all the parameters of the IIR algorithm, further complicating the characterization of the algorithm. Thus, before a thorough investigation of truncated IIR motivated by Eq. (11), it is important to characterize the solution of this optimization problem. To this end, we employ the CP algorithm for its accurate solution.

2.4. CP Algorithm to Solve the Proposed Optimization Problem

We have found that the CP algorithm provides a useful framework for prototyping convex optimization problems, constrained or unconstrained, for image reconstruction in CT. For such problems, our experience indicates that an accurate solution can be obtained in hundreds to thousands of iterations. Although this is too many for practical IIR on realistic-size systems with current computational technology, the required number of iterations is acceptable for characterization studies. Nevertheless, we introduce a few slight modifications to Eq. (11) that we have found to improve the convergence rate and thereby facilitate theoretical investigation. In addition, the CP algorithm is primal–dual, and it solves a convex minimization problem simultaneously with its dual maximization. The details on how to derive the dual maximization and the corresponding CP algorithm instance for CT are presented in Ref. 9. Here, we simply state the primal and dual problems along with the CP pseudocode for their solution.

We modify Eq. (11) by introducing two constants, λ and ν,

Eq. (12)

$f^{\star} = \underset{f}{\operatorname{argmin}}\ \frac{\lambda}{2}\left\| (D_u + cI)(Xf - g) \right\|_2^2 \quad \text{such that} \quad \left\| \left( |\nu \nabla f| \right) \right\|_1 \leq \nu\gamma.$
Note that λ and ν do not affect the solution, but we have found in Ref. 10 that they can have a large impact on the algorithm convergence rate. The parameter ν is not varied. Instead, it is used to balance the magnitudes of the linear operators (Du+cI)X and ∇, and we set

Eq. (13)

$\nu = \left\| (D_u + cI) X \right\|_2 \big/ \left\| \nabla \right\|_2,$
where the norm ‖·‖2 applied to a linear operator, also known as the spectral norm, is its largest singular value. This norm can be evaluated by the power method. Use of ν balances the two linear operators, which have different physical units. The parameter λ does not impact the solution of Eq. (12) because of the hard constraint, but it can have a strong impact on the CP algorithm convergence rate.
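A generic power-method evaluation of the spectral norm can be sketched as follows, with the operator and its transpose supplied as callables; the function names and iteration count are our own choices.

```python
import numpy as np

def spectral_norm(apply_A, apply_At, x0, n_iter=50):
    """Estimate ||A||_2, the largest singular value of A, by the power method.

    apply_A, apply_At : callables applying A and its transpose, e.g., the
                        filtered projection (D_u + cI)X and its adjoint
    x0                : any nonzero starting vector in the domain of A
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iter):
        x = apply_At(apply_A(x))       # one power-method step on A^T A
        x /= np.linalg.norm(x)
    # ||A x|| approaches sigma_max as x approaches the top singular vector.
    return np.linalg.norm(apply_A(x))
```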

The dual maximization to Eq. (12) is

Eq. (14)

$(y^{\star}, \vec{z}^{\,\star}) = \underset{(y, \vec{z})}{\operatorname{argmax}} \left\{ -\frac{1}{2\lambda}\|y\|_2^2 + g^T (D_u - cI)\, y - \nu\gamma \left\| \left( |\vec{z}| \right) \right\|_\infty \right\} \quad \text{such that} \quad X^T (D_u - cI)\, y = \nu \nabla^T \vec{z},$
where y is dual to a data vector and, therefore, has the same size as a sinogram; z⃗ is dual to an image gradient, having size 2N or 3N for 2-D or 3-D CT, respectively; and the norm ‖·‖∞ yields the magnitude of the largest component of the vector argument. In implementing the back-projection, XT, care must be taken to ensure that its action is equivalent to that of the true matrix transpose of X. We note that the transpose of ∇ is a finite-differencing approximation to the negative divergence of a vector-valued image.
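For concreteness, one standard forward-difference discretization of the gradient and its matched transpose is sketched below, together with the inner-product test used to verify that an implemented transpose agrees with the true matrix transpose. The specific stencil is an assumption on our part; the article does not fix it.

```python
import numpy as np

def grad(f):
    """Forward-difference image gradient: returns a spatial vector-valued
    image of shape (2, N, N)."""
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]
    gy[:-1, :] = f[1:, :] - f[:-1, :]
    return np.stack([gx, gy])

def grad_T(z):
    """Transpose of grad; a finite-differencing form of the negative
    divergence of a vector-valued image."""
    zx, zy = z
    out = np.zeros_like(zx)
    out[:, 1:] += zx[:, :-1]
    out[:, :-1] -= zx[:, :-1]
    out[1:, :] += zy[:-1, :]
    out[:-1, :] -= zy[:-1, :]
    return out

# Adjoint (inner-product) test: <grad f, z> must equal <f, grad_T z>.
rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64))
z = rng.standard_normal((2, 64, 64))
assert np.isclose(np.sum(grad(f) * z), np.sum(f * grad_T(z)))
```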

The pseudocode for solving the primal, Eq. (12), and dual, Eq. (14), problems appears in Algorithm 1. This algorithm requires a projection onto an ℓ1 ball,11,12 which is given in Algorithm 2. Although this projection algorithm is described in detail in Ref. 11, we provide it here for completeness. Discussion of convergence criteria for the CP algorithm appears in Refs. 9, 10, and 12, but for the sake of brevity we do not go into detail on convergence here. For the presented results, we have performed the convergence checks and verified that the images are indeed accurate solutions to Eq. (11).

Algorithm 1

Pseudocode for N steps of the CP algorithm instance for solving Eqs. (12) and (14). For conciseness, we set the data filter Fc=Du+cI. Note that the transpose of this filter is FcT=−Du+cI. The smoothing parameter ω selects the specific form of Du, see Fig. 1. The expression (FcX, ν∇) is a linear operator combining the filtered projection with the gradient. This combined operator takes in an image vector and yields the concatenation of a data vector and a spatial vector-valued image.9 In Line 11, the ratio of image vectors is to be computed pixel-wise, and in the case of a zero pixel in the denominator, the ratio for that pixel is set to 1. In Line 12, a scalar-valued image is multiplied by a spatial vector-valued image. This operation is again performed pixel-wise, where the spatial vector at each pixel of t⃗ is scaled by the corresponding pixel value of r, yielding a spatial vector-valued image.

1: INPUT: data g, TV constraint parameter γ, combination parameter c, filter smoothing parameter ω
2: INPUT: algorithm parameter λ
3: ν ← ‖FcX‖2/‖∇‖2
4: L ← ‖(FcX, ν∇)‖2
5: τ ← 1/L; σ ← 1/L; θ ← 1; n ← 0
6: initialize f0, y0, and z0 to zero
7: f̄0 ← f0
8: repeat
9: yn+1 ← (yn + σFc(Xf̄n − g))/(1 + σ/λ)
10: t⃗ ← z⃗n + σν∇f̄n
11: r ← 1 − σ ProjectOntoℓ1Ballνγ(|t⃗|/σ)/|t⃗|
12: z⃗n+1 ← r t⃗
13: fn+1 ← fn − τ(XTFcTyn+1 + ν∇Tz⃗n+1)
14: f̄n+1 ← fn+1 + θ(fn+1 − fn)
15: n ← n + 1
16: until n ≥ N
17: OUTPUT: image fN
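A compact NumPy transcription of Algorithm 1 is sketched below, with all linear operators passed in as callables; the function names and calling conventions are ours, not from this article.

```python
import numpy as np

def chambolle_pock(g, f0, X, Xt, Fc, FcT, grad, grad_T, proj_l1,
                   gamma, lam, nu, L, n_steps):
    """Sketch of Algorithm 1. Operator arguments:
    X/Xt        : projection and back-projection (a true matrix-transpose pair)
    Fc/FcT      : the data filter D_u + cI and its transpose -D_u + cI
    grad/grad_T : image gradient and its transpose
    proj_l1(v, alpha) : l1-ball projection of Algorithm 2
    L is the spectral norm of the combined operator (Fc X, nu*grad)."""
    tau = sigma = 1.0 / L
    theta = 1.0
    f, f_bar = f0.copy(), f0.copy()
    y = np.zeros_like(g)
    z = np.zeros_like(grad(f0))
    for _ in range(n_steps):
        y = (y + sigma * Fc(X(f_bar) - g)) / (1.0 + sigma / lam)   # Line 9
        t = z + sigma * nu * grad(f_bar)                           # Line 10
        t_mag = np.sqrt(np.sum(t ** 2, axis=0))                    # pixel-wise |t|
        p = proj_l1(t_mag / sigma, nu * gamma)
        safe = np.where(t_mag > 0, t_mag, 1.0)                     # zero-pixel rule
        r = 1.0 - sigma * np.where(t_mag > 0, p / safe, 1.0)       # Line 11
        z = r * t                                                  # Line 12
        f_new = f - tau * (Xt(FcT(y)) + nu * grad_T(z))            # Line 13
        f_bar = f_new + theta * (f_new - f)                        # Line 14
        f = f_new
    return f
```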

Algorithm 2

Pseudocode for the function ProjectOntoℓ1Ballα(x), which projects x onto the ℓ1 ball of scale α. In terms of an optimization problem, this function takes in a vector x and returns x⋆ = argminx′ ½‖x − x′‖22 such that ‖x′‖1 ≤ α. The vector x is taken to be one-dimensional with length N, and the individual components are labeled xi with index i being an integer in the interval [1, N].

1: function ProjectOntoℓ1Ballα(x)
2: if ‖x‖1 ≤ α then
3: return x
4: end if
5: m ← |x|
6: Sort m in descending order: m1 ≥ m2 ≥ ⋯ ≥ mN
7: ρ ← max{ j ∈ [1, N] such that mj − (1/j)(∑k=1..j mk − α) > 0 }
8: θ ← (1/ρ)(∑k=1..ρ mk − α)
9: w ← max(|x| − θ, 0)
10: return w sign(x) (component-wise product)
11: end function
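A NumPy version of Algorithm 2, the sort-based ℓ1-ball projection of Ref. 11, might look as follows.

```python
import numpy as np

def project_onto_l1_ball(x, alpha):
    """Euclidean projection of x onto the l1 ball ||x'||_1 <= alpha
    (Duchi et al., Ref. 11). x may be any real array; the projection is
    computed over all of its entries."""
    x = np.asarray(x, dtype=float)
    if np.sum(np.abs(x)) <= alpha:
        return x.copy()
    m = np.sort(np.abs(x).ravel())[::-1]                      # descending magnitudes
    csum = np.cumsum(m)
    j = np.arange(1, m.size + 1)
    rho = np.nonzero(m - (csum - alpha) / j > 0)[0][-1] + 1   # largest valid j
    theta = (csum[rho - 1] - alpha) / rho                     # soft-threshold level
    w = np.maximum(np.abs(x) - theta, 0.0)
    return w * np.sign(x)
```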

3. Characterization with Simulated Untruncated Projection Data

The simulated 2-D CT data are based on the breast CT imaging application. The computer-generated phantom shown in Fig. 2 models fat and fibro-glandular tissues. The complex structure is obtained by generating an image with power-law filtered white noise, followed by thresholding. The structure model is described in Ref. 13. The scanning geometry is 2-D fan-beam CT with full circular coverage, as would be available in the midplane of an actual breast CT scan. The source-to-isocenter distance is 36 cm, and the source-to-detector distance is 72 cm. A linear detector with 1024 bins is modeled, and for the majority of the simulations, 256 projection views are taken. The exceptions are the studies without TV regularization, where 1024 views are used, and one study examining the possibility of sparse-view sampling, where 64 views are employed. In this initial series of characterization studies, we examine image reconstruction from ideal noiseless data, where the projections are generated from a phantom defined on the same 512×512 pixel grid used for the image reconstruction. We also perform studies with inconsistent CT data, where noise models a typical breast CT scan and the projection operates on mismatched grids: the phantom projections are generated from a 1024×1024 pixel grid, and the image-estimate projections are generated from a 512×512 pixel grid.
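A minimal sketch of the power-law-filtered-noise construction of the phantom structure is given below; the power-law exponent, tissue fraction, and random seed are illustrative guesses rather than the settings of Ref. 13.

```python
import numpy as np

def binary_structure_phantom(n=512, beta=3.0, fraction=0.5, seed=0):
    """Generate fibro-glandular-like structure by power-law filtering white
    noise and thresholding the result into a two-tissue attenuation map."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n, n))
    kx = np.fft.fftfreq(n)
    ky = np.fft.fftfreq(n)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    k[0, 0] = 1.0                                 # avoid divide-by-zero at DC
    filt = 1.0 / k ** (beta / 2.0)                # power-law amplitude filter
    textured = np.fft.ifft2(np.fft.fft2(white) * filt).real
    thresh = np.quantile(textured, 1.0 - fraction)
    fat, fibro = 0.194, 0.233                     # cm^-1 at 50 keV (see Fig. 2)
    return np.where(textured >= thresh, fibro, fat)
```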

Fig. 2

Breast phantom for computed tomography (CT) and its corresponding gradient magnitude image (GMI). (a) The linear attenuation map of the phantom in the gray scale window [0.174, 0.253] cm⁻¹. The attenuation values for the tissues simulated in the phantom, fat and fibro-glandular, are 0.194 and 0.233 cm⁻¹, respectively. These values model a monochromatic x-ray source at 50 keV. (b) The GMI in the window [0.0, 0.1] cm⁻¹, illustrating the test phantom's sparsity in the GMI.


Comparison of the images presented in this work may not be straightforward because of the loss of absolute gray level information. For some series of images, in order to have objective image comparisons, we compute the mean, fmean, and standard deviation, fstd, of the pixel values within a specified region contained just inside the boundaries of the test phantom or the field-of-view (FOV), whichever is smaller. The gray scale range is computed by

Eq. (15)

$g_{\min} = f_{\text{mean}} - 2 f_{\text{std}}, \qquad g_{\max} = f_{\text{mean}} + 2 f_{\text{std}}.$
It is stated explicitly in the figure captions when this method is used to compute the image gray scale.
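In code, the window of Eq. (15) is a two-liner; the choice of the region mask is left to the caller.

```python
import numpy as np

def gray_scale_window(image, mask):
    """Eq. (15): gray scale window from the mean and standard deviation of the
    pixel values inside a region just within the phantom (or FOV) boundary."""
    vals = image[mask]
    f_mean, f_std = vals.mean(), vals.std()
    return f_mean - 2.0 * f_std, f_mean + 2.0 * f_std
```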

To assist in the interpretation of the CT IIR results, we show FBP and Λ-tomography images in Fig. 3. The Λ-tomography image shows the expected enhancement at tissue borders and loss of the absolute gray level. Again, the reason Λ-tomography might be an interesting choice in addition to the edge enhancement is that the filter is local, allowing for projection truncation.

Fig. 3

Analytic reconstruction from noiseless data comprised of 1024 projections. (a) Reconstruction by filtered back-projection (FBP), using a pure ramp filter, and (b) Λ-tomography where the data are convolved with a finite differencing approximation to the second derivative prior to back-projection. The Λ-tomography image aids in the interpretation of the images at low iteration numbers of the presented iterative image reconstruction (IIR) algorithm. The gray scale is computed by use of Eq. (15).


The first set of IIR images results from a CP algorithm designed to minimize the standard quadratic data fidelity ϕEucl(f). The pseudocode appears in Algorithm 2 of Ref. 9. Intermediate images of this algorithm are shown in Fig. 4, where the 1024-view data are noiseless and perfectly consistent. The behavior of the progression of images, starting from a zero initial image, is generic to other first-order nonsequential algorithms applied to minimization of ϕEucl(f). The image after the first iteration is blurry; in fact, by an argument similar to the one presented in Sec. 2.2, one can show that this image is approximately the result of back-projection applied to the projection data. As the iterations progress, the images become less blurry, and eventually the test phantom is obtained.

Fig. 4

Iterative image reconstruction from noiseless projection data containing 1024 views by use of a Chambolle and Pock (CP) algorithm designed to minimize ϕEucl(f). The image sequence shows intermediate iterations, and the iteration number is indicated in each panel. The gray scale window appears above each image. The gray scale is computed by use of Eq. (15).


The next set of IIR images results from a CP algorithm designed to minimize the proposed data fidelity ϕcombo(f) with ω=0 and c=0. The same pseudocode from Algorithm 2 of Ref. 9 can be employed to minimize this data fidelity. As with gradient descent, one can show that the first iteration of this CP algorithm is proportional to a Λ-tomography image if a zero initial estimate is used. Intermediate images of this algorithm are shown in Fig. 5, where the 1024-view data are noiseless and perfectly consistent. Indeed, we observe that the first panel shows an image proportional to that of Λ-tomography, and as the algorithm proceeds to convergence, the images tend toward the phantom. Minimization of both ϕEucl(f) and ϕcombo(f) results in accurate recovery of the test phantom, but the approach to this solution is quite different; the first iterate in minimizing ϕcombo(f) already has much of the tissue structure information relative to that of minimizing ϕEucl(f). Although the image appearance of the early iterations in Fig. 5 has been explained, it is not obvious that the phantom should be recovered accurately at convergence—in light of the finite-differencing filter applied to both system matrix and data in ϕcombo(f).

Fig. 5

Iterative image reconstruction from noiseless projection data containing 1024 views by use of a CP algorithm designed to minimize ϕcombo(f). The image sequence shows intermediate iterations, and the iteration number is indicated in each panel. The gray scale window appears above each image. The gray scale is computed by use of Eq. (15).


In order to understand the phantom recovery, we apply singular value decomposition (SVD) in a manner similar to that of Ref. 14. We note that under ideal noiseless conditions, the minimizer of ϕcombo(f) with ω=0 and c=0 satisfies the following linear equation:

Eq. (16)

$D_u X f = D_u X f_0,$
where f0 is the test phantom. If the combined system matrix DuX is left-invertible then this equation can be reduced to f=f0 and the minimizer of ϕcombo(f) with ω=0 and c=0 is the test phantom f0. The system matrix DuX is left-invertible if its condition number is finite. We demonstrate this by performing SVD on smaller systems with the same sampling ratio, computing the corresponding condition numbers, and extrapolating to the system size of interest. The present sampling ratio takes the form of 2N projections onto a detector of 2N bins for an image represented on an N×N pixel grid, where N=512. The largest and smallest singular values for systems with the same sampling ratio are shown in Fig. 6 for N varying between 18 and 64. Assuming a linear relationship between the logarithm of these singular values and the logarithm of N, a condition number of 8.87 is estimated for the present system corresponding to N=512. Thus, DuX is left-invertible and the minimizer of ϕcombo(f) is f0, thereby explaining the accurate phantom recovery shown in Fig. 5.
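The extrapolation behind Fig. 6 amounts to linear fits in log-log coordinates; a sketch taking the small-system singular values as inputs is given below.

```python
import numpy as np

def extrapolate_condition_number(N_vals, sig_max, sig_min, N_target=512):
    """Fit log(sigma) linearly in log(N) for the small systems of Fig. 6 and
    extrapolate both extreme singular values to N_target; the ratio is the
    estimated condition number. The inputs sig_max and sig_min come from
    direct SVD of small D_u X systems with the same sampling ratio."""
    p_max = np.polyfit(np.log(N_vals), np.log(sig_max), 1)
    p_min = np.polyfit(np.log(N_vals), np.log(sig_min), 1)
    smax = np.exp(np.polyval(p_max, np.log(N_target)))
    smin = np.exp(np.polyval(p_min, np.log(N_target)))
    return smax / smin
```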

Fig. 6

Maximum and minimum singular values for the combined system matrix DuX for a two-dimensional CT configuration with image pixel array N×N and projection data set consisting of 2N projections over a 2π scanning arc and a linear detector array with 2N detection elements. The solid circles represent singular values arrived at through direct SVD of a small system with N ranging from 18 to 64. The solid diamond represents extrapolation to the case of interest at N=512. The linear fitting model is displayed in the legend of each graph.


It is also interesting to observe the evolution of the CP algorithm iterates when the sampling ratio is more challenging. Reducing the view number to 256 projections results in a system matrix DuX with a much larger condition number. Intermediate images of the CP algorithm are shown in Fig. 7, where the 256-view data are noiseless and perfectly consistent. We again observe that the first panel shows an image proportional to that of Λ-tomography, and as the algorithm proceeds to convergence, the images seem to tend toward the phantom; however, the converged image shows a noise-like texture resulting from the insufficient number of projections. Interestingly, the absolute gray level appears to be accurately recovered despite the poor overall conditioning of DuX with 256 projections.

Fig. 7

Same as Fig. 5 except that the projection data contain only 256 views. The gray scale is computed by use of Eq. (15).


As discussed in Sec. 2.3, the view angle undersampling can be dealt with by exploiting GMI sparsity. To demonstrate this, we solve the optimization problem in Eq. (12) using the pseudocode in Algorithm 1 and again model noiseless, consistent 256-view projection data. Because the data are ideal, the parameters c and ω are again set to zero. The TV constraint parameter is selected to be that of the phantom, γ=γ0. The resulting sequence of images is shown in Fig. 8, and they resemble those of Fig. 7 except that, at iteration 10 and after, the noise-like artifacts are removed by the additional steps constraining the image TV. Furthermore, at convergence, the phantom is recovered to high accuracy. The accurate phantom recovery for this example, of course, depends on the fact that we know the correct TV constraint value γ, but the purpose here is to show that an ideal program can recover the phantom under ideal conditions. That it can do so is not obvious, because the data discrepancy objective differs from the standard Euclidean form. Although the Λ-tomography interpretation of the low-iteration images is useful, it is important to keep in mind that the specific evolution of the images depends on the values of all of the parameters in Algorithm 1 in addition to the parameters of the optimization problem, Eq. (12). The converged image, on the other hand, depends only on the parameters of the optimization problem.

Fig. 8

Same as Fig. 7 except that the image sequence results from use of Algorithm 1, which is designed to solve total variation (TV)-constrained minimization of ϕcombo(f), see Eq. (12). Note that the first images resemble that of Λ-tomography. We also observe that at convergence the phantom is recovered accurately. The gray scale is computed by use of Eq. (15).


In order to further test the robustness against view angle undersampling, we reduce the ideal data set to only 64 projections. The resulting image sequence appears in Fig. 9. We observe that Algorithm 1 can accurately recover the test phantom as the image estimates near convergence. The Λ-tomography interpretation of the low iteration images, however, suffers due to the streaks from the view angle undersampling. Nevertheless, the insensitivity of the proposed data discrepancy term to absolute gray level does not appear to hinder the ability of Algorithm 1 to recover the phantom from sparse view data.

Fig. 9

Same as Fig. 8 except that the simulated projection data contain only 64 views. In this case, the low iteration images are contaminated by view-angle undersampling artifacts, which appear in the form of streaks. As the image estimates approach convergence, we observe that the phantom is again accurately recovered. The gray scale is computed by use of Eq. (15).


Finally, we investigate the response of Algorithm 1 to inconsistent projection data. Two forms of inconsistency are introduced in the data model. First, there is a mismatch between the discrete phantom grid, 1024×1024, and the reconstructed image grid, 512×512. Second, noise is introduced with a Poisson-like Gaussian distribution, i.e., the variance of the Gaussian is set equal to the mean, where the mean is the average transmission at each detector bin assuming 7.5×10⁴ photons are incident toward each bin prior to passing through the subject. Thus, we are modeling a low-dose CT scan, as is typically proposed for breast CT. Images corresponding to the accurate solution of Eq. (12), with the TV constraint parameter selected to be that of the phantom, γ=γ0, and various values of the parameters ω and c, are displayed in Fig. 10. The parameter ω clearly impacts noise texture, and here, we only aim to show how image quality changes with this parameter. Optimal setting of ω can only be determined based on the particular imaging task of the CT scan. Although c has little impact on the reconstructed images for these simulations with untruncated projections and a full image representation, this parameter helps to control the gray level for ROI imaging.
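The noise model just described can be sketched as follows; the clipping guard against nonpositive counts is our addition.

```python
import numpy as np

def add_poisson_like_noise(g, n0=7.5e4, seed=0):
    """Apply the Gaussian noise model of Sec. 3 to ideal line-integral data g:
    the variance is set equal to the mean transmitted counts for n0 photons
    incident per detector bin."""
    rng = np.random.default_rng(seed)
    counts = n0 * np.exp(-g)                             # mean transmission per bin
    noisy = counts + rng.standard_normal(g.shape) * np.sqrt(counts)
    noisy = np.maximum(noisy, 1.0)                       # guard: keep counts positive
    return -np.log(noisy / n0)                           # back to line-integral data
```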

Fig. 10

Images reconstructed from 256 projections with an inconsistent data model by Algorithm 1. In the top row, the derivative smoothing width is fixed at ω=1, and for the bottom row, the combination parameter is set to c=0. The gray scale for each of the images is [0.171, 0.233]. As can be seen, c has no perceptible effect on these images, where the data contain no truncated projections. The derivative smoothing width, on the other hand, has a clear impact on the noise texture.


4. Characterization for CT ROI Imaging

In this section, the CT simulations aim at illustrating the utility of Algorithm 1 for ROI imaging, where an incomplete image representation is used that spans only a given ROI. This study is performed in two steps: first, using a full image representation, we reconstruct the FOV of a truncated projection acquisition; and second, using only image pixels in this same FOV, we perform image reconstruction. By the use of the term FOV, we specifically refer to the collection of all pixels that are illuminated by all views in the data set. We use this term instead of “interior tomography,”15 which has a specific meaning in 2-D image reconstruction where the aim is to recover a continuous region interior to the support of the subject from projections illuminating only this interior region. Furthermore, for interior tomography, the data model is the continuous 2-D Radon or x-ray transform, while for the present FOV imaging, the data model is the discrete x-ray transform. We make the distinction between ROI and FOV in order to allow for the design of a scan where the FOV is larger than the ROI in order to avoid potential artifacts at the FOV border.

First, we investigate the impact of truncated data while keeping the image representation complete, i.e., the image pixels cover the region where the object support is nonzero. The simulated projection data are ideal and noiseless, containing 256 projections. Each of the projections is truncated so that the FOV of the scan is a centered circle of diameter 9 cm, i.e., half the width of the image array. The left column of images in Fig. 11 corresponds to converged solutions of Eq. (12) for ω=0 and c=0, and for comparison, the right column of images results from the same optimization problem without the proposed weighting Du. For this configuration, it is not clear what the optimal choice of the TV constraint parameter γ is, even though we know the test phantom. Using the TV of the phantom within the FOV, γFOV, and the TV of the entire phantom, γ0, as lower and upper bounds on γ, respectively, we reconstruct images for four intermediate values of γ parametrized by γw

Eq. (17)

$\gamma = (1 - \gamma_w)\, \gamma_{\text{FOV}} + \gamma_w\, \gamma_0.$

Fig. 11

Images reconstructed from truncated, ideal noiseless data. The images in the left column are converged solutions to Eq. (12) with c=0 and ω=0, and those in the right column are converged solutions of the same optimization without the weighting Du, i.e., TV-constrained Euclidean data discrepancy minimization. The value of γw indicated in each panel determines the TV constraint parameter γ by Eq. (17). The horizontal lines indicate the location of the image profiles plotted in Fig. 12. The gray scale is set to [0.165, 0.265].


For γw=1, use of the weighting Du seems to make little difference; both images show cupping, although the Du-weighted image shows slightly less cupping than the image reconstructed with no weighting. As γw decreases, both sets of images show a decrease in resolution, particularly in going from γw=0.5 to 0.25; however, for the Du-weighted images, edges within the FOV appear to remain sharp relative to the edges in the images generated without the proposed weighting. Moreover, the Du-weighted images show markedly less cupping. A quantitative sense of these observations is obtained in the profile plots of Fig. 12.

Fig. 12

Profiles of the images shown in Fig. 11 compared with the phantom profile.


Second, we investigate the impact of employing an incomplete image representation, where pixels cover only the region of the FOV from the previous study, a centered 9-cm diameter circular ROI. Note that in this case it does not matter whether the projections are truncated to this 9-cm ROI or not. For untruncated projections, measurements corresponding to the rays not intersecting the ROI end up not playing any role in the IIR algorithm. Converged images—with and without the Du weighting—obtained by Algorithm 1 appear in Fig. 13. It is in the use of the smaller ROI representation that we see a possible major advantage of the Du weighting: it reduces the data discrepancy incurred by using too small an image representation. As a result, the TV constraint can be applied without destroying structure information in the ROI, while the use of the TV constraint without the Du weighting destroys this information. Inspecting the gray scale windows for both panels, we note that the automatically computed gray scale for the Du-weighted image is substantially lower than the values of the actual phantom, while the image without weighting has approximately the correct absolute gray scale (aside from the fact that the structure information is lost). That the absolute gray scale is lost with the Du weighting is the primary motivation for introducing the parameter c.

Fig. 13

Images reconstructed from ideal noiseless data containing 256 projections. The image representation is restricted so that all pixels are contained in a 9-cm diameter ROI. The left image is a converged solution to Eq. (12) with c=0 and ω=0, and the right image is a converged solution of the same optimization without the weighting Du, i.e., TV-constrained Euclidean data discrepancy minimization. The TV constraint parameter is set to the TV of the phantom within the ROI. The gray scale is computed by use of Eq. (15).


For the final simulations, we again employ the ROI-only representation, but model inconsistent projection data with grid mismatch and measurement noise, corresponding to 7.5×10⁴ photons incident toward each detector bin. Again, the data acquisition configuration is chosen so that the FOV coincides with the 9-cm diameter ROI. In the grids of images shown in Figs. 14 and 15, results are shown for various values of γ and c, while ω is fixed to one. The value of γ is parametrized in these figures with the multiplicative factor γf

Eq. (18)

$\gamma = \gamma_f\, \gamma_{\text{FOV}},$
where γFOV is the TV of the phantom within the FOV. The same results are shown in both figures because Fig. 14, with its wide, all-encompassing gray scale window, provides a sense of the gray level shifting, while Fig. 15 depicts contrast and noise texture more clearly.

Fig. 14

Images reconstructed from inconsistent noisy data containing 256 projections. Each of the projections is truncated so that the field-of-view (FOV) of the scan is a centered circle of diameter 9 cm, and the image representation is restricted so that all pixels are contained in the FOV. The array of images results from accurate solution to Eq. (12) for ω=1 and various values of c, with the TV constraint parameter set according to Eq. (18). The values of γf, see Eq. (18), indicated in the leftmost images remain constant for each row, and the values of c indicated in the bottom row of images remain constant for each column. The image gray scale is [0, 0.35], wide enough to include all images for the sake of comparison. For individual images, of course, a narrower gray scale may be employed to improve contrast. The test phantom is displayed in the lower right panel for reference.


Fig. 15

Same as Fig. 14 except that the gray scales are determined by Eq. (15) and written above each of the corresponding image panels.


The leftmost column of images in Fig. 15 shows results for c=0. Each of these images shows the structural information of the phantom within the FOV, but the gray values are quite low and independent of γ. Within this column, the TV constraint is an effective control over image regularization, and the image corresponding to γf=1 shows almost no noise artifacts while suffering minimal blurring due to the regularization. As c increases, the gray levels of the reconstructed images also increase. The restoration of the gray levels, however, comes at the price of the TV constraint becoming less effective. Following the γf=1 row, the increasing data discrepancy caused by increasing c results in image details being smoothed away. Inspecting the c=0.05 column, we see that image details can be recovered by loosening the TV constraint, but there is a clear loss of detail relative to the c=0 images. Nevertheless, for some imaging applications, it may be important to employ the c parameter to recover absolute gray level information.

5. Application of ROI IIR to Severely Truncated Projections from Actual CT Scanner Data

To demonstrate application of the proposed algorithm to actual scanner data, we employ an XCounter CT scan of a rat. The data set contains 1000 projections with a 768×512 detector array where the detection elements are 0.1 mm in width. In order to test the ROI capability, we truncate the projection data to include only transmission rays passing through a cylindrical subvolume of radius 8 mm and height 18 mm. This volume contains part of the rat jaw and a forepaw. For reference, we employed FBP to reconstruct the complete untruncated data set, and we display slices that traverse the chosen ROI in Fig. 16.

Fig. 16

From left to right are a transaxial slice, normal to the z-axis, and vertical slices normal to the y- and x-axes reconstructed from the XCounter CT scan of a rat by a standard FBP implementation. The display plane resolution is 0.1 mm. In order to reduce noise, three 0.1-mm thick slices along the view axis are averaged. The display gray scale is [0.08,0.45]. The boxed regions indicate the display bounds of Fig. 17.


Returning to the truncated data set, we apply FBP, Λ-tomography, and the proposed ROI IIR algorithm with c=ω=0 for two different values of the TV constraint γ. The constraint values correspond to average gradient magnitudes of 1.25×10⁻³ (cm⁻¹) and 6.0×10⁻⁴ (cm⁻¹). The resulting images are shown in Fig. 17. The FBP images show the usual cupping in the FOV of the scan, and it is difficult to see structure near or beyond the FOV. The images generated by Λ-tomography do not show a discontinuity at the FOV border, and the edges of structures are clearly seen, but the gray levels are, as expected, lost. The ROI IIR algorithm appears to retain some gray level information without suffering from strong discontinuities at the FOV border. The difference in behavior near the FOV border is most striking in the images of the middle column of Fig. 17, which show the rat forepaw. We point out that this study on CT scanner data is only a preliminary demonstration, and we did not attempt to optimize image quality over the parameters c, ω, and γ. Likewise, the implementations of FBP and Λ-tomography are basic because these images serve only as a reference. For the FBP images, we did not explore alternate filter designs nor attempt to counteract the cupping artifact by postprocessing or extrapolating projections.

Fig. 17

ROIs reconstructed by, from bottom row to top row: FBP; Λ-tomography; ROI IIR with c=ω=0 and γ=γ1; and the same with γ=γ2. The constraint parameters γ1 and γ2 correspond to average gradient magnitudes of 1.25×10⁻³ (cm⁻¹) and 6.0×10⁻⁴ (cm⁻¹), respectively. From left to right are a transaxial slice, normal to the z-axis, and vertical slices normal to the y- and x-axes. The display plane resolution is 0.1 mm. In order to reduce noise, three 0.1-mm thick slices along the view axis are averaged. The display gray scale, set by eye, is given above each image.


6. Discussion

We have presented an IIR algorithm based on an alternative data fidelity term that is designed to facilitate ROI reconstruction. The restricted image representation can allow for high-resolution IIR without incurring prohibitively high computational and memory demands. Furthermore, the proposed filtering of the data discrepancy preserves edges, allowing for a high degree of regularization before structures in the subject are blurred away.

The present investigation focuses mainly on the image properties of the converged solution of TV-constrained data discrepancy minimization, where the images depend on three parameters: γ controlling regularization, ω affecting noise texture, and c allowing for gray level recovery. For practical application of the proposed data fidelity term, IIR algorithms may operate with truncated iteration. Characterizing such algorithms becomes more complicated, as algorithm parameters in addition to the three of the optimization problem affect the reconstructed image when convergence of the corresponding optimization problem is not achieved. The established link with Λ-tomography, however, can aid in the design of IIR using the proposed derivative weighting and operating with truncated iteration.

Acknowledgments

D. N. Kraemer and E. G. Roth were supported by the NSF MedIX REU Program Award No. 1359459. This work was supported in part by NIH R01 Grants Nos. CA158446, CA182264, EB018102, and EB000225. The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health.

References

1.

J. Nuyts et al., “Modelling the physics in the iterative reconstruction for transmission computed tomography,” Phys. Med. Biol. 58(12), R63–R96 (2013). http://dx.doi.org/10.1088/0031-9155/58/12/R63

2.

E. Y. Sidky et al., “A constrained, total-variation minimization algorithm for low-intensity X-ray CT,” Med. Phys. 38(3), S117–S125 (2011). http://dx.doi.org/10.1118/1.3560887

3.

E. Y. Sidky et al., “Enhancing tissue structures with iterative image reconstruction for digital breast tomosynthesis,” Proc. SPIE 9033, 90330W (2014). http://dx.doi.org/10.1117/12.2043776

4.

A. G. Ramm and A. I. Katsevich, The Radon Transform and Local Tomography, CRC Press, Boca Raton, Florida (1996).

5.

A. I. Katsevich, “Cone beam local tomography,” SIAM J. Appl. Math. 59(6), 2224–2246 (1999). http://dx.doi.org/10.1137/S0036139998336043

6.

M. A. Anastasio et al., “Local cone-beam tomography image reconstruction on chords,” J. Opt. Soc. Am. A 24(6), 1569–1579 (2007). http://dx.doi.org/10.1364/JOSAA.24.001569

7.

J. Frikel and E. T. Quinto, “Characterization and reduction of artifacts in limited angle tomography,” Inverse Probl. 29(12), 125007 (2013). http://dx.doi.org/10.1088/0266-5611/29/12/125007

8.

T. Pock and A. Chambolle, “Diagonal preconditioning for first order primal-dual algorithms in convex optimization,” in Int. Conf. on Computer Vision (ICCV 2011), 1762–1769 (2011).

9.

E. Y. Sidky, J. H. Jørgensen, and X. Pan, “Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm,” Phys. Med. Biol. 57(10), 3065–3091 (2012). http://dx.doi.org/10.1088/0031-9155/57/10/3065

10.

E. Y. Sidky et al., “Constrained TpV minimization for enhanced exploitation of gradient sparsity: application to CT image reconstruction,” J. Transl. Eng. Health Med. 2, 1800418 (2014). http://dx.doi.org/10.1109/JTEHM.2014.2300862

11.

J. Duchi et al., “Efficient projections onto the ℓ1-ball for learning in high dimensions,” in Proc. of the 25th Int. Conf. on Machine Learning, 272–279 (2008).

12.

E. Y. Sidky, J. S. Jørgensen, and X. Pan, “First-order convex feasibility algorithms for x-ray CT,” Med. Phys. 40(3), 031115 (2013). http://dx.doi.org/10.1118/1.4790698

13.

I. Reiser and R. M. Nishikawa, “Task-based assessment of breast tomosynthesis: effect of acquisition parameters and quantum noise,” Med. Phys. 37(4), 1591–1600 (2010). http://dx.doi.org/10.1118/1.3357288

14.

J. S. Jørgensen, E. Y. Sidky, and X. Pan, “Quantifying admissible undersampling for sparsity-exploiting iterative image reconstruction in x-ray CT,” IEEE Trans. Med. Imaging 32(2), 460–473 (2013). http://dx.doi.org/10.1109/TMI.2012.2230185

15.

F. Natterer, The Mathematics of Computerized Tomography, John Wiley & Sons, New York (1986).

Biography

Emil Y. Sidky received the BS degree in physics and mathematics from the University of Wisconsin, Madison, in 1987 and the PhD degree in physics from the University of Chicago in 1993. He held academic positions in physics at the University of Copenhagen and Kansas State University. He joined the University of Chicago in 2001, where he is currently a research associate professor. His current interests are in CT image reconstruction and large-scale optimization.

David N. Kraemer is currently pursuing an undergraduate degree in mathematics and computer science at Grinnell College, which he began last fall. He expects to graduate in 2017. His current work in medical physics has come through an REU through DePaul University and the University of Chicago, funded by the NSF.

Erin G. Roth is completing her undergraduate degree in physics at Carleton College. During the summer of 2013, she worked as a NASA Illinois Space Grant student studying black holes at Northwestern University. At Carleton College, she has spent time researching the basic behavior of pulsars. This summer, as part of the DePaul MedIX REU program, she is working in the field of medical imaging at the University of Chicago.

Christer Ullberg has been with XCounter since 1997 and prior to that has more than 10 years of professional experience in project management. Previously, he was responsible for all electronics design for space applications at ACR Electronic AB and worked as project manager of the multinational scientific balloon project PIROG. He is a world expert on CdTe photon counting, dual energy technology.

Ingrid S. Reiser is an assistant professor of radiology at the University of Chicago in Chicago, Illinois. Her research interests include computer-aided detection and diagnosis methods for breast cancer in dedicated breast CT and digital breast tomosynthesis, as well as objective assessment of x-ray tomographic x-ray breast imaging systems.

Xiaochuan Pan received his BS degree from Beijing University, Beijing, China, in 1982, his MS degree from the Institute of Physics, Chinese Academy of Sciences, Beijing, in 1986, and the master’s and PhD degrees from the University of Chicago in 1988 and 1991, respectively, all in physics. He is currently a professor with the Department of Radiology, University of Chicago. His research interests include physics, algorithms, and applications of tomographic imaging.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Emil Y. Sidky, David N. Kraemer, Erin G. Roth, Christer Ullberg, Ingrid S. Reiser, and Xiaochuan Pan "Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography," Journal of Medical Imaging 1(3), 031007 (3 October 2014). https://doi.org/10.1117/1.JMI.1.3.031007
Published: 3 October 2014
Keywords: x-ray computed tomography; image restoration; reconstruction algorithms; algorithm development; CT reconstruction; data modeling; image filtering