Open Access
12 January 2019
Robust multimaterial decomposition of spectral CT using convolutional neural networks
Zhengyang Chen, Liang Li
Abstract
Spectral computed tomography (CT) can reconstruct scanned objects at different energy-bins and thus solve the multimaterial decomposition (MMD) problem. Because the linear attenuation coefficients of different basis materials may be extremely close, the decomposition problem is often ill-conditioned. Meanwhile, traditional image-domain material decomposition algorithms are usually voxelwise and therefore rely heavily on image quality. Ring artifacts often exist in reconstructed spectral CT images due to the response inconsistency of energy-resolved detectors and the beam-hardening effect. By enlarging the receptive field and taking advantage of the modeling ability of convolutional neural networks in deep learning, we propose a convolutional material decomposition algorithm that solves the MMD problem on a basis of patches instead of pixels of the spectral CT images. Simulations and physical experiments were performed to validate the proposed algorithm, and its quality was compared with a traditional image-domain MMD algorithm. Results show that the proposed method achieves good accuracy, reduces mean squared errors by one to two orders of magnitude, and remains robust in the MMD of spectral CT images even when obvious ring artifacts are present.

1.

Introduction

The ability of spectral computed tomography (CT) to distinguish photons in terms of their different energies allows this technique to decompose scanned objects into their basis materials.1–5 Compared with the linear attenuation coefficients from traditional single-energy CT, the distribution of basis materials can more effectively reflect the inner composition of an object. For example, in clinical practice, liver fat quantification requires the calculation of the components of fat, tissue, blood, and contrast agent.6

With the upgrading of product applications and the popularity of spectral CT, many material decomposition algorithms have been proposed. These algorithms can be divided into three types, namely, projection-domain processing methods, image-domain processing methods, and direct iterative reconstruction methods. Projection-domain methods use the projections from the detectors as the input data and bring the decomposition model into the nonlinear polychromatic projection process.7–9 Therefore, they are sensitive to mismatch of the projection data in different energy-bins. Image-domain methods assume that the effective attenuation coefficient of each pixel of the spectral CT images can be decomposed into a linear combination of several basis functions.10–16 They are widely used in current preclinical and clinical applications. However, the performance of image-domain methods is usually affected by the ring artifacts and beam-hardening artifacts within the CT images, especially for spectral CT using photon-counting detectors. Direct iterative reconstruction methods incorporate the models of the material decomposition and the physics of spectral CT transmission.17–20 For example, in Ref. 20, the iterative reconstruction method can even perform MMD with traditional energy-integrating detectors using angle-dependent filters. Theoretically, it has the potential to decrease noise and artifacts by virtue of accurate modeling. However, the computational cost of the forward- and backward-projections between the material images and the projections in different energy-bins is high. In this paper, we focus on the MMD problem in the image domain.

Traditional material decomposition algorithms in spectral CT use a linear combination of a set of basis functions to represent the linear attenuation coefficient at location x and then apply these functions in material decomposition.21 Different basis functions lead to different material decomposition methods. For example, Ref. 22 uses the linear attenuation curves of preselected materials as the basis functions, leading to an analytical solution of the MMD problem. However, regardless of the choice of basis functions, these traditional algorithms are voxelwise, using no more than dozens of pixels around the target pixel. Therefore, their performance usually depends on the reconstructed image quality of spectral CT, and they are often affected by the beam-hardening artifacts and ring artifacts in spectral CT systems using photon-counting detectors. One idea to avoid these problems is to use data-driven methods, for example, machine learning, instead of analytical solutions such as that of Ref. 22. In this way, we can expand the receptive field of the data-driven algorithms to provide more information for learning.

Wang23 published an outlook article on deep learning and presented a roadmap of CT imaging methods in the deep learning framework. Inspired by his work, we imported the deep learning technique into spectral CT and proposed a multimaterial decomposition (MMD) method in the image domain.24 Deep learning has made considerable progress in many fields. In image analysis, deep neural networks can extract the features of images without manual marking and are thus effective in image classification.25–27 Recently, deep learning has shown its effectiveness in medical imaging,28 such as auxiliary diagnosis,29–31 lesion localization,32,33 and image noise reduction.34,35 Deep learning can break through the restriction of the sampling theorem and realize reconstruction from few views.36 Using deep learning to acquire a priori knowledge of images can also enhance the results of traditional MMD algorithms.37 As a universal approximator,38 a neural network can be trained to solve the MMD problem given adequate and suitable training data. In this paper, we propose a convolutional material decomposition (CMD) algorithm that operates on patches instead of voxels of the spectral CT images by building two convolutional neural networks (CNNs) based on existing architectures. Section 2 presents the CMD algorithm. The results of the simulations and physical experiments are presented in Sec. 3 to validate it. Sections 4 and 5 give the discussion and conclusion.

2.

Method

2.1.

MMD Model

As an image-domain method, CMD uses the reconstructed images of spectral CT as its input data. The reconstructed image is the x-ray linear attenuation coefficient μ(x,E) of an object scanned at different spectra. Here, x is the spatial position, and E is the spectrum or energy-bin.3–5

Knowing μ(x,E) by the spectral CT reconstruction algorithm, we choose the attenuation coefficient μl(E) of the basis material l at different spectra E as the basis function. Thus, we have

Eq. (1)

μ(x,E) = Σ_{l=1}^{N_l} η(x,l) μ_l(E),  E = 1, …, N_E,
where η(x,l) denotes the volume fraction of basis material l at position x, Nl is the number of basis materials, and NE is the number of spectra. As proven in Ref. 4, we apply the constraints on volume and mass conservation, all of which lead to the following constraints:

Eq. (2)

Σ_{l=1}^{N_l} η(x,l) = 1, ∀x;  0 ≤ η(x,l) ≤ 1, ∀x,l.

The constraints at each x have the same form as the output of image classification, that is, a probability for each image class. Therefore, a CNN designed for image classification can be transformed to fit the material decomposition problem.
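For intuition, the voxelwise model of Eqs. (1) and (2) can be sketched as a small constrained least-squares problem. The following NumPy snippet is our own minimal illustration (not the algorithm proposed below); the soft weight w on the volume-conservation row and all variable names are our assumptions:

```python
import numpy as np

def decompose_voxel(mu_voxel, basis, w=100.0):
    """Solve Eq. (1) for one voxel under the constraints of Eq. (2).

    mu_voxel : (N_E,) measured attenuation coefficients at each energy-bin.
    basis    : (N_E, N_l) matrix whose columns are mu_l(E) of the basis materials.
    w        : weight softly enforcing sum_l eta(x,l) = 1 (volume conservation).
    """
    n_l = basis.shape[1]
    # Append a heavily weighted row of ones so the least-squares solution
    # approximately satisfies the volume-conservation constraint of Eq. (2).
    A = np.vstack([basis, w * np.ones((1, n_l))])
    b = np.append(mu_voxel, w)
    eta, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Project onto the box constraint 0 <= eta <= 1.
    return np.clip(eta, 0.0, 1.0)

# Toy check: two well-separated basis materials over three energy-bins.
basis = np.array([[1.0, 0.2],
                  [0.8, 0.5],
                  [0.3, 0.9]])
eta_true = np.array([0.3, 0.7])
mu = basis @ eta_true
print(decompose_voxel(mu, basis))  # close to [0.3, 0.7]
```

With well-conditioned basis materials such direct inversion works; the ill-conditioning discussed above arises when the columns of `basis` are nearly collinear.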

2.2.

Deep Learning Based on CNN

Equations (1) and (2) describe the relationship between the reconstructed image and the decomposition result. Traditional image-domain material decomposition algorithms are based on these voxelwise equations. They often suffer from spatially specific errors or artifacts such as beam-hardening artifacts, metal artifacts, ring artifacts, and so on. Although some decomposition algorithms add regularization terms based on a priori knowledge, e.g., total variation, to the cost function to improve image quality, the results are still unsatisfactory. These spatially specific features are difficult to describe using explicit mathematical formulas; nevertheless, they can be learned by deep neural networks. Neural networks have been proven to be universal approximators given enough neurons.38 This paper will show that neural networks can learn the patterns of material decomposition if they are trained with sufficient samples.

Enlarging the receptive field of each pixel can improve material decomposition performance as well. As shown in Fig. 1, we train the CNN to fit each patch and the exact decomposition result of this patch’s central voxel. Then, by moving this receptive field voxel-by-voxel among the reconstructed CT image, we obtain the decomposition result. We refer to this method as the CMD algorithm.

Fig. 1

Work flowchart of CMD.


CNN has become one of the most important methods of image analysis.26,38 A CNN consists of several simple operations or so-called layers. By sending an image into the CNN and going through several layers, such as convolutional, activation, pooling, and fully connected layers, we can transform the image into the result we want, including classification or material decomposition.

The image can be considered as a rank-three tensor T(x,y,k), whose first two parameters x and y describe the position of a voxel and whose third parameter k is the channel.39 For a chromatic image, k ranges from 1 to 3, and for the reconstructed image of spectral CT, k ranges from 1 to N_E. All the layers mentioned above are differentiable operations on this tensor, transforming one tensor into another. Because these operations are differentiable, most of the parameters in the layers can be trained automatically by stochastic gradient descent (SGD) or other training algorithms.38

Details of these layers can be found in the Appendix.
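As a minimal illustration of these tensor-to-tensor operations (not the actual networks of Sec. 2.3; the kernel shapes and the global-pooling step are our own assumptions), a convolutional layer, a ReLU, and a Softmax output can be sketched in NumPy as:

```python
import numpy as np

def conv2d(T, K):
    """Valid 2-D convolution of a tensor T(x, y, k) with kernels K(cx, cy, k, m)."""
    cx, cy, _, m = K.shape
    X, Y = T.shape[0] - cx + 1, T.shape[1] - cy + 1
    U = np.zeros((X, Y, m))
    for i in range(X):
        for j in range(Y):
            # Contract the (cx, cy, k) window against every kernel at once.
            U[i, j, :] = np.tensordot(T[i:i + cx, j:j + cy, :], K,
                                      axes=([0, 1, 2], [0, 1, 2]))
    return U

def relu(U):
    return np.maximum(U, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
T = rng.random((32, 32, 5))                     # one input patch, k = N_E = 5
U = relu(conv2d(T, 0.1 * rng.standard_normal((3, 3, 5, 8))))
z = U.mean(axis=(0, 1)) @ rng.standard_normal((8, 5))   # pooling + dense layer
eta = softmax(z)                                # fractions of N_l = 5 materials
print(eta)                                      # nonnegative, sums to 1
```

Note that the final Softmax automatically yields an output satisfying the constraints of Eq. (2).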

2.3.

CNN Used in CMD

We build two CNNs for the decomposition task. Both CNNs are simplified versions of CNNs that work effectively in image classification. The input patch T(x,y,k) is the receptive field, with size x = 32, y = 32, and channel number k = N_E = 5.

2.3.1.

Visual geometry group network

The first CNN is based on a visual geometry group (VGG) network.25 The VGG network is stacked from convolutional layers with small kernels. Its structure is shown in Fig. 2. All the activation layers are ReLUs except for the last one, which is a Softmax.

Fig. 2

Structure of the first CNN based on VGG used in this work. The left parts of the rectangles are the layers’ names, and the right parts are the sizes of the input tensors T and output tensors U. The numbers after the convolutional layers are (cx,cy,l), which refer to the size of the convolutional kernels. Parameter k of convolutional layers is determined by the input tensor.


2.3.2.

Deep residual network

To compare the performances of the different networks, we build the second CNN based on a deep residual network (DRN),27 which is more complicated than a VGG network. The network comprises three types of blocks that are shown in Fig. 3. Block A does not change the size of the input tensor. Block B halves the size of the input image by downsampling the first two parameters. Block C halves the size while doubling its channels. The full structure of the DRN is shown in Fig. 4.

Fig. 3

Three substructures in DRN. The furcation indicates two copies of the related tensor. The plus sign in the circle represents elementwise addition of two tensors with the same shape.


Fig. 4

Structure of the second CNN used in this paper.


2.4.

Loss Function and SGD with Momentum

As shown in Fig. 1, (T,U) is a training sample, where T is a rank-three tensor and U is a rank-one tensor. U represents the material decomposition result of the central voxel of T. The CNN is initialized randomly and trained using the minibatch SGD method. To establish a regression problem, we use the mean squared error (MSE)

Eq. (3)

L = (1/N_S) Σ_{T∈S} (1/N_l) ‖U − F(T)‖²,
as the loss value, where F(T) is the output of the CNN and S is a set of tensors T: the minibatch in the training step, the validation set in the validation step, and the test set in the test step. N_S is the number of tensors in S. As defined above, every parameter in the CNN layers is differentiable or assigned a gradient. According to the chain rule of derivation,38 the partial derivatives of L with respect to the parameters of the CNN are calculable. For a parameter p of the CNN, the updating function is

Eq. (4)

p ← p − lr × ∂L/∂p,
where lr is the learning rate, or the step length of the gradient descent algorithm. To improve the stability of the convergence, we add a momentum term with coefficient ρ to Eq. (4)

Eq. (5)

v ← ρv − lr × ∂L/∂p,  p ← p + v.
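In code, one update of Eqs. (4) and (5) is simply the following (a toy sketch of ours; the quadratic loss is only for illustration):

```python
def sgd_momentum_step(p, v, grad, lr, rho):
    """One SGD-with-momentum update, Eq. (5): v <- rho*v - lr*grad; p <- p + v."""
    v = rho * v - lr * grad
    return p + v, v

# Minimize the toy loss L(p) = 0.5 * (p - 3)^2, whose gradient is (p - 3).
p, v = 0.0, 0.0
for _ in range(300):
    p, v = sgd_momentum_step(p, v, grad=p - 3.0, lr=0.1, rho=0.9)
print(p)  # approaches the minimum p* = 3
```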

However, there is no universal recipe for training CNNs. In this study, a learning-rate decrease policy is adopted to train the CNN and achieve satisfactory convergence.

Initially, we set lr = LR and patience = P and pick VAL_NUM patches randomly from the validation phantoms as the validation set. We then pick BATCH_NUM samples randomly from the training set and use them to train the CNN through SGD. After every ITER_NUM iterations (the training patches are reselected for every iteration), we send the validation set through the CNN and calculate the MSE of its output. If the MSE exceeds the last MSE more than patience times sequentially, we halve lr, add P_STEP to patience, and repick the validation set; the iteration continues until MAX_ITERATION_TIMES is reached or lr falls below ϵ. The pseudocode is shown in Fig. 5.

Fig. 5

Pseudocode of learning rate decrease policy.

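The policy can be sketched as follows (our own minimal rendering of the pseudocode; `train_step` and `validate` are hypothetical stand-ins for the real Keras training and validation code, and the toy one-parameter problem only exercises the control flow):

```python
import random

def lr_decrease_policy(train_step, validate, LR=0.1, P=3.5, P_STEP=0.5,
                       ITER_NUM=64, MAX_ITERATION_TIMES=5000, eps=1e-4):
    """Sketch of the learning-rate decrease policy of Sec. 2.4 (Fig. 5)."""
    lr, patience = LR, P
    last_mse = validate()
    bad_steps, it = 0, 0
    while it < MAX_ITERATION_TIMES and lr >= eps:
        train_step(lr)            # ITER_NUM SGD iterations on fresh minibatches
        it += ITER_NUM
        mse = validate()
        if mse > last_mse:
            bad_steps += 1
            if bad_steps > patience:   # MSE rose more than `patience` times in a row
                lr /= 2.0              # halve the learning rate
                patience += P_STEP
                bad_steps = 0
                # (the real code also re-picks the validation set here)
        else:
            bad_steps = 0
        last_mse = mse
    return lr

# Toy stand-in for the CNN: one scalar parameter fitted with noisy gradients.
random.seed(0)
p = [5.0]

def train_step(lr):
    for _ in range(64):
        g = (p[0] - 3.0) + random.gauss(0.0, 0.1)  # noisy minibatch gradient
        p[0] -= lr * g

def validate():
    return (p[0] - 3.0) ** 2  # validation MSE of the toy problem

final_lr = lr_decrease_policy(train_step, validate)
print(final_lr, p[0])
```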

2.5.

Convergence Property of Neural Network

The SGD with momentum method is widely used in deep learning optimization and turns out to be effective among various applications.

Consider a local minimum p* near p. At time step t, Eq. (5) can be rewritten as

Eq. (6)

v_{t+1} = ρv_t − lr × ∂L_t/∂p_t,
p_{t+1} = p_t + v_{t+1} = p_t + ρv_t − lr × ∂L_t/∂p_t = p_t + ρ(p_t − p_{t−1}) − lr × ∂L_t/∂p_t.

Define the momentum operator A_t as

Eq. (7)

A_t = ( 1 + ρ − lr·(∂L_t/∂p_t)/(p_t − p*)   −ρ
        1                                    0 ).

Equation (6) is then equivalent to

Eq. (8)

(p_{t+1} − p*, p_t − p*)ᵀ = A_t × (p_t − p*, p_{t−1} − p*)ᵀ.

Reference 40 proved that when pt is close to the local minimum p*, if

Eq. (9)

(1 − √ρ)² ≤ lr·(∂L_t/∂p_t)/(p_t − p*) ≤ (1 + √ρ)²,
we may get

Eq. (10)

λ(A_t) = √ρ,
where λ(A_t) is the spectral radius of A_t. When A_t is a constant matrix, λ = √ρ < 1, which ensures that Eq. (8) converges. In practice, A_t may change over the iterations; for this case, Ref. 40 verified empirically that the momentum method still robustly yields linear convergence at rate √ρ around a local minimum p*.

The difference between the method in Ref. 40 and ours is that we evaluate the gradient ∂L/∂p by sampling a minibatch from the training set instead of calculating it over the whole training set, a widely used trick that accelerates convergence without losing too much accuracy. In this way, the SGD-with-momentum method converges in expectation.

However, because the fitting ability of a deep neural network is nearly unlimited, searching for the global optimal solution will in most cases lead to overfitting.41 On the other hand, poor critical points are quite rare in a deep neural network, and the SGD method tends to stop in some "flat minima."42 The loss of the flat minima that SGD can reach is related to the structure of the neural network.43 As stated above, the CNNs we used are based on VGG and DRN, which have been widely verified in image classification. As a task migration, these CNNs perform well in our experiments.

3.

Numerical Simulation and Physical Experiment Results

To train the CNN to solve the MMD problem, we need to organize three datasets, namely, the training dataset for SGD to optimize the parameters of the CNNs, the validation dataset for adjusting the hyperparameters (such as the number of minibatches, lr, and so on), and the test dataset for evaluating the performance of VGG-CMD and DRN-CMD. The three datasets should follow the same distribution.38 The ground-truth values of the three datasets are also necessary as the supervised signal for training and as the evaluation criterion. The phantom generation, projection, and reconstruction were all coded in MATLAB R2014a. The construction, training, and testing of the CNNs were coded with the deep learning library Keras 2 in Python.44

3.1.

Numerical Simulation Results

In the numerical simulations, we used 115 digital phantoms based on the 2-D Shepp–Logan phantom to train and evaluate the proposed CMD algorithm. All of these phantoms comprise ten ellipses and five materials, namely, soft tissue, lung, bone, blood, and air. Their linear attenuation coefficients were obtained from Ref. 5, as shown in Table 1. One of the phantoms is shown in Fig. 6, and the concentrations of its ellipses are shown in Table 2. In Table 2, "×" means the concentration is 0; "∼" means the concentration is 1 minus the other materials' concentrations, to meet the constraints of Eq. (2); and "±0.1" means that the concentration of that ellipse follows a normal distribution with standard deviation 0.1 among the different phantoms. We then randomly changed the positions, sizes, and directions of the ellipses and the concentrations of the materials to generate another 114 similar phantoms, as shown in Fig. 7.

Table 1

IDs of the materials used in this paper and in Ref. 5.

ID in this paper [order in Figs. 6(b)–6(f)]   1            2     3     4      5
Material                                      Soft tissue  Lung  Bone  Blood  Air
Index in Ref. 5 (the indexes in its Fig. 4)   3            2     7     4      1

Fig. 6

An example of the digital phantoms. (a) One of the phantoms, which comprises ten ellipses. (b)–(f) The concentrations of the five materials: soft tissue, lung, bone, blood, and air, respectively.


Table 2

Mean concentrations of materials in each ellipse of the generated phantoms.

ID of ellipses   ID of materials
                 1          2           3   4          5
1                ×          ×           1   ×          ×
2                1          ×           ×   ×          ×
3                0.1±0.1    0.5±0.1     ×   ×          ∼
4                0.1±0.1    0.25±0.1    ×   ×          ∼
5                0.2±0.1    ×           ×   0.35±0.1   ∼
6                0.2±0.1    0.25±0.1    ×   ×          ∼
7                0.2±0.1    ×           ×   0.25±0.1   ∼
8                ×          0.3±0.1     ×   ×          ∼
9                ×          0.6±0.1     ×   ×          ∼
10               ×          1           ×   ×          ×

Fig. 7

Some phantoms used in the simulation. The size, position, and direction of every ellipse and the concentrations of the five materials are different (but close).


Note that the linear attenuation coefficients of materials 1 and 4 and of materials 2 and 5 are both relatively close, which worsens the ill-conditioning of the MMD problem.

The x-ray beam spectra used in the numerical simulations were generated with a Siemens simulator45 at 75, 135, 105, and 95 kilovolts peak (kVp). A 12-mm-thick aluminum filter was added. To simulate five-energy-bin photon-counting detectors, each spectrum was divided into five energy-bins, as shown in Fig. 8. The x-ray fan-beam covered a 20-mm-diameter field of view with 256 detector elements, and 360 views were acquired over 360 deg. The reconstructed images were discretized into a 128×128 grid. We used the method in Ref. 46 to generate the projections; it simulates the statistical fluctuation and scattering in the imaging process as well as nonideal detector behaviors, such as pulse pile-up and charge sharing, based on real detectors. The spectral CT images in the different energy-bins were reconstructed with the ASD-POCS algorithm with 25 iterations.47

Fig. 8

In the numerical simulations, four spectra were generated using a Siemens simulator, and each spectrum was divided into five energy-bins, giving the twenty energy-bins used in this paper. The maximum energies of (a)–(d) are 75, 135, 105, and 95 kVp, respectively. The ranges of the energy-bins are marked in each figure.


Two different simulation studies were carried out. The first used only the reconstructed spectral CT images of all 115 phantoms with the 75-kVp spectrum of Fig. 8(a). We divided the phantoms randomly into three parts of 85, 15, and 15 and used them to generate the image patches of the training, validation, and test sets, respectively. The patches have 32×32×5 voxels. As stated in Sec. 2.4, we set LR = 0.1, P = 3.5, P_STEP = 0.5, VAL_NUM = 16384, BATCH_NUM = 128, ITER_NUM = 64, ϵ = 1e−4, and ρ = 0.9.
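The patch generation described above can be sketched as follows (our own illustration; the array layout and function names are assumptions, and a real implementation would also sample patches across phantoms):

```python
import numpy as np

def extract_patches(image, fractions, size=32):
    """Cut size x size x N_E patches from a reconstructed spectral CT image.

    image     : (H, W, N_E) reconstructed voxel values in each energy-bin.
    fractions : (H, W, N_l) ground-truth material fractions (the labels).
    Each patch is labeled with the fractions of its central voxel.
    """
    h = size // 2
    patches, labels = [], []
    H, W, _ = image.shape
    for i in range(h, H - h):
        for j in range(h, W - h):
            patches.append(image[i - h:i + h, j - h:j + h, :])
            labels.append(fractions[i, j, :])
    return np.array(patches), np.array(labels)

# Tiny dummy phantom slice: 40x40 image with 5 energy-bins and 5 materials.
img = np.zeros((40, 40, 5))
frac = np.zeros((40, 40, 5))
X, y = extract_patches(img, frac)
print(X.shape, y.shape)  # (64, 32, 32, 5) (64, 5)
```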

The MSEs of the decomposition results of the VGG-CMD and DRN-CMD algorithms on the 15 test phantoms are shown in Fig. 9. As a contrast, we used an extended direct-inversion image-domain algorithm to calculate the MMD voxel-by-voxel (EDI-MMD). It can be seen as a spectral CT version of the original algorithm proposed by Mendonca et al.4,22 The procedure of EDI-MMD is shown in Fig. 10, and its basis material triplets set is given in Table 3.

Fig. 9

MSEs for 15 testing phantoms based on three methods.


Fig. 10

Pseudocode of EDI-MMD.


Table 3

Triplets set for phantom in simulation.

Triplet ID   Soft tissue   Lung   Bone   Blood   Air
1
2
3
4
5
6

On our GPU (GTX 1070), training VGG-CMD and DRN-CMD took about 3 and 12 h, respectively. After decomposition, the average MSE over all 15 test phantoms is 6.5908e−4 mm2 for VGG-CMD, 4.7651e−4 mm2 for DRN-CMD, and 1.0030e−2 mm2 for EDI-MMD. The MSEs of the EDI-MMD algorithm on the different test phantoms are very close because the ill-conditioning of the MMD problems of the test phantoms is similar, with the same five basis materials as shown in Figs. 6(b)–6(f).

There are some differences between the MSEs of the VGG-CMD and DRN-CMD algorithms on different phantoms due to their different structures. However, compared with the results of EDI-MMD, the MSEs are in general reduced by more than two orders of magnitude. The decomposition results of one phantom are shown in Fig. 11. The rows from top to bottom are the ground truth and the results of VGG-CMD, DRN-CMD, and EDI-MMD, respectively. The columns from left to right are the concentrations of the five materials: soft tissue, lung, bone, blood, and air, respectively. The EDI-MMD algorithm decomposes the soft tissue, lung, and blood incorrectly because of their close linear attenuation coefficients.

Fig. 11

Ground truth and decomposition results of three algorithms. The rows from top to bottom are ground truth, results of VGG-CMD, DRN-CMD, and EDI-MMD algorithms. The columns from left to right are the concentrations of soft tissue, lung, bone, blood, and air.


We chose the 44th, 69th, and 102nd rows of the decomposition result of lung to draw profiles, which bisect the ellipses. The positions of these profiles are shown in the second column of Fig. 11, and the section lines are drawn in Fig. 12. The MSEs of the three methods on the profiles are presented in Table 4.

Fig. 12

Profiles of the decomposition results of lung. (a)–(c) correspond to the 44th, 69th, and 102nd rows, respectively.


Table 4

MSE of the decomposition section lines of material 2.

MSE (mm−2)   Profile (a)   Profile (b)   Profile (c)
VGG-CMD      0.0061        0.0060        0.0235
DRN-CMD      0.0051        0.0072        0.0253
EDI-MMD      0.1215        0.0407        0.0076

Equation (1) indicates that the decomposition result should depend only on a single voxel if we ignore the nonideal factors in the spectral CT image. Thus, although the CMD algorithm uses patches as its input data, its output should rely mainly on the central voxel of every patch. To test this, we chose a test phantom and set a 3×3 square in the center of each patch of its reconstructed images to zero. The decomposition results are shown in Fig. 13. Although the zeroed region occupies only 3²/32² ≈ 0.879% of the whole patch, the degradation of the CMD output is drastic. This result indicates that the deep learning procedure of the CMD algorithm ensures that the central voxel of each patch plays a more important role.
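The central-voxel sensitivity test can be reproduced schematically as follows (our own sketch; only the masking step is shown, not the trained networks):

```python
import numpy as np

def zero_center(patches, s=3):
    """Zero an s x s square around the center of every patch (receptive-field test)."""
    out = patches.copy()
    c = patches.shape[1] // 2
    out[:, c - s // 2:c + s // 2 + 1, c - s // 2:c + s // 2 + 1, :] = 0.0
    return out

patches = np.ones((10, 32, 32, 5))   # dummy batch of 32x32x5 patches
masked = zero_center(patches)
# The zeroed square covers 3^2/32^2 = 9/1024, i.e., about 0.879% of each patch.
print(1.0 - masked.mean())
```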

Fig. 13

Decomposition results of (a) VGG-CMD and (b) DRN-CMD. The top row of each figure shows the results with normal patches. The bottom row shows the results with the central 3×3 square of each patch set to zero. The columns from left to right are the concentrations of soft tissue, lung, bone, blood, and air.


In the second simulation, we studied the CMD performance on spectral CT images from an unknown or never-trained spectrum. Traditional material decomposition algorithms need to measure μ_l(E) of the basis materials and the spectral CT images of the scanned object at the same x-ray spectrum. μ_l(E) is related not only to the physical properties of the material itself but also to the spectrum of the incident x-ray beam and the detection efficiency of the detector. Therefore, the decomposition results are sensitive to variations of the effective spectrum of the spectral CT system. Figure 14 shows the decomposition results of the EDI-MMD algorithm. The first, second, and third rows are the results when the x-ray spectra and the μ_l(E) of the basis materials within the corresponding energy-bins are all exactly provided. If the x-ray spectrum used in the CT scanning differs from the one used to measure μ_l(E) of the basis materials, the decomposition results of EDI-MMD deteriorate. As shown in the last row of Fig. 14, the μ_l(E) of the basis materials used in the EDI-MMD algorithm were generated by interpolating among the μ_l(E) of the basis materials for the spectra of Figs. 8(a)–8(c). The average MSE of the results for the spectra of Figs. 8(a)–8(c) is 1.1751e−2 mm2, while the MSE of the results for the spectrum of Fig. 8(d) is 1.2002e−2 mm2.

Fig. 14

Decomposition results of EDI-MMD algorithm. The rows from top to bottom are the results using spectrum (a), (b), (c), and (d) of Fig. 8, respectively. The columns from left to right are the concentrations of soft tissue, lung, bone, blood, and air.


Trained by patches from multiple different spectra rather than one specific spectrum, CMD shows its potential to solve the MMD problem for unknown x-ray spectra; training for every CT system and every kVp/filtering configuration is sometimes uneconomical. In this simulation, we randomly picked 85 phantoms and reconstructed their spectral CT images in the 15 energy-bins of Figs. 8(a)–8(c). Then, from the 15 channels of each patch, five channels were randomly selected to form the input of the CNN; that is, the input tensor still had 32×32×5 voxels. Another 15 phantoms were randomly picked to generate the validation dataset in exactly the same way. We used the same learning-rate decrease policy as in the first simulation to train the CNNs. As the test dataset, the reconstructed images of the remaining 15 phantoms in all 20 energy-bins of Figs. 8(a)–8(d) were used to test the decomposition results of the CMD algorithm.
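The random channel selection can be sketched as follows (our own illustration; whether the selected channels are kept in energy order is our assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

def pick_channels(patch15, n=5):
    """Randomly keep n of the 15 energy-bin channels of a training patch."""
    idx = np.sort(rng.choice(patch15.shape[-1], size=n, replace=False))
    return patch15[..., idx]

patch = np.zeros((32, 32, 15))       # dummy patch reconstructed in 15 energy-bins
print(pick_channels(patch).shape)    # (32, 32, 5)
```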

The results are shown in Fig. 15. For the first three spectra, the average MSEs over the 15 test phantoms are 1.3352e−3 mm2 for VGG-CMD and 6.6640e−4 mm2 for DRN-CMD. For the fourth spectrum, which was not used in training, the MSEs of the two CMD algorithms are 9.7165e−4 and 5.3719e−4 mm2, respectively. As shown in Fig. 15, the CMD performance did not degrade when the x-ray spectrum was changed from the one used to calibrate the basis materials. This result means that the proposed CMD algorithm can solve the MMD problems of different spectral CT systems with different hardware configurations even if the spectrum of a CT system is unknown.

Fig. 15

MSE on 15 testing phantoms with four different spectra.


3.2.

Physical Experiment Results

In the physical experiment, we used the reconstructed images of our spectral CT system to validate the CMD algorithm and compare it with the EDI-MMD algorithm. The experiments were completed on our spectral CT system shown in Fig. 16. It uses a conventional Hamamatsu L12161-07 x-ray tube and a linear photon-counting detector array (eV3500, eV PRODUCTS, Saxonburg, Pennsylvania).46,48 The detector array has 256 elements, each with five energy-bins. The energy-bins were set at [26, 33], [33, 40], [40, 50], [50, 60], and [60, 80] keV. The size of each element is 0.5 mm × 2 mm. The x-ray tube's maximum energy was set to 75 kVp, and the tube current to 20 μA. Over a 360-deg scan, 360 projections were acquired, and every projection was exposed for 5 s. The distance between the x-ray source and the detector was 70 cm, and the distance between the x-ray source and the center of rotation was 44 cm. The reconstructed images had 256×256 pixels of size 0.025 mm × 0.025 mm and were reconstructed by the filtered-backprojection algorithm for each energy-bin.

Fig. 16

The experimental spectral system.


The phantom used in the experiments was a 50-mm-diameter resin cylinder with eight 11-mm-diameter round holes for inserting centrifugal tubes, as shown in Fig. 17. We used three types of solutions, namely, NaI, Gd(NO3)3, and CaCl2, with different concentrations to test the performance of the CMD algorithm and compare it with the EDI-MMD algorithm. Four basis materials were used: NaI and Gd(NO3)3 represent typical components of the contrast agents used in clinics, while CaCl2 and H2O represent typical components of the human body. Following the concentrations in Refs. 11 and 49, all the concentrations (weight/volume) of the three solutes we used are listed in Table 5. In each spectral CT scan, we used eight solutions with the different concentrations in one row of Table 5. After six scans, we obtained 6×5=30 reconstructed images of the different solutions and energy-bins.

Fig. 17

(a) All 48 centrifugal tubes filled with solutions of different concentrations. (b) The resin cylinder phantom.


Table 5

Concentrations in centrifugal tubes.

Solute           Concentrations in the centrifugal tubes (mg/mL)
NaI              0      2.9    3.2    3.5    3.8    4.1    4.4    4.7
NaI              5.0    5.3    5.6    5.9    6.2    6.5    6.8    7.1
Gd(NO3)3·6H2O    0      7.1    7.8    8.5    9.2    9.9    10.6   11.3
Gd(NO3)3·6H2O    12.0   12.7   13.4   14.1   14.8   15.5   16.2   16.9
CaCl2·2H2O       0      200    240    280    320    360    400    440
CaCl2·2H2O       480    520    560    600    640    680    720    760

Figures 18(a)–18(f) show the six reconstructed images in the first energy-bin of [26, 33] keV, and Figs. 18(g)–18(k) show the reconstructed images of the first phantom in all five energy-bins, whose display window is [0.15, 0.7]. The images at higher energy-bins are darker than those at lower energy-bins, which means that high-energy x-ray photons were less absorbed than low-energy x-ray photons. The diameter of the centrifugal tubes spans 36 voxels in the reconstructed images. Note that there are some ring artifacts in the images, which were caused by the element-by-element differences in detection response, especially the abnormally high detection efficiency of the elements on the edges of the 32-element modules that comprise the whole detector array. Moreover, these edge elements suffered from more severe pile-up distortion than the other elements, resulting in further deviation from the Beer–Lambert law.

Fig. 18

(a)–(f) The spectral CT reconstructions with the first energy-bin [26, 33] keV. (a), (b) The reconstructed image of the NaI solutions, (c), (d) for Gd(NO3)3 solutions, and (e), (f) for CaCl2 solutions. The display window is [0, 1]. (g)–(k) The reconstructed images of the NaI solutions in all five energy-bins. The display window is [0.15,0.7].


As shown in Fig. 18, the concentrations within the red circles were used to generate the validation dataset, and those within the green circles were used to generate the test dataset. The rest were used for the training dataset. Because it was difficult to obtain the exact effective spectrum of the x-ray beam used in the experiment, we could not get the exact linear attenuation coefficients of each solute. Thus, we used the maximum-concentration solutions of the three solutes and water as the basis materials to validate the decomposition algorithms.

The regions within the centrifugal tubes were extracted to validate our CMD algorithm and compare it with the EDI-MMD algorithm. To improve the generalization of the CNNs with limited training data, we randomly flipped the input patches.35 We obtained 36,324 samples for the training dataset and 6054 samples each for the validation and test datasets. Because the real data always contain some noise and artifacts, we used a smaller learning rate than in the simulations. We set LR = 0.01, P = 2.5, P_STEP = 0.5, VAL_NUM = 6054, BATCH_NUM = 128, ITER_NUM = 64, ϵ = 1e−5, and ρ = 0.9. We also calculated the decomposition results of the EDI-MMD algorithm as a contrast; its triplets set is given in Table 6.
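The random flipping used for augmentation can be sketched as follows (our own illustration of the idea; the flip axes and probabilities are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_flip(patch):
    """Randomly mirror a patch horizontally and/or vertically (augmentation)."""
    if rng.random() < 0.5:
        patch = patch[::-1, :, :]   # flip along x
    if rng.random() < 0.5:
        patch = patch[:, ::-1, :]   # flip along y
    return patch

print(random_flip(np.ones((32, 32, 5))).shape)  # (32, 32, 5)
```

The channel dimension is left untouched, so the energy-bin ordering of each patch is preserved.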

Table 6

Triplet set for the phantom experiment.

Triplet ID | NaI (7.1 mg/mL) | Gd(NO3)3·6H2O (16.9 mg/mL) | CaCl2·2H2O (760 mg/mL) | H2O
1 | | | |
2 | | | |
3 | | | |
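The ρ = 0.9 and ϵ = 1e-5 hyperparameters listed above are consistent with an RMSProp-style optimizer. The following update rule is a hypothetical sketch under that assumption, not a confirmed detail of the paper's training setup:

```python
import numpy as np

def rmsprop_update(w, grad, cache, lr=0.01, rho=0.9, eps=1e-5):
    """One RMSProp-style step (hypothetical reading of LR, rho, and epsilon):
    keep a running average of squared gradients and scale the step by it."""
    cache = rho * cache + (1.0 - rho) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache
```

Dividing by the root of the running average makes the effective step size roughly scale-invariant per parameter, which helps when gradient magnitudes differ across layers.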

Figure 19 shows the decomposition results of the six solution concentrations highlighted in italics in Table 5. Figures 19(a)–19(f) are the results of 4.1 mg/mL NaI, 5.9 mg/mL NaI, 9.9 mg/mL Gd(NO3)3·6H2O, 14.1 mg/mL Gd(NO3)3·6H2O, 360 mg/mL CaCl2·2H2O, and 600 mg/mL CaCl2·2H2O, respectively. In each panel [Figs. 19(a)–19(f)], the columns from left to right are the results of the four basis materials, NaI, Gd(NO3)3, CaCl2, and H2O, respectively. The rows from top to bottom are the ground truth and the decomposition results of the VGG-CMD, DRN-CMD, and EDI-MMD algorithms, respectively. Both the VGG-CMD and DRN-CMD algorithms provide good decomposition results, much better than those of the EDI-MMD algorithm, which suffer severely from the ring artifacts in the spectral CT images. As shown in the last rows of Figs. 19(a)–19(d), the EDI-MMD results are far from satisfactory for the low-concentration solutions. Moreover, the deeper network used in the DRN-CMD algorithm provides smoother and more uniform results. In addition, as the comparison with Ref. 37 shows, traditional material decomposition cannot achieve good results in the presence of serious ring artifacts, even with clear prior knowledge (each solution consists of two basis materials: water and the solute).

Fig. 19

Decomposition results of the italic cells in Table 5, namely, (a) 4.1 mg/mL NaI, (b) 5.9 mg/mL NaI, (c) 9.9 mg/mL Gd(NO3)3·6H2O, (d) 14.1 mg/mL Gd(NO3)3·6H2O, (e) 360 mg/mL CaCl2·2H2O, and (f) 600 mg/mL CaCl2·2H2O. Each panel of (a)–(f) contains 4×4 images. The rows from top to bottom are the ground truth and the decomposition results of the VGG-CMD, DRN-CMD, and EDI-MMD algorithms. The columns from left to right are the results of NaI, Gd(NO3)3·6H2O, CaCl2·2H2O, and H2O.

OE_58_1_013104_f019.png

Figure 20 shows the decomposition MSEs of the three algorithms. The average MSEs over the six testing images are 5.3719e-3 mm2 for VGG-CMD, 4.8768e-3 mm2 for DRN-CMD, and 1.9544e-1 mm2 for EDI-MMD. In general, the MSEs of the CMD algorithms are more than one order of magnitude lower than the MSE of EDI-MMD. Figure 21 shows the profiles of the NaI decomposition results along the colored lines in Fig. 19(a). The MSEs along the section line are 1.0645e-2, 2.7356e-3, and 1.2634e-1 mm2 for the three methods, respectively. The conclusion is similar to that of the numerical simulations. Although the CMD algorithms were never trained on the testing solutions, the generalization of the CNNs guarantees their accuracy. Compared with the direct-inversion decomposition method, CMD resists the effects of ring artifacts and provides much more accurate and uniform decomposition results.
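We read the reported MSE as the plain voxelwise mean of squared differences between a decomposed basis-material image and its ground truth; a minimal sketch under that assumption:

```python
import numpy as np

def decomposition_mse(pred, truth):
    """Voxelwise mean squared error between a decomposed basis-material
    image and its ground truth (assumed to be the metric reported here)."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.mean((pred - truth) ** 2))
```

The per-image values in Fig. 20 would then be this quantity evaluated over each decomposed image, and the profile MSEs over the pixels of a single section line.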

Fig. 20

MSEs of the six test images in Fig. 19.

OE_58_1_013104_f020.png

Fig. 21

The profiles of the NaI decomposition results along the lines in Fig. 19(a).

OE_58_1_013104_f021.png

4.

Discussion

Compared with traditional direct-inversion algorithms, the proposed algorithm improves the decomposition results significantly, especially in cases where artifacts, e.g., ring and beam-hardening artifacts, exist in the CT images or the spectrum of the x-ray source is distorted. For example, in current spectral CT systems using photon-counting detectors, ring artifacts often appear due to property inconsistencies among the detector units (e.g., detection response function, dead time, pulse pile-up). As an image-domain method, no matter how the images are reconstructed, the proposed CMD algorithm provides better decomposition results than direct-inversion methods, as shown in Fig. 19. It works well even when the linear attenuation coefficients of the basis materials are close. Owing to this robustness, CMD is also expected to apply to other incomplete-data or sparse-data CT cases in the future, e.g., limited-angle CT, few-view CT, and low-dose CT.46–52

Because the forward-propagation process of the CNN can be executed on a GPU, the CMD algorithm can complete the material decomposition in a few seconds once it has been trained. However, CMD also has some disadvantages. Traditional image-domain MMD algorithms only need the linear attenuation coefficients of the basis materials, so when the imaging conditions change, they can be updated easily. The CMD method, in contrast, requires new training data and retraining. Simulation 2 shows that the trained neural network is compatible with different imaging conditions, but it is almost impossible to generate data for every situation. The biggest problem for CMD is that the ground truth needed for training is difficult to obtain in practical situations. Even when all the material distributions of the designed phantoms are known, it is still difficult to ensure the diversity of the data. In further experiments, we hope to train the neural network with simulation data that is as realistic as possible and then decompose materials directly on real samples. By incorporating the material decomposition model and the physics of spectral CT, direct iterative reconstruction-and-decomposition methods can sometimes provide good results; in the future, it is worth combining the CMD algorithm with iterative reconstruction algorithms.

5.

Conclusion

In this study, we proposed a deep-learning-based CMD algorithm to solve the MMD problem of spectral CT in the image domain. We redesigned two CNNs that have worked well in image classification. Owing to the property of the Softmax layer, the output of CMD automatically satisfies the mass and volume conservation constraints. Simulation and experimental results showed that the CMD algorithm can robustly solve the MMD problem of spectral CT.

6.

Appendix

6.1.

Layers Used in This Work

6.1.1.

Convolutional layer

The convolutional layer is the most important layer in a CNN; it detects high-level features in an image. A convolutional layer receives a three-rank tensor and outputs a three-rank tensor. Its parameters can be divided into the weight tensor W(cx, cy, k, l) and the bias vector b(l). Receiving a tensor T(x, y, k), the output tensor is

Eq. (11)

U(x, y, l) = \sum_{c_x} \sum_{c_y} \sum_{k} \left[ T(x + c_x,\, y + c_y,\, k) \times W(c_x, c_y, k, l) \right] + b(l).

Two types of convolutional operation are available at the edges of an image: applying Eq. (11) directly, which reduces the output size, or zero-padding the input tensor to keep the size unchanged.
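A minimal NumPy sketch of Eq. (11) in its unpadded ("valid") form may make the index bookkeeping concrete; the tensor shapes below are illustrative assumptions, not the networks used in the paper:

```python
import numpy as np

def conv_layer(T, W, b):
    """Direct ("valid") convolution of Eq. (11): the output shrinks by the
    kernel size minus one in each spatial direction.
    T: input (X, Y, K); W: weights (Cx, Cy, K, L); b: bias (L,)."""
    Cx, Cy, K, L = W.shape
    X, Y, _ = T.shape
    U = np.empty((X - Cx + 1, Y - Cy + 1, L))
    for x in range(U.shape[0]):
        for y in range(U.shape[1]):
            # Sum over the local window and all input channels, per Eq. (11).
            window = T[x:x + Cx, y:y + Cy, :]  # shape (Cx, Cy, K)
            U[x, y, :] = np.tensordot(window, W, axes=([0, 1, 2], [0, 1, 2])) + b
    return U
```

For example, a 4×4×1 input convolved with a 3×3×1×2 kernel yields a 2×2×2 output; zero-padding the input by one pixel on each side would instead keep the spatial size at 4×4.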

6.1.2.

Activation layer

As shown above, no matter how many convolutional layers are stacked, the effect is the same as a single convolutional layer because the operation in a convolutional layer is linear.38 To enhance the capability of the CNN, nonlinearity must be introduced. We describe two types of activation layer, each with its own features.

The rectified linear unit (ReLU) is a simple activation function that effectively introduces nonlinearity into CNNs and has been widely used since its introduction. ReLU sets the negative values of the output tensor of a convolutional layer to 0, that is,

Eq. (12)

U(x, y, k) = \begin{cases} T(x, y, k), & T(x, y, k) > 0, \\ 0, & T(x, y, k) \le 0. \end{cases}

ReLU simplifies differentiation and alleviates the vanishing-gradient problem of traditional artificial neural networks, resulting in rapid convergence of the network.
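Equation (12) reduces to an elementwise maximum with zero; a one-line NumPy sketch:

```python
import numpy as np

def relu(T):
    """Elementwise ReLU of Eq. (12): keep positive values, zero out the rest."""
    return np.maximum(T, 0.0)
```

The subgradient is simply 1 where the input is positive and 0 elsewhere, which is why differentiation through this layer is cheap.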

Softmax is the last layer of the CNN. It receives a one-rank tensor and outputs a one-rank tensor:38

Eq. (13)

U(x) = \frac{e^{T(x)}}{\sum_{x'} e^{T(x')}}.

Obviously, the output of the Softmax activation layer satisfies the constraints of Eq. (2). In image classification, the output of Softmax (commonly the last layer of a CNN, and thus also the output of the CNN) is interpreted as the probability of each image class predicted by the CNN. In the material decomposition problem, however, we train the output to be the volume fraction of each material at the central pixel of the input patch.
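Equation (13) can be sketched as follows; subtracting the maximum before exponentiating is a standard numerical-stability trick that does not change the result:

```python
import numpy as np

def softmax(T):
    """Softmax of Eq. (13). Outputs are nonnegative and sum to 1, so they can
    be read directly as volume fractions satisfying the conservation constraint."""
    e = np.exp(T - np.max(T))  # shift by max(T) to avoid overflow
    return e / e.sum()
```

Because the outputs are nonnegative and sum to one by construction, no extra projection step is needed to enforce the volume conservation constraint on the predicted fractions.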

6.1.3.

Batch-normalization layer

Similar to the ordered subsets used in CT reconstruction to accelerate convergence, we send a fixed number of samples, called a minibatch, to update the CNN at each training step. BN is placed between the convolutional and activation layers to uniformly scale and translate the values in a batch and to improve the effectiveness of the following activation layers.38,53 BN accelerates convergence, relaxes the requirements on the initial values of the network, and makes very deep networks possible.
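A minimal sketch of the BN transform over a minibatch, assuming the standard per-channel formulation (the learned scale gamma, shift beta, and the running statistics used at inference time are simplified here):

```python
import numpy as np

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each channel over the minibatch to zero mean and unit
    variance, then scale (gamma) and shift (beta).
    batch: array of shape (batch_size, channels)."""
    mean = batch.mean(axis=0, keepdims=True)
    var = batch.var(axis=0, keepdims=True)
    return gamma * (batch - mean) / np.sqrt(var + eps) + beta
```

Normalizing per channel keeps the activations in a well-scaled range regardless of how the preceding weights are initialized, which is what relaxes the initialization requirements mentioned above.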

6.1.4.

Max pooling layer

The pooling layer is a screening operation on the input three-rank tensor. Given the strides of the x and y directions, s_x and s_y, the output is

Eq. (14)

U(x, y, k) = \max_{\substack{0 \le i \le s_x - 1 \\ 0 \le j \le s_y - 1}} \left\{ T\left[ (s_x - 1)x + i,\; (s_y - 1)y + j,\; k \right] \right\}.

As shown in Eq. (14), the max pooling layer shrinks the image and introduces nonlinearity into the CNN. Because it preserves only one value per region, small translations in the image are ignored by the CNN; in other words, translation invariance is improved.
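A sketch of max pooling for the common special case where the pooling window equals the stride (this uses conventional stride indexing, a simplifying assumption relative to the exact index form of Eq. (14)):

```python
import numpy as np

def max_pool(T, sx=2, sy=2):
    """Max pooling with non-overlapping windows of size (sx, sy):
    each output value is the maximum of one window, so small shifts
    inside a window leave the output unchanged."""
    X, Y, K = T.shape
    U = np.empty((X // sx, Y // sy, K))
    for x in range(U.shape[0]):
        for y in range(U.shape[1]):
            U[x, y, :] = T[x * sx:(x + 1) * sx, y * sy:(y + 1) * sy, :].max(axis=(0, 1))
    return U
```

For a 4×4 input pooled with 2×2 windows, the output is 2×2 and each entry is the largest value of its block, which is the translation-insensitivity described above.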

6.1.5.

Fully connected layer

Early neural networks consisted only of fully connected layers and activation layers, a composition that is ineffective for images.38 After features are extracted by the convolutional layers, however, the fully connected layers can work effectively.

The fully connected layer receives a one-rank tensor T(x) and outputs a one-rank tensor:

Eq. (15)

U(y) = \sum_{x} W(x, y)\, T(x) + b(y).

In a CNN, the first two dimensions of the image, x and y, shrink as the tensor flows through the convolutional and pooling layers, while the third dimension k grows with each convolutional layer. Therefore, the image becomes increasingly small while its number of channels increases. Finally, we ignore the distinction between the first two dimensions, flatten the three-rank tensor into a one-rank tensor, and send it to the fully connected layers.
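Equation (15), together with the flattening step just described, can be sketched as follows (the tensor and weight shapes are illustrative assumptions):

```python
import numpy as np

def flatten(T):
    """Collapse a three-rank feature tensor into a one-rank tensor
    before the fully connected layers."""
    return T.reshape(-1)

def fully_connected(T, W, b):
    """Fully connected layer of Eq. (15): U(y) = sum_x W(x, y) T(x) + b(y)."""
    return T @ W + b
```

A 2×2×2 feature tensor flattens to a length-8 vector, which a weight matrix of shape (8, n_outputs) then maps to the output vector fed into Softmax.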

6.2.

Codes Available

Our code, trained models, and some of the synthetic and real datasets used in this paper can be downloaded from the GitHub repository at: https://github.com/ZhengyangChen/Convolutional_material_decomposition.

Acknowledgments

This work was supported in part by NSFC under Grant Nos. 11775124, 61571256, 11525521, and 81427803, and by the National Key Research and Development Program of China under Grant Nos. 2017YFC0109103 and 2018YFC0115502.

References

1. Y. Long and J. A. Fessler, "Multi-material decomposition using statistical image reconstruction for spectral CT," IEEE Trans. Med. Imaging, 33(8), 1614–1626 (2014). https://doi.org/10.1109/TMI.2014.2320284

2. M. Patino et al., "Material separation using dual-energy CT: current and emerging applications," Radiographics, 36(4), 1087–1105 (2016). https://doi.org/10.1148/rg.2016150220

3. J. Chu et al., "Combination of current-integrating/photon-counting detector modules for spectral CT," Phys. Med. Biol., 58(19), 7009–7024 (2013). https://doi.org/10.1088/0031-9155/58/19/7009

4. P. R. Mendonça, P. Lamb and D. V. Sahani, "A flexible method for multi-material decomposition of dual-energy CT images," IEEE Trans. Med. Imaging, 33(1), 99–116 (2014). https://doi.org/10.1109/TMI.2013.2281719

5. L. Li et al., "Spectral CT modeling and reconstruction with hybrid detectors in dynamic-threshold-based counting and integrating modes," IEEE Trans. Med. Imaging, 34(3), 716–728 (2015). https://doi.org/10.1109/TMI.2014.2359241

6. Y. Xue et al., "Statistical image-domain multimaterial decomposition for dual-energy CT," Med. Phys., 44(3), 886–901 (2017). https://doi.org/10.1002/mp.12096

7. R. E. Alvarez and A. Macovski, "Energy-selective reconstructions in x-ray computerised tomography," Phys. Med. Biol., 21(5), 733–744 (1976). https://doi.org/10.1088/0031-9155/21/5/002

8. L. Lehmann et al., "Generalized image combinations in dual KVP digital radiography," Med. Phys., 8(5), 659–667 (1981). https://doi.org/10.1118/1.595025

9. H. Xue et al., "A correction method for dual energy liquid CT image reconstruction with metallic containers," J. X-Ray Sci. Technol., 20(3), 301–316 (2012). https://doi.org/10.3233/XST-2012-0339

10. L. Li et al., "A dynamic material discrimination algorithm for dual MV energy x-ray digital radiography," Appl. Radiat. Isot., 114, 188–195 (2016). https://doi.org/10.1016/j.apradiso.2016.05.018

11. P. Sukovic and N. H. Clinthorne, "Penalized weighted least-squares image reconstruction for dual energy x-ray transmission tomography," IEEE Trans. Med. Imaging, 19(11), 1075–1081 (2000). https://doi.org/10.1109/42.896783

12. K. Taguchi et al., "Image-domain material decomposition using photon-counting CT," Proc. SPIE, 6510, 651008 (2007). https://doi.org/10.1117/12.713508

13. Y. Zou and M. D. Silver, "Analysis of fast kV-switching in dual energy CT using a pre-reconstruction decomposition technique," Proc. SPIE, 6913, 691313 (2008). https://doi.org/10.1117/12.772826

14. C. Maaß, M. Baer and M. Kachelrieß, "Image-based dual energy CT using optimized precorrection functions: a practical new approach of material decomposition in image domain," Med. Phys., 36(8), 3818–3829 (2009). https://doi.org/10.1118/1.3157235

15. T. Niu et al., "Iterative image-domain decomposition for dual-energy CT," Med. Phys., 41(4), 041901 (2014). https://doi.org/10.1118/1.4866386

16. M. Petrongolo, X. Dong and L. Zhu, "A general framework of noise suppression in material decomposition for dual-energy CT," Med. Phys., 42(8), 4848–4862 (2015). https://doi.org/10.1118/1.4926780

17. R. F. Barber et al., "An algorithm for constrained one-step inversion of spectral CT data," Phys. Med. Biol., 61(10), 3784–3818 (2016). https://doi.org/10.1088/0031-9155/61/10/3784

18. Y. Zhao, X. Zhao and P. Zhang, "An extended algebraic reconstruction technique (E-ART) for dual spectral CT," IEEE Trans. Med. Imaging, 34(3), 761–768 (2015). https://doi.org/10.1109/TMI.2014.2373396

19. T. Zhao, L. Li and Z. Chen, "K-edge eliminated material decomposition method for dual-energy x-ray CT," Appl. Radiat. Isot., 127, 231–236 (2017). https://doi.org/10.1016/j.apradiso.2017.06.018

20. J. W. Stayman and S. Tilley, "Model-based multi-material decomposition using spatial-spectral CT filters," (2018). http://aiai.jhu.edu/papers/CT2018_stayman.pdf

21. A. M. Alessio and L. R. MacDonald, "Quantitative material characterization from multi-energy photon counting CT," Med. Phys., 40(3), 031108 (2013). https://doi.org/10.1118/1.4790692

22. P. R. Mendonça et al., "Multi-material decomposition of spectral CT images," Proc. SPIE, 7622, 76221W (2010). https://doi.org/10.1117/12.844531

23. G. Wang, "A perspective on deep imaging," IEEE Access, 4, 8914–8924 (2016). https://doi.org/10.1109/ACCESS.2016.2624938

24. Z. Chen and L. Li, "Preliminary research on multi-material decomposition of spectral CT using deep learning," in Fully3D, 523–526 (2017).

25. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," (2014).

26. J. Schmidhuber, "Deep learning in neural networks: an overview," Neural Networks, 61, 85–117 (2015). https://doi.org/10.1016/j.neunet.2014.09.003

27. K. He et al., "Deep residual learning for image recognition," in Computer Vision and Pattern Recognition, 770–778 (2016).

28. H. Greenspan, B. van Ginneken and R. M. Summers, "Guest editorial deep learning in medical imaging: overview and future promise of an exciting new technique," IEEE Trans. Med. Imaging, 35(5), 1153–1159 (2016). https://doi.org/10.1109/TMI.2016.2553401

29. P. Hu et al., "Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution," Phys. Med. Biol., 61(24), 8676–8698 (2016). https://doi.org/10.1088/1361-6560/61/24/8676

30. P. Moeskops et al., "Automatic segmentation of MR brain images with a convolutional neural network," IEEE Trans. Med. Imaging, 35(5), 1252–1261 (2016). https://doi.org/10.1109/TMI.2016.2548501

31. M. Anthimopoulos et al., "Lung pattern classification for interstitial lung diseases using a deep convolutional neural network," IEEE Trans. Med. Imaging, 35(5), 1207–1216 (2016). https://doi.org/10.1109/TMI.2016.2535865

32. H.-C. Shin et al., "Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning," IEEE Trans. Med. Imaging, 35(5), 1285–1298 (2016). https://doi.org/10.1109/TMI.2016.2528162

33. Q. Dou et al., "Automatic detection of cerebral microbleeds from MR images via 3D convolutional neural networks," IEEE Trans. Med. Imaging, 35(5), 1182–1195 (2016). https://doi.org/10.1109/TMI.2016.2528129

34. J. M. Wolterink et al., "Generative adversarial networks for noise reduction in low-dose CT," IEEE Trans. Med. Imaging, 36(12), 2536–2545 (2017). https://doi.org/10.1109/TMI.2017.2708987

35. H. Chen et al., "Low-dose CT via convolutional neural network," Biomed. Opt. Express, 8(2), 679–694 (2017). https://doi.org/10.1364/BOE.8.000679

36. Y. Han, J. Kang and J. C. Ye, "Deep learning reconstruction for 9-view dual energy CT baggage scanner," (2018).

38. I. Goodfellow et al., Deep Learning, MIT Press, Cambridge (2016).

39. L. Li et al., "A tensor PRISM algorithm for multi-energy CT reconstruction and comparative studies," J. X-Ray Sci. Technol., 22(2), 147–163 (2014). https://doi.org/10.3233/XST-140416

40. J. Zhang, I. Mitliagkas and C. Ré, "Yellowfin and the art of momentum tuning," (2017).

41. A. Choromanska et al., "The loss surfaces of multilayer networks," in AISTATS, 192–204 (2015).

42. S. Hochreiter and J. Schmidhuber, "Flat minima," Neural Comput., 9(1), 1–42 (1997). https://doi.org/10.1162/neco.1997.9.1.1

43. D. Soudry and E. Hoffer, "Exponentially vanishing sub-optimal local minima in multilayer neural networks," (2017).

46. R. Li, L. Li and Z. Chen, "Spectrum reconstruction method based on the detector response model calibrated by x-ray fluorescence," Phys. Med. Biol., 62(3), 1032–1045 (2017). https://doi.org/10.1088/1361-6560/62/3/1032

47. E. Y. Sidky and X. Pan, "Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization," Phys. Med. Biol., 53(17), 4777–4807 (2008). https://doi.org/10.1088/0031-9155/53/17/021

48. L. Li et al., "Full-field fan-beam x-ray fluorescence computed tomography with a conventional x-ray tube and photon-counting detectors for fast nanoparticle bioimaging," Opt. Eng., 56(4), 043106 (2017). https://doi.org/10.1117/1.OE.56.4.043106

49. H. Q. Le and S. Molloi, "Least squares parameter estimation methods for material decomposition with energy discriminating detectors," Med. Phys., 38(1), 245–255 (2011). https://doi.org/10.1118/1.3525840

50. Z. Chen et al., "A limited-angle CT reconstruction method based on anisotropic TV minimization," Phys. Med. Biol., 58(7), 2119–2141 (2013). https://doi.org/10.1088/0031-9155/58/7/2119

51. M. Chang et al., "A few-view reweighted sparsity hunting (FRESH) method for CT image reconstruction," J. X-Ray Sci. Technol., 21(2), 161–176 (2013). https://doi.org/10.3233/XST-130370

52. J. Wang et al., "Penalized weighted least-squares approach to sinogram noise reduction and image reconstruction for low-dose x-ray computed tomography," IEEE Trans. Med. Imaging, 25(10), 1272–1283 (2006). https://doi.org/10.1109/TMI.2006.882141

53. S. Ioffe and C. Szegedy, "Batch normalization: accelerating deep network training by reducing internal covariate shift," (2015).


CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Zhengyang Chen and Liang Li "Robust multimaterial decomposition of spectral CT using convolutional neural networks," Optical Engineering 58(1), 013104 (12 January 2019). https://doi.org/10.1117/1.OE.58.1.013104
Received: 8 June 2018; Accepted: 11 December 2018; Published: 12 January 2019