Open Access
Fast multislice fluorescence molecular tomography using sparsity-inducing regularization
26 February 2016
Sedigheh Marjaneh Hejazi, Saeed Sarkar, Ziba Darezereshki
Funded by: Tehran University of Medical Sciences and the Iran Nanotechnology Initiative Council
Abstract
Fluorescence molecular tomography (FMT) is a rapidly growing imaging method that facilitates the recovery of small fluorescent targets within biological tissue. The major challenge facing the FMT reconstruction method is the ill-posed nature of the inverse problem. In order to overcome this problem, the acquisition of large FMT datasets and the utilization of a fast FMT reconstruction algorithm with sparsity regularization have been suggested recently. Therefore, the use of a joint L1/total-variation (TV) regularization as a means of solving the ill-posed FMT inverse problem is proposed. A comparative quantitative analysis of regularization methods based on the L1 norm and TV is performed using simulated datasets, and the results show that the fast composite splitting algorithm (FCSA) regularization method can ensure the accuracy and robustness of the FMT reconstruction. The feasibility of the proposed method is evaluated in an in vivo scenario involving the subcutaneous implantation of a fluorescent-dye-filled capillary tube in a mouse, and also using hybrid FMT and x-ray computed tomography data. The results show that the proposed regularization overcomes the difficulties created by the ill-posed inverse problem.

1. Introduction

Small-animal fluorescence molecular tomography (FMT) is a high-sensitivity, nonionizing, and relatively low-cost imaging modality that enables high-throughput preclinical studies in the fields of drug development, dermatological research, and intraoperative imaging.1,2 FMT three-dimensionally (3-D) resolves the biodistribution of fluorescent markers accumulated in the target tissue of a live animal. In FMT, a near-infrared (NIR) light source at the excitation wavelength is projected onto the test subject at different positions. The excitation light propagates diffusely into the tissue, and some of the photons are absorbed by fluorochromes, which re-emit part of the energy at a longer wavelength. Then the fluorescence-excitation light intensities are measured by detectors placed around the subject. This measurement process is repeated for several source positions, so as to obtain a dataset for reconstruction of the fluorochrome concentration. In modern FMT systems, detection is performed using a charge-coupled-device (CCD) camera in noncontact mode. Recently, FMT technology has experienced rapid development as a result of advances in 360 deg noncontact projection systems.

The first FMT technique involving noncontact geometry was implemented by employing fibers to deliver and collect photons from the animal periphery using a photomultiplier tube.3,4 The FMT imaging performance was then improved by introducing full noncontact excitation and collection of the diffuse fluorescent emission using a CCD.5 Noncontact detection offers flexibility in that the choice of detector and its positioning on the imaging surface can be optimized, while fiber–tissue contact effects are also avoided. Recently, a free-space FMT 360 deg projection acquisition system was developed, in which the fan beam is rotated around the specimen and data can be gathered with a 0.5- to 3-mm spatial resolution.6,7 Such implementations are reasonably flexible as regards integration with other imaging modalities, such as x-ray computed tomography (CT). However, the resultant integrated imaging systems generate large datasets that require long computational time and large storage capacity. Consequently, it is necessary to develop a rapid convergence method that can quickly reconstruct the sparse distribution of the fluorescent sources.

Gu et al.8 have proposed a reconstruction method using adaptive meshing for use with two-dimensional (2-D) diffuse optical tomography, and the results obtained for phantom measurements demonstrate both qualitative and quantitative optical image reconstruction improvement. In addition, fully adaptive finite-element-based reconstruction algorithms have recently been reported, which have been applied to human breast geometry reconstruction using point-source illumination.9–11 However, these proposed algorithms have been applied to specific geometries only, such as cylinders and hemispheres. Furthermore, well-documented finite-element-method (FEM) reconstruction software packages such as NIR frequency-domain optical absorption and scatter tomography (NIRFAST) and time-resolved optical absorption and scattering tomography, which provide FEM solutions based on L2 regularization to accommodate the complex geometry of small animals, have been made available.12 However, such software is not well suited to fan-beam FMT imaging methods, which are designed to localize sparse fluorescent source distributions in the targeted tissues. Therefore, sparsity-promoting methods designed to solve this problem have emerged.13,14

In order to promote sparsity and to preserve edge information, L1-norm-based methods have been studied recently. For example, Dutta et al.15 have proposed a combination of L1 and total variation (TV) norm penalties to constrain the FMT inverse problem. They have suggested a compound approach that uses a combination of the separable paraboloidal surrogates (SPS) method with the preconditioned conjugate gradient (PCG) algorithm to minimize the joint L1 and TV penalties. As a result of ill conditioning, the surrogate functions typically have high curvature, which corresponds to a slow convergence rate; therefore, this approach is impractical for many applications involving large-scale problems. The fast iterative shrinkage thresholding algorithm (FISTA), which has a rapid convergence rate, has been investigated for solving the sparsity-promoting regularization problem with the combined L1-norm and TV approach.16 However, FISTA is designed for simpler regularization problems and cannot be applied efficiently to composite regularization problems. In contrast, the joint L1 and TV norm regularization problem can be efficiently solved using the fast composite splitting algorithm (FCSA), which transforms the L1 and TV norm regularization problem into simpler subproblems.17 The FCSA has been widely explored for magnetic resonance imaging (MRI) but, to the best of our knowledge, it has not been investigated with regard to optical imaging. In this paper, a newly developed multislice (MS) 360 deg free-space FMT imaging method is presented, which incorporates structural information into the reconstruction algorithm based on a suitable sparsity-promoting regularization problem.

The paper is organized as follows. In Sec. 2, the experimental optical setup and TV-L1 regularization are described for the FMT imaging system. Then, numerical and in vivo experimental results obtained using the proposed methods are presented in Sec. 3. Finally, in Sec. 4, the findings are discussed and conclusions are presented.

2. Methods

2.1. Multislice-Fluorescence Molecular Tomography System Design and Implementation

A flow chart of the proposed MS-FMT implementation, acquisition, and reconstruction process is shown in Fig. 1, and the experimental setup of the FMT system is shown schematically in Fig. 2.18 The system includes three continuous-wave (CW) excitation diode-pumped diode laser sources emitting at 473, 533, and 769 nm with up to 20-mW output power (B & W TEK Inc., Newark, DE). The NIR laser-beam propagation axis was aligned perpendicularly to those of the 473- and 533-nm lasers. The beams, after reflection from the dichroic mirrors, spatially overlapped and were directed to a mirror oriented at 45 deg. The mirror reflected the resultant laser beam, which passed through a motorized variable attenuator with an optical density of 0.3 to 2. The attenuator was built for accurate control of the source power to within 5 to 20 mW, depending on the sample thickness and absorption.

Fig. 1 FMT method implementation flow chart.

Fig. 2 Schematic view of the MS-FMT imaging system.

The excitation light was then directed to the sample surface via a set of mirrors and a home-made two-axis laser scanner (Fig. 2). The scanner was mounted on a computer-controlled, micrometer-precision XY translation stage, which moved the laser beam across the surface of a cylindrical specimen holder. The holder was a custom-made device designed to allow the information necessary for image reconstruction to be obtained. After passing through the sample, the excitation beam reached a home-made motorized filter wheel with two series of bandpass filters that separated the excitation wavelengths from the emission wavelengths. The fluorescent radiation was then captured by a cooled electron-multiplying charge-coupled-device (EMCCD) camera (Luca, Andor, UK; 1024×1024  pixels) with a fixed focal length (F) lens (AC254-030B, Thorlabs; F=30  mm).

The FMT hardware was controlled via an intuitive graphical user interface, which allowed users to select the required lasers, filters, angle of rotation, and imaging modes. In addition to the tomography mode, reflectance measurement and white-light imaging could also be performed by moving the scanner to the appropriate position and activating a light-emitting diode light source (not shown). The entire imaging system was housed within a light-tight, sealed box constructed from aluminum posts in a cage structure.

2.1.1. Phantom

To perform an experimental evaluation of the proposed system, we prepared a cylindrical phantom of 2 cm in diameter. The phantom was composed of 1 g agarose (BioGene, Kimbolton, UK), 4 ml intralipid 20% (Fresenius SE, Bad Homburg, Germany), and 3 μl Indian ink (Pelikan Holding, Schindellegi, Switzerland) dissolved in 100 ml of water.19 The reduced scattering and absorption coefficients at 692 nm were found to be μs = 80 mm⁻¹ and μa = 0.01 mm⁻¹, respectively.20 A hollow transparent tube of 1.5-mm inner diameter filled with a fluorescent dye was inserted inside the phantom. The phantom was then placed in the holder, which automatically moved to the center of the camera’s field of view (FOV). The gantry was then rotated 360 deg around the subject in 36 steps. At each step, the laser beam was scanned across the FOV (3×3 cm), and a complete dataset was generated using the laser scanner. A prereconstruction algorithm was then applied to the dataset, which included an optical center offset (OCO) evaluation.

The OCO problem arises frequently in FMT experiments, where it is essential to obtain accurate 3-D reconstruction results. In this paper, the OCO was obtained using an automated rotational center-location method. In this method, capillary tubes were filled with a fluorescent dye and placed horizontally in a tube mouse holder. FMT data were then obtained to generate a sinogram.21,22 The Fourier transform of the sinogram was taken and multiplied by a binary mask, which returned a value of 1 outside the double-wedge region. The constrained Fourier coefficients of the sinogram inside the double wedge were used to find the sinogram Fourier metric (QSF). Once the QSF was defined, a number of sinograms were created by displacing the [π, 2π] projections by s pixels about the horizontal center (HC). The QSF of each sinogram was calculated in order to obtain the center of rotation (CoR), using22

Eq. (1)

$\mathrm{CoR} = \mathrm{HC}_{oI} + \dfrac{s_o}{2},$
where $\mathrm{HC}_{oI}$ is the HC of the image and $s_o$ is the displacement at the minimum QSF.
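
For illustration, the following minimal NumPy sketch shows how a sinogram Fourier metric of this kind can be evaluated and minimized over the shift s to obtain the CoR of Eq. (1). The wedge-mask construction, the normalization, the function names, and the shift range are illustrative assumptions rather than the exact implementation used in this work.

```python
import numpy as np

def qsf(sino):
    """Sinogram Fourier metric: fraction of the 2-D FFT energy retained
    by a binary mask that is 1 outside the double-wedge region (assumed form)."""
    F = np.fft.fftshift(np.fft.fft2(sino))
    n_ang, n_pix = sino.shape
    fa = np.fft.fftshift(np.fft.fftfreq(n_ang))[:, None]   # angular frequencies
    fr = np.fft.fftshift(np.fft.fftfreq(n_pix))[None, :]   # radial frequencies
    slope = 0.5 * n_pix / n_ang                            # illustrative wedge slope
    mask = (np.abs(fa) > slope * np.abs(fr)).astype(float) # 1 outside the wedge
    return np.abs(F * mask).sum() / np.abs(F).sum()

def find_center(half1, half2, max_shift=30):
    """Scan integer shifts s of the [pi, 2*pi] half-sinogram (half2) and
    return the CoR of Eq. (1) at the shift that minimizes the QSF."""
    n_pix = half1.shape[1]
    hc = (n_pix - 1) / 2.0                 # horizontal center of the image
    best_s, best_q = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(half2, s, axis=1)
        q = qsf(np.vstack([half1, shifted]))
        if q < best_q:
            best_q, best_s = q, s
    return hc + best_s / 2.0               # Eq. (1): CoR = HC + s_o / 2
```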

2.1.2. In vivo experiment

The mouse imaging experiments were approved by the Institutional Animal Care and Use Committee of the Tehran University of Medical Sciences. For the in vivo experiment, the lower 10 mm of a glass capillary tube with an inner diameter of 1 mm was filled with 400 μM fluorescein (Invitrogen Inc.) diluted in 2% intralipid. A thin layer of oil was used to seal the top of the fluorescent solution. The remaining length of the tube was then filled with 2% intralipid. The tube was inserted into the back of a sacrificed and shaved 12-week-old BALB/c mouse. The mouse was placed in a transparent cylindrical holder inside the MS-FMT imaging system. The FMT image acquisition began with the dorsal and the ventral sides of the mouse facing the EMCCD camera and the light source, respectively. The laser beam was scanned in a raster pattern across the sample’s surface in a 30 mm × 30 mm region at the fixed position of the camera (Fig. 3). A total of nine horizontally arranged source positions, each separated by 7 mm, were used for each angular position. After completion of the scan, the gantry was rotated by 10 deg before the next laser scanning and corresponding data acquisition process were initiated. In total, 324 FMT images were acquired by the EMCCD camera for 36 angular positions.

Fig. 3 (a) Mouse position in the transparent holder. The longitudinal axis of the mouse was aligned along the z axis. (b) The laser beam was scanned (in the x-y plane) across the abdominal surface of the mouse.

After optical acquisition, the holder was transferred to the adjacent CT-scanning system so that CT data could be acquired as structural information. This was then overlaid on the recovered fluorescence data.

2.1.3. Computed tomography imaging

For anatomical imaging, the CT images were acquired in the form of a digital imaging and communications in medicine stack using an x-ray CT (Emotion Duo, Siemens Medical Systems, Erlangen, Germany) system. The CT stack consisted of 110 axial slices of 512×512  pixels (0.14  mm×0.14  mm), with 1-mm slice thickness. The x-ray CT source was operated at 110 kV and 76 mA with 1500-ms exposure time for each of the 110 acquired projections.

2.1.4. Fiducial point extraction

A glass capillary was glued to the tube holder to serve as a fiducial point for mapping of the FMT images to the CT images. The capillary coordinates in the FMT images were detected using a preprocessing method,23 which performed three successive steps: (1) a bandpass filter, which consisted of a Gaussian low-pass filter and a boxcar kernel, was used to remove unwanted noise; (2) the brightest pixels were identified through thresholding of the image pixel intensities for a threshold value of 90% of the maximum intensity; and (3) the location estimates were refined through calculation of the centroid-weighted position.
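
A minimal sketch of this three-step preprocessing using SciPy is given below; the filter widths and the 90% threshold mirror the description above, while the function name and default parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def locate_fiducial(img, sigma=1.0, box=7, thresh_frac=0.9):
    """Approximate the three-step fiducial detection described above."""
    # 1. Band-pass filter: Gaussian low-pass minus a boxcar (local mean).
    lowpass = ndimage.gaussian_filter(img.astype(float), sigma)
    background = ndimage.uniform_filter(img.astype(float), size=box)
    bandpassed = np.clip(lowpass - background, 0, None)

    # 2. Keep the brightest pixels (>= 90% of the maximum intensity).
    mask = bandpassed >= thresh_frac * bandpassed.max()

    # 3. Refine the location as the intensity-weighted centroid.
    weights = bandpassed * mask
    yc, xc = ndimage.center_of_mass(weights)
    return xc, yc   # column and row coordinates of the fiducial
```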

2.2. Theoretical Background

2.2.1. Multislice-fluorescence molecular tomography: forward model

The theoretical approach described herein was derived for CW illumination and fluorescence contrast, which pertains to FMT applications. In FMT, the CW forward model is described by a set of coupled diffusion equations at the excitation and emission wavelengths, λe and λf, respectively,24–28

Eq. (2)

$[-\nabla\cdot\kappa(r,\lambda_e)\nabla + \mu_a(r,\lambda_e)]\,\Phi(r,\lambda_e) = q(r,\lambda_e),\quad r\in\Omega,$

Eq. (3)

$[-\nabla\cdot\kappa(r,\lambda_f)\nabla + \mu_a(r,\lambda_f)]\,\Phi(r,\lambda_f) = \Phi(r,\lambda_e)\,h(r,\lambda_f),\quad r\in\Omega,$
where $\kappa(r,\lambda_e)=\{3[\mu_s(r,\lambda_e)+\mu_a(r,\lambda_e)]\}^{-1}$ is the diffusion coefficient at position r and wavelength λe, and h is the fluorescence yield coefficient. The fluorophore is excited with light at λe emitted by a source q(r,λe); the excitation photon density Φ(r,λe) is obtained by solving Eq. (2), and the emission photon density Φ(r,λf) over the Ω domain is then obtained by solving Eq. (3).
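
Once the diffusion operators have been discretized on an FEM mesh (as NIRFAST does), Eqs. (2) and (3) reduce to two sequential sparse linear solves: the excitation field is computed first and then reused as the source of the emission equation. The sketch below illustrates this coupling; the matrix names and the assumption that assembled system and mass matrices are available are illustrative, and the snippet is not part of NIRFAST's API.

```python
import numpy as np
from scipy.sparse.linalg import spsolve

def solve_cw_fluorescence(K_e, K_f, M, q, h):
    """Sequentially solve the coupled CW diffusion model (Eqs. 2-3).

    K_e, K_f : sparse FEM matrices for -div(kappa grad) + mu_a at the
               excitation and emission wavelengths (assumed assembled).
    M        : FEM mass matrix mapping a nodal source density to a
               right-hand-side load vector.
    q        : nodal excitation source vector.
    h        : nodal fluorescence yield coefficients.
    """
    phi_e = spsolve(K_e.tocsr(), M @ q)            # Eq. (2): excitation field
    phi_f = spsolve(K_f.tocsr(), M @ (h * phi_e))  # Eq. (3): emission field
    return phi_e, phi_f
```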

2.2.2. Multislice-fluorescence molecular tomography: inverse problem and fast composite splitting algorithm

The objective in solving the FMT inverse problem is to recover the optical properties at each FEM node. The inverse problem solution is an iterative procedure, where the experimental measurements (ΦM) are matched iteratively with the modeled data (AX). However, the fluorescent objects present in the tissue are often quite small, indicating that the fluorophore distribution and collected ΦM can be reasonably considered sparse.29,30

The sparsity can be enforced by imposing TV-L1 regularization. However, the TV-L1 penalty is nonsmooth, and its implementation generates a heavy computational load. To overcome this problem, several splitting operator and variable splitting algorithms have been developed to decompose the TV-L1 into simpler subproblems. For example, the operator-splitting algorithm searches for a zero of the sum of the maximal-monotone operators. The variable splitting algorithm is another option for solving the TV-L1 problem, and it is based on a combination of alternating direction methods under an augmented Lagrangian framework.

Motivated by the strategy of combining both variable and operator splitting techniques, this study utilizes the FCSA algorithm to tackle the TV-L1 problem for sparsity-enforced FMT reconstruction. In the FCSA-based algorithm, the complex TV-L1 problem is decoupled into L1 norm regularization and TV regularization subproblems, so that each subproblem has only one nonsmooth term. The solutions to the TV-L1 problem are obtained via a linear combination of the solutions to these subproblems in an iterative process. The primary advantages of the FCSA-based algorithm include (1) the complexity of the TV-L1 problem is reduced, which is achieved using variable splitting techniques, and (2) the nonsmooth property of the TV and L1 terms is overcome through use of operator splitting methods.

This algorithm decomposes the joint L1 and TV norm regularization problem into simpler subproblems and solves them in parallel, using31,32

Eq. (4)

$\hat{x} = \arg\min_x \left\{ f_1(x) = \tfrac{1}{2}\,\|Ax - \Phi^M\|_2^2 + \alpha\,\|x\|_{TV} + \beta\,\|\gamma x\|_1 \right\},$
where $A\in\mathbb{R}^{m\times n}$ is the system matrix (called the Jacobian or sensitivity matrix) corresponding to the forward model, $\hat{x}$ is the estimate of the true image, $\gamma$ is the wavelet transform, and α and β are two positive regularization parameters.

To the best of our knowledge, the FCSA solves Eq. (4) more quickly than other available algorithms. Algorithm 1 outlines the FCSA for the MS-FMT image reconstruction problem.

Algorithm 1. Mixed-norm regularized reconstruction based on FCSA.

   Input: ρ = 1/L_f, α, β
   Initialization: k = 0, t^1 = 1, r^1 = x^0
   for k = 1, …, K do
      x_g = r^k − ρ∇f(r^k)
      x_1 = prox_ρ(2α‖x‖_TV)(x_g)
      x_2 = prox_ρ(2β‖γx‖_1)(x_g)
      x^k = (x_1 + x_2)/2
      t^{k+1} = [1 + √(1 + 4(t^k)²)]/2
      r^{k+1} = x^k + [(t^k − 1)/t^{k+1}](x^k − x^{k−1})
   end for

As shown above, the $x_1=\mathrm{prox}_\rho(2\alpha\|x\|_{TV})(x_g)$ step can be computed quickly within a limited number of iterations with cost O(n) (where n is the dimension of x). The $x_2=\mathrm{prox}_\rho(2\beta\|\gamma x\|_1)(x_g)$ step has a closed-form solution and can be computed with cost O(n log n). In the algorithm, $\nabla f(r^k)=A^T(Ar^k-\Phi^M)$, since $f(r^k)=\tfrac{1}{2}\|Ar^k-\Phi^M\|^2$, and this gradient evaluation costs O(n log n). Thus, the total cost of each FCSA iteration is O(n log n). Here, f is a continuously differentiable function with Lipschitz constant $L_f$, i.e., $\|\nabla f(y_1)-\nabla f(y_2)\|\le L_f\,\|y_1-y_2\|$. Because $L_f$ is not easily computable, we used a backtracking step-size rule, which computes the step size by backtracking starting from any initial value; the step size $1/L_k$ at iteration k is chosen using this rule, and the resulting sequence of function values {F(x^k)} produced by the FCSA is nonincreasing.
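
For concreteness, the following NumPy sketch implements Algorithm 1 with an identity sparsifying transform (γ = I), a fixed step size ρ, and a caller-supplied TV proximal operator; the helper names and defaults are illustrative assumptions and do not reproduce the authors' MATLAB implementation.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t*||v||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fcsa(A, phi_m, prox_tv, alpha, beta, rho, n_iter=100):
    """Minimal FCSA sketch for Eq. (4) with gamma = I; prox_tv(v, t) must
    return the proximal map of t*||.||_TV and is supplied by the caller."""
    n = A.shape[1]
    x_prev = np.zeros(n)
    r = np.zeros(n)
    t_prev = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ r - phi_m)               # gradient of the data term
        xg = r - rho * grad
        x1 = prox_tv(xg, 2.0 * rho * alpha)        # TV subproblem
        x2 = soft_threshold(xg, 2.0 * rho * beta)  # L1 subproblem
        x = 0.5 * (x1 + x2)                        # combine subproblem solutions
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev**2))
        r = x + ((t_prev - 1.0) / t) * (x - x_prev)
        x_prev, t_prev = x, t
    return x_prev
```

In practice, ρ can be set to 1/L_f (e.g., the reciprocal of the largest eigenvalue of AᵀA) or adapted with the backtracking rule described above, and prox_tv can be any standard TV-denoising routine.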

2.3. Comparison of Methods: Fast Composite Splitting Algorithm Versus Preconditioned Conjugate Gradient Method and Ordered Subset Separable Paraboloidal Surrogate Method

We compared our fast method with the PCG method and with the ordered subset separable paraboloidal surrogate (OSSPS) and OSSPS-PCG methods. All of these algorithms were developed to solve the joint TV-L1 regularized inverse problem of FMT.

2.3.1. Preconditioned conjugate gradient

The PCG method was implemented in order to minimize a cost function containing three components: a data-fitting term, a sparsifying penalty term, and a smoothing penalty term. This method uses the gradient given as15,33

Eq. (5)

$g^{(n)} = A'(Ax-\Phi^M) + \beta\,\mathbf{1} + \alpha\,C'\big[(Cx)\circ z(Cx)\big],$
where C is an $n_n\times n_s$ matrix, with $n_s$ being the number of pixels in x and $n_n$ being the number of neighboring pixel pairs. Each row of C contains one “+1” and one “−1” entry, so that Cx corresponds to the difference between two neighboring pixel values. In Eq. (5), the prime (′) symbol and (∘) represent the transpose of a matrix and the Hadamard matrix product, respectively. The function $z(t)=[\kappa(t_1)\ \kappa(t_2)\ \cdots\ \kappa(t_{n_n})]'$, and

Eq. (6)

$\kappa(t) = \dfrac{t}{\sqrt{(t/\delta_T)^2 + 1}}.$

The $\delta_T$ parameter was set to a fixed value of $10^{-9}$.15 The preconditioned form of the Polak–Ribière CG method was then implemented as follows:

Eq. (7)

$p^{(n)} = P\,g^{(n)}\quad\text{(preconditioned gradient)},$
$\gamma^{(n)} = 0 \ \ (n=0); \qquad \gamma^{(n)} = \dfrac{p^{(n)T}\big[g^{(n)}-g^{(n-1)}\big]}{p^{(n-1)T}\,g^{(n-1)}} \ \ (n>0),$
$d^{(n)} = -p^{(n)} + \gamma^{(n)}\,d^{(n-1)}\quad\text{(search direction)},$
where P is a preconditioner matrix that approximates the inverse of the diagonal terms of the Hessian of the data-fitting term. Once the descent direction d for the cost function was calculated, a step size τ was determined using an Armijo line search in order to compute the new iterate, using32

Eq. (8)

$x^{(n+1)} = x^{(n)} + \tau^{(n)}\,d^{(n)}\quad\text{(update)}.$
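
The update rules of Eqs. (5)–(8) can be summarized in the following sketch of a preconditioned Polak–Ribière CG loop with an Armijo backtracking line search; the cost and gradient callables, the diagonal preconditioner, and the Armijo constants are illustrative assumptions.

```python
import numpy as np

def pcg_minimize(cost, grad, P_diag, x0, n_iter=50, c1=1e-4, shrink=0.5):
    """Preconditioned Polak-Ribiere CG with an Armijo backtracking line
    search (sketch of Eqs. 5-8); P_diag approximates the inverse diagonal
    of the Hessian of the data-fitting term."""
    x = x0.copy()
    g = grad(x)
    p = P_diag * g                          # preconditioned gradient, Eq. (7)
    d = -p                                  # initial search direction
    for _ in range(n_iter):
        # Armijo backtracking line search along d.
        tau, f0, slope = 1.0, cost(x), g @ d
        for _ in range(30):                 # cap on backtracking steps
            if cost(x + tau * d) <= f0 + c1 * tau * slope:
                break
            tau *= shrink
        x = x + tau * d                     # update, Eq. (8)
        g_new = grad(x)
        p_new = P_diag * g_new
        gamma = (p_new @ (g_new - g)) / (p @ g)   # Polak-Ribiere factor
        d = -p_new + gamma * d              # new search direction
        g, p = g_new, p_new
    return x
```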

2.3.2. Separable paraboloidal surrogates algorithm

The SPS is a fully parallelizable algorithm in which the optical properties at all nodes ($x_j$) are updated in parallel. To implement the SPS approach, the nodes were separated using the additive convexity technique,34 permitting simultaneous updates. Using the convexity of $q_i^n$, we found33

Eq. (9)

$q_i^n([Ax]_i) = q_i^n\Big(\sum_j a_{ij}x_j\Big) = q_i^n\Big(\sum_j \alpha_{ij}\Big[\dfrac{a_{ij}}{\alpha_{ij}}(x_j - x_j^n) + [Ax^n]_i\Big]\Big) \le \sum_j \alpha_{ij}\, q_i^n\Big(\dfrac{a_{ij}}{\alpha_{ij}}(x_j - x_j^n) + [Ax^n]_i\Big),$
where $a_{ij}$ denotes the change in the log of the amplitude of the i’th measurement arising from a small change in $\mu_a$ at the j’th reconstructed node, and the weights $\alpha_{ij}$ are nonnegative and sum to one over j. Therefore, a separable surrogate function was obtained, which was tangent to the negative log-likelihood and lay above it everywhere in the convex range. The surrogate function at an iterate $x^n$ was obtained by replacing the data fitting and TV terms in the original cost function, such that

Eq. (10)

$\Phi(x;x^n) = \Phi_{DF}(x;x^n) + \lambda_{L1}\,\mathbf{1}'x + \lambda_{TV}\,\Phi_{TV}(x;x^n).$

Owing to the separable nature of this surrogate, we could easily compute its minimizer, xn+1, over the non-negative orthant in closed form. The gradient of this surrogate function could then be computed using the original equation, where

Eq. (11)

$x^{n+1} = \big[x^n - D^{-1}(x^n)\,\nabla\Phi(x^n)\big]_+.$

Here, the notation []+ represents projection onto the non-negative orthant, while D is a p×p diagonal matrix with diagonal entries D(x) computed using33

Eq. (12)

$D(x) = \mathrm{diag}_j\Big[\big(A'A\mathbf{1}\big)_j + 2\lambda\big(|C|'\,z(Cx)\big)_j\Big].$

The SPS method was then accelerated using ordered subsets (OS). The OS scheme was implemented by grouping the rows of A into subsets.
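
A compact sketch of the resulting OSSPS iteration is shown below. The data-fit part of the separable curvature is taken as AᵀA·1, in the spirit of Eq. (12), while the penalty gradient and curvature are passed in as callables standing in for the L1 and smoothed-TV terms; the subset partitioning and scaling are illustrative assumptions.

```python
import numpy as np

def ossps(A, phi_m, penalty_grad, penalty_curv, x0, n_subsets=4, n_iter=20):
    """Ordered-subsets SPS sketch of the update in Eq. (11):
    x^{n+1} = [x^n - D^{-1}(x^n) grad(x^n)]_+, with the data-fit part of
    the separable curvature taken as A^T A 1 (cf. Eq. 12)."""
    m, n = A.shape
    x = x0.copy()
    subsets = np.array_split(np.arange(m), n_subsets)
    ones = np.ones(n)
    for _ in range(n_iter):
        for idx in subsets:
            As, bs = A[idx], phi_m[idx]
            # OS approximation: scale the subset gradient by the subset count.
            grad = n_subsets * (As.T @ (As @ x - bs)) + penalty_grad(x)
            curv = n_subsets * (As.T @ (As @ ones)) + penalty_curv(x)
            x = np.maximum(x - grad / (curv + 1e-12), 0.0)   # Eq. (11)
    return x
```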

2.3.3. Hybrid algorithm: ordered subset separable paraboloidal surrogate-preconditioned conjugate gradient

The OSSPS and PCG algorithms have different advantages and disadvantages at different stages of the reconstruction. The advantages of these algorithms were combined to create a hybrid algorithm. The hybrid algorithm begins with OSSPS and then switches to PCG at an appropriate point. This point was selected by fitting the following exponential function to the objective values:

Eq. (13)

$f(n) = a\,\big(1 - e^{-(n-1)b}\big) + c.$

The parameters a, b, and c were estimated from

Eq. (14)

$\min_{a,b,c}\ \sum_{n=N_o}^{N} \big[\Phi(x^n) - f(n)\big]^2.$

The established algorithm was run for an initial n iterations before the next algorithm was activated. The change occurred when the algorithm objective value became greater than 98% of (a+c).
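
The switch point can be located, for example, by fitting Eq. (13) to the recorded objective values with a nonlinear least-squares routine and applying the 98% criterion stated above; the sketch below uses SciPy's curve_fit, and the initial guesses and function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def switch_iteration(obj_values, n0=1):
    """Fit Eq. (13) to the recorded objective values and return the first
    iteration at which the fitted curve reaches 98% of (a + c), i.e., the
    point where OSSPS hands over to PCG (criterion as stated above)."""
    n = np.arange(n0, n0 + len(obj_values), dtype=float)
    f = lambda n, a, b, c: a * (1.0 - np.exp(-(n - 1.0) * b)) + c
    # Initial guesses for amplitude, rate, and offset (illustrative).
    p0 = (obj_values[-1] - obj_values[0], 0.1, obj_values[0])
    (a, b, c), _ = curve_fit(f, n, obj_values, p0=p0, maxfev=10000)
    target = 0.98 * (a + c)
    reached = np.nonzero(f(n, a, b, c) >= target)[0]
    return int(n[reached[0]]) if reached.size else int(n[-1])
```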

2.4. Image Reconstruction Implementation

Image reconstruction in MS-FMT involves solving the forward and inverse problems. In this paper, the forward algorithm implementation was based on the NIRFAST software package developed at Dartmouth College.34,35 The FCSA MATLAB code17 was then modified to yield a solution for the FMT inverse problem. The performance of our image reconstruction algorithm was evaluated through a numerical simulation and using in vivo mouse measurements, which are described below.

2.5. Simulation

In the numerical simulation, the mesh generation was performed for the forward problem using the flowchart shown in Fig. 4. As can be seen in the figure, the CT images were cropped and converted to a binary mask via thresholding, and the holes within the mask were iteratively filled using the NIRView software package.36 The binary mask was then used to generate the volumetric FEM mesh. The mesh was created using NIRFAST and NIRView, which was also used to localize the source and detector positions, as shown in Fig. 5. Each excitation point source was located one transport mean free path beneath the surface. For each excitation source, five detectors (CCD camera) were positioned on the opposite side of the specimen with a 10 deg FOV. Therefore, a total of 2116 source–detector pairs were used to provide datasets for reconstruction of the optical properties of the medium.

Fig. 4 Mesh creation workflow for mouse-trunk CT. (a) Sagittal slice. (b) Binary mask. (c) Filled holes. (d) 3-D whole-body mesh. (e) Axial section. (f) Binary mask (axial view). (g) Section with filled holes. (h) 3-D mesh generated from 10 axial CT sections only.

Fig. 5 Mesh used for simulated data acquisition: radius, 21 mm; height, 12 mm; 25,514 nodes; and 141,119 elements. The circles and crosses represent source and detector positions, respectively.

A cylindrical fluorescent target with 2-mm diameter and 10-mm height was then placed at the point (70, 60, 10) mm, near the surface of the mesh. For simplicity, the optical properties of the target were assumed to be homogeneous (μa = 0.01 mm⁻¹ and μs = 80 mm⁻¹). The generated mesh was used to solve the FMT forward problem. Forward modeling was performed using NIRFAST, and the data obtained through the numerical solution of the forward problem were used to solve the inverse problem. The inverse problem was solved using the FCSA regularization method.

2.6. Regularization Parameter Selection and Image-Quality Metrics

The accuracy and reliability of the inverse problem solutions can be improved by choosing an optimal regularization parameter. In this paper, the joint L1 and TV regularization parameters were selected using a nested leave-one-out cross-validation (nLOOCV) method.37 The nLOOCV embedded an inner and an outer loop. In the outer loop, the data were split into training and validation sets. According to Eq. (4), the training set was denoted by $S=\{(x_1,\Phi_1^M),\ldots,(x_i,\Phi_i^M),\ldots,(x_n,\Phi_n^M)\}$, in which one $x_i$ was randomly left out as the validation set (test set). The training and validation sets were crossed over in k rounds of iteration. In each iteration, a different dataset was used for validation, while the remaining k − 1 sets were used for learning. The training set was then used in an internal cross-validation (CV) and was repeatedly split into construction and validation datasets. Construction objects were used to develop a regression model through variation of the regularization parameters, whereas the validation objects were used to estimate the model error. For each of the 100 internal CV iterations, an internal error estimate was determined for all possible regularization parameters. Finally, the model with the lowest cross-validated error (CV-error) in the inner loop was selected (Fig. 6).38–41

Fig. 6 Double cross-validation scheme. In the inner loop, the model parameters and variables are estimated based on an LOOCV method. The model performance for the optimized parameters and selected variables is then evaluated using the validation test set in the outer loop. The outer loop is repeated within the LOOCV procedure.
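
As a simplified illustration of how such a cross-validated parameter search can be organized, the sketch below loops over a candidate (α, β) grid and scores each pair by leave-one-source-out prediction error; the `reconstruct` and `predict_error` callables stand in for the FCSA solver and the forward projection of Eq. (4), the grid values are illustrative, and the full nested (inner/outer) scheme of Fig. 6 is collapsed into a single loop for brevity.

```python
import numpy as np
from itertools import product

def loocv_parameter_search(sources, reconstruct, predict_error,
                           alphas=(1e-6, 1e-5, 1e-3, 500.0),
                           betas=(1e-6, 1e-5, 1e-3, 500.0)):
    """Pick (alpha, beta) by leave-one-source-out cross-validation.

    sources       : list of per-source measurement vectors.
    reconstruct   : callable(train_sources, alpha, beta) -> image estimate.
    predict_error : callable(image, held_out_source) -> scalar CV error.
    """
    best, best_err = None, np.inf
    for alpha, beta in product(alphas, betas):
        errs = []
        for i in range(len(sources)):                 # leave-one-out split
            train = sources[:i] + sources[i + 1:]
            x_hat = reconstruct(train, alpha, beta)   # fit on training sources
            errs.append(predict_error(x_hat, sources[i]))
        err = float(np.mean(errs))
        if err < best_err:
            best, best_err = (alpha, beta), err
    return best, best_err
```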

For all the inverse methods described in Sec. 2.3, the lowest CV-error was estimated in order to determine the optimal regularization parameter. The selected parameter was then evaluated by computing the reduced scattering coefficient error (%RSCerr, $[(\mu_s^r-\mu_s)/\mu_s]\times 100$), which is the relative difference between the real value $\mu_s^r$ and the $\mu_s$ obtained from the reconstructed images. After the optimal regularization parameter was determined, the performance of the proposed method was compared against that of the L2-regularization, PCG, and OSSPS techniques, in terms of the localized volume at half maximum (LVHM), focality, signal-to-noise ratio (SNR), projection error (PEr), and the root mean square error (RMSE), which are defined as follows:42–45

Eq. (15)

$\mathrm{PEr} = \dfrac{1}{N}\sum_{j=1}^{N}\big|x_j - x_{0j}\big|,$

Eq. (16)

$\mathrm{LVHM} = \dfrac{1}{6}\,\det(J),$

Eq. (17)

$\mathrm{RMSE} = \sqrt{\dfrac{1}{N}\sum_{j=1}^{N}\big(x_j - x_{0j}\big)^2},$

Eq. (18)

$\mathrm{SNR} = 10\,\log_{10}\dfrac{\mathrm{Var}(x_0)}{\mathrm{MSE}(x, x_0)}.$

The LVHM is defined as the four-node tetrahedron volume enclosing the node with the maximum reconstructed fluorescent intensity, along with other adjacent nodes having values above half the maximum. The volume of the corresponding element was calculated from the determinants of the Jacobian matrices J of the corner-node coordinates. Further, the focality is the ratio of the LVHM to the FVHM, where FVHM is defined as the total (sum) volume of all nodes having values above half the maximum reconstructed value. While a focality of one indicates a single recovered fluorescent target, a focality value greater than 0.5 describes a reconstructed activation that is well separated from the background artifacts. The SNR is the ratio of the reference fluorescent intensity variance and the mean squared error of the reference (x0) and reconstructed (x) intensities. Finally, the RMSE measures the square root of the difference between xj and x0j. In Eq. (17), N represents the total number of nodes in the LVHM.
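
The node-wise metrics of Eqs. (15), (17), and (18) can be computed directly from the reconstructed and reference nodal values, as in the short sketch below; the LVHM of Eq. (16) and the focality additionally require the mesh connectivity and tetrahedral Jacobians and are therefore omitted here.

```python
import numpy as np

def image_quality_metrics(x, x0):
    """Compute PEr, RMSE, and SNR from Eqs. (15), (17), and (18) for a
    reconstructed image x and a reference x0 (flattened nodal values)."""
    x = np.asarray(x, dtype=float).ravel()
    x0 = np.asarray(x0, dtype=float).ravel()
    per = np.mean(np.abs(x - x0))                                 # Eq. (15)
    rmse = np.sqrt(np.mean((x - x0) ** 2))                        # Eq. (17)
    snr = 10.0 * np.log10(np.var(x0) / np.mean((x - x0) ** 2))    # Eq. (18)
    return per, rmse, snr
```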

3. Results

The aim of our study was to realize and evaluate fast MS-FMT using sparsity-inducing regularization. The proposed regularization method was evaluated using both simulated and experimental data. As regards the experimental data collection, the imaging system performance was evaluated in terms of the spatial resolution and OCO.

3.1. Imaging System Performance Evaluation

The developed MS-FMT system was used for full angular data collection in the MS tomography mode. Therefore, the OCO of the developed FMT imaging system was evaluated by obtaining a sinogram, as described in Sec. 2.1.1. The generated sinogram is shown in Fig. 7.

Fig. 7 OCO evaluation. (a) FMT image of four glass tubes filled with fluorescent dye. (b) Full-revolution sinogram. (c) Sinogram of tubes with s = 20. (d) 2-D Fourier space of the equal-angle fan-beam sinogram for s = 0. (e) Fourier transform of the sinogram obtained with s = 20.

The QSF ranged from 1.19 to 1.26, exhibiting a minimum at the correct alignment. The minimum QSF indicated that the average coarse center deviated from the true rotational center by 0.1 mm.

3.2. Finite-Element-Method Implementation

3.2.1. Simulation results

We used simulation data to validate the joint L1+TV regularization strategy for the MS-FMT and compared the results with those of the L2-regularization process. The latter is one of the most popular methods for solution of discrete ill-posed problems. To compare the algorithms, a fluorescent capillary tube inside a mouse mesh was simulated. (Mesh generation is described in detail in Sec. 2.5).

3.2.2. Regularization parameter selection

The optimal values of the regularization parameters were computed using the double CV method, which involved two nested inner and outer loops. The outer loop assessed the final model performance, while the inner loop was used to optimize the model complexity for fixed regularization parameters. Reconstructed images based on optical properties obtained from the simulated data using the FCSA method are shown in Fig. 8 for a wide range of TV and L1 regularization parameters.

Fig. 8 Reconstructed optical properties of a simulated axial section, obtained by solving the inverse problem using the FCSA method with different regularization parameter values. The λTV and λL1 values are (a) 10⁻⁵ and 10⁻⁵; (b) 500 and 10⁻⁵; (c) 10⁻⁵ and 10⁻³; (d) 500 and 10⁻³; (e) 10⁻⁵ and 500; and (f) 500 and 500, respectively. The color bar indicates variation of the L1 and TV regularization parameters from 10⁻⁶ to 500.

These images were reconstructed using FCSA regularization with TV and L1 regularization parameters in the 10⁻⁵ to 500 range. Figure 8(a) shows the best possible reconstruction result, which corresponds to the regularization parameters with the lowest CV prediction error and RSCerr. The values of the regularization parameters and the corresponding CV-error and RSCerr values for the methods described in Sec. 2.6 are tabulated in Table 1.

Table 1 LOOCV estimations of the CV-error and RSCerr values for the FCSA, PCG, and OSSPS algorithms for simulated datasets and the optimal λTV and λL1.

Algorithm | λTV / λL1 | CV-error | RSCerr
FCSA | 1×10⁻⁵ / 1×10⁻⁵ | 2.5×10⁻⁵ | 2%
PCG | 1×10⁻¹ / 5×10⁻⁴ | 4.5×10⁻⁵ | 7%
OSSPS | 5×10⁻⁵ / 1×10⁻⁵ | 3.2×10⁻⁵ | 6%

Table 1 shows that more accurate quantitative reconstruction of the μs terms can be achieved by selecting the optimal regularization parameters with the lowest CV-error values. The iterative algorithms were implemented in MATLAB® using the simulated data with a 5% noise level,45 so as to compare the computation costs of the different reconstruction methods. This evaluation was performed by computing the variation of the projection error with increasing iteration number (Fig. 9).

Fig. 9 Normalized projection error against iteration number, comparing convergence of the FCSA, PCG, OSSPS, and OSSPS-PCG methods, including the transition of OSSPS to PCG.

Figure 9 shows the variation of the PErs as a function of iteration number, with the green, black, blue, and red lines representing the OSSPS, PCG, OSSPS-PCG, and FCSA methods, respectively. It is apparent that the FCSA exhibited a smaller PEr than the OSSPS-PCG for the same number of iterations, and the OSSPS-PCG PEr was smaller than that of the PCG and OSSPS techniques. This proves that the FCSA has a faster convergence speed than the PCG and OSSPS methods.

For further quantitative evaluation, we computed the central processing unit (CPU) time, SNR, and RMSE of the FCSA, PCG, OSSPS, and OSSPS-PCG methods (Fig. 10).

Fig. 10 Performance metrics of FCSA, OSSPS, PCG, and OSSPS-PCG penalty functions.

In Fig. 10, the total reconstruction time does not include the FEM forward-problem computational cost. The results show that the FCSA was three times faster than the OSSPS, PCG, and the OSSPS-PCG methods. From Fig. 10, we concluded that the RMSE and SNR levels of the reconstructed images did not differ significantly. Therefore, the FCSA was superior to the other methods in terms of both reconstruction accuracy and computational complexity, as it achieved the same SNR and RMSE for less CPU time. Finally, the FCSA was compared with the L2-regularization (Tikhonov) approach (Table 2).

Table 2 Quantitative comparison of L2 regularization (Tikhonov) and FCSA.

Inverse method | CPU time (s) | SNR (dB) | RMSE | Focality | LVHM (mm³)
FCSA | 3.5 | 19.53 | 28.7 | 1 | 884
Tikhonov | 30 | 17.5 | 32.1 | 1 | 1263

From Table 2, it can be seen that the joint TV-L1 method exhibits the lower RMSE, indicating better reconstruction performance for the FCSA. The LVHM values of the L2-regularization and FCSA methods are 1263 and 884 mm³, respectively. Moreover, the FCSA-based method can reconstruct images with significantly higher SNR and lower RMSE in less time than the L2-regularization method.

Next, the performance of our proposed reconstruction method was tested using experimentally obtained data.

3.3. In Vivo Results

The simulation results discussed above revealed that the FCSA method could reconstruct the fluorescent source accurately and had the potential to detect lesions, which is important for practical biomedical applications. To validate the feasibility of the proposed method in a practical FMT application, an in vivo mouse experiment was conducted. The capillary tube was distinguishable in the x-ray CT images, as can be seen in Fig. 11, where the arrows indicate the capillary position.

Fig. 11 Mouse x-ray CT cross-sectional scans. (a) Sagittal plane. (b) Axial plane. (c) Volumetric image. The arrows indicate the implanted capillary.

Next, the axial slices (z-slices, 10 cross sections from z=452 to 562) were used to generate a volumetric mesh that contained 4667 nodes and 24,451 tetrahedral elements. Recall that the goal of the inverse problem solution was to recover the optical properties, an example of which is shown in Fig. 12.

Fig. 12 3-D view of the FCSA-method reconstruction results, showing the reconstruction of a 3-D fluorochrome distribution.

The results of the optical reconstruction conducted using the FCSA method were quantified by computing the full width at half maximum (FWHM) and the center position of the reconstructed inclusion (Table 3 and Fig. 13).

Fig. 13 (a) Axial section. The bright circle (yellow arrow) inside the axial image is the cross section of the glass capillary containing the fluorophore, which forms the fluorescent inclusion in the animal. (b) FMT reconstruction result registered to the anatomical CT images. The capillary position is marked by an arrow. (c) Profile plots across the fluorescent target, obtained along the dotted blue line in (b). The green solid and red dotted lines represent the reconstructed distributions obtained using the FCSA regularization method for CT and FMT + CT, respectively.

Table 3 FWHMs and center positions of the reconstructed fluorescent sources in Fig. 13.

Image | FWHM (pixels) | Center position (pixels) | Position error (pixels)
Reconstructed | 31 | (x = 110, y = 185) | (Δx = 1, Δy = 4)
Actual | 17 | (x = 111, y = 181) | —

Figure 13(b) shows the reconstructed 3-D FMT image fused with a CT image, which was accomplished using the A Medical Imaging Data Examiner (AMIDE) software.46 Image fusion of the two datasets was performed using the capillary tube as a fiducial marker. The FWHM of the reconstructed distribution was then obtained along the dotted line in Fig. 13(b) (Table 3).

The reconstructed and actual center positions were located at (110, 185, 460) and (110.8, 180, 460), respectively. The recovered center-position error was 1 pixel (0.05 mm) along the x-axis and 4 pixels (0.2 mm) along the y-axis; thus, the localization was reasonably accurate. The actual fluorescence profile had a width of 17 pixels (1 mm), while the reconstructed profile was somewhat broader (Table 3). From Fig. 13, it is clear that the TV-L1 norm regularization solution is confined within a small region with a clean background.

4. Discussion

CW optical imaging is a rapidly growing field of research, particularly as regards small-animal FMT applications. This technique offers a rich dataset with enticing implications for the improvement of fluorescence image reconstruction methods. However, improvement of the hardware and software design of FMT systems is essential to development of this medical research field. This article describes a new MS-FMT system and applies a combination of L1 and TV penalties to the FMT inverse problem, so as to simultaneously encourage both sparsity and smoothness in the final reconstructed images.

The FMT-system detection methods presented in recent studies are based on optical fiber contact measurements. However, fiber contact measurement often introduces significant error, which originates from imperfect delivery or light collection at the fiber tissue interface.47 In this study, the proposed MS-FMT system was designed to utilize a rotating gantry. With this configuration, full noncontact excitation and detection was achieved using an EMCCD camera. Further, in the MS-FMT imaging system, the specimen holder was fixed (in the z direction) during rotation of the laser beam and the CCD camera. This design provided high measurement density by allowing increased tissue sampling.

The developed configuration allowed a spatial resolution of 0.5 mm (the resolved gap between two capillary tubes) to be achieved, which is three times higher than that of current FMT implementations.18 The high resolution of our FMT system was achieved by applying the prereconstruction algorithm, which included OCO detection. Several studies have already discussed methods to calculate CoR displacement from the image center during FMT acquisition without the use of prior calibration scans. For example, Walls et al.48 have proposed a method that involves reconstruction of the same slice several times with different offset values. The optimum offset value is then chosen, either visually or using the total variance of the reconstructed slice. Another similar proposed method involves a combination of different parameters, such as the center of mass and the maximum variance.49 However, for a real dataset, the variance technique does not always identify the CoR position correctly. Further, the center-of-mass approach exhibits out-of-focus problems that render it unfeasible for application to FMT. Another proposed solution is to use the sample edge features; however, such features are often unavailable in FMT samples.20,50

The OCO evaluation method presented here is based on the Fourier transform of an obtained sinogram and was recently proposed by Vo et al.22 In the Fourier method, the OCO is calculated based on the shifting of a copy of the [π, 2π] sinogram about the HCoI. The OCO in this study was 0.1 mm and was associated with the rotation of the gantry in a noncircular orbit, which was due to the unbalanced gantry weight. The gantry weight was then uniformly distributed using commercial software.

Once performance enhancement of the imaging system was achieved, an inverse method based on sparse regularization was implemented. This was necessary in order to accommodate the sparse distribution of the fluorescent sources in practical FMT applications. Recently, many sparsity regularization techniques for application to FMT imaging systems have been introduced.15,43,51 For example, a recent work by Dutta et al.15 has shown that a joint L1 and TV regularization approach not only provides superior contrast in comparison to standard L2 regularization but also reduces the RMSE. In order to minimize the joint L1 and TV penalties, a combination of the OSSPS method with the PCG algorithm has been employed. However, the surrogate functions typically have high curvature, which is known to have a slow convergence rate, thus rendering this approach impractical for large-scale problems.

In this work, we used a FCSA algorithm with fast convergence rate. The FCSA-type regularization can efficiently solve a composite regularization problem including both TV and L1 regularization terms. The strengths of the regularization terms are controlled by the regularization parameters, and the regularization parameter selection improves the accuracy and reliability of the inverse problem solutions. Recently, optimization methods that are more subjective have been introduced. However, automatic selection of the regularization parameters is simpler in most cases.52 To the best of our knowledge, the nested LOOCV method is the optimal automatic strategy for determining the regularization parameter, because of its high robustness and stability. The results listed in Table 1 show that more accurate quantitative reconstruction of the μs components can be achieved by selecting the optimal regularization parameters with the lowest CV error. After optimizing the regularization parameters, the FCSA regularization algorithm for the MS-FMT reconstruction was quantitatively evaluated by considering the RMSE, SNR, relative error, focality, and CPU time (Table 2). The RMSE values of the OSSPS-PCG regularization and the FCSA did not differ significantly. Further, it was determined that the FCSA scheme could yield reconstructed images of acceptable quality within 3.5-s CPU time; this is because this method decomposes the composite regularization problem into multiple, simpler subproblems.

Note that, in the in vivo evaluation, the FWHM of the reconstructed targets yielded by the FCSA was 1.8 times greater than the true FWHM. This overestimation of the tube area was because a coarse reconstruction mesh was used.

In conclusion, the noncontact, full 360 deg MS-FMT imaging technique can quantify 3-D dye distributions using the FCSA method. The proposed FCSA method exhibited superior reconstruction performance compared with the other methods evaluated here. In the near future, a large mouse population study will be conducted by incorporating image information directly into the inversion matrix regularization.53

Acknowledgments

This work was supported by the Tehran University of Medical Sciences and the Iran Nanotechnology Initiative Council (Grant No. 21004). The authors thank Iraj Baratlo and Hoshiar Sayar for developing mechanical and electronic components.

References

1. 

R. Weissleder and V. Ntziachristos, “Shedding light onto live molecular targets,” Nat. Med., 9 (1), 123 –128 (2003). http://dx.doi.org/10.1038/nm0103-123 1078-8956 Google Scholar

2. 

E. M. Sevick-Muraca and J. C. Rasmussen, “Molecular imaging with optics: primer and case for near-infrared fluorescence techniques in personalized medicine,” J. Biomed. Opt., 13 (4), 041303 (2008). http://dx.doi.org/10.1117/1.2953185 JBOPFO 1083-3668 Google Scholar

3. 

V. Ntziachristos, E. A. Schellenberger and J. Ripoll, “Visualization of antitumor treatment by means of fluorescence molecular tomography with an annexin V-Cy5.5 conjugate,” Proc. Natl. Acad. U. S. A., 101 (3), 12294 –12299 (2004). http://dx.doi.org/10.1073/pnas.0401137101 Google Scholar

4. 

E. E. Graves and J. Ripoll, “A submillimeter resolution fluorescence molecular imaging system for small animal imaging,” Med. Phys., 30 (5), 901 –911 (2003). http://dx.doi.org/10.1118/1.1568977 MPHYA6 0094-2405 Google Scholar

5. 

N. Ducros et al., “Fluorescence molecular tomography of an animal model using structured light rotating view acquisition,” J. Biomed. Opt., 18 (2), 020503 (2013). http://dx.doi.org/10.1117/1.JBO.18.2.020503 JBOPFO 1083-3668 Google Scholar

6. 

T. Lasser and V. Ntziachristos, “Optimization of 360 degrees projection fluorescence molecular tomography,” Med. Image Anal., 11 (4), 389 –399 (2007). http://dx.doi.org/10.1016/j.media.2007.04.003 Google Scholar

7. 

N. C. Deliolanis et al., “In vivo imaging of murine tumors using complete-angle projection fluorescence molecular tomography,” J. Biomed. Opt., 14 030509 (2009). http://dx.doi.org/10.1117/1.3149854 JBOPFO 1083-3668 Google Scholar

8. 

X. Gu, Y. Xu and H. Jang, “Mesh based enhancement schemes in diffuse optical tomography,” Med. Phys., 30 (5), 861 –869 (2003). http://dx.doi.org/10.1118/1.1566389 Google Scholar

9. 

A. Joshi, W. Bangerth and E. Sevick-Muraca, “Adaptive finite element based for fluorescence optical imaging tomography in tissue,” Opt. Express, 12 (22), 5402 –5417 (2004). http://dx.doi.org/10.1364/OPEX.12.005402 OPEXFF 1094-4087 Google Scholar

10. 

J. H. Lee, A. Joshi and E. M. Sevick-Muraca, “Fully adaptive finite element based tomography using tetrahedral dual meshing for fluorescence enhanced optical imaging in tissue,” Opt. Express, 15 (11), 6955 –6975 (2007). http://dx.doi.org/10.1364/OE.15.006955 OPEXFF 1094-4087 Google Scholar

11. 

D. Wang, X. Song and J. Bia, “A novel adaptive mesh based algorithm for fluorescence molecular tomography using analytical solution,” Opt. Express, 15 (15), 9722 –9730 (2007). http://dx.doi.org/10.1364/OE.15.009722 OPEXFF 1094-4087 Google Scholar

12. 

B. A. Brooksby et al., “Near-infrared (NIR) tomography breast image reconstruction with a priori structural information from CT: algorithm development for reconstructing heterogeneities,” IEEE J. Sel. Top. Quantum Electron., 9 199 –209 (2003). http://dx.doi.org/10.1109/JSTQE.2003.813304 IJSQEN 1077-260X Google Scholar

13. 

H. Gao and H. Zhao, “Multilevel bioluminescence tomography based on radiative transfer equation part 1: l1 regularization,” Opt. Express, 18 1854 –1871 (2010). http://dx.doi.org/10.1364/OE.18.001854 OPEXFF 1094-4087 Google Scholar

14. 

J. Shi et al., “Enhanced spatial resolution in fluorescence molecular tomography using restarted L1-regularized nonlinear conjugate gradient algorithm,” J. Biomed. Opt., 19 (4), 046018 (2014). http://dx.doi.org/10.1117/1.JBO.19.4.046018 JBOPFO 1083-3668 Google Scholar

15. 

J. Dutta et al., “Illumination pattern optimization for fluorescence tomography: theory and simulation studies,” Phys. Med. Biol., 55 (10), 2961 –2982 (2010). http://dx.doi.org/10.1088/0031-9155/55/10/011 PHMBA7 0031-9155 Google Scholar

16. 

J. C. Baritaux et al., “Sparsity-driven reconstruction for FDOT with anatomical priors,” IEEE Trans. Med. Imaging, 30 (5), 1143 –1153 (2011). http://dx.doi.org/10.1109/TMI.2011.2136438 ITMID4 0278-0062 Google Scholar

17. 

J. Huang et al., “Composite splitting algorithms for convex optimization,” Comput. Vision Image Understanding, 115 (12), 1610 –1622 (2011). http://dx.doi.org/10.1016/j.cviu.2011.06.011 Google Scholar

18. 

M. Hejazi et al., “Development and evaluation of a multislice fluorescence molecular tomography using finite element method,” Proc. SPIE, 8799 87990R (2013). http://dx.doi.org/10.1117/12.2032441 Google Scholar

19. 

M. Hejazi et al., “Improving the accuracy of a solid spherical source,” Biomed. Eng. Online, 9 28 (2010). http://dx.doi.org/10.1186/1475-925X-9-28 Google Scholar

20. 

U. J. Birk, M. Rieckher and N. Konstantinides, “Correction for specimen movement and rotation errors for in-vivo optical projection tomography,” Biomed. Opt. Express, 1 (1), 87 –96 (2010). http://dx.doi.org/10.1364/BOE.1.000087 BOEICL 2156-7085 Google Scholar

21. 

M. Rieckher et al., “Microscopic optical projection tomography in vivo,” PLoS ONE, 6 (4), e18963 (2011). http://dx.doi.org/10.1371/journal.pone.0018963 POLNCL 1932-6203 Google Scholar

22. 

N. T. Vo, M. Drakopoulos and C. Reinhard, “Reliable method for calculating the center of rotation in parallel-beam tomography,” Opt. Express, 22 (16), 19078 –19086 (2014). http://dx.doi.org/10.1364/OE.22.019078 OPEXFF 1094-4087 Google Scholar

23. 

J. C. Crocker and G. G David, “Methods of digital video microscopy for colloidal studies,” J. Colloid. Interf. Sci., 179 (1), 298 –310 (1996). http://dx.doi.org/10.1006/jcis.1996.0217 Google Scholar

24. 

S. R. Arridge, “Optical tomography in medical imaging,” Inverse Probl., 15 (2), R41 –R93 (1999). http://dx.doi.org/10.1088/0266-5611/15/2/022 INPEEY 0266-5611 Google Scholar

25. 

H. Jiang, K. D. Paulsen and U. Osterberg, “Optical image reconstruction using frequency domain data: simulations and experiments,” J. Opt. Soc. Am. A, 13 (2), 253 –266 (1996). http://dx.doi.org/10.1364/JOSAA.13.000253 JOAOD6 0740-3232 Google Scholar

26. 

S. L. Jacques and B. W. Pogue, “Tutorial on diffuse light transport,” J. Biomed. Opt., 13 (4), 041302 (2008). http://dx.doi.org/10.1117/1.2967535 JBOPFO 1083-3668 Google Scholar

27. 

T. J. Farrell and M. S. Patterson, “Experimental verification of the effect of refractive index mismatch on the light fluence in a turbid medium,” J. Biomed. Opt., 6 (4), 468 –473 (2001). http://dx.doi.org/10.1117/1.1412222 JBOPFO 1083-3668 Google Scholar

28. 

S. R. Arridge and M. Schweiger, “Photon-measurement density functions. Part 2: finite element-method calculations,” Appl. Opt., 34 (34), 8026 –8036 (1995). http://dx.doi.org/10.1364/AO.34.008026 APOPAI 0003-6935 Google Scholar

29. 

C. Darne, Y. Lu and E. M. Sevick-Muraca, “Small animal fluorescence and bioluminescence tomography: a review of approaches, algorithms and technology update,” Phys. Med. Biol., 59 (1), R1 (2014). http://dx.doi.org/10.1088/0031-9155/59/1/R1 PHMBA7 0031-9155 Google Scholar

30. 

P. Mohajerani et al., “Optimal sparse solution for fluorescent diffuse optical tomography: theory and phantom experimental results,” Appl. Opt., 46 (10), 1679 –1685 (2007). http://dx.doi.org/10.1364/AO.46.001679 APOPAI 0003-6935 Google Scholar

31. 

J. Huang, S. Zhang and D. Metaxas, “Efficient MR image reconstruction for compressed MR imaging,” Med. Image Anal., 15 (5), 670 –679 (2011). http://dx.doi.org/10.1016/j.media.2011.06.001 Google Scholar

32. 

J. Fessler and S. D. Booth, “Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction,” IEEE Trans. Med. Imaging, 8 (5), 688 –699 (1999). http://dx.doi.org/10.1109/83.760336 ITMID4 0278-0062 Google Scholar

33. 

A. R. De Pierro, “A modified expectation maximization algorithm for penalized likelihood estimation in emission tomography,” IEEE Trans. Med. Imaging, 14 (1), 132 –137 (1994). http://dx.doi.org/10.1109/42.370409 ITMID4 0278-0062 Google Scholar

34. 

M. Jermyn et al., “Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography,” J. Biomed. Opt., 18 (8), 086007 (2013). http://dx.doi.org/10.1117/1.JBO.18.8.086007 JBOPFO 1083-3668 Google Scholar

35. 

H. Dehghani et al., “Near infrared optical tomography using NIRFAST: algorithm for numerical model and image reconstruction,” Commun. Numer. Methods Eng., 25 (6), 711 –732 (2009). http://dx.doi.org/10.1002/cnm.1162 CANMER 0748-8025 Google Scholar

36. 

M. Jermyn et al., “A user-enabling visual workflow for near-infrared light transport modeling in tissue,” in Biomedical Optics, (2012). Google Scholar

37. 

C. Habermehl et al., “Optimizing the regularization for image reconstruction of cerebral diffuse optical tomography,” J. Biomed. Opt., 19 (9), 096006 (2014). http://dx.doi.org/10.1117/1.JBO.19.9.096006 JBOPFO 1083-3668 Google Scholar

38. 

P. Refaeilzadeh, L. Tang, H. Liu, “Cross-validation,” Encyclopedia of Database Systems, 532 –538 Springer, US (2009). Google Scholar

39. 

S. Lemm et al., “Introduction to machine learning for brain imaging,” Neuroimage, 56 (2), 387 –399 (2011). http://dx.doi.org/10.1016/j.neuroimage.2010.11.004 NEIMEF 1053-8119 Google Scholar

40. 

D. Baumann and K. Baumann, “Reliable estimation of prediction errors for QSAR models under model uncertainty using double cross-validation,” J. Cheminform., 6 (1), 47 (2014). http://dx.doi.org/10.1186/s13321-014-0047-1 Google Scholar

41. 

E. Szymańska et al., “Double-check: validation of diagnostic statistics for PLS-DA models in metabolomics studies,” Metabolomics, 8 (1), 3 –16 (2012). http://dx.doi.org/10.1007/s11306-011-0330-3 Google Scholar

42. 

J. J. Abascal et al., “Fluorescence diffuse optical tomography using the split Bregman method,” Med. Phys., 38 (11), 6275 –6284 (2011). http://dx.doi.org/10.1118/1.3656063 MPHYA6 0094-2405 Google Scholar

43. 

D. Zhu and C. Li, “Nonconvex regularizations in fluorescence molecular tomography for sparsity enhancement,” Phys. Med. Biol., 59 (12), 2901 (2014). http://dx.doi.org/10.1088/0031-9155/59/12/2901 PHMBA7 0031-9155 Google Scholar

44. 

Y. Zhan et al., “Image quality analysis of high-density diffuse optical tomography incorporating a subject-specific head model,” Front. Neuroenergetics, 4 6 (2012). http://dx.doi.org/10.3389/fnene.2012.00006 Google Scholar

45. 

B. W. Pogue et al., “Contrast-detail analysis for detection and characterization with near-infrared diffuse tomography,” Med. Phys., 27 2693 –2700 (2000). http://dx.doi.org/10.1118/1.1323984 MPHYA6 0094-2405 Google Scholar

46. 

A. M. Loening and S. S. Gambhir, “AMIDE: a free software tool for multimodality medical image analysis,” Mol. Imaging, 2 (3), 131 –137 (2003). http://dx.doi.org/10.1162/153535003322556877 Google Scholar

47. 

M. Schweiger et al., “Image reconstruction in optical tomography in the presence of coupling errors,” Appl. Opt., 46 (14), 2743 –2756 (2007). http://dx.doi.org/10.1364/AO.46.002743 APOPAI 0003-6935 Google Scholar

48. 

J. R. Walls et al., “Correction of artefacts in optical projection tomography,” Phys. Med. Biol., 50 (19), 4645 –4665 (2005). http://dx.doi.org/10.1088/0031-9155/50/19/015 PHMBA7 0031-9155 Google Scholar

49. 

D. Dong et al., “Automated recovery of the center of rotation in optical projection tomography in the presence of scattering,” IEEE J. Biomed. Health Inform., 17 (1), 198 –204 (2013). http://dx.doi.org/10.1109/TITB.2012.2219588 Google Scholar

50. 

E. Figueiras et al., “Optical projection tomography as a tool for 3D imaging of hydrogels,” Biomed. Opt. Express, 5 (10), 3443 –3449 (2014). http://dx.doi.org/10.1364/BOE.5.003443 BOEICL 2156-7085 Google Scholar

51. 

D. Zhu and C. Li, “Accelerating spatially non-uniform update for sparse target recovery in fluorescence molecular tomography by ordered subsets and momentum methods,” Proc. SPIE, 9319 93190U (2015). http://dx.doi.org/10.1117/12.2076789 PSISDG 0277-786X Google Scholar

52. 

L. Zhang et al., “Direct regularization from co-registered anatomical images for MRI-guided near-infrared spectral tomographic image reconstruction,” Biomed. Opt. Express, 6 (9), 3618 –3630 (2015). http://dx.doi.org/10.1364/BOE.6.003618 BOEICL 2156-7085 Google Scholar

53. 

Y. Zhao et al., “Optimization of image reconstruction for magnetic resonance imaging-guided near-infrared diffuse optical spectroscopy in breast,” J. Biomed. Opt., 20 (5), 056009 (2015). http://dx.doi.org/10.1117/1.JBO.20.5.056009 JBOPFO 1083-3668 Google Scholar

Biographies for the authors are not available.

© 2016 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2016/$25.00 © 2016 SPIE
Sedigheh Marjaneh Hejazi, Saeed Sarkar, and Ziba Darezereshki "Fast multislice fluorescence molecular tomography using sparsity-inducing regularization," Journal of Biomedical Optics 21(2), 026012 (26 February 2016). https://doi.org/10.1117/1.JBO.21.2.026012
Published: 26 February 2016
Keywords: Reconstruction algorithms; Tomography; Luminescence; Imaging systems; Inverse problems; Fluorescence tomography; X-ray computed tomography
