Parallel multigrid solver of radiative transfer equation for photon transport via graphics processing unit

Hao Gao, Lan Phan, and Yuting Lin

Journal of Biomedical Optics 17(9), 096004 (6 September 2012). https://doi.org/10.1117/1.JBO.17.9.096004
Abstract
A graphics processing unit-based parallel multigrid solver for a radiative transfer equation with the vacuum or reflection boundary condition is presented for heterogeneous media with complex geometry based on two-dimensional triangular meshes or three-dimensional tetrahedral meshes. The computational complexity of this parallel solver is linearly proportional to the degrees of freedom in both the angular and spatial variables, while the full multigrid method is utilized to minimize the number of iterations. The overall gain in speed is roughly 30- to 300-fold with respect to our prior multigrid solver, depending on the underlying regime and the parallelization. Numerical validations are presented, and the MATLAB codes are available at https://sites.google.com/site/rtefastsolver/.

1. Introduction

The accurate modeling of photon migration is a prerequisite for quantitative optical imaging. The general approach is either the Monte Carlo (MC) simulation1 or solving its continuous form, the so-called radiative transfer equation (RTE), as an integro-differential equation.2 In practice, the diffusion approximation (DA) of RTE is often used for its simplicity. However, the accuracy of DA for modeling light propagation is considerably degraded in nondiffusive media,3 particularly for small-animal imaging. Several groups have studied RTE-based image reconstructions and demonstrated significant improvement over those based on DA.4–9

Although the numerical methods for solving RTE have been extensively studied,2,10 the development of an efficient RTE solver is nontrivial, mainly due to its high dimensionality, e.g., three spatial dimensions plus two angular dimensions for the steady-state or frequency-domain RTE in three dimensions (3-D). An advantage of this continuous formulation is that the solution efficiency can be improved through general techniques for numerically solving partial differential equations.2,10 In our prior work,11 we used the angular finite element method and the spatial discontinuous Galerkin (DG) method to obtain a second-order scheme,12 which means that the solution accuracy improves quadratically with respect to the mesh refinement. In other words, the same solution accuracy can be achieved with only the square root of the number of variables per dimension that would be required for a first-order scheme with linear accuracy. In addition, to deal with the slow solution convergence in the diffusive regime, where the scattering is considerably larger than the absorption, we use the multigrid method to accelerate the convergence. Despite the acceleration through the high-order scheme and the multigrid method, the speed still needs to be improved in order for this RTE solver to be routinely used.
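As a back-of-the-envelope illustration of this accuracy argument (with generic constants C1 and C2, not taken from Ref. 12), write the mesh size as h and the number of variables per dimension as N ∝ 1/h; then, for a target error ε,

```latex
\varepsilon \approx C_1 h \;\Rightarrow\; N_{\mathrm{1st}} \propto \varepsilon^{-1},
\qquad
\varepsilon \approx C_2 h^{2} \;\Rightarrow\; N_{\mathrm{2nd}} \propto \varepsilon^{-1/2} \propto \sqrt{N_{\mathrm{1st}}}\, .
```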

A graphics processing unit (GPU) is a highly parallel, multithreaded, manycore processor.13 It is well suited to problems that can be expressed as data-parallel computations. In addition, a GPU costs much less than a parallel computer. GPUs have been employed in various biomedical applications and have achieved ten- to hundred-fold gains in speed. In particular, GPU algorithms for solving RTE, based on the standard angular discrete ordinates and the source-iteration approach, were recently developed for optical imaging14 and for atmospheric radiative transfer.15 Since our RTE solver is also highly parallelizable, the parallelization via GPU is attractive considering its efficiency, cost, and complexity.

In this work, we develop a GPU algorithm to realize an extra gain in speed on top of our multigrid solver.11 Here, the parallelization differs considerably from existing works,14,15 e.g., in the spatial parallelization, in order to optimize the speed gain with respect to the current multigrid solver. The major contribution of this work is to use the GPU to accelerate the source-iteration steps in such a way that the proposed parallel algorithms do not degrade the performance of the multigrid method in terms of the number of multigrid iterations. Accordingly, the execution time per iteration and the number of iterations are reported separately in the results section.

2. Multigrid Solver of RTE

Our approach for solving RTE is based on the full multigrid method, which consists of “V” multigrid cycles [Fig. 1(a)] with the purpose of reducing the number of iterations, or so-called relaxation steps, especially in nontransport or diffusive regimes. The V cycles consist of relaxation, defect (residual), and interpolation steps with respect to different levels of meshes in terms of either angular or spatial variables. During each V cycle, n1 relaxation steps are performed on the current level of mesh before the computation moves to the next coarser level. Following the n1 relaxations, the residual or defect is computed on the current level and interpolated to the next coarser level of mesh; this is repeated until the coarsest level is reached. From the coarsest level, the solution due to the residual is projected back to the finer level of mesh and added to the finer solution, and then another n2 relaxation steps are performed; this is repeated until the finest level is reached. Here, n1=n2=3, and the V cycles that do not include the finest level of mesh serve to provide a good initial guess for the next finer level. As an example with four levels of meshes shown in Fig. 1, a V cycle including Levels 1 and 2 is performed to provide a good initial guess for Level 3; a V cycle including Levels 1 through 3 is performed to provide a good initial guess for Level 4; then a few V cycles including Levels 1 through 4 are performed until the stopping criterion is satisfied. The interested reader may refer to our prior work11 for the efficiency of the multigrid method in reducing the number of iterations.

Fig. 1 Full multigrid method for solving RTE.
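For concreteness, a minimal host-side sketch of this full multigrid flow is given below; every name in it is a hypothetical stand-in (the empty stubs represent the relaxation, defect, restriction, and interpolation routines discussed in the following sections), not the interface of the released code.

```cpp
// Host-side sketch of the full multigrid flow (levels[0] is the coarsest level).
#include <vector>

struct Level { /* mesh, angular quadrature, flux, and residual on this level */ };

void relax(Level&, int /*nSweeps*/) {}               // n1 or n2 relaxation steps
void computeDefect(Level&) {}                        // residual on this level
void restrictDefect(const Level&, Level&) {}         // fine -> coarse
void prolongateAndCorrect(const Level&, Level&) {}   // coarse -> fine, then add

// One "V" cycle from level L down to the coarsest level and back up.
void vCycle(std::vector<Level>& lv, int L, int n1 = 3, int n2 = 3) {
  if (L == 0) { relax(lv[0], n1 + n2); return; }     // coarsest level: relax only
  relax(lv[L], n1);                                  // pre-relaxation
  computeDefect(lv[L]);
  restrictDefect(lv[L], lv[L - 1]);
  vCycle(lv, L - 1, n1, n2);
  prolongateAndCorrect(lv[L - 1], lv[L]);            // add the coarse correction
  relax(lv[L], n2);                                  // post-relaxation
}

// Full multigrid: V cycles on the coarser levels supply the initial guess for
// the next finer level; the finest level is iterated to the stopping criterion.
void fullMultigrid(std::vector<Level>& lv, bool (*converged)(const Level&)) {
  const int finest = static_cast<int>(lv.size()) - 1;
  for (int L = 1; L <= finest; ++L) {
    vCycle(lv, L - 1);                               // initial guess from below
    prolongateAndCorrect(lv[L - 1], lv[L]);
  }
  while (!converged(lv[finest])) vCycle(lv, finest);
}
```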

The computationally dominant steps are the relaxation and the defect computation, since the computation time for the interpolations between meshes is much smaller than that for the relaxations. On the other hand, the computation for the defect and the relaxation shares the same structure, which can be divided into two steps: scattering summation and sweeping transport [Fig. 1(b)]. As illustrated in Fig. 1(b), the order of computational complexity is O(Na²·Ns) for the scattering step and O(Na·Ns) for the sweeping step, where Na and Ns are the degrees of freedom in angle and space, respectively. The details are available in Ref. 11. Therefore, in the following sections, we consider parallel algorithms mainly for these two steps.

3. Parallelization of Scattering Summation

When solving RTE, the scattering integral in RTE is discretized as a summation with respect to the angular variables: the scattering contribution from all angles is summed for each spatial node along each angular direction. Overall, this scattering summation is the most computationally dominant step. The goal of its parallelization is to reduce the complexity so that the computational cost is linearly, rather than quadratically, proportional to the angular degrees of freedom. For example, the computational time then only doubles, rather than quadruples, when the angular degrees of freedom double.

This goal can be achieved on the GPU through standard GPU techniques using shared memory,13 since the scattering summation is essentially equivalent to the multiplication of the angular scattering weight matrix (1 in Fig. 2) and the photon flux matrix (2 in Fig. 2). That is, we perform the matrix multiplication on block submatrices of size BLOCK_SIZE×BLOCK_SIZE, so that the memory communication on the GPU device is minimized [Fig. 2(a)]. When the matrix size exceeds the device memory, since the spatial degrees of freedom Ns are often much larger than the angular degrees of freedom Na, it is natural to partition with respect to the spatial degrees of freedom and then sequentially send partial blocks of the Ns×Na photon flux matrix to the GPU device for parallel computation [Fig. 2(b)]. For convenience and efficiency, the matrix multiplication functions from the CUBLAS library16 are called in the implementation.

Fig. 2 Parallelization of scattering summation.
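As a sketch of this step, the following routine forms the scattering source with a cuBLAS matrix product, processing the spatial nodes in chunks when the full flux matrix does not fit in device memory. The angle-major column layout, the function name, and the omission of error checking are assumptions for illustration, not the authors' actual data structures.

```cpp
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <algorithm>

// h_K:   Na x Na angular scattering weight matrix (column-major, host)
// h_Phi: Na x Ns photon flux, one column per spatial node (column-major, host)
// h_S:   Na x Ns scattering source, overwritten on output
void scatterSum(const float* h_K, const float* h_Phi, float* h_S,
                int Na, int Ns, int chunk) {
  cublasHandle_t handle;
  cublasCreate(&handle);
  float *d_K, *d_Phi, *d_S;
  cudaMalloc(&d_K,   sizeof(float) * Na * Na);
  cudaMalloc(&d_Phi, sizeof(float) * Na * chunk);
  cudaMalloc(&d_S,   sizeof(float) * Na * chunk);
  cublasSetMatrix(Na, Na, sizeof(float), h_K, Na, d_K, Na);

  const float one = 1.0f, zero = 0.0f;
  for (int j0 = 0; j0 < Ns; j0 += chunk) {           // loop over spatial chunks
    const int n = std::min(chunk, Ns - j0);
    cublasSetMatrix(Na, n, sizeof(float), h_Phi + (size_t)Na * j0, Na, d_Phi, Na);
    // S(:, j0:j0+n) = K * Phi(:, j0:j0+n): O(Na^2 * n) work, fully parallel
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, Na, n, Na,
                &one, d_K, Na, d_Phi, Na, &zero, d_S, Na);
    cublasGetMatrix(Na, n, sizeof(float), d_S, Na, h_S + (size_t)Na * j0, Na);
  }
  cudaFree(d_K); cudaFree(d_Phi); cudaFree(d_S);
  cublasDestroy(handle);
}
```

Each cublasSgemm call here performs the O(Na²·chunk) portion of the scattering summation, and the loop over chunks corresponds to the partitioning of Fig. 2(b).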

Please note that, unlike the sweeping transport, the scattering summation is completely independent in both angle and space, and therefore fully parallelizable in theory, which allows the GPU device to reach its capacity in practice.

4. Parallelization of Sweeping Transport

After the scattering contribution is computed through the scattering summation in each relaxation or defect step, the photon flux or the solution residual is computed through “sweeping” over the spatial mesh for each angular direction. The details are available in Ref. 11.

Although the sweeping transport remains (almost) fully parallelizable in the angular variables (to be discussed below), the sweeping order can be a restriction for its parallelization with respect to the spatial variables. That is, along the sweeping characteristics, the transport computation should be performed in a certain order on the spatial mesh for transport efficiency. However, this is not essential when the medium is no longer transport-dominant, since the transport order along any angular direction is then no longer crucial due to the scattering.

When the sweeping ordering is respected, the spatial mesh is divided into different causality groups so that the nodes in each group are independent of the nodes in other groups. Therefore, the nodal sweeping across groups can be done in parallel [Fig. 3(a)]. For example, in Fig. 3(a), according to the flow direction, Triangles 1 to 5 form a causality group, Triangles 6 through 8 form a causality group, and Triangle 9 forms a causality group. During the sweeping transport, only the nodes within the same group depend on one another. For example, the computation in the group with Triangles 1 through 5 should proceed sequentially from Triangle 1 to Triangle 5. Therefore, in the parallelization, independent computations can be carried out simultaneously. That is, the parallelization can be done on the nodes with the same color, i.e., on Triangles 1, 6, and 9 first, Triangles 2 and 7 next, then Triangles 3 and 8, then Triangle 4, and last Triangle 5.

Fig. 3 Parallelization of sweeping transport.

However, as indicated by Fig. 3(a), the sizes of the parallelizable groups may vary dramatically, which is not ideal for parallelization. This strict compliance with the causality has to be compromised in the implementation. That is, instead of strictly following the causality, we use distance metrics to order the spatial nodes and parallelize over the spatial mesh accordingly [Fig. 3(b)], which provides uniform group sizes for parallelization while approximately conforming to the causality. In the example shown in Fig. 3(b), according to the flow direction, the nodes are divided into two groups by the distance function: Triangles 1, 2, 6, 7, and 9 form one parallelizable group; Triangles 3 to 5 and 8 form another. In this way, the efficiency of the parallelization is improved while the transport ordering is approximately maintained. However, since the spatial parallelization groups differ from angle to angle, the GPU computational speed is not optimal due to the resulting complexity in memory access.

Therefore, we also consider the sweeping transport without the sweeping ordering [Fig. 3(c)]. When the medium is diffusive or otherwise not in the transport regime, as mentioned above, the sweeping ordering is not essential. In this case, the parallelization is straightforward in space [Fig. 3(c)]: we simply parallelize over the spatial nodes according to their original ordering. That is, Triangles 1 to 5 form a parallelizable group for all angular directions, and Triangles 6 to 9 form a parallelizable group for all angular directions. Unlike in Fig. 3(a) and 3(b), the parallelization of the spatial nodes is the same for all angular directions in Fig. 3(c). In this way, the memory access is efficient and the GPU speed is optimal. However, when the medium is in the transport regime, the efficiency of the overall multigrid method could be affected, i.e., the number of multigrid cycles could increase due to the violation of the transport causality or the delay of the upwind flux.
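A minimal CUDA sketch of the spatial sweep for one angular direction is given below, with the parallelizable groups precomputed on the host. All names are hypothetical: localTransportSolve() is a placeholder for the local DG element solve of Ref. 11, and each groups[g] is assumed to be a device array of triangle indices, ordered either by the distance metric of Fig. 3(b) (per angle) or by the fixed natural ordering of Fig. 3(c) (shared by all angles).

```cuda
#include <cuda_runtime.h>

// Placeholder for the local element solve on triangle `tri` for direction
// `angle`, using the precomputed scattering source and the upwind fluxes of
// already-updated neighbors (details as in Ref. 11).
__device__ void localTransportSolve(int tri, int angle,
                                    const float* scatSrc, float* flux) {}

// All triangles in one parallelizable group are independent and are updated
// by a single kernel launch.
__global__ void sweepGroup(const int* groupTriangles, int groupSize, int angle,
                           const float* scatSrc, float* flux) {
  const int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < groupSize)
    localTransportSolve(groupTriangles[i], angle, scatSrc, flux);
}

// Host loop: groups are launched in their (approximate) causality order, so
// updates within a group run in parallel while the order across groups is
// preserved (kernels in the same stream execute in launch order).
void sweepOneAngle(int angle, int nGroups, const int* const* groups,
                   const int* groupSizes, const float* scatSrc, float* flux) {
  const int threads = 128;
  for (int g = 0; g < nGroups; ++g) {
    const int blocks = (groupSizes[g] + threads - 1) / threads;
    sweepGroup<<<blocks, threads>>>(groups[g], groupSizes[g], angle,
                                    scatSrc, flux);
  }
  cudaDeviceSynchronize();   // wait for the whole sweep of this direction
}
```

With the natural ordering of Fig. 3(c), the grouping is the same for every angle, which makes the memory access pattern uniform across directions at the cost of the causality discussed above.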

Next, we briefly comment on the angular parallelization of the sweeping transport step, which is fully parallelizable under the vacuum boundary condition. Under the reflection boundary condition, however, the mismatch of the refractive index at the boundary causes boundary scattering. That is, the boundary flux along the sweeping direction also depends on the boundary fluxes from other directions, so the sweeping transport is no longer independent among angles. Nevertheless, we can still precompute the boundary scattering contributions in parallel and then fully parallelize in angle. In this way, although the boundary scattering lags behind, it does not severely degrade the overall performance.

Again, when the problem size exceeds the device memory, the parallel computation can be performed on the GPU sequentially with respect to the angular variables, and even the spatial variables if necessary.

5. Parallelized Multigrid Solver of RTE

The parallelization of the interpolation steps is straightforward since the interpolation consists of component-wise operations that are totally independent from each other and therefore fully parallelizable.

There is a subtle algorithmic change due to the parallelization of the relaxation. Previously, the relaxation was equivalent to the Gauss-Seidel iterative method in both the angular and spatial variables. After the parallelization, the scattering step is separated from the transport step, and therefore the relaxation is equivalent to a mixed iterative method, which is like the Gauss-Seidel method in the spatial variables but like the Jacobi method in the angular variables. Due to this change, the number of multigrid iterations may increase in the diffusive regime, since the Jacobi method is not as efficient as the Gauss-Seidel method when solving elliptic-type differential equations with the multigrid method. However, the Jacobi method for RTE does allow the scattering summation to be computed separately from the sweeping transport, which can offer an order-of-magnitude gain in speed even on the CPU (see Tables 1 through 6).
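The difference can be made explicit with a small structural sketch; the helper names (scatteringSource, scatteringSourceAll, sweep) are hypothetical stand-ins for the routines of Secs. 3 and 4, not the actual code.

```cpp
#include <vector>

// Hypothetical helpers: per-angle scattering source, all-angle scattering
// source (the single matrix product of Sec. 3), and the spatial sweep of Sec. 4.
std::vector<float> scatteringSource(int a, const std::vector<std::vector<float>>& phi);
std::vector<std::vector<float>> scatteringSourceAll(const std::vector<std::vector<float>>& phi);
std::vector<float> sweep(int a, const std::vector<float>& src);

// Gauss-Seidel in angle: direction a immediately sees the directions already
// updated in this relaxation, so the angular loop is inherently sequential.
void relaxAngularGaussSeidel(int Na, std::vector<std::vector<float>>& phi) {
  for (int a = 0; a < Na; ++a) {
    std::vector<float> src = scatteringSource(a, phi);  // uses updated phi[a'] for a' < a
    phi[a] = sweep(a, src);
  }
}

// Jacobi in angle: the whole scattering source is formed once from the previous
// iterate, after which every direction can be swept independently in parallel.
void relaxAngularJacobi(int Na, std::vector<std::vector<float>>& phi) {
  const std::vector<std::vector<float>> src = scatteringSourceAll(phi);  // one GEMM
  for (int a = 0; a < Na; ++a)   // independent iterations: parallelized on the GPU
    phi[a] = sweep(a, src[a]);
}
```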

Table 1. Performance comparison of RTE solvers in 2-D when μa=0.01 and μs=0.1 (transport regime). In the results, the first number is the computational time (in seconds) of a multigrid iteration, and the second is the number of multigrid iterations.

B.C.          Vacuum boundary condition         Reflection boundary condition
Na            128       256       256           128       256       256
Ns            6144      6144      24,576        6144      6144      24,576
CPU           8.2/1     45/1      356/1         8.2/1     45/1      360/1
CPU (fast)    3.7/1     12/1      49/1          3.7/2     12/1      50/1
GPU           0.33/2    0.69/1    2.9/1         0.33/2    0.69/1    2.9/1
GPU (fast)    0.14/5    0.30/4    1.2/7         0.14/6    0.30/5    1.2/9

Table 2. Performance comparison of RTE solvers in 2-D when μa=0.01 and μs=1 (intermediate regime). In the results, the first number is the computational time (in seconds) of a multigrid iteration, and the second is the number of multigrid iterations.

B.C.          Vacuum boundary condition         Reflection boundary condition
Na            128       256       256           128       256       256
Ns            6144      6144      24,576        6144      6144      24,576
CPU           8.2/1     44/1      358/1         8.2/1     45/2      354/1
CPU (fast)    3.7/1     12/2      49/1          3.7/2     12/2      48/1
GPU           0.33/2    0.69/2    2.9/2         0.33/3    0.69/2    2.9/2
GPU (fast)    0.14/2    0.30/2    1.2/2         0.14/2    0.30/2    1.2/2

Table 3. Performance comparison of RTE solvers in 2-D when μa=0.01 and μs=10 (diffusive regime). In the results, the first number is the computational time (in seconds) of a multigrid iteration, and the second is the number of multigrid iterations.

B.C.          Vacuum boundary condition         Reflection boundary condition
Na            128       256       256           128       256       256
Ns            6144      6144      24,576        6144      6144      24,576
CPU           8.2/11    44/11     344/11        8.2/11    44/11     358/12
CPU (fast)    3.7/26    12/25     50/26         3.7/27    12/26     49/26
GPU           0.33/29   0.69/28   2.9/28        0.33/29   0.69/28   2.9/28
GPU (fast)    0.14/29   0.30/28   1.2/28        0.14/30   0.30/29   1.2/28

Table 4. Performance comparison of RTE solvers in 3-D when μa=0.01 and μs=0.1 (transport regime). In the results, the first number is the computational time (in seconds) of a multigrid iteration, and the second is the number of multigrid iterations.

B.C.          Vacuum boundary condition         Reflection boundary condition
Na            66        258       258           66        258       258
Ns            49,152    49,152    393,216       49,152    49,152    393,216
CPU           41/1      602/1     11673/1       41/1      655/1     11412/1
CPU (fast)    11/1      95/1      784/1         12/1      95/2      790/1
GPU           0.98/1    4.6/1     191/1 (a)     0.98/2    4.6/2     193/1 (a)
GPU (fast)    0.52/8    3.0/7     NA            0.54/11   2.9/13    NA

(a) When Na=258 and Ns=393,216, since the required memory exceeds the GPU memory, there are multiple memory passes between CPU and GPU and the GPU speed is degraded.

Table 5. Performance comparison of RTE solvers in 3-D when μa=0.01 and μs=1 (intermediate regime). In the results, the first number is the computational time (in seconds) of a multigrid iteration, and the second is the number of multigrid iterations.

B.C.          Vacuum boundary condition         Reflection boundary condition
Na            66        258       258           66        258       258
Ns            49,152    49,152    393,216       49,152    49,152    393,216
CPU           41/3      654/1     12048/1       41/3      658/2     11698/2
CPU (fast)    12/3      94/2      790/2         12/3      95/3      790/3
GPU           0.98/3    4.6/2     191/2 (a)     1.0/4     4.6/6     195/3 (a)
GPU (fast)    0.53/5    3.0/4     NA            0.54/7    2.9/9     NA

(a) When Na=258 and Ns=393,216, since the required memory exceeds the GPU memory, there are multiple memory passes between CPU and GPU and the GPU speed is degraded.

Table 6. Performance comparison of RTE solvers in 3-D when μa=0.01 and μs=10 (diffusive regime). In the results, the first number is the computational time (in seconds) of a multigrid iteration, and the second is the number of multigrid iterations.

B.C.          Vacuum boundary condition         Reflection boundary condition
Na            66        258       258           66        258       258
Ns            49,152    49,152    393,216       49,152    49,152    393,216
CPU           41/10     641/12    11544/9       41/12     595/15    11930/11
CPU (fast)    12/27     93/27     746/23        12/30     92/32     747/27
GPU           0.98/40   4.6/41    183/23 (a)    0.99/45   4.6/48    191/27 (a)
GPU (fast)    0.52/43   3.0/42    NA            0.53/49   2.9/50    NA

(a) When Na=258 and Ns=393,216, since the required memory exceeds the GPU memory, there are multiple memory passes between CPU and GPU and the GPU speed is degraded.

Overall, with the above parallelized relaxation, defect, and interpolation steps, the efficiency of the multigrid method is roughly maintained, i.e., the number of multigrid iterations stays at roughly the same level despite some performance variations in different regimes.

6. Results

We consider a two-dimensional (2-D) circular domain centered at the origin with a diameter of 60 mm. An isotropic point source is placed at (27 mm, 0) with a modulation frequency of 100 MHz. The Henyey-Greenstein phase function with the anisotropy factor g=0.9 is employed as the scattering kernel. The refractive index of the environment is set to 1; the refractive index of the medium is set to 1 under the vacuum boundary condition and 1.33 under the reflection boundary condition. A homogeneous background is assumed with the absorption coefficient μa and the scattering coefficient μs. Typical scattering-to-absorption ratios are selected to represent the transport regime (μa=0.01 and μs=0.1; 0.6 reduced optical depths), the intermediate regime (μa=0.01 and μs=1; 6 reduced optical depths), and the diffusive regime (μa=0.01 and μs=10; 60 reduced optical depths) in Tables 1 through 3, respectively.
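For reference, the quoted reduced optical depths follow from the reduced scattering coefficient and the domain diameter, assuming the optical coefficients are given in mm^-1 (units are not stated explicitly here):

```latex
\mu_s' = (1-g)\,\mu_s, \qquad \tau' = \mu_s'\, d,
\qquad \text{e.g.,}\quad \mu_s = 0.1~\mathrm{mm^{-1}},\ g = 0.9,\ d = 60~\mathrm{mm}
\;\Rightarrow\; \tau' = 0.01 \times 60 = 0.6 .
```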

We also consider a 3-D cubic domain centered at (20 mm, 20 mm, 20 mm) with a side length of 40 mm. The settings and parameters in 3-D are similar to those in 2-D, with an isotropic point source at (2 mm, 2 mm, 2 mm). Again, typical scattering-to-absorption ratios are selected to represent the transport regime (μa=0.01 and μs=0.1; 0.6 reduced optical depths), the intermediate regime (μa=0.01 and μs=1; 6 reduced optical depths), and the diffusive regime (μa=0.01 and μs=10; 60 reduced optical depths) in Tables 4 to 6, respectively.

In the result tables, Na (Ns) denotes the angular (spatial) degrees of freedom; CPU denotes the prior multigrid RTE solver; CPU (fast) denotes the fast multigrid RTE solver, with the scattering summation accelerated via the matrix multiplication of block submatrices [Fig. 2(a)]; GPU denotes the parallelized multigrid RTE solver with the transport spatial ordering [Fig. 3(b)]; GPU (fast) denotes the parallelized multigrid RTE solver with the natural spatial ordering [Fig. 3(c)]. The same stopping criterion based on the residual11 is used to ensure that the solutions from the various solvers have the same order of accuracy.

The above results are based on an NVIDIA Tesla C2070 (5.25 GB device memory) in a desktop with an Intel Xeon E5620 CPU at 2.40 GHz, and the solvers are compiled with the x64 compiler from Microsoft Visual Studio in single precision.

7. Discussions and Conclusions

In terms of the speed gain for a multigrid iteration, it is clear from Tables 1 to 6 that the parallelized solvers [GPU and GPU (fast)] have a roughly linear dependence on the angular degrees of freedom Na, while the prior multigrid solver (CPU) has a roughly quadratic dependence on Na; the performance of the fast multigrid solver [CPU (fast)] lies in between. As a result, the computational speed for a multigrid iteration is significantly improved through the parallelization on the GPU. For example, the achieved gain in speed from CPU to GPU (fast) is 60- to 300-fold in 2-D (Tables 1 to 3) and 80- to 200-fold in 3-D (Tables 4 to 6). This linear dependence on Na is crucial, since the computational time can be drastically reduced when a significant number of angular directions is required in order to accurately simulate optically thin media, the near-source field, the transport regime, or strongly forward-peaked scattering. This is particularly essential for the 3-D RTE computation, since the angular space in 3-D is two-dimensional, which means a regular refinement of the angular mesh increases Na by a factor of four, compared with a factor of two for 2-D RTE.

Regarding the number of multigrid iterations, the effect of the parallelization on the multigrid method varies under different RTE regimes. In the transport regime (Tables 1 and 4), the violation of the transport causality causes an increase in the number of iterations for GPU (fast), and therefore the parallelized solver GPU with the transport ordering [Fig. 3(b)] is recommended. In both the intermediate regime (Tables 2 and 5) and the diffusive regime (Tables 3 and 6), GPU (fast) is recommended, since the natural spatial ordering [Fig. 3(c)] is sufficient in the presence of the strong scattering.

One aspect of our future work will be the cases where the problem size exceeds the GPU device memory. As shown in the given 3-D examples (Na=258 and Ns=393,216), the speed gain is limited by the multiple passes from the host memory to the device memory. A possible solution is multi-GPU parallelization. On the other hand, the current GPU solver will allow us to efficiently study a few RTE-based inverse problems,17–20 so that we can assess the potential gain in imaging accuracy due to RTE. In addition, we will also write user documentation and develop user-friendly interfaces to share this solver with the community.

In summary, based on our prior multigrid solver of RTE, we have developed a parallelized RTE solver with considerably improved speed. This parallel solver has linear computational complexity with respect to both the angular and spatial degrees of freedom. As a result of the parallelization, the overall gain in speed is roughly 30- to 300-fold, depending on the underlying regime and the parallelization. The solver handles heterogeneous media with complex geometry based on 2-D triangular meshes or 3-D tetrahedral meshes. The MATLAB codes are available at https://sites.google.com/site/rtefastsolver/.

Acknowledgments

This research is supported by NIH/NIBIB (R21 EB013387).

References

1. J. Spanier and E. Gelbard, Monte Carlo Principles and Neutron Transport Problems, Addison-Wesley, Boston, MA (1969).
2. E. E. Lewis and W. F. Miller, Computational Methods of Neutron Transport, Wiley-Interscience, New York (1984).
3. K. M. Case and P. F. Zweifel, Linear Transport Theory, Addison-Wesley, Boston, MA (1967).
4. A. D. Klose et al., "Optical tomography using the time-independent equation of radiative transfer, Part 1: forward model," J. Quant. Spectrosc. Radiat. Transfer 72(5), 691–713 (2002). http://dx.doi.org/10.1016/S0022-4073(01)00150-9
5. K. Ren et al., "Algorithm for solving the equation of radiative transfer in the frequency domain," Opt. Lett. 29(6), 578–580 (2004). http://dx.doi.org/10.1364/OL.29.000578
6. T. Tarvainen et al., "Hybrid radiative-transfer-diffusion model for optical tomography," Appl. Opt. 44(6), 876–886 (2005). http://dx.doi.org/10.1364/AO.44.000876
7. A. Joshi et al., "Radiative transport-based frequency-domain fluorescence tomography," Phys. Med. Biol. 53(8), 2069–2088 (2008). http://dx.doi.org/10.1088/0031-9155/53/8/005
8. H. Gao and H. Zhao, "Multilevel bioluminescence tomography based on radiative transfer equation Part 1: l1 regularization," Opt. Express 18(3), 1854–1871 (2010). http://dx.doi.org/10.1364/OE.18.001854
9. H. Gao and H. Zhao, "Multilevel bioluminescence tomography based on radiative transfer equation Part 2: total variation and l1 data fidelity," Opt. Express 18(3), 2894–2912 (2010). http://dx.doi.org/10.1364/OE.18.002894
10. M. L. Adams and E. W. Larsen, "Fast iterative methods for discrete-ordinates particle transport calculations," Prog. Nucl. Energy 40(1), 3–159 (2002). http://dx.doi.org/10.1016/S0149-1970(01)00023-3
11. H. Gao and H. Zhao, "A fast forward solver of radiative transfer equation," Transport Theor. Stat. Phys. 38(3), 149–192 (2009). http://dx.doi.org/10.1080/00411450903187722
12. H. Gao and H. Zhao, "Analysis of a numerical solver for radiative transport equation," Math. Comput. (2012).

13. NVIDIA, CUDA C Programming Guide, NVIDIA Corporation (2011).

14. K. Peng et al., "Graphics processing unit parallel accelerated solution of the discrete ordinates for photon transport in biological tissues," Appl. Opt. 50(21), 3808–3823 (2011). http://dx.doi.org/10.1364/AO.50.003808
15. B. Huang et al., "Development of a GPU-based high-performance radiative transfer model for the infrared atmospheric sounding interferometer (IASI)," J. Comput. Phys. 230(6), 2207–2221 (2011). http://dx.doi.org/10.1016/j.jcp.2010.09.011

16. NVIDIA, CUBLAS Library, NVIDIA Corporation (2011).

17. Y. Lin et al., "Quantitative fluorescence tomography using a tri-modality system: in vivo validation," J. Biomed. Opt. 15(4), 040503 (2010). http://dx.doi.org/10.1117/1.3467495
18. Y. Lin et al., "Photo-multiplier tube based hybrid MRI and frequency domain fluorescence tomography system for small animal imaging," Phys. Med. Biol. 56(15), 4731–4747 (2011). http://dx.doi.org/10.1088/0031-9155/56/15/007
19. Y. Lin et al., "Tumor characterization in small animals using magnetic resonance-guided dynamic contrast enhanced diffuse optical tomography," J. Biomed. Opt. 16(10), 106015 (2011). http://dx.doi.org/10.1117/1.3643342
20. H. Gao, S. Osher, and H. Zhao, "Quantitative photoacoustic tomography," in Mathematical Modeling in Biomedical Imaging II: Optical, Ultrasound, and Opto-Acoustic Tomographies, Lecture Notes in Mathematics, pp. 131–158, Springer, New York (2012).