Open Access
26 July 2017 Multiview Hilbert transformation in full-ring transducer array-based photoacoustic computed tomography
Abstract
Based on the photoacoustic (PA) effect, PA tomography directly measures specific optical absorption, i.e., absorbed optical energy per unit volume. We recently developed a full-ring ultrasonic transducer array-based photoacoustic computed tomography (PACT) system for small-animal whole-body imaging. The system has a full-view detection angle and high in-plane resolution (∼100  μm). However, due to the bandpass frequency response of the piezoelectric transducer elements and the limited elevational detection coverage of the full-ring transducer array, the reconstructed images present bipolar (i.e., both positive and negative) pixel values, which cause ambiguities in image interpretation for physicians and biologists. We propose a multiview Hilbert transformation method to recover the unipolar initial pressure for full-ring PACT. The effectiveness of the proposed algorithm was first validated by numerical simulations and then demonstrated with ex vivo mouse brain structural imaging and in vivo mouse whole-body imaging.

1. Introduction

Photoacoustic tomography (PAT), an emerging biomedical imaging technique, performs deep tissue imaging with very high optical absorption sensitivity.1 Generally, biological tissue highly scatters light. Multiple scattering events cause photons to deviate from their original propagation direction, which impedes high-resolution optical imaging in deep tissue. By converting highly scattered photons into ultrasonic waves, which are about three orders of magnitude less scattered than light, PAT can break the optical diffusion limit (about 1 mm deep in biological tissue) and form high-resolution images of the object's optical properties at depth.2 PAT has two major incarnations: focused-scanning-based photoacoustic microscopy (PAM) and reconstruction-based photoacoustic computed tomography (PACT).3 Transducer arrays with multiple elements are widely used for PACT, greatly improving the imaging speed over that achieved by scanning with a single-element transducer.4–6 Although linear arrays are widely used due to their advantageous hand-held operation,7–10 they provide only a limited acoustic view,11 which reduces the image fidelity of PACT.12,13 In contrast, curved transducer arrays, especially full-ring transducer arrays, have much broader acoustic detection coverage. Since full-ring transducer array-based PACT (FR-PACT) has full-view in-plane coverage, it eliminates the limited-view problem in the imaging plane and provides small-animal brain and body images with detailed structures and two-dimensional (2-D) full-view fidelity.14–16

However, all transducer elements have limited bandwidth with low-frequency cutoffs.17 Although the full-ring geometry provides 2π in-plane detection coverage, the elevational acceptance is relatively small, determined by the acoustic aperture [normally, the acoustic numerical aperture (NA) is about 0.2]. Consequently, the reconstructed images present bipolar (i.e., both positive and negative) pixel values. However, PAT ideally should image the optical energy deposition (which is nonnegative); thus, bipolar values are artificial and cause ambiguities in interpreting images. For example, both positive and negative peaks indicate high optical absorption, which is counter-intuitive for biologists and physicians seeking to understand the image. Moreover, bipolar pixel values pose difficulties in quantifying physiological parameters, such as mapping the distribution of blood oxygen saturation (sO2) and the metabolic rate of oxygen (MRO2). To mitigate the bipolarity, multiple solutions have been reported. One solution is to keep only the positive values and threshold the negative values to zero, but this removes useful structures and induces artifacts.18–20 A second solution is to deconvolve the raw channel data with the corresponding transducer element's electrical impulse response to retrieve broadband photoacoustic signals. However, in addition to relying on a high signal-to-noise ratio (SNR), deconvolution does not solve the limited elevational acoustic coverage issue and, thus, cannot provide unipolar reconstructed images for FR-PACT. A third solution is to employ iteration-based image reconstruction with a nonnegativity constraint,21–23 although this requires accurate modeling of the imaging system and time-consuming computation. In addition to these three methods, Hilbert transformation is widely used in PAM and linear array-based PACT to address bipolarity and extract envelope information,12,24–26 and it has been proven to be simple and computationally efficient.
When applying Hilbert transformation to a 2-D matrix, it is critically important to select the correct transformation direction,27 normally the acoustic propagation direction for photoacoustic (PA) image processing. Otherwise, unpredictable artifacts may result. In focused-transducer-based PAM, Hilbert transformation is usually taken along the A-line direction (the acoustic receiving direction), and then the absolute value of the transformed A-line is taken to produce its envelope.26,28,29 In linear-array-based PACT, a common choice for the Hilbert transformation direction is the array receiving direction (the depth direction) in the reconstructed images; taking the absolute value then extracts the envelope information of the images.12,30 However, in FR-PACT, the acoustic signals are received from all directions, covering an angle of 2π in plane, which makes determining the transformation direction difficult. To address this issue, we explored multiple ways of implementing Hilbert transformation in FR-PACT (see the Appendix). Finally, we propose a multiview Hilbert transformation (MVHT) that satisfactorily corrects the bipolarity in FR-PACT, with minimal artifacts in the reconstructed images and maximal image contrast.
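As a minimal sketch of the directional envelope extraction described above (illustrative code, not from the original work; `scipy.signal.hilbert` computes the analytic signal along a chosen axis, and its magnitude gives the envelope):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_along_axis(img, axis):
    """Envelope of a 2-D image: Hilbert transform along `axis`
    (the acoustic receiving direction), then the magnitude."""
    return np.abs(hilbert(img, axis=axis))

# Toy example: a bipolar band-limited wavelet replicated over 8 A-lines.
t = np.linspace(-1.0, 1.0, 256)
wavelet = np.sin(2 * np.pi * 10 * t) * np.exp(-t**2 / 0.02)  # bipolar
img = np.tile(wavelet, (8, 1))

env = envelope_along_axis(img, axis=1)  # transform along the depth axis
```

Because the analytic signal's magnitude always bounds the real signal from above, the resulting envelope is unipolar, which is exactly the property exploited throughout this paper.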

2. Materials and Methods

Here, we present an MVHT algorithm for FR-PACT that converts the reconstructed bipolar images to unipolar images representing the initial pressures. Figure 1(a) shows the setup of the FR-PACT system for small-animal whole-body imaging, which has been reported earlier.4,31–33 A ring-shaped laser beam (750-nm wavelength, 10-Hz pulse repetition rate) is used for whole-body illumination. The maximum light fluence on the skin of the animal is 8 mJ/cm2, well below the American National Standards Institute safety limit. The PA signals are detected by a full-ring transducer array (Imasonic; 5-cm diameter, 512 elements, 5-MHz central frequency, and >80% one-way bandwidth). Each element (10-mm height, 0.3-mm pitch, and 0.1-mm interelement spacing) is cylindrically focused to produce an elevational focal distance of 19.8 mm (acoustic NA, 0.25). The data acquisition system has 64 channels with eightfold multiplexing. Previously, several inversion methods have been proposed to reconstruct images for the full-ring geometry.34–36 Figure 1(b) shows representative bipolar images acquired by FR-PACT using the conventional universal back-projection (UBP) reconstruction.34

Fig. 1

(a) Setup of the FR-PACT system, (b) representative bipolar images acquired using the FR-PACT system, and (c) schematic of the MVHT in FR-PACT. DAQ, data acquisition system; FOV, field of view. The local coordinates are aligned with the global coordinates at the first view angle (i.e., αi=0).


A schematic of the image reconstruction is shown in Fig. 1(c), illustrating the process of implementing MVHT in FR-PACT. We first select a group of neighboring elements at one view angle i, all of which can be regarded as having approximately the same acoustic receiving direction due to the small angular coverage of those elements. Then, the channel data from the selected elements are used for reconstruction. Elements on the opposite side of the ring, sharing the same acoustic receiving axis, are also included because they contribute constructively to the reconstruction. Two coordinate systems, sharing the same origin at the center of the ring, are used here: the local coordinates rl(i)=[xl(i), yl(i)], attached to the locally selected elements and rotating with the view angle i, and the global coordinates r=(x, y), attached to the full-ring array. The raw channel data from the selected neighboring elements are used for local image reconstruction in the local coordinates, using the UBP algorithm

Eq. (1)

$$p_0^{(i)}\!\left[\mathbf{r}_l^{(i)}\right]=\int_{\Omega_0}\frac{d\Omega_0}{\Omega_0}\left\{b\!\left[\mathbf{r}_l^{\prime(i)},t\right]\right\}_{t=\left|\mathbf{r}_l^{(i)}-\mathbf{r}_l^{\prime(i)}\right|/c},$$

where $b[\mathbf{r}_l^{\prime(i)},t]=2p[\mathbf{r}_l^{\prime(i)},t]-2t\,\partial p[\mathbf{r}_l^{\prime(i)},t]/\partial t$, $p[\mathbf{r}_l^{\prime(i)},t]$ is the acoustic pressure detected at time $t$ by the selected element located at $\mathbf{r}_l^{\prime(i)}$, and $c$ is the speed of sound in tissue. $d\Omega_0$ is the solid angle subtended by one element with respect to the point at $\mathbf{r}_l^{(i)}$, $\Omega_0$ is the total solid angle subtended by the detection aperture of the selected elements with respect to the point at $\mathbf{r}_l^{(i)}$, and $d\Omega_0/\Omega_0$ is a weighting factor quantifying the contribution from the element located at $\mathbf{r}_l^{\prime(i)}$. $p_0^{(i)}[\mathbf{r}_l^{(i)}]$ is the locally reconstructed initial PA pressure at $\mathbf{r}_l^{(i)}$, which is bipolar due to the limited detection bandwidth and acoustic receiving aperture. Next, we take the Hilbert transformation of the reconstructed image in the local coordinates along the acoustic receiving direction [the red arrows in Fig. 1(c)] and then take the absolute value of the transformed image. Rotating the processed image by an angle of $\alpha_i$ transfers it to the global coordinates.
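For illustration, the single-view back-projection of Eq. (1) can be sketched numerically as follows. This is not the authors' implementation: the function name, the uniform solid-angle weighting, and the nearest-sample delay lookup are simplifying assumptions. The demo feeds in the bipolar N-shaped pulse characteristic of a point absorber, so the reconstruction peaks at the source location.

```python
import numpy as np

def ubp_local(channel_data, elem_pos, grid_x, grid_y, fs, c=1500.0):
    """Simplified 2-D universal back-projection, following Eq. (1).
    channel_data: (n_elem, n_t) pressure traces from selected elements;
    elem_pos: (n_elem, 2) element coordinates [m]; fs: sampling rate
    [Hz]. Solid-angle weights are assumed uniform in this sketch."""
    n_elem, n_t = channel_data.shape
    t = np.arange(n_t) / fs
    # Back-projection term b(t) = 2 p(t) - 2 t dp/dt
    dpdt = np.gradient(channel_data, 1.0 / fs, axis=1)
    b = 2.0 * channel_data - 2.0 * t * dpdt
    img = np.zeros(grid_x.shape)
    for k in range(n_elem):
        # Time of flight from each pixel to element k, as a sample index
        d = np.hypot(grid_x - elem_pos[k, 0], grid_y - elem_pos[k, 1])
        idx = np.clip(np.round(d / c * fs).astype(int), 0, n_t - 1)
        img += b[k, idx]
    return img / n_elem

# Demo: a point source at the origin seen by 64 elements on a 25-mm-radius
# ring, emitting a bipolar N-shaped pulse delayed by 25 mm / c.
fs, c = 20e6, 1500.0
n_elem, n_t = 64, 700
ang = np.linspace(0.0, 2.0 * np.pi, n_elem, endpoint=False)
elem_pos = 0.025 * np.column_stack([np.cos(ang), np.sin(ang)])
u = (np.arange(n_t) - 333.0) / 5.0
data = np.tile(-u * np.exp(-u**2 / 2.0), (n_elem, 1))
xs = np.linspace(-0.005, 0.005, 21)
gx, gy = np.meshgrid(xs, xs)
img = ubp_local(data, elem_pos, gx, gy, fs, c)  # peaks near the center
```

Even in this idealized sketch, the output image carries both positive and negative pixel values, which is the bipolarity that the MVHT step below removes.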

By repeating the above procedures for all view angles and pixel-wise averaging over all the images, the unipolar full-view image is obtained. The whole process can be mathematically expressed as

Eq. (2)

$$p_0^{+}(\mathbf{r})=\frac{1}{N}\sum_{i=1}^{N}R_{\alpha_i}\!\left(A\left\{H\!\left[p_0^{(i)}\!\left(\mathbf{r}_l^{(i)}\right)\right]\right\}\right),$$
where p0+(r) is the final full-view unipolar image with nonnegative pixel values, N is the total number of view angles, H is the Hilbert transformation operator, which takes the Hilbert transformation of the partially reconstructed image along the acoustic receiving direction, and A is the absolute value operator, which takes the absolute value of the Hilbert transformed image. The combination of the H and A operators extracts the envelope image from the partially reconstructed image. Rαi is the rotation operator, which rotates the envelope image by an angle of αi to align it in the global coordinates. The final image p0+(r) is the average of all aligned envelope images.
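The full pipeline of Eq. (2), applying H along the receiving direction, taking the absolute value A, rotating by alpha_i, and averaging, can be sketched as follows (illustrative only; `scipy.ndimage.rotate` stands in for the rotation operator, and the partial reconstructions are assumed to be given with the receiving direction along axis 0):

```python
import numpy as np
from scipy.signal import hilbert
from scipy.ndimage import rotate

def mvht(single_view_images, view_angles_deg):
    """Multiview Hilbert transformation, following Eq. (2): for each
    view, apply H along the receiving direction (axis 0 here), take the
    absolute value A, rotate by alpha_i into the global frame, and
    average pixel-wise over all views."""
    acc = np.zeros_like(single_view_images[0], dtype=float)
    for img, alpha in zip(single_view_images, view_angles_deg):
        env = np.abs(hilbert(img, axis=0))                 # A{H[...]}
        acc += rotate(env, alpha, reshape=False, order=1)  # R_alpha_i
    return acc / len(single_view_images)

# Demo with 12 views spaced 15 deg apart, as in the experiments.
rng = np.random.default_rng(0)
views = [rng.standard_normal((64, 64)) for _ in range(12)]
unipolar = mvht(views, [15.0 * i for i in range(12)])
```

Because each per-view envelope is nonnegative and bilinear rotation preserves nonnegativity, the averaged output is unipolar by construction.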

An important parameter of this method is the number of elements to be bundled in one single-view group. Interestingly, we found that the optimal number of elements at each view is related to the reconstructed field of view (FOV). In our experiment, the diameter of the full-ring array is 50 mm. When the FOV is 16 mm in diameter, the corresponding angle θ [Fig. 1(c)] is about 37 deg. Thus, the number of elements that fall into the range of the red arc (one side) should be 54, given that the total number of elements of full-ring array is 512. We rotated 12 views (N=12) with a step size of 15 deg to complete the full-view reconstruction. To validate our prediction, we conducted a numerical simulation using a leaf skeleton as the object, as shown in Fig. 2(a). The conventional UBP reconstructed image with bipolar pixel values is shown in Fig. 2(b). The forward data of 2-D wave propagation were generated using the k-Wave toolbox.37 In the simulation, the central frequency of the simulated detector was set to 5 MHz, with 100% bandwidth. We employed 512 ideal point detectors to form a ring shape with a radius of 2.5 cm.
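The element-count estimate above can be reproduced with a short calculation. The helper below is hypothetical (not from the paper); it reads theta as the full angle subtended by the FOV as seen from one ring element, which matches the quoted ~37 deg, and its rounding gives 53 elements, consistent with the paper's 54 to within rounding of theta.

```python
import numpy as np

def elements_per_view(fov_radius_mm, ring_radius_mm, n_elements):
    """Hypothetical helper: theta is the full angle subtended by the
    FOV as seen from one ring element; the element count is the
    matching fraction of the full ring."""
    theta = 2.0 * np.degrees(np.arcsin(fov_radius_mm / ring_radius_mm))
    return theta, int(round(n_elements * theta / 360.0))

theta, n = elements_per_view(8.0, 25.0, 512)
# theta is about 37.3 deg; n comes out to 53, close to the 54 quoted
# in the text (the difference is rounding of theta).
```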

Fig. 2

(a) Optical absorption map of a leaf skeleton object, (b) reconstructed image using the conventional UBP method, (c) reconstructed image using 27 elements at each view, (d) reconstructed image using 54 elements at each view, and (e) line profiles corresponding to the red solid lines in (a), (c), and (d).


As shown in Figs. 2(c) and 2(d), the reconstructed image using 54 elements at each view has higher signal amplitude and better contrast [Fig. 2(e)] than the reconstructed image using 27 elements at each view. However, when further increasing the number of elements at each view to 108 and 216, artifacts appear in the reconstructed images, as shown in Fig. 3. When the number of elements at each view exceeds the optimum number, the acoustic receiving angle becomes so large that the direction of the Hilbert transformation is no longer aligned with the signal-receiving direction. We also note that the reconstructed images [Figs. 3(b)–3(d)] have similar signal amplitudes, which means that increasing the number of elements beyond 54 does not provide a higher SNR. Therefore, given an FOV of 16 mm in diameter, the optimal number of elements in each view should be 54, the same as predicted above.

Fig. 3

(a) Optical absorption map of a leaf skeleton object, (b)–(d) reconstructed images using 68 elements, 136 elements, and 256 elements at each view, respectively, (e) line profiles corresponding to the red solid lines in (a)–(d).


3. Results

We first quantified the in-plane image resolution of the FR-PACT system using MVHT for reconstruction. Microspheres 10 μm in diameter were embedded in agar gel (3% mass concentration, dissolved in deionized water) and imaged by FR-PACT. The MVHT reconstructed image of one microsphere is shown in Fig. 4(a), and the full width at half maximum after Gaussian fitting was calculated to be about 148 μm [Fig. 4(b)]. The in-plane resolution of the UBP reconstructed bipolar images is 100 μm [Figs. 4(c)–4(e)]. The in-plane resolution of the MVHT reconstructed unipolar images is slightly worse than that of the bipolar images because the envelope extraction process acts as a low-pass filter in the spatial frequency domain.
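As a sketch of the Gaussian-fit FWHM estimate used here (illustrative code, not the authors' analysis; a parabola fit to the log of the profile stands in for a full nonlinear Gaussian fit):

```python
import numpy as np

FWHM_FACTOR = 2.0 * np.sqrt(2.0 * np.log(2.0))  # ~2.355 for a Gaussian

def fwhm_from_profile(x, y):
    """Estimate the FWHM of a roughly Gaussian line profile by fitting
    log(y) near the peak with a parabola (a simple stand-in for a full
    Gaussian fit)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    keep = y > 0.2 * y.max()                  # fit only near the peak
    a, _, _ = np.polyfit(x[keep], np.log(y[keep]), 2)
    sigma = np.sqrt(-1.0 / (2.0 * a))         # log y = -x^2/(2 sigma^2) + const
    return FWHM_FACTOR * sigma

# Demo: a synthetic profile whose sigma is chosen so the FWHM is 148 (microns).
x = np.linspace(-500.0, 500.0, 201)
y = np.exp(-x**2 / (2.0 * (148.0 / FWHM_FACTOR) ** 2))
fwhm = fwhm_from_profile(x, y)
```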

Fig. 4

(a) MVHT reconstructed image of a 10-μm-diameter microsphere. (b) Line profile of the dashed green line in (a) and its Gaussian fit, showing that the full width at half maximum is 148 μm. (c) Conventional UBP reconstructed bipolar image of a 10-μm-diameter microsphere. (d) Line profile of the dashed green line in (c). (e) Contrast-to-noise ratio (CNR) of the sum of the line profile in (d) and a shifted copy of itself, plotted versus the shift. The in-plane resolution, defined as the shift corresponding to a 6-dB CNR, is 100 μm.


To further demonstrate the effectiveness of the MVHT method, we imaged both a mouse brain ex vivo and a mouse trunk in vivo. All experimental procedures were carried out in conformity with laboratory animal protocols approved by the Animal Studies Committees of Washington University in St. Louis.

A saline-perfused mouse brain was first embedded in agar gel and then imaged by FR-PACT with 620-nm illumination. The conventional UBP reconstructed bipolar image is shown in Fig. 5(a), while the MVHT reconstructed image is shown in Fig. 5(b). Features of the two images match well with each other. A magnetic resonance microscopy image of a similar mouse brain with its structural segmentation superimposed as colored lines is shown in Fig. 5(c), to serve as a reference for validation of our results.

Fig. 5

(a) Bipolar image of a saline-perfused mouse brain. (b) MVHT reconstructed unipolar image of the saline-perfused mouse brain. OB, olfactory bulb; NC, neocortex; VT, ventricles; HC, hippocampus; GPD, globus pallidus; and CWM, cerebellum white matter. (c) Another mouse brain image from magnetic resonance microscopy, with its structural segmentation superimposed as colored lines, chosen as a reference for validation of PACT imaging38 (Courtesy of Frontiers in Neuroscience).


We also demonstrated the performance of the MVHT reconstruction method by imaging in vivo the trunk of an 8-week-old nude mouse (Hsd:Athymic Nude-Foxn1nu, Harlan Co., 20- to 30-g body weight). Ring-shaped side illumination at 750 nm was used for excitation. In the conventional UBP reconstructed bipolar image shown in Fig. 6(a), most of the internal organs, such as the two kidneys, spleen, gastrointestinal tract, and spinal cord, are resolved. The MVHT reconstructed unipolar image [Fig. 6(b)] maintains all of those features.

Fig. 6

(a) Cross-sectional image of a mouse lower abdominal cavity with bipolar values and (b) MVHT reconstructed unipolar image using the same data as for (a). SC, spinal cord; KD, kidney; SP, spleen; and GI, gastrointestinal tract.


4. Discussion

Due to limited acoustic bandwidth and elevational acceptance of the full-ring transducer array, the reconstructed images using the UBP method contain both positive and negative values. It is counter-intuitive for physicians and biologists to interpret such images because both positive and negative peaks represent strong optical absorption. PAT should ideally report the initial pressure or optical energy deposition, proportional to the product of the local fluence and optical absorption. Thus, the pixel values in perfectly reconstructed PAT images should be nonnegative. To solve the bipolarity problem in UBP reconstructed images, we proposed the MVHT method for a full-ring geometry PACT system. MVHT reconstruction successfully recovers the unipolar initial pressure with an in-plane resolution of 148 μm, which is slightly worse than that of the UBP reconstruction due to the spatial frequency low-pass nature of the Hilbert transformation. We also optimized the number of elements for each single-view reconstruction to get the best SNR without inducing artifacts. The performance of the MVHT method was demonstrated by numerical simulation, ex vivo imaging of a mouse brain, and in vivo whole-body imaging. MVHT provides a computationally efficient way to recover the unipolar initial pressure map from bandwidth- and elevational-acoustic-coverage-limited PA measurements.

Appendix

Here, we compare and analyze three variant approaches to applying Hilbert transformation for full-ring geometry-based PACT, as illustrated in Figs. 7(a)–7(c). The first variant [Fig. 7(a)] is to directly apply Hilbert transformation to b[rl(i),t] [in Eq. (1)] and take the envelope and then reconstruct the image using the enveloped data. The second variant [Fig. 7(b)] is to select a group of neighboring elements (54 elements for each view angle) for reconstruction, take the Hilbert transformation only along the centerline of the reconstructed image, and repeat this procedure for all of the angles to complete the reconstruction. The third variant [Fig. 7(c)] is the method presented in this paper. A simple object, as shown in Fig. 7(d), was the input for the numerical simulation. The forward process was simulated using the k-Wave toolbox. The reconstructed images of the three variants are shown in Figs. 7(e)–7(g). The first and second variants result in obvious reconstruction artifacts [Figs. 7(e) and 7(f)], but the proposed MVHT method successfully recovers the input without inducing artifacts [Fig. 7(g)]. The first method creates image artifacts because directly enveloping the channel data removes the phase information of the detected acoustic signal. The second method envelopes only the centerlines of the partially reconstructed images, which loses most of the useful information and, thus, induces reconstruction artifacts.
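The phase-loss argument against the first variant can be illustrated with a short numerical check (illustrative, not from the paper): the sum of envelopes is not the envelope of the sum, so enveloping channel data before back-projection discards the sign information that coherent summation relies on.

```python
import numpy as np
from scipy.signal import hilbert

# Two bipolar wavelets that cancel when summed coherently.
t = np.linspace(-1.0, 1.0, 512)
s1 = np.sin(2 * np.pi * 8 * t) * np.exp(-t**2 / 0.05)
s2 = -s1  # opposite polarity (a pi phase shift)

coherent = np.abs(hilbert(s1 + s2))                     # envelope last
incoherent = np.abs(hilbert(s1)) + np.abs(hilbert(s2))  # envelopes first
# `coherent` is identically zero, while `incoherent` is not: enveloping
# the channel data first destroys the destructive interference needed
# for artifact-free reconstruction.
```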

Fig. 7

(a) Schematic of the first variant discussed in the Appendix, (b) schematic of the second variant discussed in the Appendix, (c) schematic of the proposed MVHT reconstruction method, (d) preset optical absorption map of the object for simulation, (e) reconstructed image using the first variant, (f) reconstructed image using the second variant, and (g) reconstructed image using the proposed MVHT reconstruction method.


Disclosures

L.V.W. has financial interests in Microphotoacoustics, Inc., which, however, did not support this work.

Acknowledgments

The authors appreciate the close reading of the paper by Professor James Ballard. We also thank Guo Li and Jun Xia for technical support and helpful discussions. This work was sponsored by the National Institutes of Health Grants DP1 EB016986 (NIH Director’s Pioneer Award), R01 CA186567 (NIH Director’s Transformative Research Award), R01 EB016963, U01 NS090579 (NIH BRAIN Initiative), and U01 NS099717 (NIH BRAIN Initiative).

References

1. L. V. Wang and J. Yao, "A practical guide to photoacoustic tomography in the life sciences," Nat. Methods 13, 627–638 (2016). http://dx.doi.org/10.1038/nmeth.3925

2. L. H. V. Wang and S. Hu, "Photoacoustic tomography: in vivo imaging from organelles to organs," Science 335, 1458–1462 (2012). http://dx.doi.org/10.1126/science.1216210

3. L. Li, J. Yao, and L. V. Wang, "Photoacoustic tomography enhanced by nanoparticles," in Wiley Encyclopedia of Electrical and Electronics Engineering, pp. 1–14, John Wiley & Sons, Inc., Hoboken, New Jersey (2016).

4. J. Yao et al., "Multiscale photoacoustic tomography using reversibly switchable bacterial phytochrome as a near-infrared photochromic probe," Nat. Methods 13, 67–73 (2016). http://dx.doi.org/10.1038/nmeth.3656

5. D. Razansky, A. Buehler, and V. Ntziachristos, "Volumetric real-time multispectral optoacoustic tomography of biomarkers," Nat. Protoc. 6, 1121–1129 (2011). http://dx.doi.org/10.1038/nprot.2011.351

6. X. L. Deán-Ben, S. J. Ford, and D. Razansky, "High-frame rate four dimensional optoacoustic tomography enables visualization of cardiovascular dynamics and mouse heart perfusion," Sci. Rep. 5, 10133 (2015). http://dx.doi.org/10.1038/srep10133

7. H. Zafar et al., "Linear-array-based photoacoustic imaging of human microcirculation with a range of high frequency transducer probes," J. Biomed. Opt. 20, 051021 (2014). http://dx.doi.org/10.1117/1.JBO.20.5.051021

8. Y. Wang et al., "Second generation slit-based photoacoustic tomography system for vascular imaging in human," J. Biophotonics (2016). http://dx.doi.org/10.1002/jbio.201600151

9. H. M. Heres et al., "Visualization of vasculature using a hand-held photoacoustic probe: phantom and in vivo validation," J. Biomed. Opt. 22, 041013 (2017). http://dx.doi.org/10.1117/1.JBO.22.4.041013

10. R. A. Kruger et al., "Thermoacoustic computed tomography using a conventional linear transducer array," Med. Phys. 30, 856–860 (2003). http://dx.doi.org/10.1118/1.1565340

11. Y. Xu et al., "Reconstructions in limited-view thermoacoustic tomography," Med. Phys. 31, 724–733 (2004). http://dx.doi.org/10.1118/1.1644531

12. G. Li et al., "Multiview Hilbert transformation for full-view photoacoustic computed tomography using a linear array," J. Biomed. Opt. 20, 066010 (2015). http://dx.doi.org/10.1117/1.JBO.20.6.066010

13. D. Yang et al., "Fast full-view photoacoustic imaging by combined scanning with a linear transducer array," Opt. Express 15, 15566–15575 (2007). http://dx.doi.org/10.1364/OE.15.015566

14. L. Li et al., "Label-free photoacoustic tomography of whole mouse brain structures ex vivo," Neurophotonics 3, 035001 (2016). http://dx.doi.org/10.1117/1.NPh.3.3.035001

15. E. Merčep et al., "Whole-body live mouse imaging by hybrid reflection-mode ultrasound and optoacoustic tomography," Opt. Lett. 40, 4643–4646 (2015). http://dx.doi.org/10.1364/OL.40.004643

16. L. Li et al., "Single-impulse panoramic photoacoustic computed tomography of small-animal whole-body dynamics at high spatiotemporal resolution," Nat. Biomed. Eng. 1, 0071 (2017). http://dx.doi.org/10.1038/s41551-017-0071

17. A. A. Oraevsky and A. A. Karabutov, "Ultimate sensitivity of time-resolved optoacoustic detection," Proc. SPIE 3916, 228 (2000). http://dx.doi.org/10.1117/12.386326

18. J. Gateau et al., "Three-dimensional optoacoustic tomography using a conventional ultrasound linear detector array: whole-body tomographic system for small animals," Med. Phys. 40, 013302 (2013). http://dx.doi.org/10.1118/1.4770292

19. J. Gateau, A. Chekkoury, and V. Ntziachristos, "High-resolution optoacoustic mesoscopy with a 24 MHz multidetector translate-rotate scanner," J. Biomed. Opt. 18, 106005 (2013). http://dx.doi.org/10.1117/1.JBO.18.10.106005

20. J. Gateau, A. Chekkoury, and V. Ntziachristos, "Ultra-wideband three-dimensional optoacoustic tomography," Opt. Lett. 38, 4671–4674 (2013). http://dx.doi.org/10.1364/OL.38.004671

21. C. Huang et al., "Full-wave iterative image reconstruction in photoacoustic tomography with acoustically inhomogeneous media," IEEE Trans. Med. Imaging 32, 1097–1110 (2013). http://dx.doi.org/10.1109/TMI.2013.2254496

22. J. Zhang et al., "Effects of different imaging models on least-squares image reconstruction accuracy in photoacoustic tomography," IEEE Trans. Med. Imaging 28, 1781–1790 (2009). http://dx.doi.org/10.1109/TMI.2009.2024082

23. R. Ellwood et al., "Orthogonal Fabry-Pérot sensor array system for minimal-artifact photoacoustic tomography," Proc. SPIE 9323, 932312 (2015). http://dx.doi.org/10.1117/12.2081934

24. J. Yao et al., "High-speed label-free functional photoacoustic microscopy of mouse brain in action," Nat. Methods 12, 407 (2015). http://dx.doi.org/10.1038/nmeth.3336

25. L. Li et al., "Fully motorized optical-resolution photoacoustic microscopy," Opt. Lett. 39, 2117–2120 (2014). http://dx.doi.org/10.1364/OL.39.002117

26. C. Zhang et al., "In vivo photoacoustic microscopy with 7.6-μm axial resolution using a commercial 125-MHz ultrasonic transducer," J. Biomed. Opt. 17, 116016 (2012). http://dx.doi.org/10.1117/1.JBO.17.11.116016

27. M. Lacey and X. Li, "Maximal theorems for the directional Hilbert transform on the plane," Trans. Am. Math. Soc. 358, 4099–4118 (2006). http://dx.doi.org/10.1090/S0002-9947-06-03869-4

28. Y. S. Zhang et al., "Optical-resolution photoacoustic microscopy for volumetric and spectral analysis of histological and immunochemical samples," Angew. Chem. Int. Ed. 53, 8099–8103 (2014). http://dx.doi.org/10.1002/anie.201403812

29. L. Zhu et al., "Multiview optical resolution photoacoustic microscopy," Optica 1, 217–222 (2014). http://dx.doi.org/10.1364/OPTICA.1.000217

30. P. Zhang et al., "High-resolution deep functional imaging of the whole mouse brain by photoacoustic computed tomography in vivo," J. Biophotonics (2017). http://dx.doi.org/10.1002/jbio.201700024

31. J. Xia et al., "Whole-body ring-shaped confocal photoacoustic computed tomography of small animals in vivo," J. Biomed. Opt. 17, 050506 (2012). http://dx.doi.org/10.1117/1.JBO.17.5.050506

32. L. Lin et al., "In vivo deep brain imaging of rats using oral-cavity illuminated photoacoustic computed tomography," J. Biomed. Opt. 20, 016019 (2015). http://dx.doi.org/10.1117/1.JBO.20.1.016019

33. L. Lin et al., "In vivo photoacoustic tomography of myoglobin oxygen saturation," J. Biomed. Opt. 21, 061002 (2016). http://dx.doi.org/10.1117/1.JBO.21.6.061002

34. M. Xu and L. V. Wang, "Universal back-projection algorithm for photoacoustic computed tomography," Phys. Rev. E 71, 016706 (2005). http://dx.doi.org/10.1103/PhysRevE.71.016706

35. L. A. Kunyansky, "Explicit inversion formulae for the spherical mean Radon transform," Inverse Probl. 23, 373–383 (2007). http://dx.doi.org/10.1088/0266-5611/23/1/021

36. M. Haltmeier, "Inversion of circular means and the wave equation on convex planar domains," Comput. Math. Appl. 65, 1025–1036 (2013). http://dx.doi.org/10.1016/j.camwa.2013.01.036

37. B. E. Treeby and B. T. Cox, "k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields," J. Biomed. Opt. 15, 021314 (2010). http://dx.doi.org/10.1117/1.3360308

38. Y. Ma et al., "In vivo 3D digital atlas database of the adult C57BL/6J mouse brain by magnetic resonance microscopy," Front. Neuroanat. 2, 1 (2008). http://dx.doi.org/10.3389/neuro.05.001.2008

Biography

Lei Li received his BS and MS degrees from Harbin Institute of Technology, China, in 2010 and 2012, respectively. He is working as a graduate research assistant under the tutelage of Dr. Lihong Wang at Washington University. His current research focuses on photoacoustic microscopy and computed tomography, especially improving the photoacoustic small-animal whole-body imaging performance and applying it to functional brain imaging.

Liren Zhu is currently a graduate student in biomedical engineering at Washington University in St. Louis under the supervision of Lihong V. Wang. His research focuses on the development of innovative optical imaging, photoacoustic imaging, and ultrasonic imaging techniques for biomedical research.

Yuecheng Shen received his BSc degree in applied physics from the University of Science and Technology of China and his PhD in electrical engineering from Washington University in St. Louis. He is currently working as a postdoctoral research associate in medical engineering at California Institute of Technology. His current research interests focus on developing new optical techniques based on stepwise wavefront shaping and optical time reversal.

Lihong V. Wang earned his PhD from Rice University, Houston, Texas. He currently holds the Bren Professorship of medical engineering and electrical engineering at the California Institute of Technology. He has published 470 peer-reviewed articles in journals and has delivered 433 keynote, plenary, or invited talks. His Google scholar h-index and citations have reached 111 and 51,000, respectively.

© 2017 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2017/$25.00
Lei Li, Liren Zhu, Yuecheng Shen, and Lihong V. Wang "Multiview Hilbert transformation in full-ring transducer array-based photoacoustic computed tomography," Journal of Biomedical Optics 22(7), 076017 (26 July 2017). https://doi.org/10.1117/1.JBO.22.7.076017
Received: 13 March 2017; Accepted: 12 July 2017; Published: 26 July 2017