Three-dimensional photoacoustic imaging by sparse-array detection and iterative image reconstruction
Pinhas Ephrat, Lynn Keenlislide, Adam Seabrook, Frank S. Prato, Jeffrey J. L. Carson
Abstract
Photoacoustic imaging (PAI) has the potential to acquire 3-D optical images at high speed. Attempts at 3-D photoacoustic imaging have used a dense 2-D array of ultrasound detectors or have densely scanned a single detector on a 2-D surface. The former approach is costly and complicated to realize, while the latter is inherently slow. We present a different approach based on a sparse 2-D array of detector elements and an iterative reconstruction algorithm. This approach has the potential for fast image acquisition, since no mechanical scanning is required, and for simple and compact construction due to the smaller number of detector elements. We obtained spatial sensitivity maps of the sparse array and used them to optimize the image reconstruction algorithm. We then validated the method on phantoms containing 3-D distributions of optically absorbing point sources. Reconstruction of the point sources from the time-domain signals resulted in images with good contrast and accurate localization (≤1 mm error). Image acquisition time was 1 s. The results suggest that 3-D PAI with a sparse array of detector elements is a viable approach. Furthermore, the rapid acquisition speed indicates the possibility of high frame rate 3-D PAI.

1. Introduction

1.1. Background

Photoacoustic Imaging (PAI) is a promising hybrid modality capable of providing image contrast by optical absorption, but with tissue penetration and spatial resolution at depths that are superior to pure optical imaging methods.1 PAI has been employed in vivo to visualize blood-vessel structures,2, 3, 4 detect breast tumors,5, 6, 7 estimate oxygenation levels,8, 9, 10 perform functional imaging,3, 11 and track molecular probes.12

The technique is based on volume irradiation with a short laser pulse followed by preferential energy deposition at the optically absorbing structures within the volume. These structures heat up slightly, but so rapidly that the condition of thermal confinement is met and thermoelastic expansion leads to a transient bipolar pressure wave that propagates outward toward the surface.13 The frequency composition of the pressure transient (i.e., the “photoacoustic wave”) is typically in the ultrasonic range,14 and is detectable by wide-band ultrasound detector(s) placed in acoustic contact with the surface of the volume. Based on the time-domain surface measurements, the distribution of the photoacoustic (PA) sources (structures) inside the volume can be reconstructed using back-projection algorithms, several of which have been suggested and adapted from medical imaging.15, 16, 17, 18, 19

The photoacoustic process is intrinsically three-dimensional. This is a consequence of the optical radiation being scattered in tissue, which diffusely illuminates a volume rather than a single point, line, or plane. Furthermore, the PA waves are spherical in nature and propagate in all directions from their point of generation. Several approaches have been suggested for 3-D PAI and are differentiated by detection scheme. These can be divided into scanning methods, where a single detector is scanned along the detection surface in two dimensions,2, 3, 4, 20, 21 and staring methods, where a 2-D array of detectors is used and no scanning is necessary.6, 22, 23 A combined scanning–staring approach has been suggested as well, where a linear array or a combination of single detectors is mechanically scanned.7, 24, 25, 26, 27

1.2. Motivation and Approach

We were motivated to develop a method for 3-D PAI that can be used for small animal research with eventual translation to clinical applications. We based the development of the imaging system on the following design goals: (1) real-time image acquisition capability, (2) easy placement and accessibility to the scanned object, (3) simple design (i.e., low detector count), and (4) potential for transportability to permit integration with other imaging modalities for multimodality hybrid imaging and validation [e.g., magnetic resonance imaging (MRI) and x-ray computed tomography (CT)]. Review of the literature suggested that the methods developed to date lack one or more of these design features. Mechanically scanned approaches require long data acquisition times and therefore cannot be used in real time. Depending on the scanning geometry, some also lack accessibility to the subject under investigation and are too cumbersome to be considered candidates for transportability. They are, however, usually inexpensive and simple to set up, since only a single transducer and detection channel are used. Staring methods do not require mechanical scanning and hence, in principle, can acquire all the required data during a single laser pulse, which has the potential to create 3-D photoacoustic images in real time. Staring methods also have open access to the imaging region, and some have been built as transportable systems.6, 7 However, all published implementations of the staring method have incorporated a dense array of detector elements, which is difficult to manufacture. Furthermore, real-time 3-D imaging with dense detector arrays is prohibitively expensive due to the high channel count required for simultaneous data acquisition from all detector elements. A combined scanning–staring approach can reduce system cost compared to a pure staring approach; however, imaging speed is still limited by the need to scan the detector array mechanically.

Based on the design goals and the implementations described in the literature, we chose to build a system based on a staring 2-D detector array due to its inherent speed and sample accessibility. In order to reduce channel count, a sparse detector array geometry was investigated, where instead of packing detection elements as close together as possible, the elements were spaced to cover a large area but with a low packing ratio. The spread of the detection elements over a large area was necessary to provide a wide range of viewing angles of the volume to be imaged, which is known to improve the boundary reconstruction of PA objects in the field of view.28 The geometry of the sparse array was annular to accommodate backward-mode illumination of the object through the center of the annulus, which was amenable to easy object placement and accessibility.

Due to the small number of detectors and the limited viewing angles they cover, a simple radial back-projection of the PA signals into volume space would have resulted in streaking artifacts near the reconstructed locations of the PA sources. For this reason, we chose to adapt an iterative image reconstruction algorithm, originally developed by Paltauf et al.,17 to handle data collected with the sparse detector array. In addition, we developed a characterization method29 to measure the 3-D sensitivity distribution of the annular detector array, which was used to constrain the iterative image reconstruction algorithm.

2. Methods

2.1. The 3-D Photoacoustic Imaging System

The 3-D photoacoustic imaging system [Fig. 1a] used backward-mode illumination, where the laser beam illuminated the imaging volume from the same side as the ultrasound detectors. The ultrasound detectors were sparsely arranged around the laser entrance window and pointed toward the volume of interest. The electronic signals generated by the detectors were captured in parallel by a dedicated data acquisition (DAQ) system and sent to the computer, where they were processed by an iterative image reconstruction algorithm based on back-projection. The imaging system was built from the hardware components described below.

Fig. 1

Overview of the sparse photoacoustic imaging system. (a) Imaging mode, where the object to be imaged was placed on top of and in acoustic contact with the detector array and the laser illumination was delivered through the window in the center. PA signals were acquired once by the DAQ system for all detectors and were sent to the computer for image reconstruction. (b) Setup for characterization, where a point photoacoustic source was scanned in a tank of water across the volume of interest and sensitivity and coverage maps were generated. PA signals were acquired in parallel by the DAQ system for all detectors at each scan point and were sent to the computer for analysis and display. (c) Top view (left) and cross-section side view (right) of the sparse annular detector array. The array consisted of 14 PVdF film detectors, 3 mm in diameter, arranged on a 30-mm-diam ring, an optical window in the center of the ring (for laser transmission), and an annular delay line with a wedge (shaded) directly on top of each detector. The delay line refracted the acoustic waves arriving at the detectors so that an effective focal zone was created above the center of the array, at a height of 30 mm (marked by the crosshair on the side view). The characterization scan volume was marked by the dashed box on the side view. The Cartesian coordinate system shown here was used consistently throughout the text. The array plane was located at z = 40 mm.


2.1.1. Laser system

The laser was a Q-switched Nd:YAG coupled to an OPO (VIBRANT, Opotek, Carlsbad, California). The laser had a 5-ns pulse duration at a repetition rate of 10 Hz, with energy per pulse > 40 mJ, and was wavelength tunable from 680 to 950 nm.

2.1.2. Sample table

The platform on which the sample was placed for imaging had a flat, even surface that was fully accessible from the top. The optical window for laser illumination, and the sparse array of detectors set around the window, were integrated into the platform. An electronic card for detector signal amplification was housed in a metallic box immediately below the sample platform and was modified to allow the laser beam to pass through.

2.1.3. Robotic 3-D gantry

A robotic gantry constructed from three linear slides (T-LLS260, Zaber Technologies, Vancouver, Canada) in an xyz configuration was used to scan a photoacoustic point source through a multitude of locations within the imaging volume. This characterization setup, shown in Fig. 1b and described elsewhere,29 provided the 3-D sensitivity distribution maps of the detector array, which were used to constrain the image reconstruction algorithm.

2.1.4. Computer and data acquisition system

The imaging system was controlled by a standard Windows-based PC (3-GHz CPU and 1-GB RAM). We developed a dedicated DAQ system capable of acquiring multiple signals in parallel with high temporal resolution and wide dynamic range. The system comprised a chassis holding multiple boards, each board having eight analog-to-digital converter channels. Each channel had a sampling rate of 50 MHz, a resolution of 14 bits (12-μV resolution with an input range of 200 mV), and its own memory where the digital signal was stored after conversion. The digital signals were transferred from the DAQ memory to the computer via a USB communication bus. Data transfer was facilitated by custom control software (LabView, National Instruments, Austin, Texas) that also served to control and synchronize the laser, the robotic gantry, and the DAQ system.

2.1.5. Sparse detector array

The sparse array concept was tested with an annular geometry of detectors and backward-mode laser illumination through the axial center of the array.29 Characterization of the sensitivity distribution revealed that much less signal was produced by the detectors for sources above the center of the annulus than for sources directly above each element. To improve the sensitivity within the region of the laser beam path, we added an annular delay line to the top of the annular array with a wedge located directly over each detector [see Fig. 1c]. The wedge angle was designed to redirect the line of sight of each detector through the center of the array at a height of 30 mm above the plane of the detectors. The delay line was CNC machined from clear acrylic with a wedge angle of 60 deg. The detectors themselves were manufactured in-house from 110-μm-thick metallized PVdF film (P/N 3-1003702-4, Measurement Specialties, Inc., Hampton, Virginia).

2.2. Iterative Image Reconstruction Algorithm

We used an iterative image reconstruction algorithm first proposed by Paltauf et al.17 The main assumption of the model was that the PA pressure profile measured at each detector could be described as a linear superposition of individual PA pressure profiles generated by spherical sources located on a 3-D grid of voxels. In the forward model, each spherical source generates a pressure profile given by:

Eq. 1

$$p\!\left(\tau = t - \frac{r}{c}\right) \propto \frac{1}{r}\begin{cases} -\tau, & |\tau| \le R_s/c, \\ 0, & |\tau| > R_s/c, \end{cases}$$
where $p$ is the magnitude of the pressure wave propagating radially from the center of the source, $c$ is the speed of sound, $r$ is the distance between the center of the source and the detection point, and $R_s$ is the radius of the spherical source.14 The pressure profile has a bipolar shape with a triangular positive compression phase followed by an inverted negative triangular tension phase.

The velocity potential (VP) is obtained by integrating the pressure profile with respect to τ and has the form:

Eq. 2

$$VP(\tau) \propto \frac{1}{2r}\begin{cases} \dfrac{R_s^2}{c^2} - \tau^2, & |\tau| \le R_s/c, \\ 0, & |\tau| > R_s/c, \end{cases}$$
where the VP is a positive inverted parabola, peaking at $\tau = 0$ and dropping to zero at $\tau = \pm R_s/c$.
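
A minimal numerical sketch of Eqs. 1 and 2 for a single spherical source is given below; the values of the source radius, sound speed, and distance are assumptions chosen for illustration, not the system's calibrated parameters.

```python
import numpy as np

# Minimal sketch of Eqs. (1) and (2) for one spherical PA source.
# Rs, c, and r below are illustrative assumptions.
Rs = 0.5e-3          # source radius (m)
c = 1500.0           # speed of sound in water (m/s)
r = 30e-3            # source-to-detector distance (m)

tau = np.linspace(-2 * Rs / c, 2 * Rs / c, 2001)   # retarded time t - r/c (s)

# Eq. (1): bipolar (N-shaped) pressure transient, compression followed by rarefaction.
p = np.where(np.abs(tau) <= Rs / c, -tau / r, 0.0)

# Eq. (2): velocity potential, an inverted parabola peaking at tau = 0.
vp_analytic = np.where(np.abs(tau) <= Rs / c, (Rs**2 / c**2 - tau**2) / (2 * r), 0.0)

# Numerical check: integrating the pressure profile reproduces the parabola.
vp_numeric = np.cumsum(p) * (tau[1] - tau[0])
print(np.max(np.abs(vp_numeric - vp_analytic)))    # small discretization error
```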

For a given detector located at position $\mathbf{r}_i$, the VP calculated for a PA source located at $\mathbf{r}_j$ can be written as:

Eq. 3

$$VP_{ij}(t) = \frac{W_j\,A_{ij}\,BF_{ij}(t)}{r_{ij}},$$
where $W_j$ is the intensity of source $j$, $r_{ij}$ is the source–detector separation, $A_{ij}$ is the sensitivity matrix element indicating the relative sensitivity of detector $i$ to a source at location $\mathbf{r}_j$, and

Eq. 4

$$BF_{ij}(t) = \frac{3c^3}{4R_s^3}\begin{cases} \dfrac{R_s^2}{c^2} - \left(t - \dfrac{r_{ij}}{c}\right)^2, & \left|t - \dfrac{r_{ij}}{c}\right| \le R_s/c, \\ 0, & \left|t - \dfrac{r_{ij}}{c}\right| > R_s/c, \end{cases}$$
is a unit-area parabolic basis function of the same shape as Eq. 2, centered in time at $r_{ij}/c$. When multiple sources are present, the total $VP_i(t)$ measured at detector $i$ is obtained by summing Eq. 3 over all sources (index $j$). Note that, in general, $BF_{ij}(t)$ is inserted into $VP_i(t)$ at a different time point for each source.
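
As an aid to interpretation, a minimal sketch of this forward model (Eqs. 3 and 4) is shown below in Python/NumPy. The function name, array layout, and default values of Rs and c are assumptions for illustration; it is not the authors' implementation.

```python
import numpy as np

def forward_project(W, src_pos, det_pos, A, t, Rs=0.5e-3, c=1500.0):
    """Sketch of the forward model of Eqs. (3) and (4).

    W       : (J,) source/voxel intensities
    src_pos : (J, 3) voxel center coordinates (m)
    det_pos : (I, 3) detector coordinates (m)
    A       : (I, J) sensitivity weights A_ij from the measured map
    t       : (K,) time axis (s)
    Returns the estimated velocity potentials VP with shape (I, K).
    """
    VP = np.zeros((det_pos.shape[0], t.size))
    for i, d in enumerate(det_pos):
        r_ij = np.linalg.norm(src_pos - d, axis=1)        # source-detector distances
        for j, r in enumerate(r_ij):
            u = t - r / c                                  # time relative to arrival
            bf = np.where(np.abs(u) <= Rs / c,
                          (3 * c**3 / (4 * Rs**3)) * (Rs**2 / c**2 - u**2),
                          0.0)                             # unit-area basis function, Eq. (4)
            VP[i] += W[j] * A[i, j] * bf / r               # Eq. (3), summed over sources
    return VP
```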

The back-projection model is based on the assumption that each VP time point $VP_i(t_k)$ is the result of a linear superposition of sources that lie on a shell around detector $i$ whose distance from the detector is in the range $ct_k - R_s < r_{ij} < ct_k + R_s$.

The amount of signal back-projected into each voxel j is given by

Eq. 5

$$\Delta W_j(t_k) = \frac{r_{ij}\,A_{ij}\,BF_{ij}(t_k)\,VP_i(t_k)}{n_i(t_k)},$$
where $n_i(t_k)$ is the number of voxels that are crossed by the shell defined by $t_k$. Note that in practice, $\Delta W_j(t_k)$ is set to zero when $BF_{ij}(t_k)$ drops below a threshold to maintain stability in $\Delta W_j(t_k)$ for voxels that lie on the edge of the shell.
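
A corresponding sketch of the back-projection rule of Eq. 5 for a single detector and a single time sample is given below, with the same caveats: the names, thresholding details, and defaults are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def back_project_sample(VP_i_tk, tk, det_i_pos, src_pos, A_i,
                        Rs=0.5e-3, c=1500.0, bf_threshold=0.0):
    """Sketch of Eq. (5) for detector i and time sample t_k.

    VP_i_tk   : measured (residual) velocity potential VP_i(t_k), a scalar
    det_i_pos : (3,) position of detector i
    src_pos   : (J, 3) voxel center coordinates
    A_i       : (J,) row of the sensitivity map for detector i
    Returns the increment dW_j for every voxel j.
    """
    r_ij = np.linalg.norm(src_pos - det_i_pos, axis=1)
    u = tk - r_ij / c
    bf = np.where(np.abs(u) <= Rs / c,
                  (3 * c**3 / (4 * Rs**3)) * (Rs**2 / c**2 - u**2), 0.0)
    bf = np.where(bf > bf_threshold, bf, 0.0)      # suppress voxels on the shell edge
    n_i = max(int(np.count_nonzero(bf)), 1)        # voxels crossed by the shell at t_k
    return r_ij * A_i * bf * VP_i_tk / n_i         # Eq. (5)
```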

Experimentally, the measured PA signals were rectified and filtered with a time-domain moving average to compute $VP_i^{exp}(t)$. A first estimate of the master image was formed by setting all voxel intensities to zero. The master image was forward-projected to obtain $VP_i^{est}(t)$. The difference $VP_i^{exp}(t) - VP_i^{est}(t)$ was calculated and back-projected into volume space. The resultant difference image was added to the master image, and the outcome was forward-projected to obtain the new $VP_i^{est}(t)$. The iterative process was repeated until either of two stop criteria was met: (1) a set number of iterations was reached, or (2) the relative change in $VP_i^{exp}(t) - VP_i^{est}(t)$ between successive iterations dropped below a given threshold.
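
Putting the pieces together, the iterative loop can be sketched as follows, reusing the forward- and back-projection sketches from the previous listings. The relaxation factor `step` and the exact form of the stop criterion are assumptions; the published algorithm's update rule may differ in detail.

```python
import numpy as np

def iterative_reconstruction(VP_exp, det_pos, src_pos, A, t,
                             n_iter=500, tol=1e-4, step=1.0):
    """Sketch of the iterative reconstruction loop described in the text.

    VP_exp : (I, K) measured velocity potentials VP_i^exp(t)
    Other arguments follow the previous listings; this is an illustration,
    not the authors' implementation.
    """
    W = np.zeros(src_pos.shape[0])                 # first estimate: all voxels zero
    prev_residual = np.inf
    for _ in range(n_iter):
        VP_est = forward_project(W, src_pos, det_pos, A, t)
        diff = VP_exp - VP_est                     # residual velocity potentials
        # Back-project the residual into a difference image and add it to the master image.
        dW = np.zeros_like(W)
        for i in range(det_pos.shape[0]):
            for k, tk in enumerate(t):
                dW += back_project_sample(diff[i, k], tk, det_pos[i], src_pos, A[i])
        W += step * dW
        residual = np.linalg.norm(diff)
        if abs(prev_residual - residual) / max(residual, 1e-12) < tol:
            break                                  # relative change below threshold
        prev_residual = residual
    return W
```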

The sensitivity map of the detector array (see the following) was interpolated and used as the weighting function $A_{ij}$ for the forward- and back-projections. This map attenuated VP estimates from voxels where detector sensitivity was low and enhanced VP estimates from voxels where detector sensitivity was high.

All images were reconstructed with the following algorithmic parameters: image volume of 20 mm × 20 mm × 20 mm, voxel size of 1 mm × 1 mm × 1 mm, Cartesian coordinate system as defined in Fig. 1c, and 500 iterations. The relative voxel intensities were displayed on a linear gray scale.

2.3. Sensitivity Mapping of the Detector Array

Kruger et al. reported the use of a scanned PA point source for calibrating the spatial response of a linear detector array along two orthogonal axes.30 However, to account for the response profile of each detector element and the overall sensitivity profile of the array across the entire imaging volume, we raster-scanned a PA point source in three dimensions [Fig. 1b] to generate volume maps of detector sensitivity and coverage.29 The point source was constructed from a 400-μm-core optical fiber whose tip was polished and coated with black paint. At each point of the scan, the source was motionless and the laser was fired through the fiber, which resulted in a pressure transient generated at the coated tip due to the photoacoustic effect. Each element of the detector array received the outward-propagating pressure wave with a delay time and viewing angle that depended on the source–element separation geometry. Each detector element produced an analog voltage proportional to the pressure profile, which was amplified, digitized, and stored. For each laser pulse, part of the beam was directed toward a photodiode and recorded synchronously with the PA signals to monitor fluctuations in laser power.

The scan was performed in water for acoustic coupling between the source and the detector array. The volume under investigation was a 25 mm × 25 mm × 25 mm cube, extending from (x, y, z) = (0, 0, 0) to (x, y, z) = (25, 25, 25). The volume was chosen to cover the area of the optical window in the xy plane and to cover the focal zone of the annular array in the z direction [see Fig. 1c]. The acquisition points were spaced 5 mm apart in the x, y, and z directions for a total of 216 points (i.e., 6³) that spanned the volume. The data sets acquired from the array elements at each scan point were fed into the analysis software, where the location and strength of each PA peak were detected, and the effects of laser power variation and acoustic spherical divergence were taken into account. In a separate experiment, the angular emission profile of the point source was measured by rotating the source at a fixed point above one array element and recording the relative change in PA signal. The resulting curve was interpolated and incorporated into the calculations of the sensitivity maps. The software generated a sensitivity map for the detector array that depicted the relative strength of the PA signal detected by each element at each scan point. The sensitivity map was used as a weighting function by the image reconstruction algorithm.
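
A sketch of how such a sensitivity map might be assembled from the scan data is given below. The exact form of the corrections and the array layout are assumptions inferred from the description above, not the authors' processing code.

```python
import numpy as np

def build_sensitivity_map(scan_signals, photodiode_peaks, distances, emission_gains):
    """Sketch of assembling the sensitivity map A (detectors x scan points).

    scan_signals     : (n_det, n_points, n_samples) PA traces from the raster scan
    photodiode_peaks : (n_points,) per-pulse laser-energy monitor readings
    distances        : (n_det, n_points) source-detector separations
    emission_gains   : (n_det, n_points) interpolated angular emission profile of the source
    The correction steps (laser-energy normalization, removal of 1/r spherical
    divergence, division by the source emission profile) are assumptions based on
    the description in the text; array names are illustrative.
    """
    peaks = np.max(np.abs(scan_signals), axis=2)   # strength of each detected PA peak
    A = peaks / photodiode_peaks[None, :]          # correct for laser-power variation
    A = A * distances                              # undo acoustic spherical divergence
    A = A / emission_gains                         # account for source directivity
    return A / A.max()                             # relative sensitivity, 0..1
```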

2.4. Element Coverage Mapping of the Detector Array

The analysis software also generated a coverage map, which depicted the number of elements in the array that received signals above background noise for each voxel. Due to the limited angular response of the detectors and the sparse geometry of the array, it was expected that there would be voxels where not all detectors had enough sensitivity to detect the point source. Visualization of the regions of maximum coverage gave an indication of where the best imaging quality was to be expected, since for voxels where coverage was good, the reconstruction algorithm would be able to incorporate data from a multitude of angles.
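
A minimal sketch of the coverage-map computation, assuming the sensitivity array layout from the previous listing and a simple noise threshold (both assumptions for illustration):

```python
import numpy as np

def coverage_map(sensitivity, noise_floor):
    """Sketch of the coverage map: for each scan point, count the detectors whose
    sensitivity value exceeds the background noise level.

    sensitivity : (n_det, n_points) relative sensitivity values from the scan
    noise_floor : scalar threshold separating signal from background noise
    Returns the number of detectors with sufficient sensitivity at each scan point.
    """
    return np.sum(sensitivity > noise_floor, axis=0)
```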

2.5. Phantom Development

To validate the accuracy of the 3-D image reconstruction, we developed a 3-D phantom that had a distribution of PA sources with known spatial locations. Instead of building a physical phantom of small absorbing objects, we used a composite of the PA signals collected during the sensitivity scan to build up a synthetic phantom of point sources. This approach had two advantages: (1) the position of each source was accurately known and referenced to the same coordinate system as the imaging volume, and (2) the PA signal intensity was constant regardless of position, since the source was illuminated through the optical fiber and the signals were corrected for variations in laser energy. This approach provided a means to generate a variety of synthetic phantoms with user-defined distributions of point sources. Although the synthetic sources were accurately localized and uniform in intensity and therefore well suited as test objects for the 3-D PAI system, they did not represent a real imaging condition with external illumination and simultaneous detection of multiple point absorbers. To test 3-D PAI on real objects with external illumination, we first used the coated fiber tip as a point absorber and illuminated it in backward mode through the laser access window instead of through the fiber directly [Fig. 1a]. We then imaged an extended object in the form of a 0.9-mm-diameter graphite rod, which was suspended above the center of the detector ring at a height of 30 mm. All phantom experiments were carried out with distilled water as the acoustic coupling medium.
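
A minimal sketch of how such a composite data set can be assembled is given below; the dictionary-based data layout and function name are assumptions for illustration.

```python
import numpy as np

def synthesize_phantom(scan_signals, source_points):
    """Sketch of building a synthetic multi-point phantom from the scan data.

    scan_signals  : dict mapping a scan-point index (ix, iy, iz) to the
                    (n_detectors, n_samples) PA signal block recorded with the
                    fiber-tip source at that location
    source_points : list of scan-point indices chosen as the phantom's point sources
    The composite is the sum of the stored per-detector signals, which is valid
    because the PA signals add linearly; the data layout is an assumption.
    """
    composite = None
    for point in source_points:
        block = np.asarray(scan_signals[point], dtype=float)
        composite = block.copy() if composite is None else composite + block
    return composite
```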

3. Results

3.1. Characterization of the Sparse Array

Figures 2a and 2b show typical PA signals acquired at one of the characterization scan points. Figure 2c represents the calculated coverage map. The coverage peaked at the center of each xy plane, as expected from symmetry, but there was also a focusing effect along the z axis. The largest lateral extent was observed in the plane z = 10 mm, and the coverage diminished toward both higher and lower z values. The 3-D coverage distribution hence took a form similar to an ellipsoid. The center of the ellipsoid was located directly above the array center and at a distance of 30 mm from the plane of detection. Its short axis, oriented laterally, and long axis, oriented along the z axis, were 15 and 30 mm long, respectively. Figure 2d shows the calculated sensitivity map, which describes the relative sensitivity of each array element to each voxel in the volume under investigation. The map displays the xy distribution for each z plane (map rows) and for each detector element (map columns). In the z direction, the combined effect of the delay line and the detector response profile can be appreciated: starting at the bottom row (i.e., the highest z value and the xy plane closest to the detectors), the most compact distribution is observed. Moving away from the detectors, the distribution becomes extended in the xy plane. The location of the center of the distribution, however, also shifts away from the detector location for increasing distances from the detector plane. For example, at z = 25 mm, the distribution for detector 13 peaked at the bottom-left corner, which was closest to the detector location; at z = 0 mm, it peaked on the opposite side, at the top-right corner. These 3-D sensitivity distributions can be explained by the angular response profile of each detector/wedge combination. We can imagine each detector's sensitivity field as a beam of light (e.g., from a spotlight) and correlate the beam divergence with the angular profile's FWHM and the pointing angle with the refracted angle of the delay line. It is then easy to visualize that when the beam enters the imaged volume, it is compact, and when it leaves at the farthest z plane, it is broad. Now, thinking of a multitude of light beams, it is evident that the region where they meet (the region of maximum coverage) will have the highest intensity and an ellipsoidal shape.

Fig. 2

Characterization of the detector array. (a) Typical PA signals obtained from the 14 array detectors and the photodiode when the PA source was placed at (x, y, z) = (10, 15, 10). The signal from the photodiode appears as a sharp negative peak at time zero, and the 14 PA signals, arriving at the detectors at delay times of 20 to 25 μs, are outlined by the dashed box. (b) A magnified view of the region of interest outlined in panel (a). Three detector channels are highlighted for better visualization (channel 1—thick solid line; channel 3—solid triangles; and channel 13—open circles). Signals from the same three detector channels are presented in Figs. 3, 4, 5. (c) Volume coverage map where each pixel represents a point from the 3-D characterization scan. The 3-D volume is presented as separate 2-D slices along the z axis. The gray scale represents the number of detectors for which there was sufficient detection sensitivity, on the range of 3 to 14 (all detectors). (d) Sensitivity map where each image contains the sensitivity distribution in the xy plane, for the indicated z plane and detector number. The gray scale represents relative units, with white being most sensitive. All dimensions reported in mm.


Fig. 3

Reconstruction of a point source located at (x, y, z) = (10, 15, 10). In panels (a) to (c), readings from a point in the calibration scan were used to synthesize the source. In panels (d) to (f), the coated fiber tip was illuminated from below in backward mode. (a) Measured velocity potentials (VPs). Only three detector channels are presented for clarity. See the Fig. 2b caption for details. (b) Estimated VPs from the reconstruction algorithm. The three curves presented correspond to the same channels as in panel (a). (c) Three orthogonal slices of the reconstructed volume, showing the location of the reconstructed point source. Note that the source was brightest in the x = 9 mm plane. (d) Measured VPs for the source illuminated in backward mode. Note the “missing” signal on channel 1 indicated by the solid curve. (e) Estimated VPs. Note that the signal “reappeared” on channel 1. (f) Corresponding volume slices of the reconstructed image. The gray scale indicates relative voxel intensity on the range of [0, 1]. All dimensions reported in mm. Axes tick marks are 5 mm apart. Arrows indicate axis origin and direction.


Fig. 4

(a) to (c) Reconstruction of three point sources arranged vertically along the z axis: (x, y, z) = (10, 15, 5), (10, 15, 10), and (10, 15, 15). Note that the sources were brightest in the x = 9 mm plane. (d) to (f) Reconstruction of three point sources arranged horizontally along the y axis: (x, y, z) = (10, 5, 10), (10, 10, 10), and (10, 15, 10). The gray scale indicates relative voxel intensity on the range of [0, 1]. All dimensions reported in mm. Axes tick marks are 5 mm apart. Arrows indicate axis origin and direction.


Fig. 5

Reconstruction of a backward-mode illuminated graphite rod held horizontally above the center of the detector array at a height of 30 mm. (a) Measured velocity potentials (VPs). (b) Estimated VPs output from the reconstruction algorithm. (c) Three orthogonal slices of the reconstructed volume, showing the location of the reconstructed object. The gray scale indicates relative voxel intensity on the range of [0, 1]. (d) Photograph of the object taken from the perspective of the laser (from below). The photograph was registered to the same coordinate system. All dimensions reported in mm. Axes tick marks are 5 mm apart. Arrows indicate axis origin and direction.


3.2. 3-D Images of Phantoms

The reconstruction of a single synthetic point source is shown in Figs. 3a, 3b, 3c. Figure 3a displays three of the VPs calculated from the measured PA signals [$VP_i^{exp}(t)$]. In Fig. 3b, three of the $VP_i^{est}(t)$, obtained after 500 iterations (4-min run time on the PC), are shown. The magnitude, shape, and arrival time of the peaks in $VP_i^{est}(t)$ are well reproduced. Figure 3c presents three orthogonal sections of the reconstructed volume, showing that the point source was reconstructed at the expected location, but with broadened dimensions. It is interesting to note that the broadening was stronger in the x and y directions, and the edges were better defined in the z direction.

Similarly, reconstruction of a single, externally illuminated point source is presented in Figs. 3d, 3e, 3f. In this experiment, only 10 of the 14 detector elements recorded signals above the noise level. Although the exact cause of the signal dropout could not be determined, it may have been related to asymmetry in the PA wave generated by the source. Nevertheless, the algorithm was able to reconstruct the point source at the proper position even with signals from an incomplete set of detectors. Comparison of the $VP_i^{est}(t)$ in Fig. 3e with $VP_i^{exp}(t)$ in Fig. 3d demonstrates that some detector signals that were not present in $VP_i^{exp}(t)$ (e.g., solid curve) were reconstructed in $VP_i^{est}(t)$.

The same VP signal that “reappeared” in Fig. 3e is also present in Fig. 3b, and at the same arrival time, indicating that the reconstruction could accurately compensate for the missing data. Image artifacts, in the form of scattered dim voxels, appear in most of the images. However, these artifacts seem to be more pronounced in images where $VP_i^{est}(t)$ did not reproduce the shape and magnitude of $VP_i^{exp}(t)$ well.

Next, we reconstructed an array of three synthetic sources arranged along the z axis [Figs. 4a, 4b, 4c] and three synthetic sources along the y axis [Figs. 4d, 4e, 4f]. As can be appreciated from the orthogonal slices, the sources along the z axis were better defined and better separated than the sources along the y axis. This was to be expected from the detector arrangement relative to the imaged volume: a separation of 5 mm along the z axis created a time-of-flight difference approximately twice as large as the same separation along the y axis, as the geometric check below illustrates. Therefore, for the sources distributed along the z axis, the reconstruction algorithm was presented with well-separated peaks, which were better reconstructed and resulted in sharper edges.
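
The factor of roughly two can be checked with a simple geometric estimate. The 15-mm ring radius and the ~30-mm source height follow from the array description; the choice of a detector whose lateral offset from the sources lies along y is an assumption made for illustration.

```latex
% Rough geometric check (assumed detector/source geometry, for illustration only):
% a detector on the 15-mm-radius ring viewing a source about 30 mm above the array plane.
\[
\cos\theta = \frac{30}{\sqrt{30^{2}+15^{2}}} \approx 0.89,
\qquad
\sin\theta = \frac{15}{\sqrt{30^{2}+15^{2}}} \approx 0.45 .
\]
% A 5-mm source shift along z changes the propagation path by about
% $0.89 \times 5~\mathrm{mm} \approx 4.5~\mathrm{mm}$, whereas the same shift along y
% (toward or away from this detector) changes it by about
% $0.45 \times 5~\mathrm{mm} \approx 2.2~\mathrm{mm}$, i.e., roughly half the
% time-of-flight difference.
```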

In both Fig. 4c and Fig. 4f, the intensity of the reconstructed objects varied with location in the image even though the original PA source used to synthesize them was constant in intensity. The objects seemed to reconstruct brighter when closer to the plane of the detectors [Fig. 4c] and when closer to the axis of the detector ring [Fig. 4f]. A potential explanation for the inhomogeneity in voxel intensity could be related to the detection geometry. For example, objects closer to the detectors are better separated in time, and hence better reconstructed. On the other hand, on-axis objects will be covered by a greater number of detectors, which will also result in better reconstruction.

Last, the reconstruction of the graphite rod is presented in Fig. 5. Comparing Fig. 5a with Fig. 3d, we find that the PA signals recorded on some of the detectors were substantially stronger with the rod than with the point source, while on other detectors, they were weaker. This was likely due to the different propagation of the PA waves from a point source (isotropic) versus a line source (perpendicular to the line). Detectors on the ring that view the rod perpendicular to its length will hence detect a stronger PA signal than detectors that view the rod from a direction parallel to its length. Interestingly, we observe again that even when signals were absent at some detectors, they were generated by the reconstruction algorithm at their expected location. This is another indication of the robustness of the algorithm. The images in Fig. 5c reproduce the rod shape quite reliably. The cross section along the z direction, as can be seen from the xz and yz slices, is only one voxel wide, which agrees well with the rod diameter of 0.9 mm. The in-plane orientation of the rod, as indicated by the tilt in the xy slice, is also in very good agreement with the photograph in Fig. 5d. It is worthwhile to note that the PA images of the rod were obtained with a single laser shot and no averaging.

Although the image reconstruction algorithm had robust qualities, some image artifacts were found, appearing as bright voxels near the boundary of the imaging volume [lower-left corner in the xy slice of Fig. 5c] similar to those observed for the synthetic sources.

4. Discussion

Photoacoustic imaging by sparse-array detection and iterative image reconstruction was investigated and found to be a viable method for generating 3-D photoacoustic images in backward mode. Distributions of synthetic sources were accurately reconstructed from the time-domain PA signals. Furthermore, with backward-mode illumination, 3-D images of a point source and an extended source were obtained.

The images obtained from a point source and an array of point sources confirmed the localization accuracy of 1 mm in each dimension. This further validated the accuracy with which the positions of the array elements were registered to the imaging volume using the scanned PA source method.31 Analysis of line profiles through the point-source image (Fig. 3) along the lateral axes (x and y) and the vertical axis (z) resulted in estimates of the point spread function (PSF) for the three coordinate directions of the imaging system. Image resolution was determined from the FWHM of the PSF. From these data, the spatial resolution of the 3-D PAI system was 1.5 mm in z and 2.5 mm in both x and y. This compared well to the spatial resolution reported by other groups that have implemented staring 3-D PAI methods. For example, the Twente photoacoustic mammoscope6 was reported to have an axial resolution of 3.2 to 3.9 mm and a lateral resolution of 3.1 to 4.4 mm at imaging depths of 15 to 60 mm. Using an acoustic lens and optical stereo imaging system, Niederhauser et al.22 reported results that implied an axial PSF <0.25 mm and a lateral PSF <0.5 mm, measured at depths <5 mm. Using a dense 2-D micromachined detector array and delay-and-sum reconstruction, Vaithilingam et al.23 produced images that had an axial resolution of <0.86 mm at a depth of 20 mm. (No estimate of the lateral resolution was given.) Our estimated resolution of 1.5 mm axially and 2.5 mm laterally at a depth of 30 mm compares well to these other results.
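
A minimal sketch of extracting an FWHM estimate from a line profile through the reconstructed point source is given below; the sub-voxel linear interpolation and the function name are assumptions, since the paper does not state how the FWHM was computed.

```python
import numpy as np

def fwhm(profile, spacing_mm=1.0):
    """Estimate the FWHM (in mm) of a 1-D PSF line profile sampled on the voxel grid.

    profile    : voxel intensities along one axis through the reconstructed point source
    spacing_mm : voxel spacing (1 mm in the reconstructions described in the text)
    Linear interpolation between voxels gives sub-voxel crossing positions.
    """
    p = np.asarray(profile, dtype=float)
    p = p - p.min()
    half = 0.5 * p.max()
    above = np.where(p >= half)[0]
    if above.size == 0:
        return 0.0
    lo, hi = above[0], above[-1]
    # Interpolate the half-maximum crossings on each side of the peak.
    left = lo - (p[lo] - half) / (p[lo] - p[lo - 1]) if lo > 0 else float(lo)
    right = hi + (p[hi] - half) / (p[hi] - p[hi + 1]) if hi < p.size - 1 else float(hi)
    return (right - left) * spacing_mm
```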

The 3-D sensitivity distribution of the focused annular sparse array was characterized using a scanned PA point source method. The characterization results indicated that the region of best coverage for this detector geometry was at distances of 25 to 35 mm perpendicular to the detector surface, with a lateral extent of 15 mm centered directly above the array. This confirmed our specification of the wedge angle, which redirected the viewing direction of each element through the center of the detector annulus at a height of 30 mm above the plane of the array. The success of this approach suggests that the depth and extent of the volume coverage by such an array could be readily adjusted by varying three parameters: the wedge angle, the element diameter (which is inversely related to the extent of the angular response profile), and the annular array diameter. An increase in wedge angle would bring the focus closer to the plane of the detector surface and shorten the depth of focus. An increase in element diameter would reduce the angular extent of each element's sensitivity profile but at the same time increase the element's active area, resulting in a mixed effect on the coverage, both in depth and in lateral dimensions. An increase in annular diameter would move the focus deeper and slightly increase the coverage. An interesting possibility would be to use several concentric detector rings, each with a different combination of geometric parameters, which would allow greater flexibility. In one realization, all detector rings could be designed to focus on the same spot, which would provide higher sensitivity and resolution limited to a small volume. In a second realization, construction of a set of concentric ring detector arrays with an inverse relation between ring diameter and focal distance could achieve a “collimated” coverage field.

From a manufacturing point of view, there is a strong advantage to the wedge method compared to a curved surface array. Machining of wedged surfaces is simple with modern CNC technology. Additionally, there would be no need to modify the electronics or sensor material if a detector array of a different focal distance was desired; one would need to change only the wedge angle. This would permit a flexible and modular approach to detector design.

The images presented earlier were reconstructed from data acquired over 10 laser pulses for averaging and noise reduction. Hence, image acquisition took 1 s (the laser repetition rate was 10 Hz). Image reconstruction, however, took on the order of minutes (4 min for 500 iterations). The reconstruction computations scale linearly with the number of detectors and with the number of voxels in the image. Faster reconstruction times are possible by reducing the resolution and/or specifying a smaller image volume.

The iterative image reconstruction has shown the capacity to compensate for missing measurement data, as in the case of the point source reconstruction, where signal was measured on only 10 out of 14 detectors [Fig. 3d]. This indicates the robustness of the algorithm (i.e., as long as most detectors can detect the PA signal, the algorithm is able to compensate for those that do not record signal). This feature is beneficial when combined with the sparse detection geometry, since it provides a means to obtain image estimates with limited data sets.

We have shown that it is possible to acquire a 3-D photoacoustic image with a single laser shot and no averaging (Fig. 5). In principle, 3-D single-shot imaging should carry information at time scales comparable to the laser pulse (i.e., 5 ns), which could be useful for observing fast dynamic processes at an instant in time. Taken one step further, one could fire a sequence of hundreds or perhaps even thousands of laser pulses, and for each pulse, capture and store a 3-D PA frame on the DAQ hardware. The frames could then be reconstructed off-line to be viewed later as a 3-D photoacoustic movie. This, for example, could provide a method for 3-D visualization of optically absorbing tracers in small animals as they transit through the circulation after injection.

5. Conclusions

We have demonstrated an approach to 3-D PA imaging that uses a sparse array of ultrasound detectors and an iterative reconstruction algorithm. We have shown that the method produces 3-D images of point sources and a line source with good contrast and accurate volume localization. The resolution was estimated to be 1 to 2 mm perpendicular to the detection plane and 2 to 3 mm in the detection plane. However, validation with more complex 3-D absorbing structures and in the presence of tissue-mimicking optical scattering is needed. In addition, the issue of artifact suppression at the volume boundary needs to be addressed, possibly through incorporation of an apodizing function into the reconstruction software. It is also not yet clear whether, and to what extent, the small number of detectors limits the complexity and resolution of structures that can be imaged. Theoretical studies of that question would be of great value to determine the range of applications best suited for this method.

Acknowledgments

This project was funded by grants from the Academic Development Fund (ADF) at the University of Western Ontario and the Natural Sciences and Engineering Research Council of Canada (NSERC) to JJLC. PE was funded by the NSERC Canada Graduate Scholarship for Doctoral Studies (CGS-D3). In addition, partial funding was provided by MultiMagnetics, Inc. and by the Ontario Research and Development Challenge Fund (ORDCF).

References

1. L. V. Wang, “Ultrasound-mediated biophotonic imaging: a review of acousto-optical tomography and photo-acoustic tomography,” Dis. Markers 19(2–3), 123–138 (2003).
2. E. Z. Zhang, J. Laufer, and P. Beard, “Three-dimensional photoacoustic imaging of vascular anatomy in small animals using an optical detection system,” Proc. SPIE 6437, 64370S (2007). https://doi.org/10.1117/12.700128
3. H. F. Zhang, K. Maslov, G. Stoica, and L. V. Wang, “Functional photoacoustic microscopy for high-resolution and noninvasive in vivo imaging,” Nat. Biotechnol. 24(7), 848–851 (2006). https://doi.org/10.1038/nbt1220
4. K. H. Song, G. Stoica, and L. V. Wang, “In vivo three-dimensional photoacoustic tomography of a whole mouse head,” Opt. Lett. 31(16), 2453–2455 (2006). https://doi.org/10.1364/OL.31.002453
5. S. Ermilov, A. Stein, A. Conjusteau, R. Gharieb, R. Lacewell, T. Miller, S. Thompson, P. Otto, B. McCorvey, T. Khamapirad, M. Leonard, and A. Oraevsky, “Detection and noninvasive diagnostics of breast cancer with 2-color laser optoacoustic imaging system,” Proc. SPIE 6437, 643703 (2007). https://doi.org/10.1117/12.714039
6. S. Manohar, S. E. Vaartjes, J. G. C. van Hespen, J. M. Klaase, F. M. van den Engh, A. K. H. The, W. Steenbergen, and T. G. van Leeuwen, “Region-of-interest breast images with the Twente Photoacoustic Mammoscope (PAM),” Proc. SPIE 6437, 643702 (2007).
7. R. A. Kruger, K. D. Miller, H. E. Reynolds, W. L. Kiser Jr., D. R. Reinecke, and G. A. Kruger, “Breast cancer in vivo: contrast enhancement with thermoacoustic CT at 434 MHz—feasibility study,” Radiology 216(1), 279–283 (2000).
8. W. Xueding, X. Xueyi, K. Geng, L. V. Wang, and G. Stoica, “Noninvasive imaging of hemoglobin concentration and oxygenation in the rat brain using high-resolution photoacoustic tomography,” J. Biomed. Opt. 11(2), 024015 (2006). https://doi.org/10.1117/1.2192804
9. J. Laufer, E. Zhang, and P. Beard, “Quantitative in vivo measurements of blood oxygen saturation using multiwavelength photoacoustic imaging,” Proc. SPIE 6437, 64371Z (2007).
10. H. F. Zhang, K. Maslov, M. Sivaramakrishnan, G. Stoica, and L. V. Wang, “Imaging of hemoglobin oxygen saturation variations in single vessels in vivo using photoacoustic microscopy,” Appl. Phys. Lett. 90(5), 053901 (2007). https://doi.org/10.1063/1.2435697
11. X. Wang, Y. Pang, G. Ku, X. Xie, G. Stoica, and L. V. Wang, “Noninvasive laser-induced photoacoustic tomography for structural and functional in vivo imaging of the brain,” Nat. Biotechnol. 21(7), 803–806 (2003). https://doi.org/10.1038/nbt839
12. L. Li, R. J. Zemp, G. Lungu, G. Stoica, and L. V. Wang, “Photoacoustic imaging of lacZ gene expression in vivo,” J. Biomed. Opt. 12(2), 020504 (2007). https://doi.org/10.1117/1.2717531
13. G. J. Diebold and T. Sun, “Properties of photoacoustic waves in one, two, and three dimensions,” Acustica 80(4), 339–351 (1994).
14. A. A. Oraevsky and A. A. Karabutov, “Ultimate sensitivity of time-resolved optoacoustic detection,” Proc. SPIE 3916, 228–239 (2000). https://doi.org/10.1117/12.386326
15. C. G. A. Hoelen and F. F. M. de Mul, “Image reconstruction for photoacoustic scanning of tissue structures,” Appl. Opt. 39(31), 5872–5883 (2000). https://doi.org/10.1364/AO.39.005872
16. K. P. Kostli, D. Frauchiger, J. J. Niederhauser, G. Paltauf, H. P. Weber, and M. Frenz, “Optoacoustic imaging using a three-dimensional reconstruction algorithm,” IEEE J. Sel. Top. Quantum Electron. 7(6), 918–923 (2001). https://doi.org/10.1109/2944.983294
17. G. Paltauf, J. A. Viator, S. A. Prahl, and S. L. Jacques, “Iterative reconstruction algorithm for optoacoustic imaging,” J. Acoust. Soc. Am. 112(4), 1536–1544 (2002). https://doi.org/10.1121/1.1501898
18. X. Minghua and L. V. Wang, “Universal back-projection algorithm for photoacoustic computed tomography,” Phys. Rev. E 71(1), 016706 (2005). https://doi.org/10.1103/PhysRevE.71.016706
19. P. Liu, “Image reconstruction from photoacoustic pressure signals,” Proc. SPIE 2681, 285–296 (1996). https://doi.org/10.1117/12.239585
20. C. G. A. Hoelen, F. F. M. de Mul, R. Pongers, and A. Dekker, “Three-dimensional photoacoustic imaging of blood vessels in tissue,” Opt. Lett. 23(8), 648–650 (1998). https://doi.org/10.1364/OL.23.000648
21. J.-T. Oh, M.-L. Li, H. F. Zhang, K. Maslov, G. Stoica, and L. V. Wang, “Three-dimensional imaging of skin melanoma in vivo by dual-wavelength photoacoustic microscopy,” J. Biomed. Opt. 11(3), 034032 (2006). https://doi.org/10.1117/1.2210907
22. J. J. Niederhauser, M. Jaeger, and M. Frenz, “Real-time three-dimensional optoacoustic imaging using an acoustic lens system,” Appl. Phys. Lett. 85(5), 846–848 (2004). https://doi.org/10.1063/1.1777820
23. S. Vaithilingam, I. O. Wygant, P. S. Kuo, X. Zhuang, O. Oralkan, P. D. Olcott, and B. T. Khuri-Yakub, “Capacitive micromachined ultrasonic transducers (CMUTs) for photoacoustic imaging,” Proc. SPIE 6086, 608603 (2006). https://doi.org/10.1117/12.659459
24. P. Guo, J. Gamelin, S. Yan, A. Aguirre, and Q. Zhu, “Co-registered 3-D ultrasound and photoacoustic imaging using a 1.75D 1280-channel ultrasound system,” Proc. SPIE 6437, 643713 (2007).
25. G. Paltauf, R. Nuster, P. Burgholzer, and M. Haltmeier, “Three-dimensional photoacoustic tomography using acoustic line detectors,” Proc. SPIE 6437, 64370N (2007).
26. R. J. Zemp, R. Bitton, M.-L. Li, K. K. Shung, G. Stoica, and L. V. Wang, “Photoacoustic imaging of the microvasculature with a high-frequency ultrasound array transducer,” J. Biomed. Opt. 12(1), 010501 (2007). https://doi.org/10.1117/1.2709850
27. R. A. Kruger, W. L. Kiser, D. R. Reinecke, G. A. Kruger, and K. D. Miller, “Thermoacoustic molecular imaging of small animals,” Mol. Imaging 2(2), 113–123 (2003). https://doi.org/10.1162/153535003322331993
28. Y. Xu, L. V. Wang, G. Ambartsoumian, and P. Kuchment, “Reconstructions in limited-view thermoacoustic tomography,” Med. Phys. 31(4), 724–733 (2004). https://doi.org/10.1118/1.1644531
29. P. Ephrat and J. J. L. Carson, “Measurement of photoacoustic detector sensitivity distribution by robotic source placement,” Proc. SPIE 6856, 685610 (2008). https://doi.org/10.1117/12.763825
30. R. A. Kruger, J. W. L. Kiser, D. R. Reinecke, and G. A. Kruger, “Thermoacoustic computed tomography using a conventional linear transducer array,” Med. Phys. 30(5), 856–860 (2003). https://doi.org/10.1118/1.1565340
31. J. J. L. Carson, P. Ephrat, and A. Seabrook, “Measurement of photoacoustic transducer position by robotic source placement and nonlinear parameter estimation,” Proc. SPIE 6856, 68561Z (2008). https://doi.org/10.1117/12.763769
© 2008 Society of Photo-Optical Instrumentation Engineers (SPIE)
Pinhas Ephrat, Lynn Keenlislide, Adam Seabrook, Frank S. Prato, and Jeffrey J. L. Carson "Three-dimensional photoacoustic imaging by sparse-array detection and iterative image reconstruction," Journal of Biomedical Optics 13(5), 054052 (1 September 2008). https://doi.org/10.1117/1.2992131
Published: 1 September 2008
KEYWORDS: Sensors, 3D image processing, Signal detection, Detector arrays, Reconstruction algorithms, Photoacoustic imaging, Imaging systems
