Graphics processing unit-based quantitative second-harmonic generation imaging

Mohammad M. Kabir, A. S. M. Jonayat, Sanjay Patel, Kimani C. Toussaint Jr.

Journal of Biomedical Optics, 15 September 2014 (Open Access)
Abstract
We adapt a graphics processing unit (GPU) to dynamic quantitative second-harmonic generation (SHG) imaging. We demonstrate the temporal advantage of the GPU-based approach by computing the number of frames analyzed per second from SHG image videos showing varying fiber orientations. In comparison to our previously reported CPU-based approach, our GPU-based image analysis yields an ∼10× reduction in computation time. This work can be adapted to other quantitative, nonlinear imaging techniques and provides a significant step toward obtaining quantitative information from fast in vivo biological processes.

1. Introduction

Second-harmonic generation (SHG) imaging, based on the second-order nonlinear optical process in which a noncentrosymmetric material (i.e., a material with no center of inversion symmetry) converts a portion of incident light to scattered light at exactly twice the frequency, has developed into an important nonlinear imaging technique over the past several years. Among its benefits is its inherent ability to produce volumetric images of biological tissues comprising collagen fibers [1] without the need for labeling with an exogenous contrast agent. We have previously demonstrated quantitative SHG (Q-SHG) imaging as an effective and accurate modality for ascertaining quantitative information from such collagen-based biological tissues [2–6]. For example, with respect to collagen fiber organization, the preferred orientation and orientation anisotropy have provided significant information [2]. Indeed, application of quantitative SHG has resulted in assessment of microstructural information in nonpregnant rat cervical tissue, discrimination of healthy from injured horse tendons [7], characterization of age-related changes in porcine cortical bone [8], and identification of dissimilarities in stromal collagen fiber organization in human breast biopsy tissues at various pathological stages [5]. Additionally, Q-SHG imaging has been reported as a suitable method for quantifying the change in dermal collagen fibers in skin burns using a rat skin burn model [9]. In spite of these advancements, the applicability of Q-SHG imaging has been confined to static imaging conditions, leaving its utility for dynamic biological processes largely unexplored. To address this, we recently reported on the experimental and computational requirements for carrying out Q-SHG imaging under dynamic conditions [10], i.e., computing and displaying quantitative information simultaneously with image acquisition. We found that for a 512×512-pixel area, the preferred orientation of collagen fibers in a tissue specimen captured by SHG imaging can be computed within 950 ms using a standard multicore CPU.

Recently, graphics processing units (GPUs) have emerged as alternative computation devices that offer faster processing than standard CPUs. A GPU comprises hundreds to thousands of processing units, or cores, far more than a conventional CPU, and all cores can carry out the same instructions in parallel. This makes GPUs attractive for computational imaging applications where processing time is expensive. For example, GPU-based algorithms have been used to develop spectral (Fourier) domain optical coherence tomography techniques [11–14] with a significant reduction in computation time compared to standard CPU-based algorithms. Similar results have been obtained using GPUs for the reconstruction of x-ray computed tomography images of high contrast and precision [15], as well as for the deconvolution of three-dimensional confocal microscopy images [16]. Additionally, GPUs have been used to accelerate and optimize Monte Carlo simulations [14,17], typically used to study the theory of light transport through various media. As such, the GPU would be extremely useful in obtaining quantitative information at the time scales of some of the faster biological processes, such as the propagation of an action potential in neurons, which occurs on the order of milliseconds and has been successfully captured with SHG [18]. The approach could also be useful for situations where the analysis of multiple quantitative metrics is incorporated with simultaneous image acquisition. In this work, we incorporate an NVIDIA GPU into our image analysis system, enabling parallel processing of our dynamic SHG image analysis algorithm. As proof-of-concept, we present several synthetic experiments in which we apply our GPU-based approach to quantitatively analyze consecutive frames from several videos of 512×512-pixel SHG images. In general, the videos are of varying arrangements of collagen fiber organization. We compare the computation times obtained using the GPU versus the CPU. The paper is organized as follows. Section 2 describes the experimental methods and image analysis technique used. Section 3 presents the results and discussion, while Sec. 4 provides the conclusion.

2. Methods

2.1. Image Analysis

We have previously provided a detailed description of our quantitative SHG image analysis carried out under dynamic conditions [10]. Briefly, after a 512×512-pixel image is acquired, a Gaussian filter is applied and the image is subsequently divided into a 16×16 grid, with each grid element containing 32×32 pixels. For each grid element, a preferred orientation is calculated based on the computed intensity gradient of each pixel within it. This information can then be used to estimate a global preferred orientation for the whole image. The accuracy of the calculated orientations is estimated by the circular variance [19–22], a detailed description of which is provided in the Appendix. An intensity threshold is set to discriminate the background from the signal in a manner analogous to what we have previously reported [2]. This same threshold is used for calculations done both within a grid element and for the global orientation estimate. Finally, an image is displayed with a gridded overlay, with arrows indicating the preferred fiber orientation within each grid element, along with the computed average orientation and circular variance.
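To make the per-pixel computation concrete, the following is a minimal CUDA device-function sketch (our illustration, not the published code; the function name, the threshold handling, and the unit pixel spacing h = 1 are assumptions) of the gradient and orientation calculation summarized above and detailed in the Appendix.

```cuda
// Illustrative sketch only: per-pixel intensity gradient and angular
// orientation with background thresholding. Assumes unit pixel spacing
// (h = 1); centered differences in the interior, one-sided differences
// at the block borders (cf. Table 1 in the Appendix).
__device__ float pixelOrientation(const float* I, int i, int j,
                                  int nx, int ny, float thresh, bool* valid)
{
    float v = I[j * nx + i];
    if (v < thresh) {          // below the intensity threshold: background,
        *valid = false;        // excluded from the orientation estimate
        return 0.0f;
    }
    float dIx = (i == 0)      ? I[j * nx + i + 1] - v
              : (i == nx - 1) ? v - I[j * nx + i - 1]
              : 0.5f * (I[j * nx + i + 1] - I[j * nx + i - 1]);
    float dIy = (j == 0)      ? I[(j + 1) * nx + i] - v
              : (j == ny - 1) ? v - I[(j - 1) * nx + i]
              : 0.5f * (I[(j + 1) * nx + i] - I[(j - 1) * nx + i]);
    *valid = true;
    return atan2f(dIy, dIx);   // angular orientation theta [Eq. (1)]
}
```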

MATLAB® coupled with the compute unified device architecture (CUDA) parallel computing platform was used to develop the code. The parallel instructions were written in the C programming language using the NVIDIA CUDA library version 5.0 and implemented on the GPU, while MATLAB® was used as the host to acquire the image, transfer image data to and from the GPU, and subsequently display the results. The hardware consisted of an NVIDIA GTX 590 GPU in a Windows 7 machine with an Intel Core i7-2600K quad-core CPU (3.40 GHz clock speed, 3.8 GHz maximum turbo frequency) and 24 GB of DDR3 RAM. This same computer was used for the comparisons between the GPU-based and CPU-based calculations. A description of the GPU architecture and the CUDA programming model can be found in the CUDA Programming Guide [23]. To facilitate its adaptation to the GPU architecture, the image analysis procedure was divided into three segments, known as CUDA kernels, as shown in Fig. 1. In the first kernel, the acquired 512×512-pixel image is divided into a 43×43 image grid, each element of which is 12×12 pixels in size. For this kernel, the GPU grid contains 43×43 threadblocks, where each threadblock contains 16×16 threads. Each threadblock is assigned to apply a Gaussian filter over one image grid element. Because the pixels at the boundary of a grid element require contributions from neighboring pixels to apply the Gaussian filter, the threadblocks contain extra threads along each boundary. In the second kernel, the filtered image is divided into a 16×16 grid of 32×32 pixels each, and the preferred orientation is calculated for each individual grid element. In both of these kernels, individual GPU threadblocks are assigned to individual image grid elements, and the individual pixels inside each grid element are operated on by individual threads in the threadblock; hence, parallel operation on the data is achieved on two separate levels. On the first level, all the threadblocks in the GPU multiprocessors carry out their computation in parallel, while on the second level, the individual threads inside a threadblock also operate in parallel. This means that all pixels in a grid element and all grid elements in the image are computed simultaneously, significantly reducing computation time. Finally, in the third kernel, the preferred orientations from the individual grid elements are used to calculate a global preferred orientation. As there are only 256 (16×16) preferred orientation values, a single GPU threadblock of 256 threads is sufficient. After the processing in the GPU, the image along with the quantitative information is returned to the CPU. MATLAB® instructions are used to display the preferred orientation in each block of the 16×16 grid and a circular histogram showing the distribution of the preferred orientation values over the complete image. Finally, the global preferred orientation and the associated circular variance are also displayed.
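The three-kernel decomposition maps directly onto CUDA launch configurations. The following host-side sketch shows how the launches described above might look for a 512×512-pixel image; the kernel names, signatures, and empty bodies are our own placeholders rather than the published code.

```cuda
#include <cuda_runtime.h>

// Placeholder kernels mirroring the three stages (bodies elided).
__global__ void gaussianFilterKernel(const float* in, float* out, int n)
{ /* each 16x16 block filters one 12x12-pixel grid element plus a border */ }

__global__ void gridOrientationKernel(const float* img, float* gridTheta,
                                      float* gridVar, int n)
{ /* each 32x32 block computes one grid element's preferred orientation */ }

__global__ void globalOrientationKernel(const float* gridTheta,
                                        float* globalTheta, float* globalVar)
{ /* one 256-thread block reduces the 256 per-grid orientations */ }

void analyzeFrame(const float* d_in, float* d_filtered, float* d_gridTheta,
                  float* d_gridVar, float* d_globalTheta, float* d_globalVar)
{
    // Kernel 1: 43x43 threadblocks of 16x16 threads (one per image grid element).
    gaussianFilterKernel<<<dim3(43, 43), dim3(16, 16)>>>(d_in, d_filtered, 512);

    // Kernel 2: 16x16 threadblocks of 32x32 threads (one per 32x32-pixel grid).
    gridOrientationKernel<<<dim3(16, 16), dim3(32, 32)>>>(
        d_filtered, d_gridTheta, d_gridVar, 512);

    // Kernel 3: a single threadblock of 256 threads for the global estimate.
    globalOrientationKernel<<<1, 256>>>(d_gridTheta, d_globalTheta, d_globalVar);
}
```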

Fig. 1

Flow chart of the steps performed in the graphics processing unit (GPU)-based quantitative image analysis modality.


2.2. Experiment

Both the GPU-based and the CPU-based codes are applied to consecutive frames of three videos comprising 512×512-pixel SHG images. Information on the optical setup used to collect the SHG images can be found elsewhere [7]. The first video contains consecutive frames showing SHG images of breast biopsy tissue rotated in increments of 20 deg (relative to the horizontal) between frames. The frame rate of the video is 10 frames per second (fps), and it contains 20 images running for a total duration of 2 s. The contents of the second video are the same as the first, except that the frame rate is set at 33 fps (approximately the standard video rate); it contains 66 images running for 2 s. The third video consists of SHG images of various collagen-based biological tissues, namely porcine tendon, rat cervix, and breast biopsy tissues. Its frame rate is 10 fps, with 20 images and a total run time of 2 s.

3. Results and Discussion

Figure 2 depicts representative frames from the first video, as well as the results obtained by operating the two different computational modalities on it. Figure 2(a) shows the first six frames of the video, while Fig. 2(b) shows the frames that are captured and analyzed by the GPU-based and CPU-based codes in two consecutive rows, respectively. It is clear from Fig. 2(b) that the GPU-based code successfully captures and analyzes all six consecutive frames of the video. For the given video frame rate (10 fps), this is consistent with the expected computation time of 100 ms per image. In comparison, the CPU-based code acquires only the first frame and fails to capture any of the subsequent frames. From Video 1, it is also observed that the CPU successfully captures the first and the tenth frames of the video. This observation is consistent with our previously reported computation time of 950 ms for a 512×512-pixel image with the CPU-based code [10], indicating that at 10 fps, it would fail to capture the next eight consecutive frames. The same analytical steps were carried out for the second sample video, the results of which are depicted in Fig. 2(c). The two consecutive rows in Fig. 2(c) show the frames that are captured by the GPU-based and CPU-based modalities, respectively. It is observed from Fig. 2(c) that at the new video frame rate of 33 fps, the GPU fails to capture two out of every three frames of the video. It is also observed from the second row in Fig. 2(c) that the CPU-based code could analyze only the first frame in this case, too, failing to capture any of the subsequent frames shown in this figure. Video 2 reveals that the next frame successfully captured by the CPU is frame 32.

Fig. 2

Representative frames from a video of second-harmonic generation (SHG) images of breast biopsy tissues. (a) The first six consecutive frames in the video, showing fibers that are progressively rotated by 20 deg with respect to the horizontal. The frames captured and analyzed by the GPU-based and CPU-based image analysis are shown in (b) for 10 fps and in (c) for 33 fps. The scale bar corresponds to 10 μm for all images. These results are also compiled in the videos (Video 1, MPEG, 4.2 MB [URL: http://dx.doi.org/10.1117/1.JBO.19.9.096009.1] and Video 2, MPEG4, 7.3 MB [URL: http://dx.doi.org/10.1117/1.JBO.19.9.096009.2]).


Figure 3 shows the results obtained from analyzing consecutive frames of a video comprising SHG images of a variety of collagen-based tissues. Here, the goal is to evaluate the performance of the GPU-based code when the SHG images vary greatly (in fiber density, orientation, and organization) between consecutive frames. Again, we observe in Fig. 3(b) that the GPU-based code captures and analyzes all consecutive frames from the video irrespective of fiber orientation or density, while in Fig. 3(c) and Video 3, we see that the CPU-based code could perform its analysis only on the first and the eleventh frames. Thus, the processing time for the GPU-based approach is not influenced by fiber density or spatial organization.

Fig. 3

Representative frames from a video of SHG images of several collagen-based tissues (a). Frames 1, 3, and 4 are SHG images of human breast biopsy tissues, while frames 2 and 5 are of rat cervix and porcine tendon tissues, respectively. (b) and (c) show the frames captured and analyzed by the GPU-based and CPU-based codes, respectively. These results are also compiled in a video (Video 3, MPEG4, 5.3 MB [URL: http://dx.doi.org/10.1117/1.JBO.19.9.096009.3]). The scale bar corresponds to 10 μm for all images.


Figure 4 compares the processing-time performance of the GPU- and CPU-based approaches for each segment of the analysis. To obtain an accurate estimate of the processing times, the two modalities were used to analyze 20 SHG images of size 512×512 pixels showing varying degrees of fiber organization and density. The processing times for each step were obtained for all the images, and their average values are used in this comparison. It is observed from Fig. 4 that the calculation of the preferred orientation of the individual image grid elements (kernel 2) takes the longest amount of time for both approaches. As such, to obtain a significant reduction in overall computation time, it is most important to reduce the time required by kernel 2. Our GPU-based implementation achieves a better than 20× improvement for kernel 2, reducing its run time from 520 ms on the CPU to 18 ms. Application of the Gaussian filter and calculation of the global preferred orientation were performed 5× and 35× faster, respectively. The time required to display the results (not shown) is the same for both modalities and was observed to be 50 ms. Overall, the GPU-based code performs the analysis on average 10× faster than the CPU-based code.
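Per-kernel times such as those in Fig. 4 are typically measured with CUDA events. Below is a minimal, self-contained sketch of this technique (our illustration, not the authors' benchmarking code); dummyKernel is a hypothetical stand-in for any of the three analysis kernels.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dummyKernel(float* x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;   // placeholder work
}

int main(void)
{
    const int n = 512 * 512;
    float* d_x = nullptr;
    cudaMalloc(&d_x, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    dummyKernel<<<(n + 255) / 256, 256>>>(d_x, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);               // wait for the kernel to finish

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);   // elapsed time in milliseconds
    printf("kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_x);
    return 0;
}
```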

Fig. 4

Image processing time required for the GPU-based analysis compared with the CPU-based analysis for three different analysis steps.


It is worth noting that although 512×512-pixel images were used for the proof-of-concept, the GPU-based code can, in principle, analyze images of higher pixel density without modification: the number of image grid elements and GPU threadblocks would simply scale with the image size, with each grid element and threadblock containing the same number of pixels and threads, respectively. However, this is not possible on the particular GPU used here because of limits on the number of threadblocks and grids that can operate in parallel. In certain cases, the individual image grid sizes may also need to be increased to facilitate clear visualization. This would require each thread in a threadblock to process more than one pixel; for example, if an image grid element contains 64×64 pixels, a threadblock of 32×32 threads would assign one thread to analyze the data from four pixels (see the sketch below). Note that this would require an increased amount of memory in the GPU, which is not supported by the GPU used in this work. Apart from this, if any further quantitative analysis of the SHG images is desired, additional CUDA kernels can be constructed and conveniently added to the existing code.
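As a sketch of the thread-coarsening scheme just described (hypothetical code, not part of the published implementation), a 32×32 threadblock can cover a 64×64-pixel grid element by assigning each thread a 2×2 tile of pixels:

```cuda
// Hypothetical thread-coarsening kernel: each thread of a 32x32 block
// processes a 2x2 tile of pixels, so one block covers a 64x64-pixel grid.
__global__ void coarsenedGridKernel(const float* img, int n /* image width */)
{
    int x0 = blockIdx.x * 64 + threadIdx.x * 2;  // top-left corner of this
    int y0 = blockIdx.y * 64 + threadIdx.y * 2;  // thread's 2x2 pixel tile

    for (int dy = 0; dy < 2; ++dy) {
        for (int dx = 0; dx < 2; ++dx) {
            int x = x0 + dx;
            int y = y0 + dy;
            if (x < n && y < n) {
                float v = img[y * n + x];
                // ... per-pixel processing (filtering, gradients, etc.) ...
                (void)v;
            }
        }
    }
}
```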

4. Conclusion

In this paper, we demonstrated GPU-based quantitative analysis of SHG images. We showed that the preferred orientation of collagen fibers can be determined in 100 ms, either at the level of individual elements in a 16×16 grid or globally for a 512×512-pixel image. As proof-of-concept, we applied this modality to analyze consecutive frames of two videos running at 10 and 33 fps, respectively. In the first case, the GPU-based system successfully captured and analyzed all the frames of the video, while in the second case, it succeeded in capturing one in every three frames. In contrast, the same analysis using a standard CPU captured none but the first of the representative frames shown from either video. Both approaches were again compared for a video of SHG images of more complex collagen-based structures, with the GPU-based method clearly outperforming the standard method. This improvement in processing time makes our approach attractive compared to other nonlinear imaging modalities adapted for quantitative analysis.

Appendix: Calculating Preferred Orientation

The estimation of the preferred orientation of an image grid element is carried out in the following steps (a consolidated code sketch follows the list):

  • 1. In the first step, horizontal [dIx(i,j)] and vertical [dIy(i,j)] intensity gradients are calculated for each pixel. The method is best demonstrated with the help of a 3×3-pixel image block, shown schematically in Fig. 5. Based on the location of each pixel within the image block, a centered, forward, or backward difference method is used to calculate the intensity gradient in the horizontal (x axis) and vertical (y axis) directions. The relevant equations are given in Table 1, where i and j refer to the x- and y-coordinates of the pixel location, respectively, while h represents the pixel width. As an example, consider the intensity gradient for the pixel located at (1,3). In this case, the forward difference method is used to calculate the intensity gradient in the horizontal direction, utilizing the intensities of pixels 1 and 2—I(1,3) and I(2,3). By similar reasoning, the backward difference method is used to calculate the intensity gradient in the vertical direction, using the intensities of the pixels located at (1,2) and (1,3). For the pixel located at (2,2), however, the centered difference method is used to calculate the intensity gradient in both the horizontal and vertical directions. Here, to calculate dIx(2,2), we utilize the intensities I(1,2) and I(3,2), while the intensities of pixels 2 and 8 are used to calculate dIy for pixel 5.

  • 2. The angular orientation, θ, of each pixel is calculated using

    Eq. (1)

    \theta = \tan^{-1}\!\left( \frac{dI_y}{dI_x} \right).

  • 3. The preferred orientation for each block is calculated using the angular orientations of its constituent pixels. The procedure is best explained with an example. Here, an image block of 5 pixels is considered, with arbitrary angular orientation values for each pixel. The angular orientations are listed in Table 2, and each orientation is depicted graphically as an orientation line in Fig. 6.

    For clarity in visualization, any angular orientation in the third or fourth quadrant is shifted to the first or second quadrant. In this example, the angular orientation of pixel 4 (−2.61 rad) is in the third quadrant, and thus, it is shifted by π to the first quadrant. The new set of angular orientation values is given in Table 3 and graphically displayed in Fig. 7.

    Next, the angular domain of 0 to π is divided into six equally spaced regions. Figure 7 shows the regional divisions as dotted blue lines along with the original orientation lines. The number of regions chosen is based on the level of accuracy required in the calculation of the circular variance. For each region, a separate set of angular orientation data is obtained. Here, the lower boundary of the region is denoted the division line (θ); any pixel whose orientation falls below θ is shifted up by π, while any orientation value above (θ + π) is shifted down by π. Note that this step does not change the angular orientation, but rather expresses it in a different angular domain. The resulting sets of angular orientations for all the regions are shown in Table 4. As an example, for set 5, the division line is 2.10 rad, so the angular orientations of the first three pixels—0.52, 0.52, and 1.92—fall below the division line and are shifted by π to 3.66, 3.66, and 5.06. These results are shown in Table 4 and Fig. 8.

  • 4. In the next step, the circular variance is calculated. A detailed discussion of this procedure is provided elsewhere [19,20]. Briefly, for each set, the angular orientations are expressed in complex form (Ae^{iθ}). As only the angular orientation is of importance, the amplitude A is taken to be 1 in all cases:

    Eq. (2)

    e^{i\theta} = \cos\theta + i\sin\theta,

    Eq. (3)

    \sum e^{i\theta} = \sum \cos\theta + i \sum \sin\theta.

    The circular variance is then calculated using

    Eq. (4)

    C = 1 - R,

    where

    Eq. (5)

    R = \frac{1}{n}\sqrt{\left(\sum \cos\theta\right)^{2} + \left(\sum \sin\theta\right)^{2}}.

    The circular variance for the six sets of angular orientations in this example is shown in Table 5.

  • 5. Finally, the mean angular orientation is calculated individually for each set. All the mean orientation values obtained for the different transformed sets correspond to the same original set of angular orientations; however, the mean with the lowest circular variance is identified as the preferred orientation. It is seen in Table 5 that the third and fourth sets have the lowest variance in this case. The mean orientation for these two sets is the same value of 2.71 rad, or 155 deg. This value is defined as the preferred orientation for the given set of angular orientations.
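To tie steps 2 through 5 together, the following self-contained C sketch (our illustration; the function name and the unit-amplitude assumption A = 1 follow the text, while the exact division-line placement is an assumption) reproduces the worked example: it maps the angles of Table 3 into each of the six division-line domains, computes the circular variance of each set, and reports the minimum-variance circular mean as the preferred orientation.

```cuda
// Consolidated host-side sketch of Appendix steps 2-5 (illustrative only,
// not the published code). Division lines are taken at multiples of pi/6.
#include <math.h>
#include <stdio.h>

// Returns the preferred orientation (rad) of n pixels with angles theta[].
float preferredOrientation(const float* theta, int n)
{
    const float PI = 3.14159265f;
    const int nSets = 6;                     // number of division-line regions
    float bestVar = 2.0f, bestMean = 0.0f;   // circular variance is <= 1

    for (int s = 0; s < nSets; ++s) {
        float div = s * PI / nSets;          // division line for this set
        float sumCos = 0.0f, sumSin = 0.0f;
        for (int k = 0; k < n; ++k) {
            // Step 3: express each angle in the domain [div, div + pi).
            float t = theta[k];
            while (t < div)      t += PI;
            while (t >= div + PI) t -= PI;
            sumCos += cosf(t);               // step 4: complex form e^{i theta}
            sumSin += sinf(t);
        }
        // Steps 4-5: R = (1/n) sqrt((sum cos)^2 + (sum sin)^2), C = 1 - R.
        float R = sqrtf(sumCos * sumCos + sumSin * sumSin) / n;
        float C = 1.0f - R;
        float mean = atan2f(sumSin, sumCos); // circular mean of this set
        if (C < bestVar) { bestVar = C; bestMean = mean; }
    }
    return bestMean;                         // mean of the lowest-variance set
}

int main(void)
{
    // Angles from Table 3 (pixel 4 already shifted into the first quadrant).
    float theta[5] = {0.52f, 2.10f, 2.36f, 0.52f, 1.92f};
    printf("preferred orientation: %.2f rad\n", preferredOrientation(theta, 5));
    return 0;
}
```

Run on the example data, this prints approximately 2.71 rad (155 deg), in agreement with Table 5 and step 5.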

Fig. 5

Schematic diagram of a 3×3-pixel image block.


Table 1

Equations used for calculating intensity gradients of image pixels.

Gradient | Centered difference | Forward difference | Backward difference
dIx(i,j) | dIx = [I(i+1,j) − I(i−1,j)]/2h | dIx = [I(i+1,j) − I(i,j)]/h | dIx = [I(i,j) − I(i−1,j)]/h
dIy(i,j) | dIy = [I(i,j+1) − I(i,j−1)]/2h | dIy = [I(i,j+1) − I(i,j)]/h | dIy = [I(i,j) − I(i,j−1)]/h

Table 2

Angular orientation of pixels in an image block.

Pixel number | Angular orientation, θ (rad)
1 | 0.52
2 | 2.10
3 | 2.36
4 | −2.61
5 | 1.92

Fig. 6

Angular orientation of the pixels in an image block, represented in a Cartesian coordinate system.


Table 3

Angular orientation of pixels in the first and second quadrants.

Pixel number | Angular orientation, θ (rad)
1 | 0.52
2 | 2.10
3 | 2.36
4 | 0.52
5 | 1.92

Fig. 7

The angular orientation of all pixels after they are shifted to the first and second quadrants. The angular domain of 0 to π is shown as divided into six regions. See text for details.


Table 4

Six sets of angular orientation for six regions.

Sets | Division line (rad) | Angular orientations, θ (rad), pixels 1–5
1 | 0 | 0.52, 0.52, 1.92, 2.10, 2.36
2 | 0.53 | 0.52, 0.52, 1.92, 2.10, 2.36
3 | 1.06 | 3.66, 3.66, 1.92, 2.10, 2.36
4 | 1.59 | 3.66, 3.66, 1.92, 2.10, 2.36
5 | 2.10 | 3.66, 3.66, 5.06, 2.10, 2.36
6 | 2.65 | 3.66, 3.66, 5.06, 5.24, 5.50

Fig. 8

Angular orientation for set 5, corresponding to a division line of 2.10 rad. For this set, the angular orientation of each pixel falls within the region 2.10 < θ < (2.10 + π).


Table 5

Calculated circular variance for six sets of angular orientation values.

Sets | 1 | 2 | 3 | 4 | 5 | 6
Circular variance | 0.2989 | 0.2989 | 0.2748 | 0.2748 | 0.4727 | 0.2989

Acknowledgments

M.M.K. acknowledges support from the National Science Foundation CAREER award (DBI 09-54155).

References

1. I. Freund and M. Deutsch, "Second-harmonic microscopy of biological tissue," Opt. Lett. 11(2), 94–96 (1986). http://dx.doi.org/10.1364/OL.11.000094

2. R. A. Rao, M. R. Mehta, and K. C. Toussaint, "Fourier transform-second-harmonic generation imaging of biological tissues," Opt. Express 17(17), 14534–14542 (2009). http://dx.doi.org/10.1364/OE.17.014534

3. R. A. R. Rao et al., "Quantitative analysis of forward and backward second-harmonic images of collagen fibers using Fourier transform second-harmonic-generation microscopy," Opt. Lett. 34(24), 3779–3781 (2009). http://dx.doi.org/10.1364/OL.34.003779

4. R. Ambekar Ramachandra Rao, M. R. Mehta, and K. C. Toussaint Jr., "Quantitative analysis of biological tissues using Fourier transform-second-harmonic generation imaging," Proc. SPIE 7569, 75692G (2010). http://dx.doi.org/10.1117/12.841208

5. R. Ambekar et al., "Quantifying collagen structure in breast biopsies using second-harmonic generation imaging," Biomed. Opt. Express 3(9), 2021–2035 (2012). http://dx.doi.org/10.1364/BOE.3.002021

6. T. Y. Lau, R. Ambekar, and K. C. Toussaint, "Quantification of collagen fiber organization using three-dimensional Fourier transform-second-harmonic generation imaging," Opt. Express 20(19), 21821–21832 (2012). http://dx.doi.org/10.1364/OE.20.021821

7. M. Sivaguru et al., "Quantitative analysis of collagen fiber organization in injured tendons using Fourier transform-second harmonic generation imaging," Opt. Express 18(24), 24983–24993 (2010). http://dx.doi.org/10.1364/OE.18.024983

8. R. Ambekar et al., "Quantitative second-harmonic generation microscopy for imaging porcine cortical bone: comparison to SEM and its potential to investigate age-related changes," Bone 50(3), 643–650 (2012). http://dx.doi.org/10.1016/j.bone.2011.11.013

9. R. Tanaka et al., "In vivo visualization of dermal collagen fiber in skin burn by collagen-sensitive second-harmonic-generation microscopy," J. Biomed. Opt. 18(6), 061231 (2013). http://dx.doi.org/10.1117/1.JBO.18.6.061231

10. M. M. Kabir et al., "Application of quantitative second-harmonic generation microscopy to dynamic conditions," Biomed. Opt. Express 4(11), 2546–2554 (2013). http://dx.doi.org/10.1364/BOE.4.002546

11. C. M. O. Y. Wang et al., "GPU accelerated real-time multi-functional spectral domain optical coherence tomography system at 1300 nm," Opt. Express 20(14), 14797–14813 (2012). http://dx.doi.org/10.1364/OE.20.014797

12. N. H. Cho et al., "High speed SD-OCT system using GPU accelerated mode for in vivo human eye imaging," J. Opt. Soc. Korea 17(1), 68–72 (2013). http://dx.doi.org/10.3807/JOSK.2013.17.1.068

13. J. Li et al., "Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units," Appl. Opt. 50(13), 1832–1838 (2011). http://dx.doi.org/10.1364/AO.50.001832

14. E. Alerstam et al., "Next-generation acceleration and code optimization for light transport in turbid media using GPUs," Biomed. Opt. Express 1(2), 658–675 (2010). http://dx.doi.org/10.1364/BOE.1.000658

15. L. A. Flores et al., "Parallel CT image reconstruction based on GPUs," Radiat. Phys. Chem. 95, 247–250 (2014). http://dx.doi.org/10.1016/j.radphyschem.2013.03.011

16. M. A. Bruce and M. J. Butte, "Real-time GPU-based 3D deconvolution," Opt. Express 21(4), 4766–4773 (2013). http://dx.doi.org/10.1364/OE.21.004766

17. O. Yang and B. Choi, "Accelerated rescaling of single Monte Carlo simulation runs with the graphics processing unit (GPU)," Biomed. Opt. Express 4(11), 2667–2672 (2013). http://dx.doi.org/10.1364/BOE.4.002667

18. M. N. Shneider, A. A. Voronin, and A. M. Zheltikov, "Action-potential-encoded second-harmonic generation as an ultrafast local probe for nonintrusive membrane diagnostics," Phys. Rev. E 81(3), 031926 (2010).

19. N. I. Fisher, Statistical Analysis of Circular Data, Cambridge University Press, Cambridge (1995).

20. S. R. Jammalamadaka and A. SenGupta, Topics in Circular Statistics, World Scientific, Singapore (2001).

21. R. Gonzalez and R. Woods, Digital Image Processing, 3rd ed., Prentice Hall, New Jersey (2007).

22. G. Stockman and L. G. Shapiro, Computer Vision, Prentice Hall PTR, New Jersey (2001).

23. NVIDIA, CUDA Programming Guide 5.0 (2014).

Biography

Mohammad Mahfuzul Kabir is a doctoral candidate in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign (UIUC). He has a master's degree in mechanical engineering from UIUC and a bachelor's degree in mechanical engineering from Bangladesh University of Engineering and Technology (BUET) in Dhaka, Bangladesh. His current research focus is on developing quantitative nonlinear harmonic imaging techniques for potential applications in characterizing biological tissues.

A. S. M. Jonayat is a graduate research assistant at Illinois Applied Research Institute. He received his MS in mechanical engineering from University of Illinois at Urbana-Champaign in 2014. He worked in numerical modeling of thermo-fluid phenomena in continuous casting process, spray paint thin-film layer formation, and electro-osmotic flow in nanochannels. His broader research interest includes high-performance computing, numerical methods, and multiscale modeling.

Sanjay Patel is a professor of electrical and computer engineering and Sony Faculty Scholar at the University of Illinois at Urbana-Champaign. He is also CEO and co-founder of Nuvixa, a company that is delivering innovative video communications technologies. He has done architecture, hardware verification, logic design, and performance modeling at Digital Equipment Corporation, Intel Corporation, and HAL Computer Systems, as well as provided consultation for Transmeta, Jet Propulsion Laboratory, HAL, Intel, and AGEIA Technologies.

Kimani C. Toussaint, Jr. is an associate professor in the Department of Mechanical Science and Engineering, and an affiliate in the Departments of Electrical and Computer Engineering, and Bioengineering at the University of Illinois at Urbana-Champaign. He directs an interdisciplinary lab that focuses on developing optical techniques for quantitatively imaging collagen-based tissues, and investigating the properties of plasmonic nanostructures for control of near-field optical forces. He is a senior member in SPIE, OSA, and IEEE.

© 2014 Society of Photo-Optical Instrumentation Engineers (SPIE). 0091-3286/2014/$25.00
Mohammad M. Kabir, A. S. M. Jonayat, Sanjay Patel, and Kimani C. Toussaint Jr. "Graphics processing unit-based quantitative second-harmonic generation imaging," Journal of Biomedical Optics 19(9), 096009 (15 September 2014). https://doi.org/10.1117/1.JBO.19.9.096009
Published: 15 September 2014
Keywords: second-harmonic generation; video; video acceleration; image analysis; image processing; tissues; visualization

