Photoacoustic imaging (PAI) has emerged as a promising technique for a variety of image-guided procedures. While convolutional neural networks (CNNs) trained on simulated radiofrequency (RF) data have been employed for point-source reconstruction, their performance on real data remains limited. This paper addresses this limitation by introducing a deep learning-based method that uses a limited amount of experimental laser-diode-based data to reconstruct multiple point sources. The proposed approach employs a dual generative adversarial network (Dual-GAN) trained on experimental RF data from combinations of point-source images. The Dual-GAN outperforms the conventional delay-and-sum (DAS) method, providing higher image contrast and a smaller full width at half maximum (FWHM). Notably, the axial and lateral localization errors of the Dual-GAN predictions, 0.028 ± 0.018 mm and 0.087 ± 0.096 mm respectively, improve upon those reported in previous studies. The model also generalizes well, successfully reconstructing multiple point sources imaged with a different Nd:YAG laser system. This method therefore offers improved accuracy and versatility for PAI applications involving multiple point sources.
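For context, the sketch below illustrates the conventional delay-and-sum (DAS) baseline against which the Dual-GAN is compared: each image pixel is formed by summing channel samples delayed according to the one-way acoustic time of flight from the pixel to each transducer element. The array geometry, sampling rate, and speed of sound are illustrative assumptions, not parameters taken from this paper.

```python
import numpy as np

def das_reconstruct(rf, element_x, fs, c, grid_x, grid_z):
    """Minimal delay-and-sum reconstruction of photoacoustic RF channel data.

    rf        : (n_elements, n_samples) RF channel data
    element_x : (n_elements,) lateral positions of transducer elements [m]
    fs        : sampling frequency [Hz]
    c         : assumed speed of sound [m/s]
    grid_x/z  : 1-D arrays defining the lateral/axial image grid [m]
    """
    n_elements, n_samples = rf.shape
    image = np.zeros((grid_z.size, grid_x.size))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # One-way acoustic time of flight from the pixel to each element
            dist = np.sqrt((element_x - x) ** 2 + z ** 2)
            idx = np.round(dist / c * fs).astype(int)
            valid = idx < n_samples
            # Sum the delayed channel samples (no apodization, for simplicity)
            image[iz, ix] = rf[np.arange(n_elements)[valid], idx[valid]].sum()
    return image

# Illustrative usage with assumed parameters (128-element linear array, 40 MHz sampling)
if __name__ == "__main__":
    fs, c = 40e6, 1540.0
    element_x = np.linspace(-19.2e-3, 19.2e-3, 128)
    rf = np.random.randn(128, 2048)            # placeholder channel data
    grid_x = np.linspace(-15e-3, 15e-3, 256)
    grid_z = np.linspace(5e-3, 35e-3, 256)
    img = das_reconstruct(rf, element_x, fs, c, grid_x, grid_z)
```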