27 February 2019
Learned backprojection for sparse and limited view photoacoustic tomography
Filtered backprojection (FBP) is an efficient and popular class of tomographic image reconstruction methods. In photoacoustic tomography, these algorithms are based on theoretically exact analytic inversion formulas, which results in accurate reconstructions. However, photoacoustic measurement data are often incomplete (limited detection view and sparse sampling), which introduces artefacts in the images reconstructed with FBP. Moreover, properties such as the directivity of the acoustic detectors are not accounted for in standard FBP, which also degrades reconstruction quality. To address these issues, in this paper we propose to improve FBP algorithms using machine learning techniques. In the proposed method, we include additional weight factors in the FBP that are optimized on a set of incomplete data and the corresponding ground truth photoacoustic sources. Numerical tests show that the learned FBP improves reconstruction quality compared to the standard FBP.
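To make the idea concrete, the following is a minimal sketch of a learned weighted backprojection in PyTorch. It assumes a fixed linear backprojection operator B (here a toy stand-in for the analytic inversion formula) and trains one weight per detector and time sample on pairs of incomplete data and ground-truth sources; all names, dimensions, and the training data are hypothetical and do not reproduce the paper's actual photoacoustic model.

```python
import torch
import torch.nn as nn

n_detectors, n_samples, n_pixels = 32, 128, 64 * 64

# Fixed stand-in for the analytic backprojection operator; in practice this
# would implement the exact inversion formula for the detection geometry.
B = torch.randn(n_pixels, n_detectors * n_samples) / (n_detectors * n_samples) ** 0.5

class LearnedBP(nn.Module):
    def __init__(self):
        super().__init__()
        # One trainable weight per detector/time sample; initializing to 1
        # makes the untrained network coincide with the standard FBP.
        self.w = nn.Parameter(torch.ones(n_detectors, n_samples))

    def forward(self, data):  # data: (batch, n_detectors, n_samples)
        weighted = (self.w * data).flatten(1)
        return weighted @ B.T  # (batch, n_pixels)

model = LearnedBP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy training pairs: incomplete measurement data and ground-truth sources.
data = torch.randn(8, n_detectors, n_samples)
truth = torch.randn(8, n_pixels)

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(data), truth)
    loss.backward()
    opt.step()
```

Because the weights start at one, training only departs from the standard FBP where the data (e.g., a limited detection view) make reweighting beneficial.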
Johannes Schwab, Stephan Antholzer, Markus Haltmeier, "Learned backprojection for sparse and limited view photoacoustic tomography," Proc. SPIE 10878, Photons Plus Ultrasound: Imaging and Sensing 2019, 1087837 (27 February 2019); https://doi.org/10.1117/12.2508438