Proc. SPIE 11642, Photons Plus Ultrasound: Imaging and Sensing 2021
KEYWORDS: Databases, Image processing, Data acquisition, Software development, Photoacoustic imaging, Reconstruction algorithms, Photoacoustic spectroscopy, Data conversion, Data integration, Standards development
The current lack of uniformity in photoacoustic imaging (PAI) data formats hampers inter-device data exchange and comparison. Based on the standardised metadata format proposed by the International Photoacoustic Standardisation Consortium (IPASC), IPASC’s Data Acquisition and Management theme has developed prototype Python software that converts photoacoustic time series data from proprietary formats into a standardised HDF5 format. The tool provides a centralised application programming interface for the implementation of vendor-specific conversion modules and is available open source under a commercially friendly licence (BSD-3). By providing this tool, IPASC hopes to facilitate PAI data management and thereby support future developments of the technology.
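A centralised conversion interface of this kind is often built as a registry of vendor adapters. The sketch below is a minimal illustration under that assumption; the names (`register_converter`, `convert`, `example_vendor`) are hypothetical and do not reflect the tool's actual API.

```python
import numpy as np

# Hypothetical registry mapping a vendor identifier to a converter
# function that returns (time_series, metadata) in a common convention.
_CONVERTERS = {}

def register_converter(vendor):
    """Decorator registering a vendor-specific conversion module."""
    def decorator(func):
        _CONVERTERS[vendor] = func
        return func
    return decorator

@register_converter("example_vendor")
def read_example_vendor(path):
    # Stand-in for parsing a proprietary file at `path`.
    time_series = np.zeros((128, 1024), dtype=np.float32)  # detectors x samples
    metadata = {"acquisition_wavelengths_nm": [700.0, 850.0]}
    return time_series, metadata

def convert(vendor, path):
    """Central entry point: dispatch to the registered vendor module."""
    return _CONVERTERS[vendor](path)
```

New device formats are then supported by adding one registered function, without touching the central interface.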
Photoacoustic imaging (PAI) has the potential to revolutionize healthcare due to the valuable information on tissue physiology that is contained in multispectral signals. Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information. In this work, we present a deep learning-based approach to semantic segmentation of multispectral PA images to facilitate interpretability of recorded images. Based on a validation study with experimentally acquired data of healthy human volunteers, we show that a combination of tissue segmentation, sO2 estimation, and uncertainty quantification can create powerful analyses and visualizations of multispectral photoacoustic images.
Previous work on 3D freehand photoacoustic imaging has focused on the development of specialized hardware or the use of tracking devices. In this work, we present a novel approach towards 3D volume compounding using an optical pattern attached to the skin. By design, the pattern allows context-aware calculation of the PA image pose in a pattern reference frame, enabling 3D reconstruction while also making the method robust against patient motion. Due to its easy handling optical pattern-enabled context-aware PA imaging could be a promising approach for 3D PA in a clinical environment.
In this work, we present the open source “Simulation and Image Processing for Photoacoustic Imaging (SIMPA)” toolkit that facilitates simulation of multispectral photoacoustic images by streamlining the use of state-of-the-art frameworks that numerically approximate the respective forward models. SIMPA provides modules for all the relevant steps for photoacoustic forward simulation: tissue modelling, optical forward modelling, acoustic modelling, noise modelling, as well as image reconstruction. We demonstrate the capabilities of SIMPA by performing image simulation using MCX and k-Wave for the optical and acoustic forward modelling, as well as an experimentally determined noise model and a custom tissue model.
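The modular structure described above can be pictured as a chain of interchangeable stages operating on shared simulation state. The stage names, dictionary keys, and placeholder physics below are illustrative only and do not reflect SIMPA's actual API.

```python
import numpy as np

# Hypothetical pipeline stages mirroring the chain in the abstract:
# tissue -> optical -> acoustic -> noise -> reconstruction.
def tissue_model(data):
    data["mu_a"] = np.full((64, 64), 0.1)  # absorption map placeholder [1/mm]
    return data

def optical_forward(data):
    data["p0"] = data["mu_a"] * 1.0  # initial pressure ~ absorption x fluence
    return data

def acoustic_forward(data):
    data["time_series"] = data["p0"].sum(axis=0)  # crude detector stand-in
    return data

def add_noise(data):
    rng = np.random.default_rng(0)
    data["time_series"] = data["time_series"] + rng.normal(0.0, 1e-3, data["time_series"].shape)
    return data

def reconstruct(data):
    data["reconstruction"] = data["time_series"]  # identity placeholder
    return data

def run_pipeline(stages):
    """Apply each module in sequence to a shared data dictionary."""
    data = {}
    for stage in stages:
        data = stage(data)
    return data
```

The point of such a design is that any single stage (e.g. the optical model) can be swapped for a state-of-the-art solver such as MCX or k-Wave without changing the rest of the chain.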
Photoacoustic imaging is an emerging modality that enables the recovery of functional tissue parameters such as blood oxygenation. However, quantifying these parameters remains challenging, mainly because of the non-linear influence of the light fluence, which renders the underlying inverse problem ill-posed. We tackle this gap with invertible neural networks and present a novel approach to quantifying the uncertainties involved in reconstructing physiological parameters such as oxygenation. According to in silico experiments, blood oxygenation prediction with invertible neural networks, combined with an interactive visualization, could serve as a powerful method to investigate the effect of spectral coloring on blood oxygenation prediction tasks.
Photoacoustic imaging (PAI) is an emerging medical imaging modality that provides high contrast and spatial resolution. A core unsolved problem to effectively support interventional healthcare is the accurate quantification of the optical tissue properties, such as the absorption and scattering coefficients. The contribution of this work is two-fold. We demonstrate the strong dependence of deep learning-based approaches on the chosen training data and we present a novel approach to generating simulated training data. According to initial in silico results, our method could serve as an important first step related to generating adequate training data for PAI applications.
The International Photoacoustic Standardisation Consortium (IPASC) has identified and agreed on a list of essential metadata parameters to describe raw time series photoacoustic (PA) data and used it to develop an initial prototype of a standardised PA data format. To accelerate the clinical translation of PA imaging, IPASC aims to apply this format in an open database of publicly available reference datasets for testing data reconstruction and spectral processing algorithms, thereby facilitating and advancing PA research and translation.
One of the major applications of multispectral photoacoustic imaging is the recovery of functional tissue properties with the goal of distinguishing different tissue classes. In this work, we tackle this challenge by employing a deep learning-based algorithm called learned spectral decoloring for quantitative photoacoustic imaging. With the combination of tissue classification, sO2 estimation, and uncertainty quantification, powerful analyses and visualizations of multispectral photoacoustic images can be created. Consequently, these could be valuable tools for the clinical translation of photoacoustic imaging.
The International Photoacoustic Standardisation Consortium (IPASC) emerged from SPIE 2018, established to drive consensus on photoacoustic system testing. As photoacoustic imaging (PAI) matures from research laboratories into clinical trials, it is essential to establish best-practice guidelines for photoacoustic image acquisition, analysis and reporting, and a standardised approach for technical system validation. The primary goal of the IPASC is to create widely accepted phantoms for testing preclinical and clinical PAI systems. To achieve this, the IPASC has formed five working groups (WGs). The first and second WGs have defined optical and acoustic properties, suitable materials, and configurations of photoacoustic image quality phantoms. These phantoms consist of a bulk material embedded with targets to enable quantitative assessment of image quality characteristics including resolution and sensitivity across depth. The third WG has recorded details such as illumination and detection configurations of PAI instruments available within the consortium, leading to proposals for system-specific phantom geometries. This PAI system inventory was also used by WG4 in identifying approaches to data collection and sharing. Finally, WG5 investigated means for phantom fabrication, material characterisation and PAI of phantoms. Following a pilot multi-centre phantom imaging study within the consortium, the IPASC settled on an internationally agreed set of standardised recommendations and imaging procedures. This leads to advances in: (1) quantitative comparison of PAI data acquired with different data acquisition and analysis methods; (2) provision of a publicly available reference data set for testing new algorithms; and (3) technical validation of new and existing PAI devices across multiple centres.
As a growing number of research groups exploit photoacoustic imaging (PAI), there is an increasing need to establish common standards for photoacoustic data and images in order to facilitate open access, use, and exchange of data between different groups. As part of a working group within the International Photoacoustic Standardisation Consortium (IPASC), we established a minimal list of metadata parameters necessary to ensure inter-group interpretability of image datasets. To this end, we propose that photoacoustic images should at least contain metadata on the acquisition wavelengths, the pulse-to-pulse laser energy, and the transducer design and illumination geometry. We also make recommendations for a standardised data format for both raw time series data and processed photoacoustic image data. Specifically, we recommend using HDF5 as the standard format for raw time series data, because it is a widely used, open, and scalable format that enables fast access times. To support long-term clinical translation of photoacoustics, we propose to extend DICOM, the prevailing standardised medical image format, to officially support PA images. An international data format standard for photoacoustics will be an important first step towards accelerated system development by facilitating inter-group data exchange and inter-device performance comparison. This effort will thus form a foundation for integrating basic research with clinical translation of PAI.
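The minimal metadata list described above can be illustrated with a small HDF5 container. The group layout and field names below are hypothetical stand-ins chosen for this sketch, not the IPASC-agreed parameter names.

```python
import h5py
import numpy as np

# Raw time series container: detectors x time samples (placeholder data).
time_series = np.zeros((128, 2048), dtype=np.float32)

with h5py.File("pa_raw_example.hdf5", "w") as f:
    f.create_dataset("time_series_data", data=time_series)
    meta = f.create_group("meta_data")
    # Hypothetical field names for the parameter categories in the abstract:
    # wavelengths, pulse-to-pulse energy, transducer design, illumination.
    meta.attrs["acquisition_wavelengths_nm"] = [700.0, 750.0, 800.0]
    meta.attrs["pulse_energy_mJ"] = [18.2, 17.9, 18.1]
    meta.attrs["transducer_pitch_mm"] = 0.3
    meta.attrs["illumination_geometry"] = "bilateral fibre bundles"
```

Because HDF5 stores the attributes next to the dataset in one self-describing file, another group can interpret the time series without vendor-specific documentation.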
Multispectral photoacoustic (PA) imaging is a prime modality to monitor hemodynamics and changes in blood oxygenation (sO2). Although sO2 changes can be an indicator of brain activity both in normal and in pathological conditions, PA imaging of the brain has mainly focused on small animal models with lissencephalic brains. Therefore, the purpose of this work was to investigate the usefulness of multispectral PA imaging in assessing sO2 in a gyrencephalic brain. To this end, we continuously imaged a porcine brain as part of an open neurosurgical intervention with a handheld PA and ultrasonic (US) imaging system in vivo. Throughout the experiment, we varied respiratory oxygen and continuously measured arterial blood gases. The arterial blood oxygenation (SaO2) values derived by the blood gas analyzer were used as a reference to compare the performance of linear spectral unmixing algorithms in this scenario. According to our experiment, PA imaging can be used to monitor sO2 in the porcine cerebral cortex. While linear spectral unmixing algorithms are well-suited for detecting changes in oxygenation, there are limits with respect to the accurate quantification of sO2, especially at depth. Overall, we conclude that multispectral PA imaging can potentially be a valuable tool for change detection of sO2 in the cerebral cortex of a gyrencephalic brain. The spectral unmixing algorithms investigated in this work will be made publicly available as part of the open-source software platform Medical Imaging Interaction Toolkit (MITK).
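Linear spectral unmixing of the kind compared in this study can be sketched per pixel as a least-squares fit of the multispectral PA signal to reference chromophore spectra. The extinction matrix below contains placeholder values for illustration, not tabulated hemoglobin data.

```python
import numpy as np

# Illustrative (placeholder) absorption coefficients for [HbO2, Hb]
# at three wavelengths; real unmixing would use tabulated spectra.
E = np.array([
    [2.8, 4.1],   # 760 nm
    [4.4, 3.6],   # 800 nm
    [5.7, 3.0],   # 850 nm
])

def unmix_so2(spectrum):
    """Least-squares unmixing of a per-pixel PA spectrum into
    chromophore concentrations; returns the estimated sO2."""
    c, *_ = np.linalg.lstsq(E, spectrum, rcond=None)
    c = np.clip(c, 0.0, None)  # concentrations cannot be negative
    return c[0] / (c[0] + c[1])
```

Note that this linear model ignores the wavelength-dependent fluence, which is exactly why quantification degrades at depth as reported above.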
Real-time monitoring of functional tissue parameters, such as local blood oxygenation, based on optical imaging could provide groundbreaking advances in the diagnosis and interventional therapy of various diseases. Although photoacoustic (PA) imaging is a modality with great potential to measure optical absorption deep inside tissue, quantification of the measurements remains a major challenge. We introduce the first machine learning-based approach to quantitative PA imaging (qPAI), which relies on learning the fluence in a voxel to deduce the corresponding optical absorption. The method encodes relevant information of the measured signal and the characteristics of the imaging system in voxel-based feature vectors, which allow the generation of thousands of training samples from a single simulated PA image. Comprehensive in silico experiments suggest that context encoding-qPAI enables highly accurate and robust quantification of the local fluence and thereby the optical absorption from PA images.
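The quantification step that this fluence estimate enables follows from the photoacoustic signal model p0 = Γ · μa · φ: once the fluence φ in a voxel is known, the absorption μa is recovered by simple inversion. A minimal sketch, with an illustrative (not measured) Grüneisen parameter:

```python
GRUENEISEN = 0.2  # illustrative dimensionless Grueneisen parameter

def absorption_from_signal(p0, fluence):
    """Invert p0 = Gamma * mu_a * phi for the optical absorption mu_a,
    given the measured initial pressure and an estimated fluence."""
    return p0 / (GRUENEISEN * fluence)
```

The learned model in the abstract supplies the per-voxel fluence estimate; this inversion is the subsequent, deterministic step.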
Proc. SPIE 10494, Photons Plus Ultrasound: Imaging and Sensing 2018
KEYWORDS: Signal to noise ratio, Ultrasonography, Image resolution, Chromophores, Transducers, Reconstruction algorithms, Photoacoustic spectroscopy, In vivo imaging, Pulsed laser operation, In vitro testing
Reconstruction of photoacoustic (PA) images acquired with clinical ultrasound transducers is traditionally performed using the delay and sum (DAS) beamforming algorithm. Recently, the delay multiply and sum (DMAS) beamforming algorithm has been shown to provide increased contrast, signal-to-noise ratio (SNR), and resolution in PA imaging. The main reason for the continued use of DAS beamforming in photoacoustics is that it reconstructs the PA signal linearly with respect to the initial pressure generated by the absorbed laser pulse. This linearity is crucial for the identification of different chromophores in multispectral PA applications, and DMAS has not yet been demonstrated to provide it. Furthermore, due to its increased computational complexity, DMAS has not yet been shown to work in real time.
We present an open-source real-time variant of the DMAS algorithm which ensures linearity of the reconstruction while still providing increased SNR and therefore enables use of DMAS for multispectral PA applications. This is demonstrated in vitro and in vivo. The DMAS and reference DAS algorithms were integrated in the open-source Medical Imaging Interaction Toolkit (MITK) and are available to the community as real-time capable GPU implementations.
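Algorithmically, the difference between the two beamformers can be sketched for a single pixel: DAS sums the delayed channel samples, while DMAS sums signed square roots of all pairwise channel products. The factored form below also shows why a real-time, O(N)-per-pixel implementation is possible; this is an illustrative sketch, not the MITK implementation.

```python
import numpy as np

def das(delayed):
    """Delay-and-sum: plain sum over pre-delayed channel samples."""
    return np.sum(delayed)

def dmas(delayed):
    """Delay-multiply-and-sum for one pixel: sum over all distinct
    channel pairs (i < j) of sign(s_i * s_j) * sqrt(|s_i * s_j|)."""
    # Signed square root per channel; pairwise products of these
    # reproduce the signed square roots of the channel products.
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))
    total = np.sum(s)
    # sum_{i<j} s_i * s_j = ((sum_i s_i)^2 - sum_i s_i^2) / 2,
    # i.e. linear in the channel count instead of quadratic.
    return 0.5 * (total ** 2 - np.sum(s ** 2))
```

For three pre-delayed samples [1, 4, 9] the signed square roots are [1, 2, 3], so DMAS yields 1·2 + 1·3 + 2·3 = 11 while DAS yields 14.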
Proc. SPIE 10494, Photons Plus Ultrasound: Imaging and Sensing 2018
KEYWORDS: Imaging systems, Blood, Ultrasonography, Scanners, Control systems, Ultrasonics, Medical imaging, Software development, Photoacoustic spectroscopy, In vivo imaging
Photoacoustic (PA) systems based on clinical linear ultrasound arrays have become increasingly popular in translational PA research. Such systems can be integrated into a clinical workflow more easily because they provide simultaneous access to ultrasonic imaging and are familiar to clinicians. In contrast to more complex setups, handheld linear probes can be applied to a large variety of clinical use cases. However, most translational work with such scanners is based on proprietary development and as such is not accessible to the community.
In this contribution, we present a custom-built, hybrid, multispectral, real-time photoacoustic and ultrasonic imaging system with a linear array probe that is controlled by software developed within the highly customisable and extendable open-source software platform Medical Imaging Interaction Toolkit (MITK). Our software offers direct control of both the laser and the ultrasonic system and may thus serve as a starting point for various translational research and development. To demonstrate the extensibility of our system, we developed an open-source software plugin for real-time in vivo blood oxygenation measurements. Blood oxygenation is estimated by spectral unmixing of hemoglobin chromophores. The performance is demonstrated on in vivo measurements of the common carotid artery as well as peripheral extremity vessels of healthy volunteers.
Proc. SPIE 10494, Photons Plus Ultrasound: Imaging and Sensing 2018
KEYWORDS: Data modeling, Tissues, Sensors, Image processing, Error analysis, Computer simulations, Medical imaging, Monte Carlo methods, Reconstruction algorithms, Photoacoustic spectroscopy
Quantification of tissue properties with photoacoustic (PA) imaging typically requires a highly accurate representation of the initial pressure distribution in tissue. Almost all PA scanners reconstruct the PA image only from a partial scan of the emitted sound waves. Especially handheld devices, which have become increasingly popular due to their versatility and ease of use, only provide limited view data because of their geometry. Owing to such limitations in hardware as well as to the acoustic attenuation in tissue, state-of-the-art reconstruction methods deliver only approximations of the initial pressure distribution. To overcome the limited view problem, we present a machine learning-based approach to the reconstruction of initial pressure from limited view PA data. Our method involves a fully convolutional deep neural network based on a U-Net-like architecture with pixel-wise regression loss on the acquired PA images. It is trained and validated on in silico data generated with Monte Carlo simulations. In an initial study we found an increase in accuracy over the state-of-the-art when reconstructing simulated linear-array scans of blood vessels.
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
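The evaluation strategy described above, averaging a quantitative estimate only over high-confidence voxels instead of the whole region of interest, can be sketched as follows (the function name and threshold are illustrative):

```python
import numpy as np

def confident_mean(estimates, confidence, threshold=0.9):
    """Average a per-voxel quantitative estimate only over voxels
    whose local confidence exceeds a threshold."""
    mask = confidence >= threshold
    if not np.any(mask):
        return float("nan")  # no voxel qualifies at this threshold
    return float(np.mean(estimates[mask]))
```

A voxel with a grossly wrong estimate but low confidence (e.g. one lying where the fluence-model assumptions break down) is thereby excluded from the ROI statistic rather than biasing it.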