Transcranial photoacoustic computed tomography (PACT) is an emerging human neuroimaging modality that holds significant potential for clinical and scientific applications. However, accurate image reconstruction remains challenging due to skull-induced aberration of the measurement data. Model-based image reconstruction methods based on the elastic wave equation have been proposed. To be effective, such methods require that the elastic and acoustic properties of the skull be known accurately, which can be difficult to achieve in practice. Additionally, such methods are computationally burdensome. To address these challenges, a novel learning-based image reconstruction method was proposed. The method employs a deep neural network to map a preliminary image, computed by use of a computationally efficient but approximate reconstruction method, to a high-quality, de-aberrated estimate of the induced initial pressure distribution within the cortical region of the brain. The method was systematically evaluated via computer simulations that involved realistic, full-scale, three-dimensional stochastic head phantoms possessing physiologically relevant optical and acoustic properties and stochastically synthesized vasculature. The results demonstrated that the learning-based method achieved performance comparable to a state-of-the-art model-based method when the assumed skull parameters were accurate, and significantly outperformed the model-based method when uncertainty in the skull parameters was present. Additionally, the method reduced image reconstruction times from days to tens of minutes. This study represents an important contribution to the development of transcranial PACT and will motivate the exploration of learning-based methods to help advance this important technology.
Many imaging systems can be described by a linear operator that maps object properties to a collection of discrete measurements. The null space of such an imaging operator represents the set of object components that are effectively invisible to the imaging system. The ability to extract the object components that lie within the null space of an imaging operator allows one to analyze and optimize not only the measurement capabilities of the system itself, but also its associated reconstruction methods. An orthogonal null space projection operator (ONPO), which maps any object to its corresponding null space component, offers this ability. However, existing methods for producing an ONPO are limited by high memory requirements. In this work, we develop a novel learning-based method for calculating an ONPO. Numerical results show that our method can produce an accurate ONPO using less memory than existing methods, enabling the characterization of the null spaces of larger imaging operators than previously possible.
Image reconstruction algorithms seek to recover a sought-after object from a collection of measurements. However, measurements sufficient to uniquely reconstruct an object are seldom available. Analysis of the null space components of the imaging system can guide both the physical design of the imaging system and the algorithmic design of reconstruction methods so that the true object can be more closely reconstructed. Characterizing the null space of an imaging operator is a computationally demanding task. While computationally efficient methods have been proposed to iteratively estimate the null space components of a single image or a small number of images, full characterization of the null space remains intractable for large images using existing methods. This work proposes a novel learning-based framework for constructing a null space projection operator of a linear imaging operator utilizing an artificial neural network autoencoder. To illustrate the approach, a stylized 2D accelerated MRI reconstruction problem (for which an analytical representation of the null space is known) was considered. The proposed method was compared to state-of-the-art randomized linear algebra techniques in terms of accuracy, computational cost, and memory requirements. Numerical results show that the proposed framework achieves comparable or better accuracy than randomized singular value decomposition. It also has lower computational cost and memory requirements in many practical scenarios, such as when the dimension of the null space is small compared to the dimension of the object.
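As a point of reference for the operators discussed above: for a small dense imaging operator, an orthogonal null space projection operator can be formed directly from the singular value decomposition, and its defining properties (idempotence, symmetry, and annihilation by the operator) can be checked numerically. The sketch below is illustrative only, assuming a generic matrix A; the `null_space_projector` helper is hypothetical and is not the learned method or the randomized technique described in the abstracts.

```python
import numpy as np

def null_space_projector(A, tol=1e-10):
    """Orthogonal projector onto the null space of A via the full SVD.

    P = I - V_r V_r^T, where the columns of V_r are the right singular
    vectors of A with singular values above tol (a row-space basis).
    """
    _, s, Vt = np.linalg.svd(A, full_matrices=True)
    rank = int(np.sum(s > tol))
    V_row = Vt[:rank].T               # orthonormal basis for the row space
    n = A.shape[1]
    return np.eye(n) - V_row @ V_row.T

# Example: a 3x5 "imaging operator" has a 2-dimensional null space.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
P = null_space_projector(A)

x = rng.standard_normal(5)
x_null = P @ x                        # the "invisible" component of x
assert np.allclose(A @ x_null, 0)     # the operator annihilates it
assert np.allclose(P @ P, P)          # idempotent: P is a projector
assert np.allclose(P, P.T)            # symmetric: the projection is orthogonal
```

This direct construction requires storing an n-by-n matrix, which illustrates the memory bottleneck that motivates both the randomized and the learning-based approaches for large-scale operators.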