KEYWORDS: Convolutional neural networks, Image segmentation, Arteries, Magnetic resonance imaging, 3D image processing, Visualization, Data centers, Image processing, Data modeling
Carotid artery vessel wall thickness measurement is an essential step in the monitoring of patients with atherosclerosis. This requires accurate segmentation of the vessel wall, i.e., the region between an artery’s lumen and outer wall, in black-blood magnetic resonance (MR) images. Commonly used convolutional neural networks (CNNs) for semantic segmentation are suboptimal for this task as their use does not guarantee a contiguous ring-shaped segmentation. Instead, in this work, we cast vessel wall segmentation as a multi-task regression problem in a polar coordinate system. For each carotid artery in each axial image slice, we aim to simultaneously find two non-intersecting nested contours that together delineate the vessel wall. CNNs applied to this formulation incorporate an inductive bias that guarantees ring-shaped vessel walls. Moreover, we identify a problem-specific training data augmentation technique that substantially affects segmentation performance. We apply our method to segmentation of the internal and external carotid artery wall, and achieve top-ranking quantitative results in a public challenge, i.e., a median Dice similarity coefficient of 0.813 for the vessel wall and median Hausdorff distances of 0.552 mm and 0.776 mm for lumen and outer wall, respectively. Moreover, we show how the method improves over a conventional semantic segmentation approach. These results show that it is feasible to automatically obtain anatomically plausible segmentations of the carotid vessel wall with high accuracy.
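To illustrate the polar formulation described in this abstract, the following is a minimal sketch and not the authors' code: an axial slice is resampled around an artery centre onto an angle-by-radius grid, and a small CNN regresses, for each angle, a lumen radius and a non-negative wall thickness. The helper names, grid sizes, and network shape are illustrative assumptions.

# Minimal sketch (illustrative assumptions, not the published implementation).
import numpy as np
import torch
import torch.nn as nn

def to_polar(image, center, n_angles=64, n_radii=96, max_radius=24.0):
    """Resample a 2D axial slice around an artery centre onto a polar grid."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(0.0, max_radius, n_radii)
    rr, aa = np.meshgrid(radii, angles)                 # shape: (n_angles, n_radii)
    xs = center[0] + rr * np.cos(aa)
    ys = center[1] + rr * np.sin(aa)
    xs = np.clip(np.round(xs).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(ys).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]                                # polar patch, (n_angles, n_radii)

class PolarContourRegressor(nn.Module):
    """Per-angle regression of the lumen radius and a non-negative wall
    thickness; the outer radius is lumen + thickness, so the two contours are
    nested by construction and the vessel wall is always ring-shaped."""
    def __init__(self, n_angles=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((n_angles, 1)),        # pool out the radial axis
        )
        self.head = nn.Conv2d(32, 2, kernel_size=1)     # per-angle: [lumen, thickness]

    def forward(self, polar_patch):                     # (B, 1, n_angles, n_radii)
        f = self.features(polar_patch)                  # (B, 32, n_angles, 1)
        out = self.head(f).squeeze(-1)                  # (B, 2, n_angles)
        lumen_r = torch.nn.functional.softplus(out[:, 0])
        thickness = torch.nn.functional.softplus(out[:, 1])
        return lumen_r, lumen_r + thickness             # outer radius >= lumen radius

The key design choice is that the outer contour is parameterised as the lumen radius plus a non-negative thickness per angle, so the two regressed contours can never intersect.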
Classical non-learned algorithms for photoacoustic tomography (PAT) reconstruction are mathematically proven to converge, but they can be very slow and perform poorly when their model and data assumptions do not hold. Recently, learned neural networks have been shown to surpass the reconstruction quality of non-learned algorithms, but because their analysis is challenging, convergence and stability are not guaranteed. To bridge this gap, we investigate the stability of algorithms that combine the structure of model-based algorithms with the efficiency of data-driven neural networks.
In the last decade, primal-dual algorithms have become popular due to their ability to employ non-smooth regularisation, which is used to overcome the limited-sampling problem in photoacoustic tomography. The algorithm performs updates in both the image domain (primal) and the data domain (dual). These are connected by the photoacoustic operator, whose modelling is based on the laws of physics and the system settings.
In our approach, we replace these updates with shallow neural networks while maintaining the primal-dual structure and the information from the photoacoustic operator. This greatly improves reconstruction quality, especially in cases of strong noise and limited sampling. An additional benefit is that the regularisation does not have to be hand-crafted, but is instead learned in a data-driven manner.
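A minimal sketch of such a learned primal-dual scheme follows, under the assumption that the photoacoustic forward operator and its adjoint are available as callables; the shallow-network shapes and iteration count are illustrative and not the paper's implementation.

# Minimal sketch (assumptions stated above, not the paper's implementation).
import torch
import torch.nn as nn

def shallow_block(in_ch, out_ch):
    """Small CNN used for one learned primal or dual update."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.PReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )

class LearnedPrimalDual(nn.Module):
    def __init__(self, forward_op, adjoint_op, n_iter=10):
        super().__init__()
        self.A, self.At = forward_op, adjoint_op        # photoacoustic operator and its adjoint
        self.primal_nets = nn.ModuleList(shallow_block(2, 1) for _ in range(n_iter))
        self.dual_nets = nn.ModuleList(shallow_block(3, 1) for _ in range(n_iter))

    def forward(self, y):                               # y: measured sensor data (B, 1, sensors, time)
        x = self.At(y)                                  # initial image estimate (B, 1, H, W)
        h = torch.zeros_like(y)                         # dual variable in the data domain
        for primal_net, dual_net in zip(self.primal_nets, self.dual_nets):
            # Dual update: combine current dual, simulated data A(x), and measurements y.
            h = h + dual_net(torch.cat([h, self.A(x), y], dim=1))
            # Primal update: combine current image with the back-projected dual A^T(h).
            x = x + primal_net(torch.cat([x, self.At(h)], dim=1))
        return x

Keeping the operator explicit inside the loop is what preserves the model-based structure; only the update rules are learned, and training end-to-end against reference reconstructions is what replaces the hand-crafted regularisation with a data-driven one.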
We demonstrate its robustness to uncertainty and to changes in PAT system settings in both simulations and experiments. These include the number, placement, and calibration of detectors, as well as changes in the type of tissue being imaged. The method is stable, computationally efficient, and applicable to generic photoacoustic systems across a broad range of applications.