The growing volume of multi-dimensional imaging and the advent of high-resolution scanners are placing a large viewing burden on clinicians. In many situations, summaries of these studies would suffice, particularly for quick viewing, easy transport, and procedure planning. One straightforward way to organize such studies is by the viewpoints depicting the left and right coronary arteries. This is a difficult problem, however, requiring automated methods to (a) extract the coronary arteries, (b) recognize the identity of each artery as the left or right coronary artery, and (c) recognize the viewpoint from which it is imaged in order to examine potential pathologies. In this paper, we present a deep learning solution that addresses this problem by using a segmentation network to detect the coronary arteries and a residual deep learning network to recognize the viewpoint and artery identity simultaneously. Results show that the deep learning method produces reliable classification for many viewpoints.
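To make the joint recognition step concrete, the following is a minimal sketch of one way a residual network could predict viewpoint and artery identity simultaneously: a shared residual trunk feeding two classification heads trained with a summed cross-entropy loss. The channel sizes, the number of viewpoint classes, and the two-head formulation are assumptions for illustration; the paper's network may instead use a single joint label space or a different backbone.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic 2D residual block: two 3x3 convolutions with an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut

class ViewArteryNet(nn.Module):
    """Shared residual trunk with two heads: viewpoint and artery identity (left vs. right)."""
    def __init__(self, num_viewpoints=7, num_arteries=2):  # class counts are illustrative
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 32, 7, stride=2, padding=3),
                                  nn.BatchNorm2d(32), nn.ReLU(inplace=True))
        self.trunk = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.view_head = nn.Linear(32, num_viewpoints)
        self.artery_head = nn.Linear(32, num_arteries)

    def forward(self, x):
        f = self.pool(self.trunk(self.stem(x))).flatten(1)
        return self.view_head(f), self.artery_head(f)

def joint_loss(view_logits, artery_logits, view_labels, artery_labels):
    """Joint training objective: sum of cross-entropy losses over the two tasks."""
    ce = nn.CrossEntropyLoss()
    return ce(view_logits, view_labels) + ce(artery_logits, artery_labels)
```

Sharing the trunk lets both tasks benefit from common artery-shape features, while the separate heads keep the viewpoint and identity predictions independent at the output.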
Segmenting anatomical structures in the chest is a crucial step in many automatic disease detection applications. Multi-atlas methods have been developed for this task, but the deformable registration step they require makes them computationally expensive and creates a bottleneck in processing time. In contrast, convolutional neural networks (CNNs) with 2D or 3D kernels, although slow to train, are very fast at deployment and have been employed to solve segmentation tasks in medical imaging. An improvement in the performance of neural networks for medical image segmentation was recently reported when the Dice similarity coefficient (DSC) was used to optimize the weights of a fully convolutional architecture called V-Net. However, in that work only the DSC of a single foreground object is optimized, so DSC-based segmentation CNNs have been limited to binary segmentation. In this paper, we extend the binary V-Net architecture to a multi-label segmentation network and use it to segment multiple anatomical structures in cardiac CTA. The method optimizes a multi-label V-Net with the sum of the DSC over all anatomies, followed by a post-processing step that refines the segmented surface. Our method takes on average less than 3 seconds to segment a full CTA volume, whereas the fastest multi-atlas methods published so far take around 10 minutes. Our method achieves an average DSC of 76% over 16 segmented anatomies using four-fold cross-validation, which is close to the state of the art.
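The loss described here, the sum of the soft DSC over all anatomies, can be sketched as follows. This is a minimal illustration assuming a PyTorch implementation with softmax probabilities and one-hot targets; tensor layout, the epsilon smoothing term, and the exclusion of the background channel are assumptions, not details taken from the paper.

```python
import torch

def multilabel_dice_loss(probs, target_onehot, eps=1e-6):
    """Negative sum of the soft Dice coefficient over all foreground labels.

    probs:          (N, C, D, H, W) softmax probabilities from the network
    target_onehot:  (N, C, D, H, W) one-hot encoded ground-truth labels
    """
    dims = (0, 2, 3, 4)  # reduce over batch and spatial axes, keep the class axis
    intersection = (probs * target_onehot).sum(dims)
    cardinality = probs.sum(dims) + target_onehot.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    # Skip the background channel (index 0); minimizing the negative sum
    # maximizes the summed DSC over all anatomies.
    return -dice_per_class[1:].sum()
```

Summing the per-class Dice terms rather than averaging them keeps every anatomy's contribution to the gradient, including small structures that would otherwise be dominated by large ones under a voxel-wise loss.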