In radiotherapy treatment planning, manual annotation of organs-at-risk and target volumes is a difficult and time-consuming task, prone to intra- and inter-observer variability. Deep learning networks (DLNs) are gaining worldwide attention for automating such annotation tasks because of their ability to capture data hierarchies. However, DLNs require a large number of data samples for good performance, whereas annotated medical data are scarce. To remedy this, data augmentation is used to enlarge the training data for DLNs, which enables robust learning by incorporating spatial/translational invariance into the training phase. Importantly, the performance of DLNs is highly dependent on ground truth (GT) quality: if the manual annotation is not sufficiently accurate, the network cannot learn better than the annotated examples. This highlights the need to compensate for possibly insufficient GT quality through augmentation, i.e., by providing more GTs per image, in order to improve DLN performance. In this work, small random alterations were applied to the GT, and each altered GT was treated as an additional annotation. This contour augmentation was used to train a dilated U-Net with multiple GTs per image, which was tested on a pelvic CT dataset acquired from 67 patients to segment the bladder and rectum in a multi-class segmentation setting. With contour augmentation (coupled with data augmentation), the network learnt better than with data augmentation alone, as it was able to correct slightly offset contours in the GT. The segmentation results were quantified using spatial overlap, distance-based and probabilistic measures. The Dice scores for bladder and rectum are 0.88 ± 0.19 and 0.89 ± 0.04, and the average symmetric surface distances are 0.22 ± 0.09 mm and 0.09 ± 0.05 mm, respectively.
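As an illustration of the contour augmentation described above, here is a minimal Python sketch: small, smooth random displacements are applied to a ground-truth mask, and each deformed mask is treated as an extra annotation of the same image. The function name, deformation model and parameter values are illustrative assumptions, not taken from the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def augment_contour(gt_mask, alpha=2.0, sigma=4.0, rng=None):
        # Apply a small random elastic deformation to a 2D GT mask.
        # alpha scales the displacement magnitude and sigma its smoothness;
        # both values here are illustrative, not from the paper.
        rng = np.random.default_rng() if rng is None else rng
        shape = gt_mask.shape
        # Smooth random displacement fields, one per spatial axis.
        dx = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
        dy = gaussian_filter(rng.uniform(-1, 1, shape), sigma) * alpha
        y, x = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
        # Nearest-neighbour interpolation keeps label values intact.
        return map_coordinates(gt_mask, [y + dy, x + dx], order=0, mode="nearest")

    # One image can then be paired with several slightly altered GTs:
    # extra_gts = [augment_contour(gt) for _ in range(3)]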
For prostate cancer patients, large organ deformations occurring between the sessions of a fractionated radiotherapy treatment lead to uncertainties in the doses delivered to the tumour and the surrounding organs at risk. Segmenting those structures in cone beam CT (CBCT) volumes acquired before every treatment session is desirable to reduce those uncertainties. In this work, we perform fully automatic bladder segmentation of CBCT volumes with U-Net, a 3D fully convolutional neural network (FCN). Since annotations are hard to collect for CBCT volumes, we consider augmenting the training dataset with annotated CT volumes and show that this improves segmentation performance. Our network is trained and tested on 48 annotated CBCT volumes using a 6-fold cross-validation scheme. The network reaches a mean Dice similarity coefficient (DSC) of 0.801 ± 0.137 with 32 training CBCT volumes. This result improves to 0.848 ± 0.085 when the training set is augmented with 64 CT volumes. Segmentation accuracy increases with both the number of CBCT volumes and the number of CT volumes in the training set. As a comparison, the state-of-the-art deformable image registration (DIR) contour propagation between the planning CT and daily CBCT available in RayStation reaches a DSC of 0.744 ± 0.144 on the same dataset, which is below our FCN result.
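Both abstracts report the Dice similarity coefficient; for reference, a minimal sketch of its standard definition for binary masks (the function name and epsilon guard are assumptions for illustration):

    import numpy as np

    def dice_coefficient(pred, gt, eps=1e-7):
        # DSC = 2 * |P intersect G| / (|P| + |G|), in [0, 1].
        pred, gt = pred.astype(bool), gt.astype(bool)
        intersection = np.logical_and(pred, gt).sum()
        # eps avoids division by zero when both masks are empty.
        return 2.0 * intersection / (pred.sum() + gt.sum() + eps)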