KEYWORDS: Image segmentation, Lung cancer, Magnetic resonance imaging, Education and training, Image resolution, Tumors, Deep learning, Cancer, Evolutionary algorithms, Windows
Magnetic Resonance Imaging (MRI) can be useful in preclinical modeling of cancer initiation and response to novel therapeutics. To measure longitudinal changes in disease progression, accurate segmentation and measurement of cancerous areas is a critical step. In preclinical lung cancer models, common semi-automated segmentation techniques include intensity thresholding, for which manual adjustments are time-consuming and prone to inter-reader variation. To address these concerns, we evaluated several deep-learning models for automated lung lesion segmentation. MR images from six different experimental models were collected as part of preclinical research studies, with a total of 183 scans processed with tumor burden segmentations produced by semi-automated thresholding with manual adjustment by an expert researcher. These images included two coronal acquisition protocols: one imaging a single mouse per scan (0.057x0.057mm in-plane resolution, 0.5mm slice thickness, n=161 scans) and one imaging three mice in the same scanning session (0.179x0.179mm in-plane resolution, 0.5mm slice thickness, n=12 scans). Scans were stratified into training (n=143) and validation (n=40) sets. Fifteen newly acquired scans, all imaged with the multi-mouse acquisition, were used as an independent experimental test set. Several deep learning models were developed under varying conditions, including architecture (UNet, UNETR, and SegResNet), resolution (0.057x0.057x0.5mm vs. 0.179x0.179x0.5mm), and window size for regions of interest (ROIs) (224x224x16 vs. 448x448x16 input). At inference, sliding-window overlap was varied between 0.2 and 0.6. All models were trained for 800-1000 epochs using learning rates ranging from 0.001 to 0.0001. The UNet model with a 224x224x16 ROI had the best performance, with an average image-level Dice score of 0.6446 in validation and 0.4493 in the test set, compared to 0.3544-0.6481 in validation and 0.0026-0.4816 in the test set for the remaining models. Manual editing of the AI segmentation from the UNet model took a median time of 12.5 minutes (range: 5-38) compared to 27.5 minutes (range: 5-127) for fully manual segmentation.
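The sketch below illustrates, in hedged form, the kind of configuration the abstract describes: a 3D UNet applied to whole MR volumes with sliding-window inference over a 224x224x16 ROI and a tunable overlap in the 0.2-0.6 range. It uses MONAI, which provides the UNet, UNETR, and SegResNet architectures named above, but the authors' actual framework, channel widths, loss settings, and preprocessing are not specified in the abstract and are assumptions here.

```python
# Illustrative sketch (not the authors' code): a MONAI-style 3D UNet with
# sliding-window inference at the 224x224x16 ROI described in the abstract.
# Library choice (MONAI), channel widths, and loss settings are assumptions.
import torch
from monai.networks.nets import UNet
from monai.inferers import sliding_window_inference
from monai.losses import DiceLoss

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = UNet(
    spatial_dims=3,
    in_channels=1,                      # single-channel MR volume
    out_channels=2,                     # background vs. tumor
    channels=(16, 32, 64, 128, 256),    # assumed encoder widths
    strides=(2, 2, 2, 2),
    num_res_units=2,
).to(device)

loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # abstract reports 0.001-0.0001

def infer(volume: torch.Tensor) -> torch.Tensor:
    """Segment a whole MR volume of shape (N, 1, H, W, D) by sliding window."""
    model.eval()
    with torch.no_grad():
        return sliding_window_inference(
            inputs=volume.to(device),
            roi_size=(224, 224, 16),   # window size from the abstract
            sw_batch_size=4,
            predictor=model,
            overlap=0.5,               # abstract varied overlap between 0.2 and 0.6
        )
```

Sliding-window inference lets the network trained on fixed-size ROIs cover arbitrarily sized volumes; higher overlap values blend more overlapping predictions at the cost of longer inference time.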