3 April 2024
A deep learning algorithm for segmentation of lung cancer lesions in MR images of mouse models
Katie M. Merriman, Chenran Zhang, Nathan Lay, Mason Belue, Peter L. Choyke, Curtis Harris, Baris Turkbey, Stephanie A. Harmon
Conference Poster
Abstract
Magnetic Resonance Imaging (MRI) can be useful in preclinical modeling of cancer initiation and response to novel therapeutics. To measure longitudinal changes in disease progression, accurate segmentation and measurement of cancerous areas is a critical step. In the setting of preclinical lung cancer models, common semi-automated segmentation techniques include intensity thresholding, whose manual adjustments are time-consuming and prone to inter-reader variation. To address these concerns, we evaluated several deep-learning models for automated lung lesion segmentation. MR images from six different experimental models were collected as part of preclinical research studies, with a total of 183 scans processed with tumor burden segmentations from semi-automated thresholding with manual adjustment by an expert researcher. These images included two coronal acquisition protocols: one which images a single mouse per scan (0.057x0.057mm in-plane resolution, 0.5mm slice thickness, n=161 scans) and one which images three mice in the same scanning session (0.179x0.179mm in-plane resolution, 0.5mm slice thickness, n=12 scans). Scans were stratified into training (n=143) and validation (n=40) sets. Fifteen newly acquired scans, all imaged with the multi-mouse acquisition, were used as an independent experimental test set. Several deep learning models were developed under varying conditions, including architecture (UNet, UNETR, and SegResNet), resolution (0.057x0.057x0.5mm vs 0.179x0.179x0.5mm), and window-size regions-of-interest (ROIs; 224x224x16 input vs 448x448x16 input). At inference, sliding-window overlap was varied from 0.2 to 0.6. All models were trained for 800-1000 epochs using learning rates ranging from 0.001 to 0.0001. The UNet model with a 224x224x16 ROI had the best performance, with an average image-level Dice score of 0.6446 in validation and 0.4493 in the test set, compared to 0.3544 – 0.6481 in validation and 0.0026 – 0.4816 in the test set for the remaining models.
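Sliding-window inference, as varied above, tiles each volume with fixed-size ROIs whose spacing is set by an overlap fraction. A minimal sketch of how window start positions along one axis could be derived from that fraction (the function name, rounding, and end-alignment behavior are assumptions for illustration; segmentation toolkits implement their own variants):

```python
def window_starts(length: int, window: int, overlap: float) -> list[int]:
    """Start offsets of sliding windows along one axis.

    `overlap` is the fraction of each window shared with its neighbor,
    e.g. 0.5 means consecutive windows overlap by half their extent.
    Assumes length >= window. A final window is appended if needed so
    the end of the axis is always covered.
    """
    step = max(1, int(window * (1 - overlap)))
    starts = list(range(0, length - window + 1, step))
    if starts[-1] != length - window:
        starts.append(length - window)  # align last window with the end
    return starts

# A 448-voxel axis tiled with 224-voxel windows at 0.5 overlap:
print(window_starts(448, 224, 0.5))  # prints [0, 112, 224]
```

Higher overlap values trade inference time for smoother predictions, since more voxels receive averaged contributions from multiple windows.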
Manual editing of the AI segmentation from the UNet model took a median time of 12.5 minutes (range: 5-38) compared to 27.5 minutes (range: 5-127) for fully manual segmentation.
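The Dice scores used above to compare models measure voxel-wise overlap between a predicted mask and the reference mask, ranging from 0 (no overlap) to 1 (identical masks). A minimal NumPy sketch (the choice to score two empty masks as 1.0 is an assumption; the abstract does not state how empty cases were handled):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * intersection / denom

# Toy 2D example standing in for a 3D MR volume:
pred = np.zeros((4, 4), dtype=int)
truth = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1   # 4 predicted voxels
truth[1:4, 1:4] = 1  # 9 reference voxels; 4 voxels overlap
print(round(dice_score(pred, truth), 4))  # 2*4/(4+9), prints 0.6154
```

An "average image-level" Dice score, as reported, would be this quantity computed per scan and then averaged across scans.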
© (2024) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Katie M. Merriman, Chenran Zhang, Nathan Lay, Mason Belue, Peter L. Choyke, Curtis Harris, Baris Turkbey, and Stephanie A. Harmon "A deep learning algorithm for segmentation of lung cancer lesions in MR images of mouse models", Proc. SPIE 12927, Medical Imaging 2024: Computer-Aided Diagnosis, 129272L (3 April 2024); https://doi.org/10.1117/12.3005446
KEYWORDS: Image segmentation, Lung cancer, Education and training, Magnetic resonance imaging, Image resolution, Deep learning, Tumors
