KEYWORDS: Magnetic resonance imaging, Bone, Image segmentation, Deep learning, Data modeling, Injuries, Visual process modeling, Machine learning, Performance modeling
An anterior cruciate ligament (ACL) tear is one of the most common sports-related injuries. Knee osseous morphology can play a role in increased knee instability. Our hypothesis is that osseous morphological features of the knee can contribute to knee instability and thus increase the likelihood of an ACL tear. Testing this relationship requires segmenting the femur and tibia bones and extracting relevant imaging features. However, manual annotation of 3D medical images, such as magnetic resonance imaging (MRI) scans, is a time-consuming and challenging task. In this work, we propose an automated pipeline for creating pseudo-masks of the femur and tibia bones in knee MRI. Our approach combines unsupervised segmentation with deep learning models that classify ACL integrity (intact or torn). Our results demonstrate high agreement between the automated pseudo-masks and a radiologist's manual segmentation, which also leads to comparable AUC values for the ACL integrity classification.
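The agreement between an automated pseudo-mask and a manual segmentation is typically quantified with an overlap metric such as the Dice similarity coefficient. The abstract does not name the metric used, so this is a minimal illustrative sketch: `dice_coefficient` and the synthetic masks below are hypothetical names, not the authors' code.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice similarity between two binary 3D masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Example: two overlapping synthetic "bone" masks on a tiny volume
vol = np.zeros((4, 4, 4), dtype=bool)
pseudo = vol.copy(); pseudo[1:3, 1:3, 1:3] = True  # automated pseudo-mask
manual = vol.copy(); manual[1:3, 1:3, :3] = True   # manual annotation
print(round(dice_coefficient(pseudo, manual), 3))  # → 0.8
```

A Dice value near 1.0 would correspond to the "high agreement" the abstract reports; 0.8 here reflects the deliberately imperfect toy overlap.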
Breast cancer risk prediction is becoming increasingly important, especially after recent advances in deep learning models. In breast cancer screening, patients commonly have multiple longitudinal mammogram examinations, and this longitudinal imaging data may provide additional information to boost the learning of a risk prediction model. In this study, we aim to leverage quantitative imaging features extracted from prior mammograms to augment the training of a risk prediction model through two technical approaches: 1) prior-data-enabled transfer learning, and 2) multi-task learning. We evaluated the two approaches on a study cohort of 306 patients in a case-control setting, where each patient has three longitudinal screening mammogram examinations. Our results show that both approaches improved the 1-, 2-, and 3-year risk prediction, indicating that our approaches can learn additional knowledge from longitudinal imaging data to improve near-term risk prediction.
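The multi-task idea can be sketched as a shared representation feeding two heads: the main risk-classification head and an auxiliary head that regresses the quantitative features from the prior exam. This is a minimal toy sketch with synthetic data and manually derived gradients, assuming a linear backbone and a combined loss L = BCE + λ·MSE; none of the variable names or dimensions come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: x = current-exam features, y = case/control label,
# z = quantitative features extracted from a prior mammogram (auxiliary target)
n, d, k, h_dim = 32, 10, 4, 8
x = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n).astype(float)
z = rng.normal(size=(n, k))

# Shared "backbone" W feeds two task-specific heads
W = rng.normal(scale=0.1, size=(d, h_dim))
w_risk = rng.normal(scale=0.1, size=h_dim)      # main task: risk logit
W_aux = rng.normal(scale=0.1, size=(h_dim, k))  # auxiliary: regress prior features

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def total_loss(lam):
    h = x @ W
    p = sigmoid(h @ w_risk)
    bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    mse = np.mean(np.sum((h @ W_aux - z) ** 2, axis=1))
    return bce + lam * mse

lam, lr = 0.5, 0.01
loss_before = total_loss(lam)
for _ in range(300):
    h = x @ W
    p = sigmoid(h @ w_risk)
    z_hat = h @ W_aux
    # Gradient of the joint loss flows into the shared backbone from both heads
    g_h = ((p - y)[:, None] * w_risk[None, :] + 2 * lam * (z_hat - z) @ W_aux.T) / n
    w_risk -= lr * (h.T @ (p - y)) / n
    W_aux -= lr * 2 * lam * (h.T @ (z_hat - z)) / n
    W -= lr * (x.T @ g_h)
loss_after = total_loss(lam)
```

The point of the sketch is the last gradient line: the auxiliary prior-exam target shapes the shared representation that the risk head also uses, which is one plausible mechanism for the improvement the abstract reports.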
Deep learning models based on Convolutional Neural Networks (CNNs) are known as successful tools in many classification and segmentation studies. Although such models can achieve impressive performance, we still lack effective means to interpret how a model, its features, and the associated input data work together in a data-driven manner. In this paper, we propose a novel investigation to interpret a deep-learning-based model for breast cancer risk prediction using screening digital mammogram images. First, we build a CNN-based risk prediction model using normal screening mammogram images. Then we develop two separate schemes to explore interpretability. In Scheme 1, we apply a sliding-window-based approach to modify the input images: we keep only the sub-regional imaging data inside the sliding window, pad the other regions with zeros, and observe how such an effective sub-regional input changes the model's performance. We generate heatmaps of the AUCs with respect to all sliding windows and show that the heatmaps can help interpret a potential correlation between a given sliding window and the variation in model AUC. In Scheme 2, we follow a saliency-map-based approach to create a Contribution Map (CM), where the CM value of each pixel reflects the strength of that pixel's contribution to the prediction of the output label. Over a CM, we then identify a bounding box around the most informative sub-area, interpreting the corresponding sub-area of the image as the region most predictive of the risk. This preliminary study demonstrates a proof of concept for developing an effective means to interpret deep learning CNN models.
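Scheme 1 above can be sketched as follows. In the paper each heatmap cell is an AUC computed over the test set; for a self-contained illustration the sketch below scores a single image with a stand-in `model_score` function (here just mean intensity), so the function name and scorer are assumptions, not the authors' code. The mechanics are the same: zero out everything outside each window, re-score, and collect the scores into a grid.

```python
import numpy as np

def sliding_window_scores(image, model_score, win=8, stride=8):
    """Score zero-padded inputs that keep only one window at a time.

    For each window position, pixels outside the window are set to zero
    and the model is re-scored; the resulting grid shows which
    sub-regions the model responds to.
    """
    H, W = image.shape
    rows = (H - win) // stride + 1
    cols = (W - win) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            masked = np.zeros_like(image)
            r, c = i * stride, j * stride
            masked[r:r + win, c:c + win] = image[r:r + win, c:c + win]
            heat[i, j] = model_score(masked)
    return heat

# Toy "model": responds to mean intensity, so a bright patch lights up one cell
img = np.zeros((32, 32))
img[8:16, 16:24] = 1.0
heat = sliding_window_scores(img, model_score=lambda m: m.mean(), win=8, stride=8)
peak = tuple(int(v) for v in np.unravel_index(heat.argmax(), heat.shape))
print(peak)  # → (1, 2), the cell covering the bright patch
```

With a real model, replacing `model_score` by "classify every zero-padded test image and compute the AUC" yields the AUC heatmap the abstract describes.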