Multi-modal learning with missing data for cancer diagnosis using histopathological and genomic data
4 April 2022
Abstract
Multi-modal learning (e.g., integrating pathological images with genomic features) tends to improve the accuracy of cancer diagnosis and prognosis compared to learning with a single modality. However, missing data is a common problem in clinical practice: not every patient has all modalities available. Most previous works directly discarded samples with missing modalities, which loses the information in these samples and increases the likelihood of overfitting. In this work, we generalize multi-modal learning for cancer diagnosis with the capacity to deal with missing data, using histological images and genomic data. Our integrated model can utilize all available data from patients with both complete and partial modalities. Experiments on the public TCGA-GBM and TCGA-LGG datasets show that data with missing modalities can contribute to multi-modal learning, improving model performance in grade classification of glioma cancer.
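The abstract's core idea, using partial-modality samples instead of discarding them, can be illustrated with a minimal late-fusion sketch. This is not the authors' architecture; `fuse_embeddings` and the toy embeddings are hypothetical stand-ins, assuming each modality (image, genomic) is first encoded to a shared embedding space and missing modalities are represented as `None`:

```python
import numpy as np

def fuse_embeddings(embeddings):
    """Average only the modality embeddings that are present.

    embeddings: list of per-modality vectors, with None for a
    missing modality. A patient with partial data still yields a
    usable fused representation, so the sample need not be dropped.
    """
    present = [e for e in embeddings if e is not None]
    if not present:
        raise ValueError("at least one modality must be available")
    return np.mean(present, axis=0)

# Patient with both histopathology and genomic embeddings (toy values):
img_emb = np.array([1.0, 2.0])
gen_emb = np.array([3.0, 4.0])
print(fuse_embeddings([img_emb, gen_emb]))  # [2. 3.]

# Patient missing genomic data still contributes to training:
print(fuse_embeddings([img_emb, None]))     # [1. 2.]
```

Because the fusion averages over whatever is present, the same downstream classifier can be trained on complete and partial samples alike, which is the behavior the abstract attributes to the integrated model.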
Conference Presentation
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Can Cui, Zuhayr Asad, William F. Dean, Isabelle T. Smith, Christopher Madden, Shunxing Bao, Bennett A. Landman, Joseph T. Roland, Lori A. Coburn, Keith T. Wilson, Jeffrey P. Zwerner, Shilin Zhao, Lee E. Wheless, and Yuankai Huo "Multi-modal learning with missing data for cancer diagnosis using histopathological and genomic data", Proc. SPIE 12033, Medical Imaging 2022: Computer-Aided Diagnosis, 120331D (4 April 2022); https://doi.org/10.1117/12.2612318
KEYWORDS: Data modeling, Feature extraction, Integrated modeling, Genomics, Cancer, Data fusion, Image fusion