Clinical deployment of systems based on deep neural networks is hampered by sensitivity to domain shift, caused for example by new scanners or rare events, factors usually overcome by human supervision. We propose a correct-then-predict approach, in which the user labels a few samples of the new data for each slide, and these labels are used to update the network. This few-shot meta-learning method is based on Model-Agnostic Meta-Learning (MAML), which trains a model to adapt quickly to new tasks. Here we adapt the method to the histopathological setting by treating each whole-slide image, together with its corresponding classification problem, as a task. We evaluated the method on three datasets, deliberately holding out-of-distribution data, such as whole-slide images from other centers, from other scanners, or with different tumor classes, out of the training data. Our results show that MAML outperforms conventionally trained baseline networks on all three datasets in average accuracy per slide. Furthermore, MAML serves as a robustness mechanism against out-of-distribution data: the model becomes less sensitive to differences between whole-slide images and is viable for clinical implementation when used with the correct-then-predict workflow. This reduces both the need for data annotation when training networks and the risk of performance loss when domain-shifted data appears after deployment.
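The per-slide adaptation that MAML enables can be illustrated with a minimal sketch. This is a first-order MAML variant on a toy one-parameter regression model, not the paper's actual network or data; all function names, learning rates, and the toy task setup are illustrative assumptions. Each task supplies a small support set (the few user-labeled samples per slide) and a query set (the remaining slide content).

```python
import numpy as np

def loss_grad(w, x, y):
    # Gradient of the MSE loss for the toy model y_hat = w * x
    # (stand-in for backpropagation through the real network).
    return np.mean(2.0 * x * (w * x - y))

def fomaml_step(w, tasks, inner_lr=0.05, outer_lr=0.05, inner_steps=1):
    # One meta-training step of first-order MAML:
    # adapt per task on its support set, then average the
    # post-adaptation query gradients into a single meta-update.
    meta_grad = 0.0
    for xs, ys, xq, yq in tasks:
        w_task = w
        for _ in range(inner_steps):
            w_task -= inner_lr * loss_grad(w_task, xs, ys)  # inner loop
        meta_grad += loss_grad(w_task, xq, yq)              # outer signal
    return w - outer_lr * meta_grad / len(tasks)

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def make_task(slope):
        # A "task" here is a linear-regression problem with its own slope,
        # analogous to one whole-slide image with its own classification problem.
        xs, xq = rng.normal(size=8), rng.normal(size=8)
        return xs, slope * xs, xq, slope * xq

    train_tasks = [make_task(a) for a in (1.0, 2.0, 3.0)]
    w = 0.0
    for _ in range(200):
        w = fomaml_step(w, train_tasks)

    # Correct-then-predict on an unseen task: one gradient step on the
    # few labeled support samples, then predict on the query set.
    xs, ys, xq, yq = make_task(2.5)
    w_adapted = w - 0.05 * loss_grad(w, xs, ys)
    print(np.mean((w * xq - yq) ** 2), np.mean((w_adapted * xq - yq) ** 2))
```

The full MAML objective differentiates through the inner-loop update (a second-order term); the first-order variant shown here drops that term for simplicity, which is a common approximation.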