We demonstrate a new approach for blind domain adaptation that employs classic feature descriptors as the first step of a deep learning pipeline. One advantage of our approach over other domain adaptation methods is that no target-domain data are required; the trained models therefore perform well on a multitude of different datasets rather than on one specific target dataset. We test our approach on abdominal CT and MR organ segmentation and transfer the models from the training dataset to multiple other CT and MR datasets. We show that modality independent neighbourhood descriptors (MIND) computed prior to a DeepLab segmentation pipeline can yield high accuracies when the model is applied to other datasets, including those acquired with a different imaging modality.
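The abstract does not include code, so the following is only a minimal sketch of the general idea: compute MIND-style self-similarity features from the raw image and feed them into a DeepLab-type segmentation network. The function name `mind_features_2d`, the 2-D 4-neighbourhood configuration, the number of organ classes (5), and the first-convolution swap in torchvision's DeepLabV3 are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
import torchvision


def mind_features_2d(img, radius=2, dilation=2, eps=1e-5):
    """Simplified 2-D MIND-style self-similarity descriptor (illustrative sketch).

    img: (B, 1, H, W) float tensor, e.g. a CT or MR slice.
    Returns a (B, 4, H, W) tensor, one channel per 4-neighbourhood offset.
    """
    k = 2 * radius + 1
    # Box filter used to turn voxel-wise squared differences into patch distances.
    box = torch.ones(1, 1, k, k, device=img.device) / (k * k)

    offsets = [(0, dilation), (0, -dilation), (dilation, 0), (-dilation, 0)]
    ssd = []
    for dy, dx in offsets:
        shifted = torch.roll(img, shifts=(dy, dx), dims=(2, 3))
        # Local mean of squared differences = patch sum-of-squared-distances.
        ssd.append(F.conv2d((img - shifted) ** 2, box, padding=radius))
    ssd = torch.cat(ssd, dim=1)                           # (B, 4, H, W)

    # Local variance estimate: mean patch distance over the neighbourhood.
    var = ssd.mean(dim=1, keepdim=True).clamp_min(eps)
    mind = torch.exp(-ssd / var)
    # Normalise so the strongest response per voxel equals 1.
    return mind / mind.max(dim=1, keepdim=True).values.clamp_min(eps)


# Hypothetical wiring into a DeepLabV3 segmentation network (torchvision >= 0.13):
# swap the first convolution so it accepts the 4 descriptor channels instead of RGB.
model = torchvision.models.segmentation.deeplabv3_resnet50(weights=None, num_classes=5)
model.backbone.conv1 = torch.nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)

x = torch.randn(1, 1, 256, 256)                  # dummy single-channel slice
logits = model(mind_features_2d(x))["out"]       # (1, 5, 256, 256)
```

Because the network only ever sees the modality-independent descriptor channels rather than raw intensities, a model trained this way on one dataset can, in principle, be applied directly to differently acquired CT or MR data without any target-domain samples.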
Christian N. Kruse, Mattias P. Heinrich, "Bridging the domain gap for medical image segmentation with multimodal MIND features," Proc. SPIE 12032, Medical Imaging 2022: Image Processing, 1203231 (4 April 2022); https://doi.org/10.1117/12.2612041