Accurate characterization of lesion boundaries is a critical task in diagnosing cancer. General large vision models (LVMs) such as the Segment Anything Model (SAM) have demonstrated promising performance in the semantic segmentation of natural images, but their test-time performance on medical images has been suboptimal, particularly on images with noise or poor contrast. To address this issue, we propose a test-time domain adaptation strategy that combines LVMs trained on large-scale datasets with the level-set active contour model, which can be tuned on a small subset of cases from the target dataset to yield more robust medical image segmentation at test time. We evaluate our strategy on two datasets, ISIC 2018 and CHASE_DB1, where it yields improvements in segmentation accuracy of 0.3% and 0.11%, respectively, as measured by Intersection over Union (IoU). We conclude that applying active contours on top of LVMs for test-time domain adaptation can improve segmentation performance.
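The core idea above — refining a coarse LVM prediction with a level-set active contour — can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, parameters, and the simplified Chan-Vese-style update below are illustrative assumptions, standing in for a full level-set solver acting on a SAM-produced mask.

```python
import numpy as np

def chan_vese_refine(image, init_mask, n_iter=50, mu=0.2, dt=0.5):
    """Refine a coarse binary mask (e.g. an LVM/SAM prediction) with a
    simplified Chan-Vese level-set update (illustrative sketch)."""
    # Signed level-set initialization: +1 inside the mask, -1 outside.
    phi = np.where(init_mask, 1.0, -1.0)
    for _ in range(n_iter):
        inside = phi > 0
        # Mean intensities of the current foreground/background regions.
        c1 = image[inside].mean() if inside.any() else 0.0
        c2 = image[~inside].mean() if (~inside).any() else 0.0
        # Data force: positive where the pixel matches the foreground mean
        # better than the background mean, pushing phi upward there.
        force = (image - c2) ** 2 - (image - c1) ** 2
        # Curvature/smoothing term approximated by a Laplacian of phi
        # (np.roll gives wrap-around borders; acceptable for a sketch).
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
               + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
        phi = phi + dt * (force / (np.abs(force).max() + 1e-8) + mu * lap)
    return phi > 0

if __name__ == "__main__":
    # Synthetic example: a bright square with noise, and an undersized
    # initial mask standing in for an imperfect LVM prediction.
    rng = np.random.default_rng(0)
    img = np.zeros((64, 64))
    img[20:44, 20:44] = 1.0
    img += rng.normal(0.0, 0.05, img.shape)
    init = np.zeros((64, 64), dtype=bool)
    init[22:42, 22:42] = True
    refined = chan_vese_refine(img, init)
    print("refined pixels:", int(refined.sum()))
```

In this toy setting, the contour expands the undersized initial mask toward the true intensity boundary, which is the behavior the proposed strategy relies on at test time.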