Domain adaptation (DA) aims to reduce the effects of the distribution gap between the source domain on which a model is trained and the target domain in which it is deployed. When a deep learning model is deployed on an aerial platform, it may face gradually degrading weather conditions during operation, leading to a gradually widening gap between the source training data and the encountered target data. Because no existing datasets capture gradually degrading weather, we generate four datasets by introducing progressively worsening clouds and snowflakes into aerial images. During deployment, unlabeled target-domain samples arrive in small batches, and adaptation is performed continually on each incoming batch rather than assuming that the entire target dataset is available. We evaluate two continual DA models against a baseline standard DA model under these gradually degrading conditions. All of the models are source-free, i.e., they operate without access to the source training data during adaptation. We use both convolutional and transformer architectures for comparison. In our experiments, the continual DA methods perform better but sometimes encounter stability issues during adaptation. We propose gradient normalization as a simple but effective means of managing this instability.
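As a rough illustration of the kind of gradient normalization mentioned above, the following PyTorch sketch rescales the model's full gradient vector to unit L2 norm before each optimizer step during continual adaptation. The helper name, the unit-norm choice, and the surrounding loop variables are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def normalize_gradients(model: torch.nn.Module, eps: float = 1e-8) -> None:
    """Rescale the model's gradients so the overall gradient vector has unit L2 norm.

    This bounds the effective step size and is one simple way to damp unstable
    updates during continual adaptation (assumed formulation, for illustration).
    """
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    if not grads:
        return
    total_norm = torch.norm(torch.stack([g.norm(2) for g in grads]), 2)
    for g in grads:
        g.div_(total_norm + eps)

# Hypothetical usage inside an adaptation step on an unlabeled target batch:
#   loss = adaptation_loss(model, target_batch)   # e.g., an entropy-based objective
#   optimizer.zero_grad()
#   loss.backward()
#   normalize_gradients(model)                    # normalize before the update
#   optimizer.step()
```

Normalizing the gradient rather than clipping it makes every adaptation step the same size in parameter space, which is one plausible reading of why it helps stabilize batch-by-batch updates on small, shifting target batches.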
Keywords: Data modeling, Performance modeling, Education and training, Clouds, Transformers, Visual process modeling, Data storage