Decorate the Newcomers: Visual Domain Prompt for Continual Test Time Adaptation
AAAI · Dec 8, 2022 · Outstanding Student Paper
Continual Test-Time Adaptation (CTTA) aims to adapt the source model to
continually changing unlabeled target domains without access to the source
data. Existing methods mainly focus on model-based adaptation in a
self-training manner, such as predicting pseudo labels for new domain datasets.
Since pseudo labels are noisy and unreliable, these methods suffer from
catastrophic forgetting and error accumulation when dealing with dynamic data
distributions. Motivated by prompt learning in NLP, in this paper we propose
to learn an image-level visual domain prompt for target domains while keeping
the source model parameters frozen. During testing, the continually changing
target data are adapted to the source model by reformulating the input images
with the learned visual prompts. Specifically, we devise two types of prompts,
i.e., domain-specific prompts and domain-agnostic prompts, to extract
current-domain knowledge and to maintain domain-shared knowledge during
continual adaptation. Furthermore, we design a homeostasis-based prompt
adaptation strategy that suppresses domain-sensitive parameters in the
domain-agnostic prompts, so that domain-shared knowledge is learned more
effectively. This transition from the
model-dependent paradigm to the model-free one enables us to bypass the
catastrophic forgetting and error accumulation problems. Experiments show that
our proposed method achieves significant performance gains over
state-of-the-art methods on four widely-used benchmarks, including CIFAR-10C,
CIFAR-100C, ImageNet-C, and VLCS datasets.
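The core mechanism the abstract describes, keeping the source model frozen and adapting only an image-level prompt added to the input, can be illustrated with a minimal sketch. This is not the authors' code: the toy classifier, the entropy-minimization objective, and all hyperparameters below are illustrative assumptions, and the paper's domain-specific/domain-agnostic prompt split and homeostasis strategy are omitted for brevity.

```python
# Sketch: test-time adaptation via a learnable additive visual prompt,
# with the source model's weights frozen throughout (an assumption-laden
# stand-in for the paper's method, using entropy minimization as a
# simple unsupervised objective).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Frozen "source" model: a toy linear classifier standing in for the
# pretrained network; its parameters are never updated.
source_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
for p in source_model.parameters():
    p.requires_grad_(False)

# Image-level visual prompt: a single learnable tensor added to every
# incoming target-domain image.
prompt = torch.zeros(1, 3, 32, 32, requires_grad=True)
optimizer = torch.optim.SGD([prompt], lr=0.1)

def mean_entropy(logits):
    """Average prediction entropy over the batch."""
    probs = F.softmax(logits, dim=1)
    return -(probs * probs.log()).sum(dim=1).mean()

# Unlabeled target-domain batch (random data for illustration).
target_batch = torch.randn(8, 3, 32, 32)

initial = mean_entropy(source_model(target_batch)).item()
for _ in range(20):
    # Only the prompt receives gradients; the model stays untouched.
    loss = mean_entropy(source_model(target_batch + prompt))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
final = mean_entropy(source_model(target_batch + prompt)).item()
```

Because adaptation lives entirely in the input-space prompt, the source model's parameters are never overwritten, which is why this model-free paradigm sidesteps the catastrophic forgetting that plagues self-training-based model updates.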