

Poster

Forget More to Learn More: Domain-specific Feature Unlearning for Semi-supervised and Unsupervised Domain Adaptation

Hritam Basak · Zhaozheng Yin

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Semi-supervised Domain Adaptation (SSDA) encompasses the process of adapting representations acquired from the source domain to a new target domain, utilizing a limited number of labeled samples in conjunction with an abundance of unlabeled data from the target domain. Simple aggregation of domain adaptation (DA) and semi-supervised learning (SSL) falls short of optimal performance due to two primary challenges: (1) skewed training data distribution favoring the source representation learning, and (2) the persistence of superfluous domain-specific features, hindering effective domain-agnostic (i.e., task-specific) feature extraction. In pursuit of greater generalizability and robustness, we present an SSDA framework with a new episodic learning strategy: "learn, forget, then learn more". First, we train two encoder-classifier pairs, one for the source and the other for the target domain, aiming to learn domain-specific features. This involves minimizing classification loss for in-domain images and maximizing uncertainty loss for out-of-domain images. Subsequently, we transform the images into a new space, strategically unlearning (forgetting) the domain-specific representations while preserving their structural similarity to the originals. This proactive removal of domain-specific attributes is complemented by learning more domain-agnostic features using a Gaussian-guided latent alignment (GLA) strategy that uses a prior distribution to align domain-agnostic source and target representations. The proposed SSDA framework can be further extended to unsupervised domain adaptation (UDA). Evaluation across two domain adaptive image classification tasks reveals our method's superiority over state-of-the-art (SoTA) methods in both SSDA and UDA scenarios. Code will be released.
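As a rough illustration of the "learn, forget, then learn more" episodic strategy described in the abstract, the PyTorch sketch below walks through the three phases on toy tensors. The network shapes, the entropy-based uncertainty loss, the MSE structural-similarity term, and the moment-matching surrogate for the Gaussian prior are all assumptions made for illustration; they are not taken from the paper or its (unreleased) code, and only the source-side encoder-classifier pair is shown in phase 1 (the target pair is symmetric).

```python
# Minimal sketch of the episodic "learn, forget, then learn more" loop.
# All module choices and loss formulations are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, num_classes = 128, 10

# One encoder-classifier pair per domain (only the source pair is used below).
enc_s = nn.Linear(256, feat_dim)
enc_t = nn.Linear(256, feat_dim)
cls_s = nn.Linear(feat_dim, num_classes)
# Transformation network mapping inputs into the "forgetting" space.
forget_net = nn.Linear(256, 256)

def entropy(logits):
    """Mean prediction entropy, used here as the uncertainty measure."""
    p = F.softmax(logits, dim=1)
    return -(p * p.log().clamp(min=-20)).sum(dim=1).mean()

def phase1_learn(x_s, y_s, x_t):
    """Learn domain-specific features: minimize classification loss on
    in-domain images, maximize uncertainty on out-of-domain images."""
    loss_cls = F.cross_entropy(cls_s(enc_s(x_s)), y_s)
    loss_unc = entropy(cls_s(enc_s(x_t)))   # source pair should be uncertain on target
    return loss_cls - loss_unc              # maximizing uncertainty => subtract it

def phase2_forget(x):
    """Unlearn domain-specific cues: the transformed input should confuse the
    domain-specific classifier while staying structurally close to the original."""
    x_f = forget_net(x)
    loss_confuse = -entropy(cls_s(enc_s(x_f)))  # push predictions toward uniform
    loss_struct = F.mse_loss(x_f, x)            # structural-similarity surrogate
    return loss_confuse + loss_struct

def phase3_learn_more(x_s, x_t):
    """Gaussian-guided latent alignment: pull both domains' features toward a
    shared standard-normal prior via a simple moment-matching surrogate."""
    z_s, z_t = enc_s(forget_net(x_s)), enc_t(forget_net(x_t))
    def to_prior(z):
        return z.mean(0).pow(2).mean() + (z.var(0) - 1).pow(2).mean()
    return to_prior(z_s) + to_prior(z_t)

# Toy batch to show the episode runs end to end.
x_s = torch.randn(8, 256)
y_s = torch.randint(0, num_classes, (8,))
x_t = torch.randn(8, 256)
total = phase1_learn(x_s, y_s, x_t) + phase2_forget(x_t) + phase3_learn_more(x_s, x_t)
total.backward()
```

In an actual training loop one would presumably alternate these phases episodically with separate optimizers and loss weights; the single combined backward pass above is only meant to show how the three objectives fit together.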
