

Poster

SeA: Semantic Adversarial Augmentation for Last Layer Features from Unsupervised Representation Learning

Qi Qian · Yuanhong Xu · Juhua Hu

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract: With the success of deep learning, deep features extracted as outputs from the last few layers of a pre-trained deep model have attracted much attention. Unlike hand-crafted features, deep features are data/task-dependent while still performing well across different downstream tasks. Motivated by recent advances in unsupervised representation learning, in this work we revisit the performance of the last-layer features extracted from self-supervised pre-trained models. Whereas fine-tuning can exploit diverse augmentations in the original input space, e.g., random cropping and flipping, obtaining appropriate semantic augmentation in the feature space of extracted deep features is challenging. To unleash the potential of deep features, we propose a novel semantic adversarial augmentation (SeA) in the feature space for learning with fixed deep features. Experiments are conducted on $11$ benchmark downstream classification tasks with $4$ popular pre-trained models. On average, our method outperforms the baseline without SeA by $2\%$. Moreover, compared to expensive fine-tuning, which is expected to give better performance, SeA achieves comparable accuracy on $6$ out of $11$ tasks, demonstrating the effectiveness of our proposal in addition to its efficiency.
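To make the idea of adversarial augmentation in a fixed feature space concrete, the following is a minimal, hypothetical sketch: an FGSM-style perturbation of frozen features against a linear classifier, where the features are pushed in the direction that increases the classification loss before being used for training. This is an illustrative stand-in, not the exact SeA objective from the paper; all function and variable names are assumptions.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def adversarial_augment(feats, labels, W, b, eps=0.1):
    """Perturb fixed deep features in the direction that increases the
    cross-entropy loss of a linear classifier (FGSM-style sketch,
    not the exact SeA formulation).

    feats:  (n, d) frozen features from a pre-trained model
    labels: (n,)   integer class labels
    W, b:   linear classifier parameters, shapes (d, c) and (c,)
    eps:    perturbation magnitude
    """
    logits = feats @ W + b
    probs = softmax(logits)
    onehot = np.eye(W.shape[1])[labels]
    # gradient of the cross-entropy loss w.r.t. the input features
    grad_feats = (probs - onehot) @ W.T
    # move each feature a small step along the sign of its gradient
    return feats + eps * np.sign(grad_feats)
```

Because the cross-entropy of a linear model is convex in the features, stepping along the sign of the gradient can only keep the loss equal or increase it, so the perturbed features act as hard examples for the classifier being trained on top of the frozen backbone.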
