

Poster

Dynamic Data Selection for Efficient SSL via Coarse-to-Fine Refinement

Aditay Tripathi · Pradeep Shenoy · Anirban Chakraborty

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Self-supervised learning (SSL) is critical for learning high-quality representations from unlabeled images at scale. Earlier efforts to reduce the compute requirements of SSL have focused on identifying a static subset of the training data that suffices for training; in addition, these methods require small amounts of labeled data for scoring instances. In this work, we design a new family of algorithms that exploits the training dynamics of SSL methods and adjusts the selected subset throughout the training process. Our proposal has two key components: (a) a coarse-to-fine refinement schedule for training data, where initial training rounds use larger subsets of data and the selected subset shrinks as training progresses, and (b) an unsupervised proxy model that dynamically selects training instances based on their informativeness for the model's current state. We also use the proxy model to speed up initial learning by aligning the representations of the primary and proxy models via an additional regularization loss. We validate our method on public benchmarks (CIFAR100, CIFAR10, TinyImagenet, and STL10) and document significant gains in the compute-accuracy tradeoff compared to previous approaches. Notably, we show a 31.6% reduction in computational load on TinyImagenet while maintaining classification accuracy.
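To make the two components concrete, below is a minimal, hypothetical sketch of a round-based selection loop in PyTorch. The linear shrinking schedule, the entropy-based informativeness score, and the cosine alignment regularizer are all illustrative assumptions made for this sketch; the abstract does not specify the paper's actual schedule, scoring criterion, or loss.

```python
# Hypothetical sketch of coarse-to-fine dynamic subset selection with an
# unsupervised proxy model. Every name and formula here is an assumption
# for illustration, not the authors' actual implementation.
import torch
import torch.nn.functional as F


def subset_fraction(round_idx, num_rounds, start=1.0, end=0.3):
    """Linearly shrink the selected fraction across rounds (coarse-to-fine:
    early rounds train on more data, later rounds on less)."""
    t = round_idx / max(num_rounds - 1, 1)
    return start + t * (end - start)


@torch.no_grad()
def informativeness(proxy_model, images):
    """Toy unsupervised score: entropy of the proxy's softmaxed features.
    Higher entropy is treated as more informative (an assumption)."""
    feats = proxy_model(images)                       # (N, D) embeddings
    probs = F.softmax(feats, dim=1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)


def select_subset(proxy_model, images, round_idx, num_rounds):
    """Keep the top-k most informative instances for the current round,
    re-scored against the proxy's current state."""
    scores = informativeness(proxy_model, images)
    k = int(subset_fraction(round_idx, num_rounds) * len(images))
    return scores.topk(k).indices


def alignment_loss(primary_feats, proxy_feats):
    """Regularizer aligning primary and proxy representations
    (cosine distance here; the paper's exact loss is not specified)."""
    return 1 - F.cosine_similarity(primary_feats, proxy_feats, dim=1).mean()


if __name__ == "__main__":
    # Stand-in encoders and CIFAR-sized random data; the SSL objective and
    # parameter updates are omitted to keep the selection logic in focus.
    primary = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
    proxy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
    data = torch.randn(512, 3, 32, 32)
    num_rounds = 5
    for r in range(num_rounds):
        idx = select_subset(proxy, data, r, num_rounds)
        batch = data[idx]
        loss = alignment_loss(primary(batch), proxy(batch).detach())
        print(f"round {r}: subset size={len(idx)}, align_loss={loss.item():.4f}")
```

The design point mirrored here is that the subset is recomputed each round against the proxy's current state, so selection tracks the model as it trains rather than being fixed up front from a single static scoring pass.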
