Poster

Robust Calibration of Large Vision-Language Adapters

Balamurali Murugesan · Julio Silva-Rodríguez · Ismail Ben Ayed · Jose Dolz

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Wed 2 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

This paper addresses the critical issue of miscalibration in CLIP-based adaptation models in the challenging scenario of out-of-distribution (OOD) samples, which has been overlooked in the existing literature on CLIP adaptation. We empirically demonstrate that popular CLIP adaptation approaches, such as adapters, prompt learning, and test-time prompt tuning, substantially degrade the calibration of the zero-shot baseline in the presence of distributional drift. We identify the increase in logit ranges as the underlying cause of miscalibration in CLIP adaptation methods, in contrast with prior work on calibrating fully supervised models. Motivated by these observations, we present a simple, model-agnostic solution that mitigates miscalibration by scaling the logit range of each sample based on its zero-shot prediction logits. We explore three alternatives to achieve this, which can be either integrated during adaptation or applied directly at inference time. Comprehensive experiments on popular OOD classification benchmarks demonstrate that the proposed approaches mitigate miscalibration while maintaining discriminative performance, with consistent improvements across these three increasingly popular families of approaches.
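The core idea described above, rescaling each sample's adapted logits so their range matches that of the zero-shot logits, can be sketched as follows. This is a minimal illustration of per-sample logit-range scaling; the function and variable names are assumptions for exposition, not the paper's implementation:

```python
import numpy as np

def zero_shot_range_scaling(adapted_logits, zero_shot_logits):
    """Rescale each sample's adapted logits so that their range
    (max minus min over classes) matches the range of the zero-shot
    logits for that sample. Inputs have shape (batch, num_classes).

    A positive per-sample scale leaves the argmax (the predicted
    class) unchanged, so discriminative performance is preserved
    while overconfidence from inflated logit ranges is reduced.
    """
    adapted_range = (adapted_logits.max(axis=1, keepdims=True)
                     - adapted_logits.min(axis=1, keepdims=True))
    zs_range = (zero_shot_logits.max(axis=1, keepdims=True)
                - zero_shot_logits.min(axis=1, keepdims=True))
    # Guard against degenerate (constant) logit vectors.
    scale = zs_range / np.clip(adapted_range, 1e-8, None)
    return adapted_logits * scale
```

Because the scaling factor is positive and applied uniformly within each sample, class rankings are unaffected; only the softmax sharpness (and hence the confidence) changes.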
