Abstract:
We present CLIP-DPO, a preference optimization method that leverages pretrained vision-language (V-L) embedding models, such as CLIP, for DPO-based training of Vision LLMs. Starting from the initial pool of supervised fine-tuning data, we generate a diverse set of predictions, which are then ranked by their CLIP image-text similarities to obtain positive and negative pairs for DPO-based training. We show that this simple approach offers notable performance gains across a diverse set of benchmarks and vision-language tasks.
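To make the ranking step concrete, below is a minimal illustrative sketch (not the authors' code) of the core idea: score a set of candidate responses for an image with CLIP image-text similarity and keep the highest- and lowest-scoring candidates as the "chosen" and "rejected" responses of a DPO preference pair. The model checkpoint, the `make_dpo_pair` helper, and the pair-selection rule are assumptions for illustration only.

```python
# Sketch: rank candidate responses by CLIP similarity to build DPO pairs.
# Uses the Hugging Face CLIP implementation; details are illustrative.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def make_dpo_pair(image: Image.Image, candidates: list[str]) -> tuple[str, str]:
    """Score each candidate response against the image with CLIP and
    return (chosen, rejected) for DPO-style preference training."""
    inputs = processor(
        text=candidates, images=image,
        return_tensors="pt", padding=True, truncation=True,
    )
    with torch.no_grad():
        # logits_per_image: (num_images, num_texts); one score per candidate
        sims = model(**inputs).logits_per_image[0]
    order = sims.argsort(descending=True)
    return candidates[order[0].item()], candidates[order[-1].item()]
```

In this sketch the chosen/rejected pair would then be fed to a standard DPO loss; the paper's actual candidate generation and pair construction may differ.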