

Poster

Meerkat: Audio-Visual Large Language Model for Grounding in Space and Time

Sanjoy Chowdhury · Sayan Nag · Subhrajyoti Dasgupta · Jun Chen · Mohamed Elhoseiny · Ruohan Gao · Dinesh Manocha

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Leveraging Large Language Models’ remarkable proficiency in text-based tasks, recent work on Multimodal LLMs (MLLMs) extends them to other modalities such as vision and audio. However, progress in these directions has mostly focused on tasks that require only a coarse-grained understanding of audio-visual semantics. We present Meerkat, an audio-visual LLM equipped with a fine-grained understanding of image and audio in both space and time. With a new modality alignment module based on optimal transport and a cross-attention module that enforces audio-visual consistency, Meerkat can tackle challenging tasks such as audio-referred visual grounding, image-guided audio temporal localization, and audio-visual fact-checking. Moreover, we carefully curate AVFIT, a large dataset comprising 3M instruction-tuning samples collected from open-source datasets, and introduce MeerkatBench, which unifies five challenging audio-visual tasks. We achieve state-of-the-art performance on all these downstream tasks, with relative improvements of up to 37.12%.
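To make the two architectural ideas in the abstract concrete, the sketch below illustrates, in a generic way, (1) an entropic optimal-transport (Sinkhorn) coupling used as an alignment objective between audio and visual token features, and (2) a cross-attention block in which audio tokens attend to visual tokens. This is not the authors' released implementation; all module names, tensor shapes, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of OT-based audio-visual alignment and cross-attention;
# shapes and hyperparameters are assumptions, not the paper's actual settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


def sinkhorn_coupling(cost, n_iters=50, eps=0.1):
    """Entropic OT: soft coupling between two uniform token distributions."""
    K = torch.exp(-cost / eps)                      # (Na, Nv) Gibbs kernel
    a = torch.full((cost.size(0),), 1.0 / cost.size(0), device=cost.device)
    b = torch.full((cost.size(1),), 1.0 / cost.size(1), device=cost.device)
    u = torch.ones_like(a)
    for _ in range(n_iters):                        # Sinkhorn iterations
        u = a / (K @ (b / (K.t() @ u)))
    v = b / (K.t() @ u)
    return u.unsqueeze(1) * K * v.unsqueeze(0)      # transport plan (Na, Nv)


def ot_alignment_loss(audio_tokens, visual_tokens):
    """Encourage audio and visual tokens to be cheaply transportable onto each other."""
    audio = F.normalize(audio_tokens, dim=-1)       # (Na, d)
    visual = F.normalize(visual_tokens, dim=-1)     # (Nv, d)
    cost = 1.0 - audio @ visual.t()                 # cosine cost matrix
    with torch.no_grad():                           # plan treated as a target
        plan = sinkhorn_coupling(cost)
    return (plan * cost).sum()                      # transport cost to minimize


class AudioVisualCrossAttention(nn.Module):
    """Audio queries attend to visual keys/values (one direction shown)."""
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio_tokens, visual_tokens):
        fused, _ = self.attn(audio_tokens, visual_tokens, visual_tokens)
        return self.norm(audio_tokens + fused)      # residual fusion


if __name__ == "__main__":
    audio = torch.randn(10, 256)    # 10 audio tokens (assumed dimensions)
    visual = torch.randn(49, 256)   # 49 visual patch tokens
    print("OT alignment loss:", ot_alignment_loss(audio, visual).item())
    fuse = AudioVisualCrossAttention(dim=256)
    print("Fused audio tokens:", fuse(audio[None], visual[None]).shape)
```

In this kind of setup, the transport plan is typically held fixed when computing the loss so that gradients flow only through the cost matrix, and cross-attention is applied symmetrically in both directions when bidirectional audio-visual consistency is desired.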
