Poster
VISA: Reasoning Video Object Segmentation via Large Language Model
Cilin Yan · Haochen Wang · Shilin Yan · Xiaolong Jiang · Yao Hu · Guoliang Kang · Weidi Xie · Efstratios Gavves
#18 · Strong Double Blind
Existing Video Object Segmentation (VOS) methods rely on explicit user instructions, such as categories, masks, or short phrases, which restricts their ability to perform complex video segmentation that requires reasoning with world knowledge. In this paper, we introduce a new task, Reasoning Video Object Segmentation (ReasonVOS). This task aims to generate a sequence of segmentation masks in response to implicit text queries that require complex reasoning based on world knowledge and video context. Such reasoning is crucial for structured environment understanding and object-centric interactions, both pivotal to the development of embodied AI. To tackle ReasonVOS, we introduce VISA (Video-based large language Instructed Segmentation Assistant), which leverages the world-knowledge reasoning capabilities of multi-modal LLMs while retaining the ability to segment and track objects in videos with a mask decoder. Moreover, we establish a comprehensive benchmark consisting of 12,709 instruction-mask sequence pairs from 1,038 diverse videos, which incorporates complex world-knowledge reasoning into segmentation tasks for the instruction tuning and evaluation of ReasonVOS models. Experiments on 8 datasets demonstrate the effectiveness of VISA in tackling both complex reasoning segmentation and vanilla referring segmentation in the video and image domains. The code and dataset are available at https://anonymous.4open.science/r/VISA-36D6.
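To make the task interface concrete, below is a minimal, self-contained sketch of a ReasonVOS-style pipeline: a video and an implicit text query go in, a per-frame mask sequence comes out. The abstract does not specify the architecture beyond pairing a multi-modal LLM with a mask decoder, so every module, name, and shape here is a hypothetical stand-in for illustration, not the authors' implementation.

```python
# Illustrative sketch only: mirrors the input/output contract described in
# the abstract (implicit text query + video -> per-frame segmentation masks).
# All components are toy stand-ins, not VISA's actual architecture.
import torch
import torch.nn as nn

class ToyReasonVOS(nn.Module):
    """Hypothetical pipeline: a reasoning backbone (here a linear stub in
    place of a multi-modal LLM) emits a segmentation embedding from the
    query, and a mask-decoder stub turns per-frame features conditioned on
    that embedding into one mask per frame."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.text_proj = nn.Linear(768, dim)        # stand-in for the LLM's reasoning output
        self.frame_proj = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # patch embedding
        self.mask_head = nn.Conv2d(dim, 1, kernel_size=1)               # mask-decoder stub

    def forward(self, frames: torch.Tensor, query_emb: torch.Tensor) -> torch.Tensor:
        # frames: (T, 3, H, W) video clip; query_emb: (768,) pooled text features
        seg_token = self.text_proj(query_emb)        # (dim,) segmentation embedding
        feats = self.frame_proj(frames)              # (T, dim, H/16, W/16)
        feats = feats * seg_token.view(1, -1, 1, 1)  # condition frames on the reasoning output
        return self.mask_head(feats).sigmoid()       # (T, 1, H/16, W/16) mask logits per frame

model = ToyReasonVOS()
video = torch.randn(8, 3, 224, 224)   # 8 frames
query = torch.randn(768)              # e.g. an implicit query such as "the object the chef cuts next"
masks = model(video, query)
print(masks.shape)                    # torch.Size([8, 1, 14, 14])
```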