Poster

OpenIns3D: Snap and Lookup for 3D Open-vocabulary Instance Segmentation

Zhening Huang · Xiaoyang Wu · Xi Chen · Hengshuang Zhao · Lei Zhu · Joan Lasenby

Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

In this work, we introduce OpenIns3D, a new framework for 3D open-vocabulary scene understanding at the instance level. Unlike all existing methods, the proposed pipeline requires no well-aligned images as input and works effectively across a wide range of scenarios. The OpenIns3D framework employs a "Mask-Snap-Lookup" scheme, where the "Mask" module learns class-agnostic mask proposals in 3D point clouds, the "Snap" module generates synthetic scene-level images at multiple scales and leverages 2D vision-language models to extract objects of interest, and the "Lookup" module searches through the outcomes of "Snap" to assign category names to the proposed masks. This approach, free from 2D input requirements yet simple and flexible, achieves state-of-the-art performance across a wide range of 3D open-vocabulary tasks, including recognition, object detection, and instance segmentation, on both indoor and outdoor datasets. Moreover, OpenIns3D facilitates effortless switching between different 2D detectors without requiring retraining. When integrated with powerful 2D open-world models, it achieves excellent results in scene understanding tasks. Furthermore, when combined with LLM-powered 2D models, OpenIns3D exhibits a remarkable capability to comprehend and process highly complex text queries that demand intricate reasoning and real-world knowledge.
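The "Lookup" stage described above can be pictured as a simple voting step: each 3D mask proposal is projected into the synthetic snapshots, and the 2D detection box that covers the most projected pixels contributes its category name. The sketch below is a minimal, hypothetical illustration of that idea (not the paper's actual implementation); the function name, inputs, and voting rule are all assumptions for clarity.

```python
import numpy as np

def lookup_assign(mask_pixels, detections):
    """Hypothetical sketch of the 'Lookup' voting step.

    mask_pixels: (N, 2) int array of (x, y) pixel coords where one 3D mask
        proposal projects into a rendered "Snap" image.
    detections: list of (category, (x0, y0, x1, y1)) boxes produced by a 2D
        vision-language detector on that image.
    Returns the category whose box covers the most projected mask pixels,
    or None if no box overlaps the mask at all.
    """
    best_cat, best_votes = None, 0
    for cat, (x0, y0, x1, y1) in detections:
        # Count mask pixels falling inside this detection box.
        inside = ((mask_pixels[:, 0] >= x0) & (mask_pixels[:, 0] <= x1) &
                  (mask_pixels[:, 1] >= y0) & (mask_pixels[:, 1] <= y1))
        votes = int(inside.sum())
        if votes > best_votes:
            best_cat, best_votes = cat, votes
    return best_cat

# Toy example: most of the mask falls inside the "chair" box.
pixels = np.array([[5, 5], [6, 6], [7, 7], [50, 50]])
boxes = [("chair", (0, 0, 10, 10)), ("table", (40, 40, 60, 60))]
print(lookup_assign(pixels, boxes))  # → chair
```

In the full pipeline this vote would be accumulated across snapshots at multiple scales, which is what makes the assignment robust without any real 2D images.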
