

Poster

Multi-branch Collaborative Learning Network for 3D Visual Grounding

Zhipeng Qian · Yiwei Ma · Zhekai Lin · Jiayi Ji · Xiawu Zheng · Xiaoshuai Sun · Rongrong Ji

Strong blind review: this paper was not made available on public preprint services during the review process.
Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

3D referring expression comprehension (3DREC) and segmentation (3DRES) have overlapping objectives, indicating the potential for collaboration between them. However, existing collaborative approaches predominantly depend on the predictions of one task to make predictions for the other, which limits effective collaboration. We argue that employing separate branches for the 3DREC and 3DRES tasks enhances the model's capacity to learn task-specific information, enabling the branches to acquire complementary knowledge. We therefore propose the MCLN framework, which includes independent branches for 3DREC and 3DRES, allowing dedicated exploration of each task and effective coordination between the branches. Furthermore, to facilitate mutual reinforcement between these branches, we introduce a Relative Superpoint Aggregation (RSA) module and an Adaptive Soft Alignment (ASA) module. These modules contribute to the precise alignment of the two branches' predictions, directing the model to allocate increased attention to key positions. Comprehensive experiments demonstrate that our method achieves state-of-the-art performance on both the 3DREC and 3DRES tasks, with improvements of 3.27% in Acc@0.5 for 3DREC and 5.22% in mIoU for 3DRES.
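To make the multi-branch idea concrete, the sketch below shows one plausible way to place separate 3DREC and 3DRES heads on top of shared cross-modal features and to couple them with a soft alignment term. This is a minimal illustration under assumed interfaces (feature shapes, head designs, and the `soft_alignment_loss` formulation are all hypothetical); it is not the authors' RSA or ASA implementation, only an example of two task-specific branches whose predictions are encouraged to agree.

```python
# Minimal two-branch sketch: a 3DREC head (referring score + box) and a
# 3DRES head (per-proposal mask queries) over shared fused features.
# All names, dimensions, and the alignment loss are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoBranchGroundingSketch(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # 3DREC branch: one referring score and 6 box parameters per proposal.
        self.rec_head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1 + 6),
        )
        # 3DRES branch: mask queries decoded against per-point features.
        self.res_head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, proposal_feats, point_feats):
        # proposal_feats: (B, Q, C) fused proposal/query features
        # point_feats:    (B, N, C) per-point (or per-superpoint) features
        rec_out = self.rec_head(proposal_feats)               # (B, Q, 7)
        ref_logits, boxes = rec_out[..., 0], rec_out[..., 1:]
        mask_queries = self.res_head(proposal_feats)          # (B, Q, C)
        # Dot-product decoding: one per-point mask per proposal.
        mask_logits = torch.einsum("bqc,bnc->bqn", mask_queries, point_feats)
        return ref_logits, boxes, mask_logits


def soft_alignment_loss(ref_logits, mask_logits, point_in_gt_box):
    # Illustrative "soft alignment": the proposal favored by the REC branch
    # should also yield a mask concentrated inside the referred region.
    # point_in_gt_box: (B, N) binary indicator of points inside the GT box.
    weights = F.softmax(ref_logits, dim=-1)                        # (B, Q)
    fused_mask = torch.einsum("bq,bqn->bn", weights, mask_logits)  # (B, N)
    return F.binary_cross_entropy_with_logits(fused_mask, point_in_gt_box)


if __name__ == "__main__":
    B, Q, N, C = 2, 64, 1024, 256
    model = TwoBranchGroundingSketch(feat_dim=C)
    ref, boxes, masks = model(torch.randn(B, Q, C), torch.randn(B, N, C))
    loss = soft_alignment_loss(ref, masks, torch.randint(0, 2, (B, N)).float())
    print(ref.shape, boxes.shape, masks.shape, loss.item())
```

The key design point mirrored here is that each task keeps its own head (so each can learn task-specific representations), while an explicit alignment term, rather than feeding one task's prediction into the other, ties the two outputs together.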
