Poster

MeshSegmenter: Zero-Shot Mesh Segmentation via Texture Synthesis

Ziming Zhong · Yanyu Xu · Jing Li · Jiale Xu · Zhengxin Li · Chaohui Yu · Shenghua Gao

# 207
Strong Double Blind: this paper was not made available on public preprint services during the review process.
Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

We present MeshSegmenter, a framework for zero-shot 3D semantic segmentation. It extends the capabilities of 2D segmentation models to 3D meshes, delivering accurate 3D segmentation across diverse meshes and segment descriptions. Our contributions are threefold. First, we introduce the MeshSegmenter framework, which consistently produces precise 3D segmentation results. Second, we propose generating textures from object descriptions to supply 2D segmentation models with additional appearance cues, thereby improving their accuracy. By harnessing latent texture information from generative models conditioned on 3D meshes, our method can segment accurately in geometrically non-salient regions, such as a car door within a car mesh. Last, we develop a multi-view revoting module that aggregates 2D detection results and confidence scores from multiple views onto the 3D mesh, enforcing the 3D consistency of segmentation results and suppressing errors from individual viewpoints. Together, these components make MeshSegmenter a stable and reliable tool for zero-shot 3D segmentation.
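The multi-view revoting idea described above can be sketched as a confidence-weighted vote per mesh face. The sketch below is an illustrative assumption, not the paper's exact algorithm: the function name `revote_face_labels`, the visibility-masked weighting, and the 0.5 decision threshold are all choices made here for clarity.

```python
import numpy as np

def revote_face_labels(face_visibility, view_labels, view_confidence):
    """Aggregate per-view 2D segmentation votes onto mesh faces.

    face_visibility: (V, F) bool  - is face f visible in view v?
    view_labels:     (V, F) bool  - did view v's 2D segmenter mark face f?
    view_confidence: (V,)   float - confidence score of each view's 2D result

    Returns (F,) bool: final per-face segment membership.
    (Hypothetical interface; the paper's module may differ in detail.)
    """
    vis = face_visibility.astype(float)
    # Weight each view's vote by its confidence, but only where the face
    # is actually visible in that view.
    weights = view_confidence[:, None] * vis
    yes_votes = (weights * view_labels).sum(axis=0)
    total = weights.sum(axis=0)
    # Faces never seen from any view default to "not in segment".
    ratio = np.divide(yes_votes, total,
                      out=np.zeros_like(yes_votes), where=total > 0)
    return ratio > 0.5

# Toy usage: 3 views, 2 faces. Face 0 gets consistent high-confidence
# "yes" votes; face 1 is rejected by the most confident view that sees it.
vis = np.array([[1, 1], [1, 0], [0, 1]], dtype=bool)
labels = np.array([[1, 0], [1, 0], [0, 1]], dtype=bool)
conf = np.array([0.9, 0.8, 0.2])
print(revote_face_labels(vis, labels, conf))  # [ True False]
```

Weighting by per-view confidence is what lets the module discard inaccurate detections from unfavorable perspectives: a single low-confidence false positive cannot overturn agreement among high-confidence views.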
