

Poster

Free-Editor: Zero-shot Text-driven 3D Scene Editing

Md Nazmul Karim · Hasan Iqbal · Umar Khalid · Chen Chen · Jing Hua

Poster #302
Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract: Text-to-Image (T2I) diffusion models have recently gained popularity due to their versatile and easy-to-use nature, e.g., image and video generation as well as editing. However, training a diffusion model specifically for 3D scene editing is not straightforward due to the lack of large-scale datasets. To date, editing a 3D scene requires either re-training the model to adapt to each edited scene or designing a specific method for each editing type. Furthermore, state-of-the-art (SOTA) methods require multiple synchronized edited images of the same scene to facilitate scene editing. Due to the current limitations of T2I models, applying consistent editing effects across multiple images is very challenging, i.e., editing is multi-view inconsistent, which in turn compromises 3D scene editing performance when such images are used. In this work, we propose a novel training-free 3D scene editing technique, \textsc{Free-Editor}, which allows users to edit 3D scenes without re-training the model at test time. Our method avoids the \emph{multi-view style inconsistency} of SOTA methods via a ``single-view editing'' scheme: we show that a particular 3D scene can be edited by modifying only a single view. To this end, we introduce an \emph{Edit Transformer} that enforces intra-view consistency and inter-view style transfer through self-view and cross-view attention, respectively. Since the model no longer needs to be re-trained and every view no longer needs to be edited, editing time and memory requirements are reduced significantly, e.g., runtime is $\sim \textbf{20}\times$ faster than SOTA. Extensive experiments on a wide range of benchmark datasets demonstrate the diverse editing capabilities of the proposed technique.
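The pairing of self-view attention (for intra-view consistency) and cross-view attention (for transferring the edit style from the single modified view) is the abstract's core mechanism. The following is a minimal PyTorch sketch of that idea, not the authors' released code: the class name, token dimensions, and the assumption that the edited starting view is encoded into a token sequence are all illustrative.

```python
# Hypothetical sketch of an Edit-Transformer-style block: self-view attention
# keeps each target view internally consistent; cross-view attention lets
# target-view tokens query the single edited view to pick up its style.
import torch
import torch.nn as nn

class EditTransformerBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, target_tokens: torch.Tensor,
                edited_view_tokens: torch.Tensor) -> torch.Tensor:
        # Self-view attention: tokens of each target view attend only to
        # themselves, enforcing intra-view consistency.
        h = self.norm1(target_tokens)
        x = target_tokens + self.self_attn(h, h, h, need_weights=False)[0]
        # Cross-view attention: target tokens query the edited view's tokens,
        # transferring the editing style from that single view.
        h = self.norm2(x)
        s = edited_view_tokens
        x = x + self.cross_attn(h, s, s, need_weights=False)[0]
        return x + self.mlp(self.norm3(x))

# Usage: 4 target views of 1024 tokens each, styled by one edited view
# (repeated along the batch dimension so every view can attend to it).
targets = torch.randn(4, 1024, 256)
edited = torch.randn(1, 1024, 256).expand(4, -1, -1)
out = EditTransformerBlock()(targets, edited)
print(out.shape)  # torch.Size([4, 1024, 256])
```

Because the edited view is fixed and shared across all queries, no per-scene re-training is needed at test time, which is consistent with the runtime savings the abstract reports.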
