

Poster

Lazy Diffusion Transformer for Interactive Image Editing

Yotam Nitzan · Zongze Wu · Richard Zhang · Eli Shechtman · Danny Cohen-Or · Taesung Park · Michaël Gharbi

Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

We introduce a novel diffusion transformer, LazyDiffusion, that generates partial image updates efficiently, targeting interactive image editing applications. Starting from a blank canvas or an image, a user specifies a sequence of localized image modifications using a binary mask and a text prompt. Our generator operates in two phases. First, a context encoder processes the current canvas and user mask to produce a compact global context tailored to the region to generate. Second, conditioned on this global context, a diffusion-based decoder synthesizes the masked pixels in a "lazy" fashion, i.e., it generates only the masked region. This contrasts with previous works that either regenerate the full canvas, wasting time and computation, or confine processing to a tight rectangular crop around the mask, ignoring the global image context altogether. Our decoder's runtime and computation cost scale with the mask size, which is typically small for interactive edits. Since the diffusion process dominates the runtime and cost, our encoder introduces negligible overhead. Our approach amortizes the generation cost over several user interactions, making the editing experience more interactive, especially at high resolution. We train LazyDiffusion on a large-scale text-to-image dataset at 1024x1024 resolution. We demonstrate that our approach is competitive with state-of-the-art inpainting methods in terms of quality and fidelity while providing a 10x speedup for typical user interactions, where the editing mask represents 10% of the image.
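The two-phase pipeline can be illustrated with a minimal toy sketch. This is not the authors' implementation: the functions, the per-channel-statistics "encoder", and the averaging "denoiser" are all stand-ins chosen only to show the control flow, namely that the encoder sees the whole canvas once, while the iterative decoder touches only the masked pixels, so per-step cost scales with the mask area rather than the image area.

```python
import numpy as np

# Toy sketch of LazyDiffusion's two phases; all names and internals are
# illustrative assumptions, not the paper's actual architecture.

def encode_context(canvas: np.ndarray, mask: np.ndarray, ctx_dim: int = 8) -> np.ndarray:
    """Phase 1 (runs once per edit): compress full canvas + mask into a
    compact global context. Toy stand-in: per-channel means + mask ratio."""
    feats = np.concatenate([canvas.mean(axis=(0, 1)), [mask.mean()]])
    ctx = np.zeros(ctx_dim)
    ctx[: min(ctx_dim, feats.size)] = feats[:ctx_dim]
    return ctx

def lazy_decode(canvas: np.ndarray, mask: np.ndarray, ctx: np.ndarray,
                steps: int = 4, rng=None) -> np.ndarray:
    """Phase 2: iterate a (toy) denoiser ONLY over masked pixels, so each
    step costs O(mask area), then paste the result back into the canvas."""
    if rng is None:
        rng = np.random.default_rng(0)
    ys, xs = np.nonzero(mask)                            # masked coordinates
    x = rng.standard_normal((len(ys), canvas.shape[2]))  # noise init
    for _ in range(steps):
        x = 0.5 * x + 0.5 * ctx[: canvas.shape[2]]       # pull toward context
    out = canvas.copy()
    out[ys, xs] = x                                      # update masked region only
    return out

canvas = np.full((64, 64, 3), 0.5)
mask = np.zeros((64, 64), dtype=bool)
mask[10:20, 10:20] = True                                # small local edit
edited = lazy_decode(canvas, mask, encode_context(canvas, mask))
assert np.allclose(edited[~mask], canvas[~mask])         # unmasked pixels untouched
```

In this sketch the decoder's work per step is proportional to `mask.sum()`, which mirrors the abstract's claim that generation cost is amortized over small interactive edits.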
