

Poster

Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control

Yue Han · Junwei Zhu · Keke He · Xu Chen · Yanhao Ge · Wei Li · Xiangtai Li · Jiangning Zhang · Chengjie Wang · Yong Liu

Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Previous face reenactment and swapping techniques rely predominantly on GAN frameworks. Recent research has shifted towards large diffusion models for these tasks, owing to their superior generation capabilities; however, training such models incurs significant computational cost, and the results have not yet reached satisfactory performance. To address this, we introduce Face-Adapter, an efficient and effective adapter for high-precision, high-fidelity face editing with pretrained diffusion models. It consists of: 1) a Spatial Condition Generator that provides precise landmarks and background; 2) a plug-and-play Identity Encoder that transfers face embeddings to the text space via a transformer decoder; and 3) an Attribute Controller that integrates the spatial condition and detailed attributes. Face-Adapter achieves comparable or even superior motion-control precision, ID-retention capability, and generation quality relative to fully fine-tuned models on face reenactment/swapping tasks. Additionally, Face-Adapter integrates seamlessly with popular pretrained diffusion models such as Stable Diffusion. Full code will be made available.
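As a rough illustration of the second component, the PyTorch sketch below shows one way a face-recognition embedding could be mapped into a frozen diffusion model's text-embedding space via a transformer decoder over learnable query tokens. This is not the authors' implementation; the dimensions, layer counts, and the ArcFace-style input are assumptions for illustration.

    # Minimal sketch (not the paper's code) of a plug-and-play identity
    # encoder: project a face-recognition embedding into the text-embedding
    # space of a pretrained diffusion model via a transformer decoder.
    # All names, dimensions, and layer choices are illustrative assumptions.
    import torch
    import torch.nn as nn

    class IdentityEncoder(nn.Module):
        def __init__(self, face_dim=512, text_dim=768, num_queries=77,
                     num_layers=4):
            super().__init__()
            # Learnable query tokens that get filled with identity information
            # and stand in for (or augment) the text tokens fed to the UNet.
            self.queries = nn.Parameter(torch.randn(num_queries, text_dim))
            # Project the face embedding (e.g., from an ArcFace-style
            # recognizer) into the decoder width.
            self.face_proj = nn.Linear(face_dim, text_dim)
            decoder_layer = nn.TransformerDecoderLayer(
                d_model=text_dim, nhead=8, batch_first=True
            )
            self.decoder = nn.TransformerDecoder(decoder_layer,
                                                 num_layers=num_layers)

        def forward(self, face_embed):
            # face_embed: (batch, face_dim) identity feature.
            memory = self.face_proj(face_embed).unsqueeze(1)  # (B, 1, text_dim)
            queries = self.queries.unsqueeze(0).expand(
                face_embed.size(0), -1, -1
            )
            # Cross-attend the query tokens to the identity feature.
            return self.decoder(queries, memory)              # (B, 77, text_dim)

Under these assumptions, the resulting (B, 77, text_dim) token sequence would be passed to the frozen UNet's cross-attention layers as conditioning, in place of the usual text-encoder output of a model such as Stable Diffusion.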
