Poster

ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs

Viraj Shah · Nataniel Ruiz · Forrester Cole · Erika Lu · Svetlana Lazebnik · Yuanzhen Li · Varun Jampani

Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Methods for finetuning generative models for concept-driven personalization generally achieve strong results for subject-driven or style-driven generation. Recently, low-rank adaptations (LoRA) have been proposed as a parameter-efficient way of achieving concept-driven personalization. While recent work explores combining separately trained LoRAs to jointly generate learned styles and subjects, existing techniques do not reliably solve the problem, compromising either subject fidelity or style fidelity. We propose ZipLoRA, a method to cheaply and effectively merge independently trained style and subject LoRAs in order to generate any user-provided subject in any user-provided style. Experiments on a wide range of subject and style combinations show that ZipLoRA can generate compelling results with meaningful improvements over baselines in subject and style fidelity while preserving the ability to recontextualize.
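The core operation the abstract describes is merging two independently trained LoRA weight updates into a single set of weights. The sketch below illustrates one plausible form of such a merge: each LoRA contributes a low-rank delta ΔW = BA, and the deltas are combined with per-column merger coefficients before being added to the base weights. All names, shapes, and the fixed coefficient values here are illustrative assumptions, not the paper's actual optimization procedure (which learns the merger coefficients).

```python
import numpy as np

def lora_delta(A, B):
    """Low-rank LoRA update: Delta W = B @ A, with A (r x d_in), B (d_out x r)."""
    return B @ A

def merge_deltas(delta_subject, delta_style, m_subject, m_style):
    """Combine two deltas with per-column merger coefficients (hypothetical
    fixed values here; ZipLoRA optimizes such coefficients)."""
    return delta_subject * m_subject + delta_style * m_style

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2

# Two independently trained LoRAs for one layer: subject (s) and style (c).
A_s, B_s = rng.normal(size=(r, d_in)), rng.normal(size=(d_out, r))
A_c, B_c = rng.normal(size=(r, d_in)), rng.normal(size=(d_out, r))

delta_s = lora_delta(A_s, B_s)   # subject LoRA delta
delta_c = lora_delta(A_c, B_c)   # style LoRA delta

# Placeholder per-column coefficients (one scalar per input column).
m_s = np.ones(d_in)
m_c = np.full(d_in, 0.5)

W_base = rng.normal(size=(d_out, d_in))
W_merged = W_base + merge_deltas(delta_s, delta_c, m_s, m_c)
print(W_merged.shape)
```

Because each delta is rank-r rather than full-rank, the merge costs only a few small matrix products per layer, which is what makes this style of combination cheap relative to retraining.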