Poster
DiffBIR: Toward Blind Image Restoration with Generative Diffusion Prior
Xinqi Lin · Jingwen He · Ziyan Chen · Zhaoyang Lyu · Bo Dai · Fanghua Yu · Yu Qiao · Wanli Ouyang · Chao Dong
# 244
We present DiffBIR, a two-stage restoration pipeline that handles blind image restoration tasks in a unified framework. In the first stage, we use restoration modules to remove degradations and obtain high-fidelity restored results. In the second stage, we propose IRControlNet, which leverages the generative ability of latent diffusion models to generate realistic details. Specifically, IRControlNet is trained on specially produced condition images free of distracting noisy content, yielding stable generation performance. Moreover, we design a region-adaptive restoration guidance that modifies the denoising process at inference time without model re-training, allowing users to balance realness and fidelity through a tunable guidance scale. Extensive experiments demonstrate DiffBIR's superiority over state-of-the-art approaches for blind image super-resolution, blind face restoration, and blind image denoising on both synthetic and real-world datasets.
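To make the region-adaptive guidance concrete, the sketch below shows one plausible form of such a training-free steering step: at each denoising iteration, the model's clean-latent estimate is nudged toward the stage-1 restored reference, weighted more strongly in smooth regions (where fidelity matters) than in textured regions (where generated detail should survive), with a user-set guidance scale. All names and the exact weighting rule here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def region_weights(ref, eps=1e-6):
    # Hypothetical region-adaptive weighting: weights near 1 in smooth
    # regions (small local gradients) enforce fidelity to the reference,
    # while weights shrink in textured regions so generated detail is kept.
    gy, gx = np.gradient(ref)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return 1.0 / (1.0 + mag / (mag.mean() + eps))

def guided_update(z0_pred, z_ref, scale):
    # Training-free guidance step: pull the denoiser's clean-latent
    # estimate z0_pred toward the stage-1 restored reference z_ref.
    # scale = 0 leaves generation untouched (max realness); larger
    # scale pushes toward the reference (max fidelity).
    w = region_weights(z_ref)
    return z0_pred - scale * w * (z0_pred - z_ref)
```

A full sampler would apply `guided_update` after each denoising step before continuing the reverse diffusion; the key point is that only the sampling loop changes, so no model weights are re-trained.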