

Poster

Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models

Qinyu Yang · Haoxin Chen · Yong Zhang · Menghan Xia · Xiaodong Cun · Zhixun Su · Ying Shan

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

To improve the quality of synthesized videos, a predominant approach is to retrain an expert diffusion model and then apply a noising-denoising process for refinement. Beyond the significant training cost, maintaining content consistency between the original and enhanced videos remains a major challenge. To tackle this challenge, we propose a novel formulation that considers both visual quality and content consistency. Content consistency is ensured by a proposed loss function that preserves the structure of the input, while visual quality is improved by exploiting the denoising process of pre-trained diffusion models. To solve the formulated optimization problem, we develop a plug-and-play noise optimization strategy, referred to as Noise Calibration. By refining the initial random noise through only a few iterations, the content of the original video can be largely preserved, and the enhancement effect is significantly improved. Extensive experiments demonstrate the effectiveness of the proposed method.
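The abstract only outlines the strategy at a high level. As a rough, hypothetical illustration of the idea (and not the authors' actual algorithm), the following is a minimal PyTorch sketch: `denoise_fn` is a dummy stand-in for one noising-denoising pass of a pre-trained video diffusion model, `low_pass` is an assumed structure extractor, and all names, losses, and hyperparameters are illustrative placeholders.

```python
# Hypothetical sketch of a noise-calibration-style refinement loop.
# Assumptions: `denoise_fn` is a placeholder for a real pre-trained video
# diffusion model's noising-denoising refinement; the content-consistency
# loss here (low-frequency structure matching) is an illustrative choice.
import torch
import torch.nn.functional as F


def low_pass(x: torch.Tensor, k: int = 7) -> torch.Tensor:
    """Crude low-frequency structure extractor: pool, then upsample back."""
    pooled = F.avg_pool2d(x, kernel_size=k, stride=k)
    return F.interpolate(pooled, size=x.shape[-2:], mode="bilinear",
                         align_corners=False)


def denoise_fn(x0: torch.Tensor, eps: torch.Tensor, alpha_bar: float) -> torch.Tensor:
    # Placeholder: add noise `eps` to `x0` at a chosen timestep, then denoise
    # with a pre-trained diffusion model. A real implementation would return
    # the model's denoised (enhanced) sample instead of the noised latent.
    return alpha_bar ** 0.5 * x0 + (1 - alpha_bar) ** 0.5 * eps


def calibrate_noise(x0: torch.Tensor, steps: int = 5, lr: float = 0.1,
                    alpha_bar: float = 0.5) -> torch.Tensor:
    """Refine the initial noise so the enhanced output keeps the input's structure."""
    eps = torch.randn_like(x0, requires_grad=True)  # initial random noise
    opt = torch.optim.Adam([eps], lr=lr)
    target = low_pass(x0)  # structure of the input we want to preserve
    for _ in range(steps):  # "a few iterations" of noise refinement
        enhanced = denoise_fn(x0, eps, alpha_bar)
        loss = F.mse_loss(low_pass(enhanced), target)  # content-consistency loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return eps.detach()


if __name__ == "__main__":
    frames = torch.randn(2, 3, 64, 64)  # toy batch of video frames (B, C, H, W)
    eps = calibrate_noise(frames)
    print("calibrated noise:", eps.shape)
```

Because the calibration only adjusts the initial noise, a loop like this could in principle wrap any frozen diffusion pipeline, which matches the plug-and-play framing in the abstract; the specific consistency loss used in the paper may differ from the stand-in above.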
