Poster

Overcoming Distribution Mismatch in Quantizing Image Super-Resolution Networks

Cheeun Hong · Kyoung Mu Lee

Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Quantization is a promising approach to reducing the high computational complexity of image super-resolution (SR) networks. However, low-bit quantization causes severe accuracy loss in SR networks compared to high-level tasks such as image classification. This is because the feature distributions of SR networks vary significantly across channels and input images, making it difficult to determine an appropriate quantization range. Existing SR quantization works address this distribution mismatch by dynamically adapting quantization ranges to the varying distributions at test time. However, such dynamic adaptation incurs additional computational costs that limit the benefits of quantization. Instead, we propose a new quantization-aware training framework that effectively overcomes the distribution mismatch problem in SR networks without the need for dynamic adaptation. Intuitively, the mismatch can be reduced by directly regularizing the distance between the features to be quantized and the quantization grids during training. However, we observe that this mismatch regularization can conflict with the reconstruction loss during training and adversely affect SR accuracy. We therefore avoid the conflict between the two losses by regularizing the mismatch only when the gradients of the mismatch regularization are cooperative with those of the reconstruction loss. Additionally, we introduce a layer-wise weight clipping correction scheme to find a better quantization range for weights that differ across layers. Experimental results show that our algorithm effectively reduces the distribution mismatch, achieving state-of-the-art performance with minimal computational overhead. Our code will be released.
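The cooperative-gradient idea in the abstract can be illustrated with a small sketch. This is not the authors' released code; it is a minimal NumPy example assuming a uniform quantization grid, a straight-through treatment of rounding, and a simple dot-product test for gradient cooperation. The names `quantize`, `mismatch_grad`, and `gated_update`, along with the weight `lam`, are all hypothetical.

```python
import numpy as np

def quantize(x, scale, bits=4):
    # Uniform quantization onto a fixed grid (hypothetical 4-bit setting).
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale), -qmax - 1, qmax) * scale

def mismatch_grad(x, scale, bits=4):
    # Gradient of the mean squared feature-to-grid distance w.r.t. x,
    # treating the rounded grid point as constant (straight-through).
    return 2.0 * (x - quantize(x, scale, bits)) / x.size

def gated_update(g_rec, g_mis, lam=0.1):
    # Apply the mismatch-regularization gradient only when it is
    # cooperative (non-negative dot product) with the reconstruction
    # gradient; otherwise fall back to the reconstruction gradient alone.
    if np.sum(g_rec * g_mis) >= 0:
        return g_rec + lam * g_mis
    return g_rec

# Toy feature vector and a made-up reconstruction gradient.
x = np.array([0.12, -0.47, 0.95, -1.30])
g_rec = np.array([0.05, -0.02, 0.01, 0.03])
g_mis = mismatch_grad(x, scale=0.5)
step = gated_update(g_rec, g_mis)
```

The gate drops the regularization term whenever its gradient points against the reconstruction gradient, matching the abstract's claim that the two losses are combined only when they do not collide.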
