Poster
Tight and Efficient Upper Bound on Spectral Norm of Convolutional Layers
Ekaterina Grishina · Mikhail Gorbunov · Maxim Rakhuba
# 27
Strong Double Blind
Wed 2 Oct 1:30 a.m. PDT — 3:30 a.m. PDT
Abstract:
Controlling the spectral norm of convolutional layers has been shown to enhance generalization and robustness in CNNs, as well as training stability and the quality of generated samples in GANs. Existing methods for computing singular values of convolutional layers either yield loose bounds or fail to scale with input and kernel sizes. In this paper, we derive a new upper bound on the spectral norm that is independent of the input size, differentiable, and efficiently computable during training. Through experiments, we demonstrate how this new bound can be used to improve the performance of convolutional architectures.
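For context on what the paper improves upon: a standard exact characterization (not the paper's bound) holds for a single-channel convolution with circular padding, where the singular values of the layer, viewed as a linear map on n x n inputs, are the magnitudes of the 2D DFT of the kernel zero-padded to the input size. The sketch below illustrates this baseline and why such computations depend on the input size n, which is the scalability issue the abstract refers to; all function names here are illustrative.

```python
import numpy as np

def circular_conv_spectral_norm(kernel, n):
    """Exact spectral norm of a single-channel circular 2D convolution
    acting on n x n inputs: the DFT diagonalizes the operator, so the
    singular values are |FFT2(kernel zero-padded to n x n)|.
    Note the dependence on the input size n."""
    padded = np.zeros((n, n))
    padded[:kernel.shape[0], :kernel.shape[1]] = kernel
    return np.abs(np.fft.fft2(padded)).max()

def conv_matrix(kernel, n):
    """Brute-force check: materialize the (n^2 x n^2) matrix of the same
    circular convolution by applying it to every unit impulse."""
    padded = np.zeros((n, n))
    padded[:kernel.shape[0], :kernel.shape[1]] = kernel
    k_hat = np.fft.fft2(padded)
    m = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            x = np.zeros((n, n))
            x[i, j] = 1.0  # unit impulse input
            y = np.real(np.fft.ifft2(k_hat * np.fft.fft2(x)))
            m[:, i * n + j] = y.ravel()
    return m

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    kernel = rng.standard_normal((3, 3))
    n = 8
    exact = circular_conv_spectral_norm(kernel, n)
    brute = np.linalg.svd(conv_matrix(kernel, n), compute_uv=False)[0]
    print(exact, brute)  # the two values agree
```

The FFT route costs O(n^2 log n) per layer per input size, and the full multi-channel case is more expensive still; the paper's contribution, per the abstract, is an upper bound whose cost does not grow with n.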