This paper introduces a pioneering approach: a linearly controllable generative adversarial network (LC-GAN) trained with unsupervised learning. Unlike traditional methods that rely on supervision signals or post-processing to disentangle latent features, our technique learns from image data alone through contrastive feature categorization and spectral regularization. In our framework, the discriminator constructs geometry- and appearance-related feature spaces by combining image augmentation with contrastive representation learning. Leveraging these feature spaces, the generator autonomously categorizes input latent codes into geometry- and appearance-related features. The categorized features are then projected into a subspace via the proposed spectral regularization, where each component controls a distinct aspect of the generated image. Beyond providing fine-grained control over the generative model, our approach achieves state-of-the-art image generation quality on benchmark datasets, including FFHQ, CelebA-HQ, and AFHQ-V2.
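
The abstract names two ingredients without giving their exact formulation: contrastive representation learning on augmented images, and a spectral regularization that projects categorized latent features onto a subspace whose components act as control axes. The sketch below illustrates one plausible reading of each ingredient; the loss forms, tensor shapes, and subspace dimension `k` are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: InfoNCE-style contrastive loss on two augmented views,
# plus an SVD-based projection onto a dominant subspace. All shapes and loss forms
# are assumptions; the paper's exact objectives may differ.
import torch
import torch.nn.functional as F

def contrastive_loss(feats_a, feats_b, temperature=0.1):
    """Two augmented views of the same image are treated as positives."""
    a = F.normalize(feats_a, dim=1)          # (N, D) features from view 1
    b = F.normalize(feats_b, dim=1)          # (N, D) features from view 2
    logits = a @ b.t() / temperature         # (N, N) cosine-similarity logits
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)  # diagonal entries are the positives

def spectral_projection(features, k=8):
    """Project a batch of categorized features onto its top-k singular directions,
    so each retained component can serve as an (approximately) independent control axis."""
    centered = features - features.mean(dim=0, keepdim=True)    # (N, D)
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)  # vh: (min(N, D), D)
    basis = vh[:k].t()                        # (D, k) top-k right singular vectors
    return centered @ basis @ basis.t()       # projection back into R^D

# Usage sketch: penalize the distance between features and their projection,
# one possible form of a "spectral regularization" term.
feats = torch.randn(64, 512)                  # hypothetical geometry-related features
reg_loss = F.mse_loss(spectral_projection(feats, k=8), feats)
```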