

Poster

Towards compact reversible image representations for neural style transfer

Xiyao Liu · Siyu Yang · Jian Zhang · Gerald Schaefer · Jiya Li · Xunli FAN · Songtao Wu · Hui Fang

Strong blind review: this paper was not made available on public preprint services during the review process.
Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Arbitrary neural style transfer aims to stylise a content image by referencing a provided style image. Despite various efforts to achieve both content preservation and style transferability, learning effective representations for this task remains challenging, since redundancy in content and style features leads to unpleasant image artefacts. In this paper, we learn compact neural representations for style transfer, motivated by an information-theoretic perspective. In particular, we enforce compressive representations across sequential modules of a reversible flow network to reduce feature redundancy without losing content preservation capability. We use a Barlow twins loss to reduce channel dependency, and thus provide better content expressiveness, and optimise the Jensen-Shannon divergence of style representations between reference and target images to avoid under- and over-stylisation. We demonstrate the effectiveness of our proposed method in comparison to other state-of-the-art style transfer methods.
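The two objectives named in the abstract can be made concrete with a small NumPy sketch. This is an illustrative implementation of the generic Barlow twins redundancy-reduction loss and the Jensen-Shannon divergence only, not the authors' code: the variable names (`z_a`, `z_b`, the trade-off weight `lam`) and the treatment of style representations as normalised histograms are assumptions for the sake of the example.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow twins loss: push the cross-correlation matrix of two
    feature views towards the identity, decorrelating channels.
    z_a, z_b: (batch, channels) feature matrices (illustrative shapes)."""
    # Normalise each channel to zero mean and unit variance.
    z_a = (z_a - z_a.mean(axis=0)) / (z_a.std(axis=0) + 1e-8)
    z_b = (z_b - z_b.mean(axis=0)) / (z_b.std(axis=0) + 1e-8)
    n = z_a.shape[0]
    c = z_a.T @ z_b / n                      # (channels, channels) cross-correlation
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()       # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions,
    symmetric and bounded by log(2); here style representations are
    assumed to be given as normalisable non-negative vectors."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * np.log((a + eps) / (b + eps))).sum()
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Minimising the off-diagonal term of `c` is what reduces channel dependency, while the symmetry and boundedness of the JS divergence make it a natural middle ground between under- and over-stylisation, penalising deviation in either direction equally.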
