

Poster

Efficient Neural Video Representation with Temporally Coherent Modulation

Seungjun Shin · Suji Kim · Dokwan Oh

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Implicit neural representations (INRs) have found successful applications across diverse domains. To deploy INRs in real-world settings, it is important to speed up training. In the field of INRs for video, the state-of-the-art approach [26] employs grid-type trainable parameters and achieves a faster encoding speed than its predecessors [5]. Despite its time efficiency, using grid types without accounting for the dynamic nature of video limits performance. To learn video representations rapidly and effectively, we propose Neural Video representation with Temporally coherent Modulation (NVTM), a novel framework that captures dynamic characteristics by decomposing spatio-temporal 3D video data into a set of 2D grids. Through this mapping, our framework processes temporally corresponding pixels at once, yielding a more than 3× faster video encoding speed at reasonable video quality. It also achieves an average improvement of 1.54 dB/0.019 in PSNR/LPIPS on the UVG dataset (even with 10% fewer parameters) and an average improvement of 1.84 dB/0.013 in PSNR/LPIPS on the MCL-JCV dataset, compared to previous work. Extending this to compression tasks, we demonstrate performance comparable to video compression standards (H.264, HEVC) and to recent INR approaches for video compression. Additionally, we perform extensive experiments demonstrating the superior performance of our algorithm across diverse tasks, encompassing super-resolution, frame interpolation, and video inpainting.
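To make the core idea concrete, below is a minimal sketch (not the authors' code) of the general mechanism the abstract describes: each (x, y, t) video coordinate is mapped onto a shared trainable 2D feature grid so that temporally corresponding pixels reuse the same grid entry, and the sampled feature modulates a small coordinate MLP. The class and method names (GridModulatedINR, align) are hypothetical, and the temporal alignment here is a placeholder identity map; the actual decomposition into a set of 2D grids is specified in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridModulatedINR(nn.Module):
    """Toy INR modulated by features sampled from a shared 2D grid."""

    def __init__(self, grid_size=64, feat_dim=16, hidden=64):
        super().__init__()
        # Trainable 2D feature grid shared across time.
        self.grid = nn.Parameter(torch.randn(1, feat_dim, grid_size, grid_size) * 0.01)
        self.mlp = nn.Sequential(
            nn.Linear(2 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB output
        )

    def align(self, xy, t):
        # Placeholder temporal alignment: identity mapping. A real method
        # would warp (x, y, t) to a canonical 2D location (e.g. via learned
        # motion) so corresponding pixels across frames share grid features.
        return xy

    def forward(self, xy, t):
        # xy: (N, 2) in [-1, 1], t: (N, 1) in [-1, 1]
        uv = self.align(xy, t)                       # (N, 2) canonical coords
        grid_coords = uv.view(1, -1, 1, 2)           # layout for grid_sample
        feats = F.grid_sample(self.grid, grid_coords, align_corners=True)
        feats = feats.view(self.grid.shape[1], -1).t()    # (N, feat_dim)
        return self.mlp(torch.cat([xy, feats], dim=-1))   # (N, 3) RGB

# Usage: query 4096 random space-time points.
model = GridModulatedINR()
xy = torch.rand(4096, 2) * 2 - 1
t = torch.rand(4096, 1) * 2 - 1
rgb = model(xy, t)  # (4096, 3)
```

Because all frames index into the same 2D grid after alignment, one gradient step on a sampled batch updates features shared by temporally corresponding pixels, which is the intuition behind the reported encoding speedup.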
