

Poster

Learning-based Axial Video Motion Magnification

Kwon Byung-Ki · Oh Hyun-Bin · Kim Jun-Seong · Hyunwoo Ha · Tae-Hyun Oh

[ Project Page ]
Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Video motion magnification amplifies invisible small motions to make them perceptible, providing humans with a spatially dense and holistic understanding of the small motions in a scene of interest. This rests on the premise that magnifying small motions improves their legibility. In the real world, however, vibrating objects often belong to convoluted systems with complex natural frequencies, modes, and directions. Existing motion magnification often fails to improve legibility because such intricate motions retain their complex characteristics even after magnification, which distracts us from analyzing them. In this work, we focus on improving legibility by proposing a new concept, axial video motion magnification, which magnifies decomposed motions along a user-specified direction. Axial video motion magnification can be applied to various applications where motions along specific axes are critical, by providing simplified and easily readable motion information. To achieve this, we propose a novel Motion Separation Module that disentangles and magnifies motion representations along axes of interest. Furthermore, we build a new synthetic training dataset for our task that generalizes to real data. Our method improves the legibility of the resulting motions along chosen axes by adding a new capability: user controllability. In addition, axial video motion magnification is a more general concept; thus, our method can be directly adapted to generic motion magnification and achieves favorable performance against competing methods. The code and dataset are available on our project page: https://axial-momag.github.io/axial_momag/.
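To make the core idea concrete, here is a minimal conceptual sketch of axial magnification on an explicit motion field. It is not the paper's method (the Motion Separation Module operates on learned motion representations rather than optical flow), and the function name and flow-based formulation are illustrative assumptions: the motion at each pixel is projected onto a user-specified unit axis, and only that component is amplified.

import numpy as np

def axial_magnify_flow(flow, direction, alpha):
    # flow:      (H, W, 2) per-pixel motion vectors (dx, dy)
    # direction: user-specified axis, e.g. (1.0, 0.0) for horizontal
    # alpha:     magnification factor applied along that axis
    d = np.asarray(direction, dtype=np.float64)
    d = d / np.linalg.norm(d)              # unit vector along the axis of interest
    parallel = (flow @ d)[..., None] * d   # component of the motion along the axis
    perpendicular = flow - parallel        # residual motion, left unmagnified
    return perpendicular + alpha * parallel

For instance, axial_magnify_flow(flow, (0.0, 1.0), 20.0) would amplify only vertical motion twenty-fold while leaving horizontal motion untouched; the magnified field could then be used to warp the input frames.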
