

Poster

COM Kitchens: An Unedited Overhead-view Procedural Video Dataset as a Vision-Language Benchmark

Atsushi Hashimoto · Koki Maeda · Tosho Hirasawa · Jun Harashima · Leszek Rybicki · Yusuke Fukasawa · Yoshitaka Ushiku

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Procedural video understanding is gaining attention in the vision and language community. Deep learning-based video analysis requires extensive data, so existing works often rely on web videos as training resources, which makes it challenging to query contents from raw video observations. To address this issue, we propose a new dataset, COM Kitchens. The dataset consists of unedited overhead-view videos captured by smartphones, in which participants performed food preparation based on given recipes. Fixed-viewpoint video datasets often lack environmental diversity due to high camera setup costs. We used modern wide-angle smartphone lenses to cover cooking counters from sink to cooktop in an overhead view, capturing activity without in-person assistance. With this setup, we collected a diverse dataset by distributing smartphones to participants. On top of this dataset, we propose a novel video-to-text retrieval task, Online Recipe Retrieval (OnRR), and a new video captioning domain, Dense Video Captioning on unedited Overhead-View videos (DVC-OV). Our experiments verified the capabilities and limitations of current web-video-based SOTA methods in handling these tasks.
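As a rough illustration of the video-to-text retrieval setting described above, a system scores candidate recipe texts against an embedding of the (possibly partial) video observed so far and returns the best-ranked recipe. The sketch below is hypothetical and not the paper's method: the function names, the embeddings, and the cosine-similarity scoring are all illustrative assumptions.

```python
import numpy as np

def retrieve_recipe(video_embedding, recipe_embeddings):
    """Rank candidate recipes by cosine similarity to a (partial) video embedding.

    Hypothetical sketch: real OnRR systems would compute these embeddings
    with learned video and text encoders; here they are plain vectors.
    """
    v = video_embedding / np.linalg.norm(video_embedding)
    r = recipe_embeddings / np.linalg.norm(recipe_embeddings, axis=1, keepdims=True)
    scores = r @ v                 # cosine similarity per candidate recipe
    return np.argsort(-scores)     # indices, best match first

# Toy example: three candidate recipes in a 4-d embedding space.
recipes = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])
partial_video = np.array([0.1, 0.9, 0.1, 0.0])  # most similar to recipe 1
ranking = retrieve_recipe(partial_video, recipes)
print(ranking[0])  # 1
```

In the online setting, this scoring would be repeated as the video stream grows, re-ranking recipes from increasingly complete observations.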
