
Poster

Nymeria: A Massive Collection of Egocentric Multi-modal Human Motion in the Wild

Lingni Ma · Yuting Ye · Rowan Postyeni · Alexander J Gamino · Vijay Baiyya · Luis Pesqueira · Kevin M Bailey · David Soriano Fosas · Fangzhou Hong · Vladimir Guzov · Yifeng Jiang · Hyo Jin Kim · Jakob Engel · Karen Liu · Ziwei Liu · Renzo De Nardi · Richard Newcombe

Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

We introduce Nymeria, a large-scale, diverse, richly annotated human motion dataset collected in the wild with multimodal egocentric devices. The dataset comes with a) full-body motion ground truth; b) egocentric multimodal recordings from Project Aria devices, including color, grayscale and eye-tracking cameras, IMUs, magnetometer, barometer, and multi-channel microphones; and c) an additional "observer" device providing a third-person viewpoint. We compute world-aligned 6DoF transformations for all sensors, across devices and capture sessions. The dataset also provides 3D scene point clouds and calibrated eye gaze. We derive a protocol to annotate hierarchical language descriptions of in-context human motion, from fine-grained dense body pose narrations, to simplified atomic actions and coarse activity summarization. To the best of our knowledge, the Nymeria dataset is the world's largest collection of human motion in the wild with natural and diverse activities; the first of its kind to provide synchronized and localized multi-device multimodal egocentric data; and also the world's largest dataset of motion with language descriptions. It contains 1200 recordings of 300 hours of daily activities from 264 participants across 50 locations. The accumulated trajectory of participants totals 399.2 km for the head and 1053.3 km for both wrists. To facilitate research, we define multiple research tasks in egocentric body tracking, motion synthesis, and action recognition, and report the performance of several state-of-the-art algorithms. We will open-source the data and code to empower future exploration by the research community.
