Poster

RT-Pose: A 4D Radar-Tensor based 3D Human Pose Estimation and Localization Benchmark

Yuan-Hao Ho · Jen-Hao Cheng · Sheng Yao Kuan · Zhongyu Jiang · Wenhao Chai · Hsiang-Wei Huang · Chih-Lung Lin · Jenq-Neng Hwang

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Traditional methods for human localization and pose estimation (HPE), which mainly rely on RGB images as the input modality, face substantial limitations in real-world applications due to privacy concerns. In contrast, radar-based HPE methods are a promising alternative, offering distinctive attributes such as through-wall recognition and privacy preservation, which make them more conducive to practical deployment. This paper presents a Radar Tensor-based human pose (RT-Pose) dataset and an open-source benchmarking framework. The RT-Pose dataset comprises 4D radar tensors, LiDAR point clouds, and RGB images, collected over a total of 72k frames across 240 sequences covering actions at six complexity levels. The 4D radar tensor provides raw spatio-temporal information, differentiating it from other radar point cloud-based datasets. We develop a semi-automatic annotation process that uses RGB images and LiDAR point clouds to accurately label 3D human skeletons. In addition, we propose HRRadarPose, the first single-stage architecture that extracts a high-resolution representation of 4D radar tensors in 3D space to aid human keypoint estimation. HRRadarPose outperforms previous radar-based HPE work on the RT-Pose benchmark. The overall HRRadarPose performance on the RT-Pose dataset, a mean per joint position error (MPJPE) of 9.91 cm, indicates the persistent challenge of achieving accurate HPE in complex real-world scenarios.
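The headline metric, mean per joint position error (MPJPE), is the average Euclidean distance between predicted and ground-truth 3D joint positions. A minimal sketch of how it is typically computed (the function name and array layout are illustrative, not taken from the paper's code):

```python
import numpy as np

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean per joint position error.

    pred, gt: arrays of shape (num_frames, num_joints, 3),
    3D joint coordinates in a shared metric frame (e.g., cm).
    Returns the Euclidean distance per joint, averaged over
    all joints and frames.
    """
    # Per-joint Euclidean distance, then mean over joints and frames.
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy usage: one frame, two joints, predictions offset by a 3-4-5 triangle.
gt = np.zeros((1, 2, 3))
pred = np.array([[[3.0, 4.0, 0.0], [0.0, 3.0, 4.0]]])
print(mpjpe(pred, gt))  # 5.0
```

Reporting MPJPE in centimeters, as the abstract does, assumes the skeletons are expressed in a metric world frame rather than a normalized one.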
