

Poster

Improving Agent Behaviors with RL Fine-tuning for Autonomous Driving

Zhenghao Peng · Wenjie Luo · Yiren Lu · Tianyi Shen · Cole Gulino · Ari Seff · Justin Fu

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

A major challenge in autonomous vehicle research is agent behavior modeling, which has critical applications including constructing realistic and reliable simulations for off-board evaluation and motion forecasting of traffic agents for onboard planning. Motion prediction models trained via supervised learning have recently proven effective at modeling agents across many domains. However, these models are subject to distribution shift when deployed at test time. In this work, we improve the reliability of agent behaviors by closed-loop fine-tuning of behavior models with reinforcement learning. We demonstrate that we can improve overall performance, as well as targeted metrics such as collision rate, on the Waymo Open Sim Agents challenge benchmark. We also introduce a novel policy evaluation ranking benchmark for directly evaluating the ability of sim agents to measure planning quality, and demonstrate the effectiveness of our approach on this new benchmark.
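To make the idea of closed-loop RL fine-tuning concrete, here is a minimal toy sketch: a "pre-trained" behavior model (reduced to a tiny linear Gaussian policy) is fine-tuned with REINFORCE on closed-loop rollouts whose reward penalizes collisions. Everything here is illustrative and assumed: the simulator, reward terms, policy form, and hyperparameters are invented for the sketch and are not the paper's actual models or benchmarks.

```python
import numpy as np

# NOTE: this is a hypothetical toy, not the paper's implementation.
rng = np.random.default_rng(0)


def rollout(theta, horizon=30):
    """One closed-loop rollout in a toy 1D car-following world.

    Returns (total_reward, grad_log_prob, collided)."""
    ego, lead = 0.0, 4.0            # ego and lead-agent positions (toy setup)
    grad = np.zeros(2)
    total = 0.0
    collided = False
    for _ in range(horizon):
        gap = lead - ego
        feats = np.array([gap, 1.0])
        mean = theta @ feats                   # policy: speed ~ N(mean, 1)
        speed = mean + rng.standard_normal()
        grad += (speed - mean) * feats         # grad_theta log N(speed; mean, 1)
        ego += max(speed, 0.0) * 0.1
        lead += 0.2                            # lead agent cruises forward
        total += 0.05 * max(speed, 0.0)        # progress term in the reward
        if lead - ego < 0.3:                   # "collision": hitting the lead
            total -= 5.0                       # targeted collision penalty
            collided = True
            break
    return total, grad, collided


def finetune(theta, iters=200, batch=16, lr=0.01):
    """REINFORCE with a mean-reward baseline over batches of rollouts."""
    theta = theta.copy()
    for _ in range(iters):
        results = [rollout(theta) for _ in range(batch)]
        rewards = np.array([r for r, _, _ in results])
        baseline = rewards.mean()
        g = sum((r - baseline) * grad for r, grad, _ in results) / batch
        theta += lr * g
    return theta


# "Pre-trained" behavior model: drives aggressively toward the lead agent,
# so closed-loop rollouts frequently end in collisions before fine-tuning.
theta0 = np.array([1.0, 1.0])
theta1 = finetune(theta0)
```

The key point the sketch mirrors is that the reward is computed on the policy's own closed-loop rollouts, so the fine-tuned model is optimized under the distribution it actually induces at test time rather than the logged data distribution.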
