Poster

See and Think: Embodied Agent in Virtual Environment

Zhonghan Zhao · Xuan Wang · Wenhao Chai · Boyi Li · Shengyu Hao · Shidong Cao · Tian Ye · Gaoang Wang

Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Large language models (LLMs) have achieved impressive progress on several open-world tasks. Recently, using LLMs to build embodied agents has become a research hotspot. In this paper, we propose STEVE, a comprehensive and visionary embodied agent in the Minecraft virtual environment. STEVE consists of three key components: vision perception, language instruction, and code action. Vision perception involves the interpretation of visual information in the environment, which is then integrated into the LLM component together with the agent state and task instruction. Language instruction is responsible for iterative reasoning and for decomposing complex tasks into manageable guidelines. Code action generates executable skill actions based on retrieval from a skill database, enabling the agent to interact effectively within the Minecraft environment. We also collect the STEVE-21K dataset, which includes 600+ vision-environment pairs, 20K knowledge question-answering pairs, and 200+ skill-code pairs. We evaluate performance on continuous block search, knowledge question answering, and tech tree mastery. Extensive experiments show that STEVE unlocks key tech trees up to 1.5× faster and completes block search tasks up to 2.5× quicker than previous state-of-the-art methods.
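As a rough illustration of the three-component loop the abstract describes, the Python sketch below wires vision perception, language instruction, and code action together. All names here (perceive, query_llm, SkillDatabase, step) are hypothetical stand-ins for illustration only, not the paper's actual interface or implementation.

```python
# Hypothetical sketch of a STEVE-style perceive-reason-act step.
# Every function and class name is an illustrative placeholder.

def perceive(screenshot: bytes) -> str:
    # Vision perception: interpret the visual observation into text
    # the LLM can consume (e.g. "oak trees ahead, night approaching").
    return "placeholder scene description"

def query_llm(prompt: str) -> list[str]:
    # Language instruction: iterative reasoning that decomposes a
    # complex task into manageable step-by-step guidelines.
    return ["chop a log", "craft planks"]

class SkillDatabase:
    # Code action: map each guideline to executable skill code by retrieval.
    def __init__(self, skills: dict[str, str]):
        self.skills = skills

    def retrieve(self, guideline: str) -> str:
        return self.skills.get(guideline, "noop()")

def step(screenshot: bytes, agent_state: dict, task: str, db: SkillDatabase) -> None:
    # 1. Vision perception, integrated with agent state and task instruction.
    prompt = f"Scene: {perceive(screenshot)}\nState: {agent_state}\nTask: {task}"
    # 2. Language instruction: decompose the task into guidelines.
    for guideline in query_llm(prompt):
        # 3. Code action: retrieve the matching skill and hand it to the
        # environment for execution (printed here as a stand-in).
        skill_code = db.retrieve(guideline)
        print(f"executing skill for '{guideline}': {skill_code}")

# Usage sketch: one decision step against a toy skill database.
db = SkillDatabase({"chop a log": "chop_nearest_tree()"})
step(b"", {"inventory": {}}, "craft a wooden pickaxe", db)
```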