Building Open-ended Embodied Agent Via Language-policy Bidirectional Adaptation

Zhai Shaopeng, Wang Jie, Zhang Tianyi, Huang Fuxian, Zhang Qi, Zhou Ming, Hou Jing, Qiao Yu, Liu Yu. Arxiv 2023

[Paper]    
Agentic Fine Tuning Pretraining Methods RAG Reinforcement Learning Tools Training Techniques

Building embodied agents by integrating Large Language Models (LLMs) and Reinforcement Learning (RL) has revolutionized human-AI interaction: researchers can now leverage language instructions to plan decision-making for open-ended tasks. However, existing research struggles to meet the requirement of open-endedness: it typically trains either the LLM or the RL policy to adapt to a fixed counterpart, which limits the exploration of novel skills and hinders the efficacy of human-AI interaction. To this end, we present OpenPAL, a co-training framework comprising two stages: (1) fine-tuning a pre-trained LLM to translate human instructions into goals for planning, and goal-conditioned training of a policy for decision-making; (2) co-training to align the LLM and the policy, achieving instruction open-endedness. We conducted experiments using Contra, an open-ended FPS game, demonstrating that an agent trained with OpenPAL not only comprehends arbitrary instructions but also executes them efficiently. These results suggest that OpenPAL holds the potential to construct open-ended embodied agents in practical scenarios.
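
The abstract describes a two-stage structure: an LLM that maps instructions to goals, a goal-conditioned RL policy, and a co-training loop that aligns the two. Below is a minimal Python sketch of that structure, based only on the abstract; all names (`InstructionToGoalLLM`, `GoalConditionedPolicy`, `co_train`, the environment interface, the success flag) are hypothetical assumptions and not the authors' actual implementation or API.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Transition:
    observation: object
    goal: object
    action: object
    reward: float


class InstructionToGoalLLM:
    """Pre-trained LLM fine-tuned to translate human instructions into goals (Stage 1a)."""

    def fine_tune(self, instruction_goal_pairs: List[Tuple[str, object]]) -> None:
        ...  # supervised fine-tuning on (instruction, goal) pairs

    def plan(self, instruction: str, observation: object) -> object:
        ...  # map an open-ended instruction to a goal the policy can execute


class GoalConditionedPolicy:
    """RL policy whose actions are conditioned on the goal produced by the LLM (Stage 1b)."""

    def act(self, observation: object, goal: object) -> object:
        ...

    def update(self, batch: List[Transition]) -> None:
        ...  # goal-conditioned RL update


def co_train(llm: InstructionToGoalLLM,
             policy: GoalConditionedPolicy,
             env,
             instructions: List[str],
             iterations: int = 100) -> None:
    """Stage 2 (sketch): alternate updates so the LLM's goals and the policy's skills align."""
    for _ in range(iterations):
        rollouts, achieved = [], []
        for instruction in instructions:
            obs = env.reset()
            goal = llm.plan(instruction, obs)
            done = False
            while not done:
                action = policy.act(obs, goal)
                obs, reward, done, info = env.step(action)
                rollouts.append(Transition(obs, goal, action, reward))
            achieved.append((instruction, goal, info.get("success", False)))
        policy.update(rollouts)  # adapt the policy toward the LLM's goals
        llm.fine_tune([(i, g) for i, g, ok in achieved if ok])  # adapt the LLM toward executable goals
```

The sketch only illustrates the alternating adaptation the abstract calls "bidirectional": the policy is trained on goals emitted by the LLM, and the LLM is refined on the goals the policy actually achieved.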

Similar Work