
Can Language Agents Be Alternatives To PPO? A Preliminary Empirical Study On OpenAI Gym

Sheng Junjie, Huang Zixiao, Shen Chuyun, Li Wenhao, Hua Yun, Jin Bo, Zha Hongyuan, Wang Xiangfeng. arXiv 2023


The formidable capacity of language agents for zero- or few-shot decision-making encourages us to pose a compelling question: can language agents be alternatives to PPO agents in traditional sequential decision-making tasks? To investigate this, we first take environments collected in OpenAI Gym as our testbeds and ground them in textual environments, constructing the TextGym simulator. Given the widespread adoption of OpenAI Gym, this allows for straightforward and efficient comparisons between PPO agents and language agents. To ensure fair and effective benchmarking, we introduce five scenario levels for precise control of domain knowledge, along with a unified RL-inspired framework for language agents. Additionally, we propose an innovative explore-exploit-guided language (EXE) agent to solve tasks within TextGym. Through numerical experiments and ablation studies, we extract valuable insights into the decision-making capabilities of language agents and make a preliminary evaluation of their potential to serve as alternatives to PPO in classical sequential decision-making problems. This paper sheds light on the performance of language agents and paves the way for future research in this exciting domain. Our code is publicly available at https://github.com/mail-ecnu/Text-Gym-Agents.
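The core idea of grounding a Gym environment into a textual one can be illustrated with a minimal sketch. The code below is a hypothetical wrapper, not the authors' actual TextGym API: `TinyCartPole` is a toy stand-in for a Gym environment with the classic `reset()`/`step()` interface, and `TextGymWrapper` renders its numeric observations as natural-language prompts that a language agent could consume in place of raw state vectors.

```python
class TinyCartPole:
    """Toy stand-in for a Gym env: reset()/step() over a 4-float state.
    The dynamics are deliberately simplified for illustration."""

    def __init__(self):
        self.state = [0.0, 0.0, 0.0, 0.0]
        self.t = 0

    def reset(self):
        self.state = [0.0, 0.0, 0.05, 0.0]
        self.t = 0
        return self.state

    def step(self, action):
        # Toy dynamics: the action nudges the pole angle left or right.
        self.state[2] += 0.1 if action == 1 else -0.1
        self.t += 1
        done = abs(self.state[2]) > 0.2 or self.t >= 10
        return self.state, 1.0, done, {}


class TextGymWrapper:
    """Grounds numeric observations into text prompts for a language agent."""

    FIELDS = ["cart position", "cart velocity",
              "pole angle", "pole angular velocity"]

    def __init__(self, env):
        self.env = env

    def reset(self):
        return self._to_text(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self._to_text(obs), reward, done, info

    def _to_text(self, obs):
        parts = [f"{name} is {val:.2f}"
                 for name, val in zip(self.FIELDS, obs)]
        return ("Current state: " + "; ".join(parts)
                + ". Actions: 0=push left, 1=push right.")


env = TextGymWrapper(TinyCartPole())
prompt = env.reset()
print(prompt)
```

A PPO agent would consume the raw state vector directly, while a language agent receives the rendered prompt and replies with an action token; the wrapper keeps the two interfaces interchangeable, which is what makes the head-to-head comparison straightforward.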
