Can Large Language Models Play Text Games Well? Current State-of-the-art And Open Questions

Chen Feng Tsai, Xiaochen Zhou, Sierra S. Liu, Jing Li, Mo Yu, Hongyuan Mei. arXiv 2023

[Paper]    
GPT, Model Architecture, RAG, Reinforcement Learning

Large language models (LLMs) such as ChatGPT and GPT-4 have recently demonstrated remarkable abilities in communicating with human users. In this technical report, we take the initiative to investigate their capacity for playing text games, in which a player has to understand the environment and respond to situations by having dialogues with the game world. Our experiments show that ChatGPT performs competitively compared with all existing systems but still exhibits a low level of intelligence. Specifically, ChatGPT cannot construct the world model by playing the game or even by reading the game manual; it may fail to leverage the world knowledge it already has; and it cannot infer the goal of each step as the game progresses. Our results open up new research questions at the intersection of artificial intelligence, machine learning, and natural language processing.
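The abstract describes an evaluation protocol in which the LLM plays the game through dialogue: the game engine sends an observation, the model replies with a text command, and the loop repeats. The sketch below illustrates one plausible way to wire this up. It assumes a Jericho-style `FrotzEnv` interface and a hypothetical `query_chat_model` helper standing in for a ChatGPT/GPT-4 API call; neither is specified by the paper itself.

```python
# Minimal sketch of an LLM-as-player loop for a text game.
# Assumptions (not from the paper): the Jericho FrotzEnv environment interface
# and the query_chat_model() helper, which wraps a chat LLM of your choice.

from jericho import FrotzEnv  # pip install jericho


def query_chat_model(messages):
    """Hypothetical wrapper around a chat LLM; returns the next game command."""
    raise NotImplementedError("Plug in your ChatGPT/GPT-4 client here.")


def play(rom_path, max_steps=50):
    env = FrotzEnv(rom_path)          # e.g. a Z-machine ROM such as "zork1.z5"
    obs, info = env.reset()
    messages = [
        {"role": "system",
         "content": "You are playing a text adventure game. "
                    "Reply with a single short command."},
        {"role": "user", "content": obs},
    ]
    total_reward = 0
    for _ in range(max_steps):
        action = query_chat_model(messages)         # LLM proposes a command
        obs, reward, done, info = env.step(action)  # game world responds
        total_reward += reward
        messages += [{"role": "assistant", "content": action},
                     {"role": "user", "content": obs}]
        if done:
            break
    return total_reward
```

Keeping the full dialogue history in `messages` is one simple design choice for giving the model memory of past game states; the paper's findings about missing world models suggest this alone may not be sufficient.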
