STARLING: Self-supervised Training Of Text-based Reinforcement Learning Agent With Large Language Models

Shreyas Basavatia, Keerthiram Murugesan, Shivam Ratnakar. arXiv 2024

[Paper]    
Tags: Agent, Agentic, GPT, Model Architecture, Reinforcement Learning, Tools, Training Techniques

Interactive fiction games have emerged as an important application for improving the generalization capabilities of language-based reinforcement learning (RL) agents. Existing environments for interactive fiction games are either domain-specific or time-consuming to generate, and they do not train RL agents to master a specific set of skills. In this work, we introduce STARLING, an interactive environment for self-supervised RL in text-based games that bootstraps text-based RL agents with automatically generated games (based on a seed set of game ideas) to boost their performance and generalization capabilities toward reaching the goal of the target environment. These games let the agents hone their skills on a predefined set of tasks. We create and test an environment of 100 games generated with this automated framework, which uses large language models (GPT-3) and an interactive fiction game engine (based on Inform7) to let users generate more games under minimal human supervision. Experimental results from both human participants and baseline text-based RL agents reveal that current state-of-the-art text-based RL agents cannot apply previously learned skills in new situations at the level humans can. These results underscore STARLING's potential to serve as a sandbox environment for further research in self-supervised text-based RL.
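The generation pipeline the abstract describes (seed game ideas → LLM → Inform7 game source) could be sketched roughly as follows. This is not the authors' code: the function names and the prompt wording are illustrative assumptions, and the GPT-3 call is replaced by a stub so the sketch is self-contained.

```python
# Illustrative sketch of STARLING-style game generation:
# seed ideas are formatted into prompts, an LLM (stubbed here) returns
# Inform7-like source, and one game is produced per seed idea.

def build_prompt(seed_idea: str) -> str:
    """Format a generation prompt for one seed game idea (hypothetical wording)."""
    return (
        "Write an Inform 7 interactive fiction game for this idea:\n"
        f"Idea: {seed_idea}\n"
        "The game must have a reachable goal and reward progress with score."
    )

def mock_llm(prompt: str) -> str:
    """Stand-in for a GPT-3 call; returns a minimal Inform7-style skeleton."""
    idea = prompt.split("Idea: ")[1].splitlines()[0]
    return (
        f'"{idea}" by STARLING\n\n'
        "The Kitchen is a room.\n"
        "The apple is in the Kitchen.\n"
        'After taking the apple: increase the score by 1; '
        'end the story saying "Goal reached".'
    )

def generate_games(seed_ideas, llm=mock_llm):
    """Generate one game source per seed idea; swap in a real LLM client via `llm`."""
    return {idea: llm(build_prompt(idea)) for idea in seed_ideas}

games = generate_games(["cook a simple meal", "find a hidden key"])
```

In the actual framework, the returned source would be compiled by the Inform7 engine into a playable environment for the RL agent; here the stub only shows the shape of the loop.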
