World Models With Hints Of Large Language Models For Goal Achieving

Liu Zeyuan, Huan Ziyu, Wang Xiyao, Lyu Jiafei, Tao Jian, Li Xiu, Huang Furong, Xu Huazhe. arXiv 2024

[Paper]
Agentic Fine Tuning RAG Reinforcement Learning

Reinforcement learning struggles with long-horizon tasks and sparse goals because manual reward specification is difficult. Existing methods address this by adding intrinsic rewards, but these may fail to provide meaningful guidance in long-horizon decision-making tasks with large state and action spaces, since they do not direct exploration toward a purpose. Inspired by human cognition, we propose a new multi-modal model-based RL approach named Dreaming with Large Language Models (DLLM). DLLM integrates hinting subgoals proposed by LLMs into model rollouts to encourage goal discovery and achievement in challenging tasks. By assigning higher intrinsic rewards to samples that align with the hints outlined by the language model during model rollouts, DLLM guides the agent toward meaningful and efficient exploration. Extensive experiments demonstrate that DLLM outperforms recent methods in challenging, sparse-reward environments such as HomeGrid, Crafter, and Minecraft by 27.7%, 21.1%, and 9.9%, respectively.
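
The hint-matching intrinsic reward described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact mechanism: the cosine-similarity matching, the `threshold` and `bonus` parameters, and the one-bonus-per-hint rule are assumptions chosen to make the idea concrete (DLLM scores imagined rollout steps inside a learned world model rather than raw environment steps).

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def hint_intrinsic_rewards(step_embeddings, hint_embeddings, bonus=1.0, threshold=0.7):
    """Add an intrinsic bonus to each rollout step whose embedding matches
    one of the LLM-proposed hint embeddings.

    Each hint pays out at most once per rollout (an illustrative assumption),
    so the agent is pushed toward new subgoals rather than looping on one
    it has already reached.
    """
    unclaimed = set(range(len(hint_embeddings)))
    rewards = np.zeros(len(step_embeddings))
    for t, step_emb in enumerate(step_embeddings):
        for h in list(unclaimed):
            if cosine_sim(step_emb, hint_embeddings[h]) >= threshold:
                rewards[t] += bonus
                unclaimed.remove(h)  # claim the hint: no repeat bonus
    return rewards

# Toy usage with random embeddings: 5 rollout steps, 3 hints.
rng = np.random.default_rng(0)
steps = rng.normal(size=(5, 16))
hints = rng.normal(size=(3, 16))
print(hint_intrinsic_rewards(steps, hints, threshold=0.2))
```

In practice the step and hint embeddings would come from a shared text or multi-modal encoder, so that "aligning with a hint" reduces to proximity in embedding space.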
