
Hierarchical In-context Reinforcement Learning With Hindsight Modular Reflections For Planning

Sun Chuanneng, Huang Songjun, Pompili Dario. arXiv 2024

[Paper]    
Tags: Agentic, Efficiency And Optimization, In-Context Learning, Prompting, Reinforcement Learning, Tools

Large Language Models (LLMs) have demonstrated remarkable abilities in various language tasks, making them promising candidates for decision-making in robotics. Inspired by Hierarchical Reinforcement Learning (HRL), we propose Hierarchical in-Context Reinforcement Learning (HCRL), a novel framework in which an LLM-based high-level policy decomposes a complex task into sub-tasks on the fly. Each sub-task, defined by a goal, is assigned to the low-level policy to complete; once the LLM agent determines that the current goal has been achieved, a new goal is proposed. To improve the agent's performance across multiple episodes, we propose Hindsight Modular Reflection (HMR): instead of reflecting on the full trajectory, the agent replaces the task objective with the intermediate goals and reflects on the resulting shorter trajectories, improving reflection efficiency. We evaluate the decision-making ability of HCRL in three benchmark environments: ALFWorld, Webshop, and HotpotQA. Results show that HCRL achieves 9%, 42%, and 10% performance improvements over strong in-context learning baselines within 5 episodes of execution.
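To make the described loop concrete, below is a minimal Python sketch of the control flow the abstract outlines: a high-level policy proposes intermediate goals, a low-level policy acts until each goal is judged complete, and Hindsight Modular Reflection reflects on each goal-conditioned segment rather than on the full trajectory. All function names (`propose_goal`, `act`, `goal_reached`, `reflect_on_segment`) and the dummy environment are hypothetical placeholders standing in for LLM prompts and an environment API; they are not the paper's actual implementation.

```python
# Hypothetical sketch of the HCRL loop with Hindsight Modular Reflection (HMR),
# based only on the abstract. The LLM calls are stubbed out as placeholders.

from dataclasses import dataclass, field


@dataclass
class Segment:
    goal: str                                   # intermediate goal from the high-level policy
    steps: list = field(default_factory=list)   # (observation, action) pairs for this goal


def propose_goal(task: str, history: list, reflections: list) -> str:
    """High-level policy (LLM prompt): decompose the task into the next sub-goal."""
    return f"sub-goal {len(history) + 1} for: {task}"  # placeholder


def act(goal: str, observation: str, reflections: list) -> str:
    """Low-level policy (LLM prompt): choose the next action toward the current goal."""
    return f"action toward '{goal}'"  # placeholder


def goal_reached(goal: str, observation: str) -> bool:
    """LLM judgment of whether the current intermediate goal is finished."""
    return True  # placeholder: pretend each goal finishes in one step


def reflect_on_segment(segment: Segment) -> str:
    """HMR: reflect on one short goal-conditioned segment instead of the full trajectory."""
    return f"reflection on goal '{segment.goal}' ({len(segment.steps)} steps)"  # placeholder


def run_episode(task: str, env_step, reflections: list, max_goals: int = 5) -> list:
    """One episode: the high-level policy proposes goals; the low-level policy completes them."""
    segments, observation = [], env_step(None)  # initial observation
    for _ in range(max_goals):
        goal = propose_goal(task, segments, reflections)
        segment = Segment(goal=goal)
        while True:
            action = act(goal, observation, reflections)
            observation = env_step(action)
            segment.steps.append((observation, action))
            if goal_reached(goal, observation):
                break
        segments.append(segment)
    return segments


if __name__ == "__main__":
    # Multi-episode execution: after each episode, reflect per segment (HMR) and
    # carry the reflections forward as in-context guidance for the next episode.
    dummy_env = lambda action: f"obs after {action}"
    reflections = []
    for episode in range(5):
        segments = run_episode("put a clean mug on the desk", dummy_env, reflections)
        reflections.extend(reflect_on_segment(s) for s in segments)
```

The key design choice illustrated here is that reflections are collected per goal segment rather than per full trajectory, so the reflection prompts stay short even as episodes grow long.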

Similar Work