RAP: Retrieval-augmented Planning With Contextual Memory For Multimodal LLM Agents

Kagaya Tomoyuki, Yuan Thong Jing, Lou Yuxuan, Karlekar Jayashree, Pranata Sugiri, Kinose Akira, Oguri Koki, Wick Felix, You Yang. arXiv 2024

[Paper]    
Agentic Applications · Multimodal Models · RAG · Reinforcement Learning · Tools

Owing to recent advancements, Large Language Models (LLMs) can now be deployed as agents for increasingly complex decision-making applications in areas including robotics, gaming, and API integration. However, reflecting past experiences in current decision-making processes, an innate human behavior, continues to pose significant challenges. Addressing this, we propose the Retrieval-Augmented Planning (RAP) framework, designed to dynamically leverage past experiences corresponding to the current situation and context, thereby enhancing agents' planning capabilities. RAP distinguishes itself by being versatile: it excels in both text-only and multimodal environments, making it suitable for a wide range of tasks. Empirical evaluations demonstrate RAP's effectiveness: it achieves state-of-the-art (SOTA) performance in textual scenarios and notably improves multimodal LLM agents' performance on embodied tasks. These results highlight RAP's potential for advancing the functionality and applicability of LLM agents in complex, real-world applications.
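The abstract describes the core idea at a high level: store past experiences in a contextual memory, retrieve the ones relevant to the current situation, and feed them to the LLM when planning. The sketch below is a minimal, hypothetical Python illustration of that loop; the `Experience`/`ContextualMemory` classes, the word-overlap retrieval, and the prompt layout are assumptions for illustration only, not the paper's actual design (a real system would use an embedding-based retriever and, for multimodal settings, image features or captions).

```python
# Illustrative sketch of retrieval-augmented planning with a contextual memory.
# NOT the authors' implementation: memory schema, similarity function, and
# prompt format are simplified assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Experience:
    """One past episode: the situation observed and the plan that worked."""
    context: str          # textual (or captioned multimodal) observation
    plan: List[str]       # sequence of actions the agent took


@dataclass
class ContextualMemory:
    experiences: List[Experience] = field(default_factory=list)

    def add(self, context: str, plan: List[str]) -> None:
        self.experiences.append(Experience(context, plan))

    def retrieve(self, query: str, k: int = 3) -> List[Experience]:
        """Return the k stored experiences most similar to the current context.
        Word-overlap similarity stands in for a learned/embedding retriever."""
        q = set(query.lower().split())
        scored = sorted(
            self.experiences,
            key=lambda e: len(q & set(e.context.lower().split())),
            reverse=True,
        )
        return scored[:k]


def build_planning_prompt(memory: ContextualMemory, current_context: str) -> str:
    """Inject retrieved experiences before the current situation so the LLM
    can condition its plan on similar past episodes."""
    lines = ["You are an agent. Relevant past experiences:"]
    for i, exp in enumerate(memory.retrieve(current_context), 1):
        lines.append(f"[{i}] situation: {exp.context}")
        lines.append(f"    plan: {' -> '.join(exp.plan)}")
    lines.append(f"Current situation: {current_context}")
    lines.append("Propose the next plan as a list of actions.")
    return "\n".join(lines)


if __name__ == "__main__":
    mem = ContextualMemory()
    mem.add("kitchen: mug on counter, sink nearby",
            ["pick up mug", "go to sink", "wash mug"])
    mem.add("web: search results page for flights",
            ["click cheapest flight", "fill passenger form"])
    print(build_planning_prompt(mem, "kitchen: dirty plate on counter"))
```

In this toy setup, the retrieved experiences act as in-context demonstrations for the planner; swapping the retriever or prompt template does not change the overall pattern the abstract describes.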
