
HELPER-X: A Unified Instructable Embodied Agent To Tackle Four Interactive Vision-language Domains With Memory-augmented Language Models

Sarch Gabriel, Somani Sahil, Kapoor Raghav, Tarr Michael J., Fragkiadaki Katerina. arXiv 2024

[Paper]    
Agentic, Few Shot, Multimodal Models, Prompting, Tools, Training Techniques

Recent research on instructable agents has used memory-augmented Large Language Models (LLMs) as task planners: language-program examples relevant to the input instruction are retrieved and inserted as in-context examples in the LLM prompt, improving the LLM's ability to infer the correct actions and task plans. In this technical report, we extend the capabilities of HELPER by expanding its memory with a wider array of examples and prompts and by integrating additional APIs for asking questions. This simple expansion of HELPER into a shared memory enables the agent to work across the domains of executing plans from dialogue, natural-language instruction following, active question asking, and commonsense room reorganization. We evaluate the agent on four diverse interactive vision-language embodied agent benchmarks: ALFRED, TEACh, DialFRED, and the Tidy Task. HELPER-X achieves few-shot, state-of-the-art performance across these benchmarks using a single agent, without requiring in-domain training, and remains competitive with agents that have undergone such training.
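To make the retrieval-augmented prompting idea concrete, below is a minimal Python sketch of the general technique the abstract describes: retrieve the memory entries most similar to the input instruction and assemble them into an in-context prompt for the LLM planner. The data structure, function names, and cosine-similarity retrieval are illustrative assumptions, not HELPER-X's actual implementation.

```python
# Illustrative sketch of memory-augmented prompting (not the paper's code):
# retrieve language-program examples similar to the input instruction and
# place them in the LLM prompt as in-context examples.
from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class MemoryEntry:
    instruction: str        # natural-language instruction stored in memory
    program: str            # associated language-program (task plan)
    embedding: np.ndarray   # precomputed embedding of the instruction


def retrieve_examples(query_emb: np.ndarray,
                      memory: List[MemoryEntry],
                      k: int = 3) -> List[MemoryEntry]:
    """Return the k memory entries most similar to the query embedding."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return sorted(memory,
                  key=lambda e: cosine(query_emb, e.embedding),
                  reverse=True)[:k]


def build_prompt(instruction: str, examples: List[MemoryEntry]) -> str:
    """Assemble the planner prompt: retrieved examples first, then the new instruction."""
    blocks = [f"Instruction: {e.instruction}\nProgram: {e.program}"
              for e in examples]
    blocks.append(f"Instruction: {instruction}\nProgram:")
    return "\n\n".join(blocks)
```

In this sketch, expanding the agent to a new domain amounts to adding that domain's language-program examples to the shared memory, which mirrors the report's claim that a single agent can cover dialogue plans, instruction following, question asking, and room reorganization without in-domain training.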

Similar Work