DELTA: Decomposed Efficient Long-term Robot Task Planning Using Large Language Models

Yuchen Liu, Luigi Palmieri, Sebastian Koch, Ilche Georgievski, Marco Aiello. arXiv 2024

[Paper]    
Tags: Efficiency And Optimization, GPT, Interpretability And Explainability, Pretraining Methods, RAG

Recent advances in Large Language Models (LLMs) have sparked a revolution across many research fields. In particular, integrating common-sense knowledge from LLMs into robot task and motion planning has proven highly effective, markedly improving explainability and downstream task efficiency. However, managing the vast knowledge encapsulated within these large models is challenging: LLM-based planning systems often generate infeasible plans due to hallucinations or missing domain information. To overcome these challenges and achieve greater planning feasibility and computational efficiency, we propose DELTA, a novel LLM-driven task planning approach. To ground environmental topology into actionable knowledge, DELTA leverages scene graphs as environment representations within LLMs, enabling the fast generation of precise planning problem descriptions. To improve planning performance, we use LLMs to decompose long-term task goals into an autoregressive sequence of sub-goals for an automated task planner to solve. Our contribution enables a more efficient, fully automatic task planning pipeline, achieving higher planning success rates and significantly shorter planning times than the state of the art.
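The abstract sketches a two-stage pipeline: a scene graph grounds the environment for the LLM, and the LLM decomposes the long-horizon goal into sub-goals that an automated planner solves one by one. Below is a minimal Python sketch of that decomposition loop, assuming hypothetical `llm` and `planner` callables; the paper's actual prompts, scene-graph format, and planner interface are not reproduced here.

```python
# A minimal sketch (not DELTA's actual interface) of the decomposition loop
# described in the abstract. Hypothetical pieces: `llm` is any callable mapping
# a prompt string to a completion string, and `planner` stands in for an
# off-the-shelf automated task planner returning (sub_plan, resulting_state).
from typing import Callable, List, Tuple

State = str  # e.g., a planning problem fragment derived from the scene graph


def decompose_goal(llm: Callable[[str], str], scene_graph: str, goal: str) -> List[str]:
    """Ask the LLM to split a long-horizon goal into an ordered list of sub-goals."""
    prompt = (
        f"Scene graph:\n{scene_graph}\n"
        f"Long-term goal: {goal}\n"
        "List one sub-goal per line, in execution order:"
    )
    return [line.strip() for line in llm(prompt).splitlines() if line.strip()]


def plan_autoregressively(
    llm: Callable[[str], str],
    planner: Callable[[State, str], Tuple[List[str], State]],
    scene_graph: str,
    goal: str,
) -> List[str]:
    """Solve sub-goals in sequence: the end state of one sub-plan becomes the
    initial state of the next, which is the 'autoregressive' chaining."""
    state: State = scene_graph
    full_plan: List[str] = []
    for sub_goal in decompose_goal(llm, scene_graph, goal):
        sub_plan, state = planner(state, sub_goal)
        full_plan.extend(sub_plan)
    return full_plan


if __name__ == "__main__":
    # Stub LLM and planner so the sketch runs stand-alone.
    canned = "(robot-at kitchen)\n(holding cup)\n(cup-at table)"
    plan = plan_autoregressively(
        llm=lambda prompt: canned,
        planner=lambda state, sub_goal: ([f"achieve {sub_goal}"], f"{state} | {sub_goal}"),
        scene_graph="kitchen -> contains -> cup; table -> in -> kitchen",
        goal="bring the cup to the table",
    )
    print("\n".join(plan))
```

Chaining the planner's resulting state into the next sub-goal call is what makes the sequence autoregressive: each sub-problem stays small enough for the planner to solve quickly, which matches the intuition behind the shorter planning times the abstract reports.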
