Graph-enhanced Large Language Models In Asynchronous Plan Reasoning

Lin Fangru, La Malfa Emanuele, Hofmann Valentin, Yang Elle Michelle, Cohn Anthony, Pierrehumbert Janet B.. arXiv 2024

[Paper] [Code]    
Agent Agentic GPT Has Code Model Architecture Prompting Reinforcement Learning

Planning is a fundamental property of human intelligence. Reasoning about asynchronous plans is challenging since it requires sequential and parallel planning to optimize time costs. Can large language models (LLMs) succeed at this task? Here, we present the first large-scale study investigating this question. We find that a representative set of closed and open-source LLMs, including GPT-4 and LLaMA-2, behave poorly when not supplied with illustrations about the task-solving process in our benchmark AsyncHow. We propose a novel technique called Plan Like a Graph (PLaG) that combines graphs with natural language prompts and achieves state-of-the-art results. We show that although PLaG can boost model performance, LLMs still suffer from drastic degradation when task complexity increases, highlighting the limits of utilizing LLMs for simulating digital devices. We see our study as an exciting step towards using LLMs as efficient autonomous agents. Our code and data are available at https://github.com/fangru-lin/graph-llm-asynchow-plan.
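To make the core idea concrete, here is a minimal sketch of how an asynchronous plan can be encoded as a graph in the spirit of PLaG: steps become nodes with durations, precedence constraints become edges, the graph is serialized into the natural-language prompt, and the optimal completion time (the critical path through the DAG, where independent steps run in parallel) serves as ground truth for checking a model's answer. The example plan, variable names, and prompt wording are illustrative assumptions, not the paper's exact benchmark format.

```python
# Sketch of graph-augmented asynchronous plan reasoning (PLaG-style).
# Assumptions: a toy cooking plan and an ad hoc prompt template; the
# paper's AsyncHow tasks and prompt formats may differ.
import networkx as nx

# Hypothetical plan: step durations in minutes, edges are "must finish
# before" constraints. Steps without a path between them can run in parallel.
steps = {"boil water": 10, "chop vegetables": 5, "cook soup": 15}
edges = [("boil water", "cook soup"), ("chop vegetables", "cook soup")]

g = nx.DiGraph()
for step, minutes in steps.items():
    g.add_node(step, duration=minutes)
g.add_edges_from(edges)

def optimal_time(graph: nx.DiGraph) -> int:
    """Earliest overall finish time when independent steps run in parallel
    (longest weighted path through the DAG, i.e., the critical path)."""
    finish = {}
    for node in nx.topological_sort(graph):
        ready = max((finish[p] for p in graph.predecessors(node)), default=0)
        finish[node] = ready + graph.nodes[node]["duration"]
    return max(finish.values())

# Serialize the graph into the prompt alongside the natural-language task.
graph_text = "; ".join(f"{u} -> {v}" for u, v in g.edges)
prompt = (
    "Task: finish cooking as fast as possible.\n"
    f"Steps and durations (minutes): {steps}\n"
    f"Dependency graph: {graph_text}\n"
    "What is the shortest total time needed?"
)
print(prompt)
print("Optimal time:", optimal_time(g), "minutes")  # 25 = max(10, 5) + 15
```

Here the two preparation steps overlap, so the optimum is 25 minutes rather than the 30-minute sequential sum, which is exactly the sequential-versus-parallel trade-off the benchmark probes.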

Similar Work