Look Further Ahead: Testing The Limits Of GPT-4 In Path Planning

Aghzal Mohamed, Plaku Erion, Yao Ziyu. arXiv 2024

[Paper]    
GPT, Model Architecture, Prompting, Tools

Large Language Models (LLMs) have shown impressive capabilities across a wide variety of tasks. However, they still struggle with long-horizon planning. To study this, we propose path planning tasks as a platform to evaluate LLMs' ability to navigate long trajectories under geometric constraints. Our proposed benchmark systematically tests path-planning skills in increasingly complex settings. Using it, we examined GPT-4's planning abilities under various task representations and prompting approaches. We found that framing prompts as Python code and decomposing long trajectory tasks into shorter segments improve GPT-4's path planning effectiveness. However, while these approaches show some promise for improving the model's planning ability, they do not produce optimal paths and fail to generalize over extended horizons.
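
The abstract's two key findings, framing the task as Python code and decomposing long trajectories, can be illustrated with a minimal sketch. The prompt layout, function names (`make_code_prompt`, `decompose`), and the naive straight-line subgoal split below are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch (not the paper's prompt format): render a grid
# path-planning task as Python code for an LLM prompt, and split a long
# trajectory into shorter subgoal-to-subgoal segments.

from textwrap import dedent


def make_code_prompt(grid_size, start, goal, obstacles):
    """Render the task as Python data structures instead of natural language."""
    return dedent(f"""
        # Grid-world path planning task.
        grid_size = {grid_size}          # (rows, cols)
        start = {start}                  # (row, col)
        goal = {goal}                    # (row, col)
        obstacles = {sorted(obstacles)}  # cells that cannot be entered

        # Return `path`, a list of (row, col) cells from start to goal,
        # moving one step up/down/left/right and avoiding obstacles.
        path =
    """).strip()


def decompose(start, goal, num_segments=2):
    """Split a long trajectory into shorter segments via interpolated subgoals.

    Note: straight-line interpolation is a simplifying assumption; it ignores
    obstacles and serves only to shorten each planning horizon.
    """
    waypoints = [start]
    for i in range(1, num_segments):
        r = start[0] + (goal[0] - start[0]) * i // num_segments
        c = start[1] + (goal[1] - start[1]) * i // num_segments
        waypoints.append((r, c))
    waypoints.append(goal)
    # Each consecutive pair becomes one short-horizon prompt for the model.
    return list(zip(waypoints, waypoints[1:]))


if __name__ == "__main__":
    segments = decompose(start=(0, 0), goal=(9, 9), num_segments=3)
    for seg_start, seg_goal in segments:
        prompt = make_code_prompt((10, 10), seg_start, seg_goal, {(4, 4), (4, 5)})
        print(prompt, end="\n\n")  # each prompt would be sent to GPT-4 separately
```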

Similar Work