
Exploring And Benchmarking The Planning Capabilities Of Large Language Models

Bohnet Bernd, Nova Azade, Parisi Aaron T., Swersky Kevin, Goshvadi Katayoon, Dai Hanjun, Schuurmans Dale, Fiedel Noah, Sedghi Hanie. arXiv 2024

[Paper]
Fine Tuning · In Context Learning · Pretraining Methods · Prompting · Training Techniques

We seek to elevate the planning capabilities of Large Language Models (LLMs) by investigating four main directions. First, we construct a comprehensive benchmark suite encompassing both classical planning domains and natural language scenarios. This suite includes algorithms to generate instances with varying levels of difficulty, allowing for rigorous and systematic evaluation of LLM performance. Second, we investigate the use of in-context learning (ICL) to enhance LLM planning, exploring the direct relationship between increased context length and improved planning performance. Third, we demonstrate the positive impact of fine-tuning LLMs on optimal planning paths, as well as the effectiveness of incorporating model-driven search procedures. Finally, we investigate the performance of the proposed methods in out-of-distribution scenarios, assessing their ability to generalize to novel and unseen planning challenges.
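To make the benchmark-generation and ICL directions concrete, here is a minimal, hypothetical Python sketch of building a few-shot planning prompt over a toy Blocksworld-style domain. The instance generator, the difficulty knob (number of blocks), and the prompt format are illustrative assumptions, not the paper's actual benchmark suite or prompting setup:

```python
import random

def make_blocksworld_instance(n_blocks, rng):
    """Generate a toy Blocksworld-style instance: a random start and goal stacking.
    Difficulty is controlled by n_blocks (an assumed proxy, per the idea of
    generating instances at varying difficulty levels)."""
    blocks = [chr(ord("A") + i) for i in range(n_blocks)]
    start = rng.sample(blocks, k=n_blocks)
    goal = rng.sample(blocks, k=n_blocks)
    return start, goal

def format_instance(start, goal):
    """Render an instance as a natural-language planning query."""
    return (f"Initial stack (bottom to top): {' '.join(start)}\n"
            f"Goal stack (bottom to top): {' '.join(goal)}\n"
            "Plan:")

def build_icl_prompt(solved_examples, query, k):
    """Prepend k solved (instance, plan) pairs as in-context demonstrations.
    Increasing k lengthens the context, the variable the ICL study probes."""
    shots = "\n\n".join(f"{inst}\n{plan}" for inst, plan in solved_examples[:k])
    return f"{shots}\n\n{query}"

rng = random.Random(0)
easy = format_instance(*make_blocksworld_instance(3, rng))
hard = format_instance(*make_blocksworld_instance(8, rng))
# Placeholder demonstration; in practice demonstrations would pair instances
# with optimal plans, e.g. from a classical planner.
demos = [(easy, "unstack C; put-down C; ...")]
print(build_icl_prompt(demos, hard, k=1))
```

The same instance generator could, under these assumptions, also produce (instance, optimal plan) pairs for the fine-tuning direction the abstract describes.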

Similar Work