On The Planning, Search, And Memorization Capabilities Of Large Language Models

Yunhao Yang, Anshul Tomar. arXiv 2023

[Paper]    
Tags: Applications, Fine Tuning, GPT, Model Architecture, Pretraining Methods, Security, Tools, Training Techniques, Transformer

The rapid advancement of large language models, such as the Generative Pre-trained Transformer (GPT) series, has had significant implications across various disciplines. In this study, we investigate the potential of the state-of-the-art large language model (GPT-4) for planning tasks. We explore its effectiveness in multiple planning subfields, highlighting both its strengths and limitations. Through a comprehensive examination, we identify areas where large language models excel in solving planning problems and reveal the constraints that limit their applicability. Our empirical analysis focuses on GPT-4’s performance in planning domain extraction, graph search path planning, and adversarial planning. We then propose a way of fine-tuning a domain-specific large language model to improve its Chain of Thought (CoT) capabilities for the above-mentioned tasks. The results provide valuable insights into the potential applications of large language models in the planning domain and pave the way for future research to overcome their limitations and expand their capabilities.
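To make the graph search path planning evaluation mentioned in the abstract more concrete, the sketch below shows one hypothetical way such a test could be set up: a toy graph is serialized into a prompt for a chat model, and the returned path is checked for validity and optimality against a breadth-first-search baseline. This is an illustrative assumption, not the authors' actual protocol; the graph, the prompt wording, and the placeholder model reply are all made up for the example.

```python
from collections import deque

# Toy directed graph as an adjacency list (hypothetical example, not from the paper).
GRAPH = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
    "F": [],
}

def build_prompt(graph, start, goal):
    """Serialize the graph and ask the model for a shortest path."""
    edges = ", ".join(f"{u}->{v}" for u, vs in graph.items() for v in vs)
    return (
        f"Given a directed graph with edges: {edges}. "
        f"Return a shortest path from {start} to {goal} "
        f"as a comma-separated list of nodes, e.g. A,B,D."
    )

def bfs_shortest_path(graph, start, goal):
    """Ground-truth shortest path via breadth-first search."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def score_response(response, graph, start, goal):
    """Check that the model's answer is a valid path of optimal length."""
    path = [n.strip() for n in response.split(",")]
    valid = (
        bool(path)
        and path[0] == start
        and path[-1] == goal
        and all(b in graph.get(a, []) for a, b in zip(path, path[1:]))
    )
    optimal = bfs_shortest_path(graph, start, goal)
    return valid and optimal is not None and len(path) == len(optimal)

if __name__ == "__main__":
    prompt = build_prompt(GRAPH, "A", "F")
    # Placeholder for the actual GPT-4 API call; a canned reply is used here.
    fake_response = "A,B,D,F"
    print(prompt)
    print("correct:", score_response(fake_response, GRAPH, "A", "F"))
```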

Similar Work