A Survey On Integration Of Large Language Models With Intelligent Robots

Kim Yeseung, Kim Dohyun, Choi Jieun, Park Jisang, Oh Nayoung, Park Daehyung. arXiv 2024

[Paper]    
Applications GPT Model Architecture Multimodal Models Prompting RAG Reinforcement Learning Survey Paper

In recent years, the integration of large language models (LLMs) has revolutionized the field of robotics, enabling robots to communicate, understand, and reason with human-like proficiency. This paper explores the multifaceted impact of LLMs on robotics, addressing key challenges and opportunities for leveraging these models across various domains. By categorizing and analyzing LLM applications within core robotics elements – communication, perception, planning, and control – we aim to provide actionable insights for researchers seeking to integrate LLMs into their robotic systems. Our investigation focuses on LLMs developed post-GPT-3.5, primarily in text-based modalities while also considering multimodal approaches for perception and control. We offer comprehensive guidelines and examples for prompt engineering, facilitating beginners’ access to LLM-based robotics solutions. Through tutorial-level examples and structured prompt construction, we illustrate how LLM-guided enhancements can be seamlessly integrated into robotics applications. This survey serves as a roadmap for researchers navigating the evolving landscape of LLM-driven robotics, offering a comprehensive overview and practical guidance for harnessing the power of language models in robotics development.
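To make the abstract's mention of "structured prompt construction" concrete, below is a minimal, illustrative sketch of how an LLM prompt for robot task planning might be assembled and its completion parsed into executable skill calls. All names here (the role string, the skill library, the example task) are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch of structured prompt construction for LLM-based
# robot task planning. The role text, skill names, and few-shot example
# are hypothetical, not the survey's own prompts.

ROLE = "You are a robot task planner. Output one skill call per line."
SKILLS = ["pick(object)", "place(object, location)", "move_to(location)"]

def build_prompt(instruction: str) -> str:
    """Assemble a structured prompt: role, skill library, few-shot example, query."""
    example = (
        "Instruction: put the cup on the shelf\n"
        "Plan:\n"
        "move_to(cup)\n"
        "pick(cup)\n"
        "move_to(shelf)\n"
        "place(cup, shelf)"
    )
    return "\n\n".join([
        ROLE,
        "Available skills: " + ", ".join(SKILLS),
        example,
        f"Instruction: {instruction}\nPlan:",
    ])

def parse_plan(completion: str) -> list[str]:
    """Keep only lines whose leading token matches a known skill name."""
    names = {s.split("(")[0] for s in SKILLS}
    return [ln.strip() for ln in completion.splitlines()
            if ln.strip().split("(")[0] in names]
```

Constraining the model to a fixed skill library and filtering its output against that library is a common grounding pattern in LLM-for-robotics work; the survey discusses such prompting techniques at tutorial level.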
