ChatLogic: Integrating Logic Programming with Large Language Models for Multi-Step Reasoning

Wang Zhongsheng, Liu Jiamou, Bao Qiming, Rong Hongfei, Zhang Jingfeng. arXiv 2024

Tags: Ethics And Bias, GPT, Has Code, Model Architecture, RAG, Reinforcement Learning, Tools

Large language models (LLMs) such as ChatGPT and GPT-4 have demonstrated impressive capabilities in various generative tasks. However, their performance is often hampered by limitations in accessing and leveraging long-term memory, leading to specific vulnerabilities and biases, especially during long interactions. This paper introduces ChatLogic, a framework that enhances the performance of LLMs on multi-step deductive reasoning tasks by integrating logic programming. In ChatLogic, the language model plays a central role, acting as a controller that participates in every stage of the system's operation. We propose a novel method for converting natural-language logic problems into symbolic representations that an inference engine can execute. This approach leverages LLMs' situational understanding and imitation skills and uses symbolic memory to strengthen multi-step deductive reasoning. Our results show that ChatLogic significantly improves the multi-step reasoning capabilities of LLMs. The source code and data are available at https://github.com/Strong-AI-Lab/ChatLogic
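To make the pipeline concrete, here is a minimal sketch of the general idea: an LLM translates natural-language premises into symbolic facts and rules, and a symbolic engine then performs the multi-step deduction. The `translate_to_facts_and_rules` helper and the toy forward-chaining engine below are illustrative assumptions, not the authors' implementation (the released code builds on a Datalog-style Python backend); see the repository linked above for the actual system.

```python
# Minimal sketch of a ChatLogic-style pipeline. translate_to_facts_and_rules()
# is a hypothetical stand-in for the LLM translation step; the forward-chaining
# engine is a generic illustration, not the authors' implementation.

def translate_to_facts_and_rules(question: str):
    """Stand-in for the LLM controller: in ChatLogic the model converts
    natural-language premises into symbolic facts and rules."""
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}
    # Each rule is (head, [body atoms]); uppercase strings are variables.
    rules = [
        (("ancestor", "X", "Y"), [("parent", "X", "Y")]),
        (("ancestor", "X", "Y"), [("parent", "X", "Z"), ("ancestor", "Z", "Y")]),
    ]
    return facts, rules

def is_var(term):
    return isinstance(term, str) and term[:1].isupper()

def unify(atom, fact, env):
    """Match a rule atom against a ground fact, extending the bindings in env."""
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    env = dict(env)
    for a, f in zip(atom[1:], fact[1:]):
        if is_var(a):
            if env.get(a, f) != f:
                return None
            env[a] = f
        elif a != f:
            return None
    return env

def forward_chain(facts, rules):
    """Symbolic memory: derive new facts until a fixed point is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            envs = [{}]
            for atom in body:  # join each body atom against the known facts
                envs = [e2 for e in envs for f in derived
                        if (e2 := unify(atom, f, e)) is not None]
            for env in envs:
                new_fact = tuple(env.get(t, t) for t in head)
                if new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

facts, rules = translate_to_facts_and_rules("Is Alice an ancestor of Carol?")
memory = forward_chain(facts, rules)
print(("ancestor", "alice", "carol") in memory)  # True: a two-step deduction
```

The point of the symbolic step is that the chain parent(alice, bob) and parent(bob, carol) entails ancestor(alice, carol) deterministically, so the deduction does not depend on the LLM holding intermediate conclusions in its context window.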
