Enhancing Logical Reasoning In Large Language Models To Facilitate Legal Applications

Ha-Thanh Nguyen, Wachara Fungwacharakorn, Ken Satoh. arXiv 2023

[Paper]
Tags: Agentic, Applications, Bias Mitigation, Ethics And Bias, Fairness, Reinforcement Learning, Tools

Language serves as a vehicle for conveying thought, enabling communication among individuals. The ability to distinguish between diverse concepts, identify fairness and injustice, and comprehend a range of legal notions fundamentally relies on logical reasoning. Large Language Models (LLMs) attempt to emulate human language understanding and generation, but their competency in logical reasoning remains limited. This paper seeks to address the philosophical question: How can we effectively teach logical reasoning to LLMs while maintaining a deep understanding of the intricate relationship between language and logic? By focusing on bolstering LLMs’ capabilities in logical reasoning, we aim to expand their applicability in law and other logic-intensive disciplines. To this end, we propose a Reinforcement Learning from Logical Feedback (RLLF) approach, which serves as a potential framework for refining LLMs’ reasoning capacities. Through RLLF and a revised evaluation methodology, we explore new avenues for research in this domain and contribute to the development of LLMs capable of handling complex legal reasoning tasks while acknowledging the fundamental connection between language and logic.
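The abstract describes RLLF only at the framework level: where RLHF learns a reward model from human preferences, RLLF draws its reward from a symbolic logic verifier. The sketch below is a minimal illustration of that reward signal under stated assumptions; the brute-force entailment checker, the toy legal example, and all names (entails, logical_reward) are hypothetical stand-ins, not the authors' implementation.

```python
# Illustrative sketch of an RLLF-style reward signal (assumed design,
# not the paper's code): the scalar reward comes from a symbolic
# entailment check rather than a learned human-preference model.

from itertools import product

def entails(premises, conclusion, atoms):
    """Brute-force propositional entailment: the conclusion must hold in
    every truth assignment that satisfies all the premises."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a model of the premises where the claim fails
    return True

def logical_reward(premises, claimed_conclusion, atoms):
    """+1.0 if the model's claimed conclusion is logically entailed by the
    premises, -1.0 otherwise. This is the slot RLHF fills with a learned
    preference score; RLLF swaps in an exact symbolic verdict."""
    return 1.0 if entails(premises, claimed_conclusion, atoms) else -1.0

# Toy legal-style example: "if there is a contract and a breach,
# damages are owed", plus the facts that both hold.
atoms = ["contract", "breach", "damages"]
premises = [
    lambda e: (not (e["contract"] and e["breach"])) or e["damages"],  # rule
    lambda e: e["contract"],  # fact
    lambda e: e["breach"],    # fact
]
model_answer = lambda e: e["damages"]  # conclusion the LLM claimed follows

print(logical_reward(premises, model_answer, atoms))  # 1.0: entailed
```

In a full RLLF training loop, this scalar would replace the human-preference reward in a PPO-style policy update; because the verifier is exact rather than learned, the feedback cannot drift from the underlying logic in the way a preference model can.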
