
Legal Syllogism Prompting: Teaching Large Language Models For Legal Judgment Prediction

Jiang Cong, Yang Xiaolei. arXiv 2023

[Paper]    
Fine Tuning, GPT, Interpretability And Explainability, Model Architecture, Pretraining Methods, Prompting, Training Techniques

Legal syllogism is a form of deductive reasoning commonly used by legal professionals to analyze cases. In this paper, we propose legal syllogism prompting (LoT), a simple prompting method that teaches large language models (LLMs) to perform legal judgment prediction. LoT teaches only that, in a legal syllogism, the major premise is the law, the minor premise is the facts of the case, and the conclusion is the judgment. The models can then produce syllogistic reasoning about the case and give the judgment without any learning, fine-tuning, or examples. On CAIL2018, a Chinese criminal case dataset, we performed zero-shot judgment prediction experiments with GPT-3 models. Our results show that LLMs with LoT achieve better performance than the baseline and chain-of-thought prompting, the state-of-the-art prompting method on diverse reasoning tasks. Compared to other methods, LoT enables the model to concentrate on the key information relevant to the judgment and to correctly understand the legal meaning of acts. Our method enables LLMs to predict the judgment along with the relevant law articles and a justification, which significantly enhances the explainability of the models.
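
The abstract does not give the exact prompt wording, but the core idea of LoT, a zero-shot instruction framing the law as the major premise, the case facts as the minor premise, and the judgment as the conclusion, can be sketched roughly as below. The instruction text, the model name, and the `predict_judgment` helper are illustrative assumptions, not the authors' exact setup (the paper's experiments used GPT-3 models on CAIL2018).

```python
# Minimal sketch of a legal-syllogism-style (LoT) zero-shot prompt.
# The prompt wording and model choice are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LOT_INSTRUCTION = (
    "Analyze the case using a legal syllogism: "
    "the major premise is the applicable law, "
    "the minor premise is the facts of the case, "
    "and the conclusion is the judgment. "
    "State the law article, the relevant facts, and then the judgment."
)

def predict_judgment(case_facts: str, model: str = "gpt-3.5-turbo") -> str:
    """Zero-shot judgment prediction with a syllogism-framing prompt."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": LOT_INSTRUCTION},
            {"role": "user", "content": f"Case facts: {case_facts}"},
        ],
        temperature=0.0,  # deterministic output, easier to evaluate
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    facts = "The defendant took a bicycle worth 2,000 yuan from a locked shed at night."
    print(predict_judgment(facts))
```

Because the syllogism structure asks the model to name the law article and the facts before the conclusion, the output includes the legal basis and a justification alongside the predicted judgment, which is where the claimed explainability gain comes from.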

Similar Work