Large Language Models As Tax Attorneys: A Case Study In Legal Capabilities Emergence

Nay John J., Karamardian David, Lawsky Sarah B., Tao Wenting, Bhat Meghana, Jain Raghav, Lee Aaron Travis, Choi Jonathan H., Kasai Jungo. arXiv 2023

[Paper]    
Efficiency And Optimization · Few Shot · GPT · In Context Learning · Merging · Model Architecture · Prompting · RAG · Reinforcement Learning · Responsible AI

Better understanding of Large Language Models' (LLMs) legal analysis abilities can contribute to improving the efficiency of legal services, governing artificial intelligence, and leveraging LLMs to identify inconsistencies in law. This paper explores LLM capabilities in applying tax law. We choose this area of law because it has a structure that allows us to set up automated validation pipelines across thousands of examples, requires logical reasoning and math skills, and enables us to test LLM capabilities in a manner relevant to the real-world economic lives of citizens and companies. Our experiments demonstrate emerging legal understanding capabilities, with improved performance in each subsequent OpenAI model release. We experiment with retrieving and utilizing the relevant legal authority to assess the impact of providing additional legal context to LLMs. Few-shot prompting, presenting examples of question-answer pairs, is also found to significantly enhance the performance of the most advanced model, GPT-4. The findings indicate that LLMs, particularly when combined with prompting enhancements and the correct legal texts, can perform at high levels of accuracy but not yet at expert tax lawyer levels. As LLMs continue to advance, their ability to reason about law autonomously could have significant implications for the legal profession and AI governance.
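The paper itself does not include code here, but the setup described in the abstract (prepending retrieved legal authority and few-shot question-answer pairs to a tax-law question before querying an OpenAI model) can be sketched roughly as below. The statute snippet, example pairs, model name, and helper functions are illustrative assumptions for exposition, not the authors' actual pipeline.

```python
# Hypothetical sketch of retrieval-augmented, few-shot prompting for tax-law
# questions, loosely following the setup described in the abstract.
# Not the authors' code; statute text and examples are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative retrieved authority (a real pipeline would retrieve the
# relevant Internal Revenue Code section text for the question at hand).
RETRIEVED_AUTHORITY = (
    "IRC §102: gross income does not include the value of property "
    "acquired by gift, bequest, devise, or inheritance."
)

# Illustrative few-shot question-answer pairs (assumption).
FEW_SHOT_EXAMPLES = [
    ("Q: Bob receives a $10,000 cash gift from his aunt. "
     "Is the gift included in Bob's gross income? Answer yes or no.",
     "A: no"),
    ("Q: Alice earns $50,000 in wages. Are the wages included in her "
     "gross income? Answer yes or no.",
     "A: yes"),
]

def build_prompt(question: str) -> str:
    """Concatenate retrieved authority, few-shot examples, and the question."""
    shots = "\n\n".join(f"{q}\n{a}" for q, a in FEW_SHOT_EXAMPLES)
    return f"Relevant law:\n{RETRIEVED_AUTHORITY}\n\n{shots}\n\nQ: {question}\nA:"

def answer(question: str, model: str = "gpt-4") -> str:
    """Query the model once and return its short answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(question)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    q = ("Carol inherits $5,000 from her grandfather's estate. "
         "Is the inheritance included in her gross income? Answer yes or no.")
    print(answer(q))
```

An automated validation pipeline of the kind the abstract mentions would score such short answers against gold labels over thousands of generated examples, which is what makes tax law convenient for measuring accuracy at scale.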
