Intention And Context Elicitation With Large Language Models In The Legal Aid Intake Process

Goodson Nick, Lu Rongfei. Arxiv 2023

[Paper]    
Agentic Fine Tuning Pretraining Methods Prompting Reinforcement Learning Training Techniques

Large Language Models (LLMs) and chatbots show significant promise in streamlining the legal intake process. They can greatly reduce the workload and costs of legal aid organizations, making legal assistance more accessible to a broader audience. However, a key challenge with current LLMs is their tendency to overconfidently deliver an immediate ‘best guess’ to a client’s question based on the output distribution learned over the training data. This approach often overlooks the client’s actual intentions or the specifics of their legal situation, so clients may not realize the importance of providing essential additional context or expressing the underlying intentions that are crucial to their legal cases. Traditionally, logic-based decision trees have been used to automate intake for specific access-to-justice issues, such as immigration and eviction, but those solutions lack scalability. We demonstrate a proof of concept that uses LLMs to elicit and infer clients’ underlying intentions and specific legal circumstances through free-form, language-based interactions. We also propose future research directions that use supervised fine-tuning or offline reinforcement learning to incorporate intention and context elicitation into chatbots automatically, without explicit prompting.

Similar Work