
GPTs and Language Barrier: A Cross-Lingual Legal QA Examination

Nguyen Ha-Thanh, Yamada Hiroaki, Satoh Ken. arXiv 2024

[Paper]
GPT Model Architecture Pretraining Methods Prompting Transformer

In this paper, we explore the application of Generative Pre-trained Transformers (GPTs) to cross-lingual legal Question-Answering (QA) using the COLIEE Task 4 dataset. In COLIEE Task 4, given a statement and a set of related legal articles that serve as context, the objective is to determine whether the statement is legally valid, i.e., whether it can be inferred from the provided contextual articles (an entailment task). By benchmarking the four combinations of English and Japanese prompts and data, we provide insights into GPTs’ performance in multilingual legal QA scenarios, contributing to the development of more efficient and accurate cross-lingual QA solutions in the legal domain.
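The four benchmark settings are the cross product of prompt language and data language (EN prompt / EN data, EN prompt / JA data, JA prompt / EN data, JA prompt / JA data). The sketch below shows one way such an evaluation loop could be wired up with the OpenAI chat API; the model name, prompt wording, answer parsing, and the `load_coliee_task4` loader are illustrative assumptions, not the paper’s actual setup.

```python
# Hedged sketch (not the paper's code): scoring the four prompt/data language
# combinations on COLIEE Task 4-style entailment examples.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "en": ("Given the legal articles below, answer Y if the statement is "
           "entailed by them, otherwise answer N.\n\n"
           "Articles:\n{articles}\n\nStatement: {statement}\nAnswer:"),
    # Japanese prompt (roughly: "answer Y if the statement follows from the
    # articles below, otherwise answer N")
    "ja": ("以下の条文を前提として、記述が条文から導けるなら Y、"
           "導けないなら N と答えてください。\n\n"
           "条文:\n{articles}\n\n記述: {statement}\n回答:"),
}


def classify(statement: str, articles: str, prompt_lang: str) -> str:
    """Ask the model for a Y/N entailment judgment on one example."""
    prompt = PROMPTS[prompt_lang].format(articles=articles, statement=statement)
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = resp.choices[0].message.content.strip().upper()
    return "Y" if answer.startswith("Y") else "N"


def load_coliee_task4(lang: str) -> list[dict]:
    """Hypothetical loader: returns examples with 'articles', 'statement',
    and a gold 'label' ('Y'/'N') in the requested language."""
    raise NotImplementedError("plug in the COLIEE Task 4 data here")


# Evaluate every prompt-language / data-language combination.
for prompt_lang in ("en", "ja"):
    for data_lang in ("en", "ja"):
        examples = load_coliee_task4(data_lang)
        correct = sum(
            classify(ex["statement"], ex["articles"], prompt_lang) == ex["label"]
            for ex in examples
        )
        print(f"prompt={prompt_lang} data={data_lang} "
              f"accuracy={correct / len(examples):.3f}")
```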

Similar Work