
RepLiQA: A Question-Answering Dataset for Benchmarking LLMs on Unseen Reference Content

João Monteiro, Pierre-André Noël, Étienne Marcotte, Sai Rajeswar, Valentina Zantedeschi, David Vázquez, Nicolas Chapados, Christopher Pal, Perouz Taslakian. arXiv 2024

[Paper]    
Language Modeling · RAG · Reinforcement Learning · Tools · Training Techniques · Uncategorized

Large Language Models (LLMs) are trained on vast amounts of data, most of which is automatically scraped from the internet. This data includes encyclopedic documents that harbor a vast amount of general knowledge (e.g., Wikipedia) but also potentially overlap with benchmark datasets used for evaluating LLMs. Consequently, evaluating models on test splits that might have leaked into the training set is prone to misleading conclusions. To foster sound evaluation of language models, we introduce a new test dataset named RepLiQA, suited for question-answering and topic retrieval tasks. RepLiQA is a collection of five splits of test sets, four of which have not been released to the internet or exposed to LLM APIs prior to this publication. Each sample in RepLiQA comprises (1) a reference document crafted by a human annotator and depicting an imaginary scenario (e.g., a news article) absent from the internet; (2) a question about the document’s topic; (3) a ground-truth answer derived directly from the information in the document; and (4) the paragraph extracted from the reference document containing the answer. As such, accurate answers can only be generated if a model can find relevant content within the provided document. We run a large-scale benchmark comprising several state-of-the-art LLMs to uncover differences in performance across models of various types and sizes in a context-conditional language modeling setting. Released splits of RepLiQA can be found here: https://huggingface.co/datasets/ServiceNow/repliqa.
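
To make the context-conditional setup concrete, here is a minimal sketch of loading a released RepLiQA split from the Hugging Face Hub and building a document-grounded prompt for a sample. The split name `repliqa_0` and the field names `document_extracted`, `question`, and `answer` are assumptions based on the dataset card, and `generate` is a hypothetical stand-in for whatever LLM completion call you use; consult the dataset page for the authoritative schema.

```python
# Sketch: evaluate an LLM on RepLiQA in the context-conditional setting,
# i.e., the model sees the unseen reference document alongside the question.
from datasets import load_dataset

# The paper describes five test splits; "repliqa_0" is an assumed name
# for the first released split.
ds = load_dataset("ServiceNow/repliqa", split="repliqa_0")

def build_prompt(sample: dict) -> str:
    # Each sample pairs a human-written reference document with a question;
    # "document_extracted" and "question" are assumed field names.
    return (
        "Answer the question using only the document below.\n\n"
        f"Document:\n{sample['document_extracted']}\n\n"
        f"Question: {sample['question']}\nAnswer:"
    )

sample = ds[0]
prompt = build_prompt(sample)
# prediction = generate(prompt)  # hypothetical LLM call of your choice
# Score `prediction` against sample["answer"] with your preferred QA metric.
```

Because each answer is derivable only from the accompanying document, comparing a model's scores with and without the document in the prompt separates retrieval-grounded answering from memorized knowledge.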

Similar Work