
Lissard: Long And Simple Sequential Reasoning Datasets

Bueno Mirelle, Lotufo Roberto, Nogueira Rodrigo. arXiv 2024

[Paper] [Code]    
Tags: GPT, Has Code, Model Architecture, Training Techniques, Uncategorized

Language models are now capable of solving tasks that require dealing with long sequences consisting of hundreds of thousands of tokens. However, they often fail on tasks that require repetitive use of simple rules, even on sequences that are much shorter than those seen during training. For example, state-of-the-art LLMs can find common items in two lists with up to 20 items but fail when lists have 80 items. In this paper, we introduce Lissard, a benchmark comprising seven tasks designed to assess the ability of models to process and generate sequences spanning a wide range of lengths, requiring repetitive procedural execution. Our evaluation of open-source (Mistral-7B and Mixtral-8x7B) and proprietary models (GPT-3.5 and GPT-4) shows a consistent decline in performance across all models as the complexity of the sequence increases. The datasets and code are available at https://github.com/unicamp-dl/Lissard
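
As an illustration of the length-controlled task type the abstract describes, the sketch below builds a "common items in two lists" instance at an arbitrary list length, so the same simple rule can be tested on short and long sequences. The function name, placeholder vocabulary, and prompt wording are hypothetical and are not taken from the Lissard repository.

```python
import random

def make_common_items_task(n_items, n_common, seed=0):
    """Build one length-controlled 'common items in two lists' example.

    Hypothetical generator (not the actual Lissard code): it creates two
    lists of length n_items that share exactly n_common items, so the rule
    stays simple while the sequence length grows.
    """
    rng = random.Random(seed)
    vocab = [f"item{i}" for i in range(10 * n_items)]  # placeholder vocabulary
    common = rng.sample(vocab, n_common)
    remaining = [w for w in vocab if w not in common]
    only_a = rng.sample(remaining, n_items - n_common)
    only_b = rng.sample([w for w in remaining if w not in only_a], n_items - n_common)

    list_a = common + only_a
    list_b = common + only_b
    rng.shuffle(list_a)
    rng.shuffle(list_b)

    prompt = (
        "Find the items that appear in both lists.\n"
        f"List A: {', '.join(list_a)}\n"
        f"List B: {', '.join(list_b)}"
    )
    return prompt, set(common)

# The same rule at two sequence lengths; per the paper, models that succeed
# around 20 items often fail around 80.
short_prompt, short_answer = make_common_items_task(n_items=20, n_common=5)
long_prompt, long_answer = make_common_items_task(n_items=80, n_common=5)
```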

Similar Work