Efficient Solutions For An Intriguing Failure Of Llms: Long Context Window Does Not Mean Llms Can Analyze Long Sequences Flawlessly

Peyman Hosseini, Ignacio Castro, Iacopo Ghinassi, Matthew Purver. arXiv 2024

[Paper]    
Tags: GPT, Model Architecture, Tools

Large Language Models (LLMs) have demonstrated remarkable capabilities in comprehending and analyzing lengthy sequential inputs, owing to their extensive context windows that allow processing millions of tokens in a single forward pass. However, this paper uncovers a surprising limitation: LLMs fall short when handling long input sequences. We investigate this issue using three datasets and two tasks (sentiment analysis and news categorization) across various LLMs, including Claude 3, Gemini Pro, GPT 3.5 Turbo, Llama 3 Instruct, and Mistral Instruct models. To address this limitation, we propose and evaluate ad-hoc solutions that substantially enhance LLMs’ performance on long input sequences by up to 50%, while reducing API cost and latency by up to 93% and 50%, respectively.
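
This entry does not spell out the paper's ad-hoc solutions, so the sketch below only illustrates one plausible reading of the general idea: splitting a long input into shorter requests, classifying each piece, and aggregating the per-chunk predictions. The names `chunk_text`, `classify_long_document`, and the `classify_chunk` callback are hypothetical and not the authors' implementation; shorter per-request inputs are also how such a scheme could reduce API cost and latency.

```python
from collections import Counter
from typing import Callable, List


def chunk_text(text: str, max_words: int = 512) -> List[str]:
    """Split a long document into word-bounded chunks that fit comfortably in the model's context."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]


def classify_long_document(text: str, classify_chunk: Callable[[str], str]) -> str:
    """Classify each chunk with a separate (hypothetical) LLM call, then take a majority vote.

    `classify_chunk` is assumed to wrap a single short request to whichever LLM is used,
    e.g. a sentiment or news-categorization prompt.
    """
    labels = [classify_chunk(chunk) for chunk in chunk_text(text)]
    if not labels:
        raise ValueError("empty input document")
    # Majority vote across chunk-level predictions (ties resolved by first-seen label).
    return Counter(labels).most_common(1)[0][0]


# Example usage with a trivial stand-in for the LLM call:
# label = classify_long_document(long_review, classify_chunk=lambda chunk: "positive")
```

This is only a sketch under the stated assumptions; the paper's own experiments (three datasets, sentiment analysis and news categorization) may use a different decomposition or aggregation strategy.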
