
Insights Into LLM Long-context Failures: When Transformers Know But Don't Tell

Taiming Lu, Muhan Gao, Kuai Yu, Adam Byerly, Daniel Khashabi. arXiv 2024

[Paper]    
Tags: Applications, Ethics And Bias, Model Architecture, Pretraining Methods, RAG, Reinforcement Learning, Transformer

Large Language Models (LLMs) exhibit positional bias, struggling to utilize information from the middle or end of long contexts. Our study explores LLMs’ long-context reasoning by probing their hidden representations. We find that while LLMs encode the position of target information, they often fail to leverage this in generating accurate responses. This reveals a disconnect between information retrieval and utilization, a “know but don’t tell” phenomenon. We further analyze the relationship between extraction time and final accuracy, offering insights into the underlying mechanics of transformer models.
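The probing setup the abstract alludes to can be sketched as a simple linear classifier trained on hidden states: embed a target fact ("needle") at a known position in a long filler context, extract the model's hidden state at the final prompt token, and ask whether that state predicts where the needle was. The sketch below is illustrative only; the model choice (gpt2 as a small stand-in), the filler text, the position bucketing, and the logistic-regression probe are assumptions, not the authors' exact setup.

```python
# Minimal sketch of a "position probe": train a linear classifier on an
# LLM's hidden state at the final prompt token to predict where in the
# context a target fact ("needle") was placed. Model, bucket count, and
# probe design here are illustrative assumptions.
import random

import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # small stand-in; the paper probes larger open LLMs
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

FILLER = "The sky was clear and the market was quiet that day. "
NEEDLE = "The secret code is 7421. "
QUESTION = "Question: What is the secret code? Answer:"
N_SLOTS, N_BUCKETS = 20, 4  # 20 insertion slots, binned into 4 position buckets

def build_prompt(slot: int) -> str:
    sents = [FILLER] * N_SLOTS
    sents.insert(slot, NEEDLE)  # place the needle at a known slot
    return "".join(sents) + QUESTION

@torch.no_grad()
def last_token_state(prompt: str, layer: int = -1) -> torch.Tensor:
    ids = tok(prompt, return_tensors="pt")
    out = model(**ids, output_hidden_states=True)
    # hidden_states: tuple of [1, seq_len, hidden] tensors, one per layer
    return out.hidden_states[layer][0, -1]

X, y = [], []
for _ in range(120):
    slot = random.randrange(N_SLOTS)
    X.append(last_token_state(build_prompt(slot)).numpy())
    y.append(slot * N_BUCKETS // N_SLOTS)  # coarse position-bucket label

split = 100
probe = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
print("probe accuracy on held-out prompts:", probe.score(X[split:], y[split:]))
```

High probe accuracy paired with low answer accuracy on the same prompts would reproduce the "know but don't tell" gap described above: the position of the target is linearly decodable from the hidden states even when the generated answer fails to use it.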
