Can Perplexity Reflect Large Language Model's Ability In Long Text Understanding?

Hu Yutong, Huang Quzhe, Tao Mingxu, Zhang Chen, Feng Yansong. arXiv 2024

[Paper]    
Tags: Attention Mechanism, Language Modeling, Model Architecture, Reinforcement Learning

Recent studies have shown that Large Language Models (LLMs) have the potential to process extremely long text. Many works evaluate LLMs' long-text processing ability only on the language modeling task, with perplexity (PPL) as the evaluation metric. However, in our study, we find that there is no correlation between PPL and LLMs' long-text understanding ability. Moreover, PPL may only reflect the model's ability to model local information rather than to capture long-range dependencies. Therefore, using PPL alone as evidence that a model can process long text is inappropriate. PPL's focus on local information could also explain some existing phenomena, such as the strong length-extrapolation ability of the positional encoding method ALiBi. When evaluating a model's ability on long text, we should pay more attention to PPL's limitations and avoid relying on it too heavily.
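For context, below is a minimal sketch of how PPL is commonly computed for a causal LM over long text with a sliding evaluation window. The model name, window size, and stride are illustrative assumptions, not taken from the paper; the point is that each token is scored from at most a bounded local context, which is why PPL tends to reflect local modeling quality.

```python
# Minimal sliding-window perplexity sketch (assumptions: a HuggingFace-style
# causal LM; "gpt2", max_len, and stride are illustrative choices only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative, not the model studied in the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str, max_len: int = 1024, stride: int = 512) -> float:
    """Approximate PPL following the common sliding-window recipe:
    each token is predicted from at most `max_len` preceding tokens."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    nll_sum, token_count, prev_end = 0.0, 0, 0
    for start in range(0, ids.size(0), stride):
        end = min(start + max_len, ids.size(0))
        target_len = end - prev_end          # score only tokens not scored before
        input_ids = ids[start:end].unsqueeze(0)
        labels = input_ids.clone()
        labels[:, :-target_len] = -100       # mask context-only positions
        with torch.no_grad():
            loss = model(input_ids, labels=labels).loss
        nll_sum += loss.item() * target_len  # approximate total NLL for this window
        token_count += target_len
        prev_end = end
        if end == ids.size(0):
            break
    return float(torch.exp(torch.tensor(nll_sum / token_count)))
```

Because the window caps the usable context, a model can obtain low PPL without ever exploiting dependencies longer than `max_len`, which is consistent with the paper's argument that PPL alone does not demonstrate long-text understanding.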

Similar Work