
Probing Large Language Models From A Human Behavioral Perspective

Xintong Wang, Xiaoyu Li, Xingshan Li, Chris Biemann. arXiv 2023

Tags: Attention Mechanism, Model Architecture, Transformer

Large Language Models (LLMs) have emerged as the dominant foundational models in modern NLP. However, their prediction processes and internal mechanisms, such as feed-forward networks (FFN) and multi-head self-attention (MHSA), remain largely unexplored. In this work, we probe LLMs from a human behavioral perspective, correlating values from LLMs with eye-tracking measures, which are widely recognized as meaningful indicators of human reading patterns. Our findings reveal that LLMs exhibit prediction patterns similar to those of humans but distinct from those of Shallow Language Models (SLMs). Moreover, as the layers of an LLM deepen beyond the middle layers, the correlation coefficients for both FFN and MHSA increase, indicating that the logits within the FFN increasingly encapsulate word semantics suitable for predicting tokens from the vocabulary.
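To make the probing setup concrete, the sketch below shows the general approach: extract a per-token value from an LLM and correlate it with a per-word eye-tracking measure. The GPT-2 model, token-level surprisal as the probed value, the toy gaze durations, Spearman correlation, and the logit-lens-style layer readout are all illustrative assumptions, not the paper's exact method; the paper probes FFN and MHSA sub-layer values against real eye-tracking corpora.

```python
import torch
from scipy.stats import spearmanr
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

sentence = "The quick brown fox jumps over the lazy dog"
enc = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    out = model(**enc, output_hidden_states=True)

# Surprisal of each token given its left context: -log p(token | prefix).
# The first token has no left context, so it is skipped.
log_probs = torch.log_softmax(out.logits[0, :-1], dim=-1)
targets = enc["input_ids"][0, 1:]
surprisal = -log_probs[torch.arange(targets.size(0)), targets]

# Hypothetical first-pass gaze durations (ms) for the words "quick" .. "dog";
# in practice these come from an eye-tracking corpus, and word-to-token
# alignment is needed when a word spans several BPE tokens.
gaze_durations = [210.0, 188.0, 195.0, 240.0, 230.0, 175.0, 160.0, 205.0]
assert len(gaze_durations) == surprisal.size(0)

rho, p = spearmanr(surprisal.tolist(), gaze_durations)
print(f"final layer: Spearman rho = {rho:.3f} (p = {p:.3f})")

# Layer-wise readout (a rough logit-lens-style proxy for the paper's
# layer-by-layer analysis): project each layer's hidden state through the
# final layer norm and the unembedding, then correlate again per layer.
with torch.no_grad():
    for i, h in enumerate(out.hidden_states):
        layer_logits = model.lm_head(model.transformer.ln_f(h))
        layer_lp = torch.log_softmax(layer_logits[0, :-1], dim=-1)
        layer_surprisal = -layer_lp[torch.arange(targets.size(0)), targets]
        layer_rho, _ = spearmanr(layer_surprisal.tolist(), gaze_durations)
        print(f"layer {i:2d}: rho = {layer_rho:.3f}")
```

Under this reading, the layer-wise trend the abstract describes would appear as correlation coefficients that rise from the middle layers onward, as each successive layer's readout becomes more predictive of the next token.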

Similar Work