
Roles Of Scaling And Instruction Tuning In Language Perception: Model Vs. Human Attention

Gao Changjiang, Huang Shujian, Li Jixing, Chen Jiajun. arXiv 2023

[Paper]    
Tags: Attention Mechanism, Model Architecture, Pretraining Methods, Training Techniques, Transformer

Recent large language models (LLMs) have shown strong abilities to understand natural language. Since most of them share the same basic architecture, i.e., the transformer block, the likely contributors to their success during training are scaling and instruction tuning. However, how these factors affect the models' language perception remains unclear. This work compares the self-attention of several existing LLMs (LLaMA, Alpaca, and Vicuna) at different sizes (7B, 13B, 30B, 65B) with eye-saccade data, an aspect of human reading attention, to assess the effects of scaling and instruction tuning on language perception. The results show that scaling enhances resemblance to human attention and improves effective attention by reducing reliance on trivial patterns, whereas instruction tuning does not; instruction tuning does, however, significantly enhance the models' sensitivity to instructions. The attention of all current LLMs is also consistently closer to that of non-native speakers than of native speakers, suggesting sub-optimal language perception across models. The code and data used in the analysis are available on GitHub.
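
The model-vs.-human comparison described above can be illustrated with a minimal sketch, not the authors' pipeline: it extracts per-token self-attention from a generic HuggingFace causal LM, averages over layers and heads, and correlates the result with a word-to-word saccade matrix. The model name (a small stand-in rather than LLaMA/Alpaca/Vicuna), the layer/head averaging, the Spearman correlation, and the random placeholder saccade matrix are all illustrative assumptions; in the paper, the saccade data come from eye-tracking records of human readers.

```python
# Minimal sketch (assumptions noted above): compare a causal LM's self-attention
# with a human saccade matrix via rank correlation.
import numpy as np
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper studies LLaMA/Alpaca/Vicuna at 7B-65B
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_attentions=True)
model.eval()

sentence = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one (batch, heads, seq, seq) tensor per layer.
# Average over layers and heads to obtain a single token-by-token attention matrix.
attn = torch.stack(outputs.attentions).mean(dim=(0, 2)).squeeze(0).numpy()

seq_len = attn.shape[0]
# Placeholder "saccade" matrix: in practice, entry (i, j) would hold the empirical
# frequency of eye movements from word i to word j in human reading data.
saccade = np.random.rand(seq_len, seq_len)

# Compare only the strictly lower triangle, since a causal LM cannot attend to
# future tokens.
rows, cols = np.tril_indices(seq_len, k=-1)
rho, p = spearmanr(attn[rows, cols], saccade[rows, cols])
print(f"Spearman correlation between model attention and saccades: {rho:.3f} (p={p:.3g})")
```

With real eye-tracking data in place of the placeholder matrix, repeating this correlation across model sizes and across base vs. instruction-tuned checkpoints is the kind of comparison the abstract describes.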

Similar Work