Fantastic Semantics and Where to Find Them: Investigating Which Layers of Generative LLMs Reflect Lexical Semantics

Liu Zhu, Kong Cunliang, Liu Ying, Sun Maosong. arXiv 2024

[Paper] [Code]    
Tags: BERT, Has Code, Language Modeling, Model Architecture, Prompting, Reinforcement Learning

Large language models have achieved remarkable success in general language understanding tasks. However, as a family of generative methods trained with a next-token prediction objective, the semantic evolution across the depth of these models is not fully explored, unlike that of their predecessors, such as BERT-like architectures. In this paper, we specifically investigate the bottom-up evolution of lexical semantics in a popular LLM, namely Llama2, by probing its hidden states at the end of each layer using a contextualized word identification task. Our experiments show that the representations in lower layers encode lexical semantics, while the higher layers, with weaker semantic induction, are responsible for prediction. This contrasts with models trained with discriminative objectives, such as masked language modeling, where the higher layers obtain better lexical semantics. The conclusion is further supported by the monotonic increase in performance when probing the hidden states of the final, semantically vacuous symbols, such as punctuation, in the prompting strategy. Our code is available at https://github.com/RyanLiut/LLM_LexSem.
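
As a rough illustration of the layer-wise probing setup described in the abstract, the sketch below extracts per-layer hidden states for a target word from Llama2 using Hugging Face Transformers. This is not the authors' implementation (see the linked repository for the official code); the checkpoint name, example sentence, and the simple token-matching heuristic are assumptions for illustration only.

```python
# Minimal sketch (assumed setup, not the paper's code): get one hidden-state
# vector per layer for a target word in context, the kind of representation
# that a contextualized word identification probe would compare across layers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint (gated on the Hub)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

sentence = "He sat on the bank of the river and watched the water."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# outputs.hidden_states is a tuple of (num_layers + 1) tensors: the embedding
# output followed by the output of each decoder layer, each of shape
# (batch, seq_len, hidden_dim).
hidden_states = outputs.hidden_states

# Locate the target word's token (simple substring match; tokenizer-dependent).
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
pos = next(i for i, tok in enumerate(tokens) if "bank" in tok)

# One vector per layer for "bank"; in a contextualized word identification
# probe, such per-layer vectors from different contexts are compared
# (e.g. by cosine similarity) to see which layers separate word senses.
per_layer_vectors = [layer[0, pos, :] for layer in hidden_states]
print(f"{len(per_layer_vectors)} layer representations of dim {per_layer_vectors[0].shape[0]}")
```

The same extraction, applied to a trailing punctuation token rather than the target word itself, corresponds to the prompting-based variant mentioned in the abstract.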

Similar Work