
Large Language Model Recall Uncertainty Is Modulated By The Fan Effect

Roberts Jesse, Moore Kyle, Pham Thao, Ewaleifoh Oseremhen, Fisher Doug. arXiv 2024

[Paper]    
Training Techniques Uncategorized

This paper evaluates whether large language models (LLMs) exhibit cognitive fan effects, similar to those discovered by Anderson in humans, after being pre-trained on human textual data. We conduct two sets of in-context recall experiments designed to elicit fan effects. Consistent with human results, we find that LLM recall uncertainty, measured via token probability, is influenced by the fan effect. Our results show that removing uncertainty disrupts the observed effect. The experiments suggest the fan effect is consistent whether the fan value is induced in-context or in the pre-training data. Finally, these findings provide in-silico evidence that fan effects and typicality are expressions of the same phenomenon.
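To make the measurement concrete, below is a minimal sketch (not the authors' code) of how recall uncertainty might be probed via token probability: facts with varying fan values are placed in-context, and the probability the model assigns to a recall answer is read off the next-token distribution. The model name, prompt wording, and fact set are illustrative assumptions; the fan-effect prediction is that answers about high-fan concepts receive lower probability.

```python
# Sketch: probing fan-effect-style recall uncertainty via token probability.
# Assumptions: any causal LM works for illustration; prompts are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative choice, not the model used in the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# In-context fan manipulation: "doctor" appears in 3 facts (high fan),
# "lawyer" in 1 fact (low fan).
context = (
    "The doctor is in the park. The doctor is in the bank. "
    "The doctor is in the church. The lawyer is in the museum.\n"
)

def answer_probability(context: str, question: str, answer: str) -> float:
    """Probability the model assigns to the first token of `answer`
    immediately after the prompt (a simple proxy for recall certainty)."""
    prompt = context + question
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    answer_id = tokenizer.encode(" " + answer)[0]  # leading space for BPE merge
    return next_token_probs[answer_id].item()

# Fan effect prediction: the high-fan query yields lower answer probability.
p_high = answer_probability(context, "Q: Where is the doctor? A: In the", "park")
p_low = answer_probability(context, "Q: Where is the lawyer? A: In the", "museum")
print(f"high-fan answer prob: {p_high:.4f}, low-fan answer prob: {p_low:.4f}")
```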
