
Primacy Effect Of Chatgpt

Wang Yiwei, Cai Yujun, Chen Muhao, Liang Yuxuan, Hooi Bryan. arXiv 2023

[Paper] [Code]    
Ethics And Bias Fine Tuning GPT Has Code Model Architecture Pretraining Methods Prompting Reinforcement Learning Training Techniques

Instruction-tuned large language models (LLMs), such as ChatGPT, have shown promising zero-shot performance on discriminative natural language understanding (NLU) tasks. This involves querying the LLM with a prompt containing the question and the candidate labels to choose from. ChatGPT's question-answering capabilities arise from its pre-training on large amounts of human-written text, as well as its subsequent fine-tuning on human preferences, which motivates us to ask: does ChatGPT also inherit humans' cognitive biases? In this paper, we study the primacy effect of ChatGPT: the tendency to select the labels at earlier positions as the answer. We have two main findings: i) ChatGPT's decision is sensitive to the order of labels in the prompt; ii) ChatGPT has a markedly higher chance of selecting labels at earlier positions as the answer. We hope that our experiments and analyses provide additional insights into building more reliable ChatGPT-based solutions. We release the source code at https://github.com/wangywUST/PrimacyEffectGPT.
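The experimental setup described above can be sketched as a small harness: permute the order of candidate labels in the prompt and count how often each *position* (rather than each label) is selected. This is an illustrative sketch, not the paper's released code; the prompt template and the `predict` callable are assumptions, and the toy model below simulates an extreme primacy bias for demonstration.

```python
import itertools
from collections import Counter

def build_prompt(question, labels):
    # Illustrative prompt template: question followed by candidate labels.
    options = "\n".join(f"- {label}" for label in labels)
    return f"{question}\nSelect one label from:\n{options}\nAnswer:"

def primacy_counts(predict, question, labels):
    # Query the model once per permutation of the label order and tally
    # which position in the list the predicted label occupied.
    counts = Counter()
    for perm in itertools.permutations(labels):
        perm = list(perm)
        choice = predict(build_prompt(question, perm))
        counts[perm.index(choice)] += 1
    return counts

# Toy stand-in model that always picks the first listed option,
# i.e. a maximally primacy-biased responder (hypothetical, for illustration).
def first_option_model(prompt):
    for line in prompt.splitlines():
        if line.startswith("- "):
            return line[2:]

counts = primacy_counts(
    first_option_model,
    "Is this review positive?",
    ["positive", "negative", "neutral"],
)
# With 3 labels there are 6 permutations; a fully primacy-biased model
# selects position 0 in all 6 of them.
```

An unbiased model would spread its selections roughly evenly across positions; a skew toward position 0, as in the toy model here, is the signature of the primacy effect the paper measures.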

Similar Work