
Aligning Language Models To User Opinions

Eunjeong Hwang, Bodhisattwa Prasad Majumder, Niket Tandon. arXiv 2023

[Paper]    

An important aspect of developing LLMs that interact with humans is aligning models' behavior to their users. It is possible to prompt an LLM into adopting a certain persona, especially a user group or ideological persona the model captured during its pretraining stage. However, how best to align an LLM with a specific user, rather than a demographic or ideological group, remains an open question. Mining public opinion surveys (by Pew Research), we find that a user's opinions and their demographics and ideology are not mutual predictors. We use this insight to align LLMs by modeling user opinions as well as user demographics and ideology, achieving accuracy gains of up to 7 points in predicting public opinions from survey questions across a broad set of topics. In addition to the typical approach of prompting LLMs with demographics and ideology, we discover that utilizing the most relevant past opinions from individual users enables the model to predict user opinions more accurately.
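The core idea above (retrieve a user's most relevant past opinions and include them in the prompt alongside demographics and ideology) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the relevance scorer here is a simple word-overlap (Jaccard) similarity, whereas the paper likely uses a stronger retrieval method, and all names and the prompt format are hypothetical.

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two strings (toy relevance score)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def build_prompt(question: str, demographics: str, ideology: str,
                 past_opinions: list[str], k: int = 3) -> str:
    """Compose a prompt containing demographics, ideology, and the k past
    opinions most relevant to the current survey question."""
    relevant = sorted(past_opinions,
                     key=lambda op: jaccard(op, question),
                     reverse=True)[:k]
    lines = [
        f"Demographics: {demographics}",
        f"Ideology: {ideology}",
        "Most relevant past opinions of this user:",
        *[f"- {op}" for op in relevant],
        f"Survey question: {question}",
        "Predict how this user would answer:",
    ]
    return "\n".join(lines)

# Hypothetical example user and question.
prompt = build_prompt(
    question="Should the government fund renewable energy research?",
    demographics="age 34, urban, college-educated",
    ideology="moderate",
    past_opinions=[
        "I support more funding for public transit.",
        "Renewable energy research deserves more government support.",
        "Taxes on small businesses are too high.",
    ],
    k=2,
)
print(prompt)
```

The resulting string would then be sent to the LLM; the paper's reported gains come from including the retrieved opinions in addition to, not instead of, the demographic and ideological context.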

Similar Work