
Towards Dialogues for Joint Human-AI Reasoning and Value Alignment

Elfia Bezou-Vrakatseli, Oana Cocarascu, Sanjay Modgil. arXiv 2024

[Paper]    
Reinforcement Learning

We argue that enabling human-AI dialogue, purposed to support joint reasoning (i.e., ‘inquiry’), is important for ensuring that AI decision making is aligned with human values and preferences. In particular, we point to logic-based models of argumentation and dialogue, and suggest that the traditional focus on persuasion dialogues be replaced by a focus on inquiry dialogues and on the distinct challenges that joint inquiry raises. Given recent dramatic advances in the performance of large language models (LLMs), and the anticipated increase in their use for decision making, we provide a roadmap for research into inquiry dialogues for supporting joint human-LLM reasoning tasks that are ethically salient, and that thereby require that decisions are value aligned.
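To make the contrast concrete: in a persuasion dialogue one party tries to win the other over to a fixed position, whereas in an inquiry dialogue both parties pool what they know to jointly establish a claim neither could prove alone. The following is a minimal illustrative sketch of that joint-inquiry dynamic, not the authors' formalism; the move names, agent knowledge, and the `forward_chain` helper are all hypothetical simplifications.

```python
# Hypothetical sketch of a joint inquiry dialogue: two agents alternately
# assert private facts/rules into a shared commitment store until a topic
# becomes jointly derivable. This is an illustration, not the paper's model.

def forward_chain(facts, rules):
    """Derive all facts reachable from `facts` via `rules` ((premises, conclusion) pairs)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

def inquiry_dialogue(agent_a, agent_b, topic):
    """Alternate turns; each agent asserts one new fact or rule per turn.
    Stops when the topic is jointly derivable or both agents pass in a row."""
    store_facts, store_rules = set(), []
    transcript = []
    agents = [("A", agent_a), ("B", agent_b)]
    stalled, turn = 0, 0
    while topic not in forward_chain(store_facts, store_rules) and stalled < 2:
        name, (facts, rules) = agents[turn % 2]
        moved = False
        for f in facts:                      # first try a new fact
            if f not in store_facts:
                store_facts.add(f)
                transcript.append((name, "assert", f))
                moved = True
                break
        if not moved:
            for r in rules:                  # otherwise try a new rule
                if r not in store_rules:
                    store_rules.append(r)
                    transcript.append((name, "assert_rule", r))
                    moved = True
                    break
        stalled = 0 if moved else stalled + 1
        turn += 1
    success = topic in forward_chain(store_facts, store_rules)
    return success, transcript

# Neither agent alone can derive "treatment_safe": A holds the clinical rule
# and one premise, B holds the other premise (all names are invented).
agent_a = ({"guideline_met"}, [(("guideline_met", "no_allergy"), "treatment_safe")])
agent_b = ({"no_allergy"}, [])
success, transcript = inquiry_dialogue(agent_a, agent_b, "treatment_safe")
```

Here the dialogue succeeds only because both agents contribute: the outcome is a jointly constructed proof rather than one agent's initial position prevailing, which is the distinction the abstract draws between inquiry and persuasion.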
