
Human Choice Prediction In Language-based Persuasion Games: Simulation-based Off-policy Evaluation

Shapira Eilam, Apel Reut, Tennenholtz Moshe, Reichart Roi. arXiv 2023

[Paper] [Code]    
Agentic Has Code Tools Training Techniques

Recent advances in Large Language Models (LLMs) have spurred interest in designing LLM-based agents for tasks that involve interaction with human and artificial agents. This paper addresses a key aspect in the design of such agents: predicting human decisions in off-policy evaluation (OPE), focusing on language-based persuasion games, where the agent’s goal is to influence its partner’s decisions through verbal messages. Using a dedicated application, we collected a dataset of 87K decisions from humans playing a repeated decision-making game with artificial agents. Our approach involves training a model on human interactions with one subset of agents to predict decisions when interacting with another. To enhance off-policy performance, we propose a simulation technique involving interactions across the entire agent space and simulated decision makers. Our learning strategy yields significant OPE gains, e.g., improving prediction accuracy in the top 15% most challenging cases by 7.1%. Our code and the large dataset we collected and generated are submitted as supplementary material and publicly available in our GitHub repository: https://github.com/eilamshapira/HumanChoicePrediction
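The core evaluation setup described in the abstract, training a predictor on human interactions with one subset of agents and measuring accuracy on interactions with a held-out agent, can be sketched as follows. This is a minimal illustrative toy, not the paper's actual method: the agent names, the scalar "message feature", the agent biases, and the threshold predictor are all invented here for exposition.

```python
import random

random.seed(0)

AGENTS = ["A", "B", "C", "D"]  # hypothetical artificial persuasion agents


def simulate_interactions(agent, n=500):
    """Generate synthetic (message_feature, human_decision) pairs for one agent.

    Each agent has its own invented persuasion "bias" that shifts how often
    the simulated human accepts; none of these numbers come from the paper.
    """
    bias = {"A": 0.2, "B": 0.4, "C": 0.6, "D": 0.8}[agent]
    data = []
    for _ in range(n):
        x = random.random()        # a stand-in scalar feature of the message
        p = 0.5 * x + 0.5 * bias   # probability the human decides "accept"
        y = 1 if random.random() < p else 0
        data.append((x, y))
    return data


def fit_threshold(train):
    """Fit a trivial predictor: the x-threshold maximizing training accuracy."""
    best_t, best_acc = 0.5, 0.0
    for t in [i / 20 for i in range(21)]:
        acc = sum((x >= t) == (y == 1) for x, y in train) / len(train)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t


def accuracy(threshold, data):
    return sum((x >= threshold) == (y == 1) for x, y in data) / len(data)


# Off-policy split: train on interactions with agents A-C,
# then evaluate on the unseen agent D.
train = [ex for a in AGENTS[:3] for ex in simulate_interactions(a)]
test = simulate_interactions("D")

t = fit_threshold(train)
print(f"OPE accuracy on unseen agent D: {accuracy(t, test):.2f}")
```

The point of the sketch is the data split, not the model: any predictor fit only on the "seen" agents faces a distribution shift when the held-out agent's persuasion style differs, which is the gap the paper's simulation technique aims to close.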

Similar Work