
Limits Of Large Language Models In Debating Humans

James Flamino, Mohammed Shahid Modi, Boleslaw K. Szymanski, Brendan Cross, Colton Mikolajczyk. arXiv 2024

[Paper]
Agentic

Large Language Models (LLMs) have shown remarkable promise in their ability to interact proficiently with humans. Consequently, their potential use as artificial confederates and surrogates in sociological experiments involving conversation is an exciting prospect. But how viable is this idea? This paper endeavors to test the limits of current-day LLMs with a pre-registered study integrating real people with LLM agents acting as people. The study focuses on debate-based opinion consensus formation in three environments: humans only, agents and humans, and agents only. Our goal is to understand how LLM agents influence humans, and how capable they are of debating like humans. We find that LLMs can blend in and facilitate human productivity but are less convincing in debate, with their behavior ultimately deviating from humans’. We elucidate these primary failings and anticipate that LLMs must evolve further before becoming viable debaters.

Similar Work