Exploring Human-LLM Conversations: Mental Models and the Originator of Toxicity

Schneider Johannes, Casanova Flores Arianna, Kranz Anne-Catherine. arXiv 2024

[Paper]
GPT Model Architecture Reinforcement Learning Tools

This study explores real-world human interactions with large language models (LLMs) in diverse, unconstrained settings, in contrast to most prior research, which focuses on ethically trimmed models like ChatGPT for specific tasks. We aim to understand the originator of toxicity. Our findings show that although LLMs are rightfully accused of providing toxic content, it is mostly demanded or at least provoked by humans who actively seek such content. Our manual analysis of hundreds of conversations judged as toxic by commercial vendors' APIs also raises questions about current practices of which user requests are refused. Furthermore, based on multiple empirical indicators, we conjecture that humans exhibit a change in their mental model, shifting from the mindset of interacting with a machine towards that of interacting with a human.

Similar Work