"it's A Fair Game", Or Is It? Examining How Users Navigate Disclosure Risks And Benefits When Using Llm-based Conversational Agents · The Large Language Model Bible Contribute to LLM-Bible

"it's A Fair Game", Or Is It? Examining How Users Navigate Disclosure Risks And Benefits When Using Llm-based Conversational Agents

Zhiping Zhang et al. arXiv 2023 – 26 citations

[Paper]    
GPT RAG Reinforcement Learning Agentic Model Architecture

The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, being primarily model-centered, provides little insight into users’ perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users constantly face trade-offs between privacy, utility, and convenience when using LLM-based CAs. However, users’ erroneous mental models and the dark patterns in system design limited their awareness and comprehension of the privacy risks. Additionally, the human-like interactions encouraged more sensitive disclosures, which complicated users’ ability to navigate the trade-offs. We discuss practical design guidelines and the need for paradigm shifts to protect the privacy of LLM-based CA users.

Similar Work