
Trust No Bot: Discovering Personal Disclosures in Human-LLM Conversations in the Wild

Niloofar Mireshghallah, Maria Antoniak, Yash More, Yejin Choi, Golnoosh Farnadi. arXiv 2024

[Paper]    
GPT Model Architecture

Measuring personal disclosures made in human-chatbot interactions can provide a better understanding of users’ AI literacy and facilitate privacy research for large language models (LLMs). We run an extensive, fine-grained analysis of the personal disclosures made by real users to commercial GPT models, investigating the leakage of personally identifiable and sensitive information. To understand the contexts in which users disclose to chatbots, we develop a taxonomy of tasks and sensitive topics, based on qualitative and quantitative analysis of naturally occurring conversations. We discuss the potential privacy harms of these disclosures and observe that: (1) personally identifiable information (PII) appears in unexpected contexts such as translation or code editing (48% and 16% of the time, respectively), and (2) PII detection alone is insufficient to capture the sensitive topics that are common in human-chatbot interactions, such as detailed sexual preferences or specific drug use habits. We believe these high disclosure rates are significant for researchers and data curators, and we call for the design of appropriate nudging mechanisms to help users moderate their interactions.
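To make the second observation concrete, the sketch below shows a deliberately simple, regex-based PII detector. This is not the paper's annotation pipeline; the patterns and example messages are illustrative assumptions. It demonstrates why surface-level PII matching can flag explicit identifiers while missing topic-level sensitive disclosures.

```python
# Toy sketch (not the paper's method): a minimal regex-based PII detector.
# It catches surface identifiers but misses sensitive-topic disclosures.
import re

# Hypothetical, intentionally simple patterns for a few common PII types.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def detect_pii(text: str) -> dict[str, list[str]]:
    """Return matches per PII type found in `text` (types with no hits omitted)."""
    hits = {name: pat.findall(text) for name, pat in PII_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

# A message containing explicit PII is caught...
print(detect_pii("Please email me at jane.doe@example.com or call 555-867-5309."))
# ...but a sensitive disclosure with no surface identifiers returns nothing,
# even though the content (e.g., drug use) is clearly sensitive.
print(detect_pii("I've been taking my roommate's prescription pills to sleep."))
```

The second call returns an empty result, which is the gap the abstract points to: detecting sensitive topics requires analysis of conversational content and context, not just pattern matching over identifiers.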

Similar Work