
Deceptive AI Systems That Give Explanations Are More Convincing Than Honest AI Systems And Can Amplify Belief In Misinformation

Valdemar Danry, Pat Pataranutaporn, Matthew Groh, Ziv Epstein, Pattie Maes. arXiv 2024

[Paper]
Interpretability And Explainability

Advanced Artificial Intelligence (AI) systems, specifically large language models (LLMs), can generate not only misinformation but also deceptive explanations that justify and propagate false information and erode trust in the truth. We examined the impact of deceptive AI-generated explanations on individuals' beliefs in a pre-registered online experiment with 23,840 observations from 1,192 participants. We found that, in addition to being more persuasive than accurate and honest explanations, deceptive AI-generated explanations can significantly amplify belief in false news headlines and undermine belief in true ones, compared with AI systems that simply misclassify a headline as true or false. Moreover, our results show that personal factors such as cognitive reflection and trust in AI do not necessarily protect individuals from the effects of deceptive AI-generated explanations. Instead, the logical validity of a deceptive explanation, that is, whether the explanation has a causal bearing on the truthfulness of the AI's classification, plays a critical role in countering its persuasiveness, with logically invalid explanations being deemed less credible. This underscores the importance of teaching logical reasoning and critical thinking skills to identify logically invalid arguments, fostering greater resilience against advanced AI-driven misinformation.
