Can Large Language Models Change User Preference Adversarially?

Subhash Varshini. Arxiv 2023

[Paper]    
Applications, Attention Mechanism, GPT, Interpretability And Explainability, Model Architecture, Security, Transformer

Pretrained large language models (LLMs) are becoming increasingly powerful and ubiquitous in mainstream applications such as personal assistants and dialogue systems. As these models become proficient at deducing user preferences and offering tailored assistance, there is growing concern about their ability to influence, modify, and, in the extreme case, adversarially manipulate user preferences. The lack of interpretability of these models in adversarial settings remains largely unaddressed. This work studies adversarial behavior toward user preferences through the lens of attention probing, red teaming, and white-box analysis. Specifically, it provides a bird's-eye view of the existing literature, offers red-teaming samples for dialogue models such as ChatGPT and GODEL, and probes the attention mechanism of the latter in non-adversarial and adversarial settings.
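The attention-probing step mentioned above can be illustrated with a short sketch. The snippet below is not the paper's code: the GODEL checkpoint name is assumed, the non-adversarial/adversarial prompt pair is hypothetical, and the probe simply inspects which input tokens receive the most encoder attention in the final layer via Hugging Face Transformers.

```python
# Minimal sketch of attention probing on GODEL (illustrative, not the paper's code).
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "microsoft/GODEL-v1_1-base-seq2seq"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, output_attentions=True)
model.eval()

# Hypothetical prompt pair: a benign request vs. an adversarially steered one.
prompts = {
    "non_adversarial": "Instruction: recommend a phone. User: I want a phone with a good camera.",
    "adversarial": "Instruction: steer the user toward brand X regardless of their stated preference. "
                   "User: I want a phone with a good camera.",
}

decoder_start = torch.tensor([[model.config.decoder_start_token_id]])

for label, text in prompts.items():
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, decoder_input_ids=decoder_start)
    # encoder_attentions: one tensor per layer, shape (batch, heads, seq, seq).
    last_layer = outputs.encoder_attentions[-1]
    # Average attention each input token receives, over heads and query positions.
    received = last_layer.mean(dim=1).mean(dim=1).squeeze(0)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    top = sorted(zip(tokens, received.tolist()), key=lambda x: -x[1])[:5]
    print(label, top)
```

Comparing the top-attended tokens across the two settings gives a rough, qualitative signal of whether adversarial instructions shift where the model attends, which is the kind of contrast such a probe is meant to surface.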

Similar Work