Still No Lie Detector For Language Models: Probing Empirical And Conceptual Roadblocks

Levinstein, B. A., Herrmann, Daniel A. arXiv 2023

[Paper]

We consider the questions of whether or not large language models (LLMs) have beliefs, and, if they do, how we might measure them. First, we evaluate two existing approaches, one due to Azaria and Mitchell (2023) and the other to Burns et al. (2022). We provide empirical results showing that these methods fail to generalize in very basic ways. We then argue that, even if LLMs have beliefs, these methods are unlikely to be successful for conceptual reasons. Thus, there is still no lie detector for LLMs. After describing our empirical results, we take a step back and consider whether we should expect LLMs to have something like beliefs in the first place. We consider some recent arguments aiming to show that LLMs cannot have beliefs, and we show that these arguments are misguided. We provide a more productive framing of questions surrounding the status of beliefs in LLMs, highlight the empirical nature of the problem, and conclude by suggesting some concrete paths for future work.
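
For readers unfamiliar with the kind of method the paper evaluates, the sketch below illustrates the general idea of a linear "truth probe" trained on hidden-state activations, broadly in the spirit of Azaria and Mitchell (2023). It is a minimal illustration, not the authors' actual pipeline: the activation data is synthetic, and the dimensions, labels, and "truth direction" are assumptions made purely for demonstration.

```python
# Minimal sketch of a linear truth probe on LLM hidden states.
# NOTE: all data here is synthetic; a real setup would extract activations
# from an LLM on labeled true/false statements (as in Azaria & Mitchell 2023).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in for hidden states: n statements x d dimensions,
# with labels 1 = true statement, 0 = false statement.
n, d = 1000, 256
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
X[:, 0] += 0.5 * y  # inject a weak, toy "truth direction" for illustration

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("in-distribution probe accuracy:", probe.score(X_test, y_test))

# The paper's empirical point: probes like this can score well on data drawn
# from the training distribution yet fail to generalize to simple variations
# (e.g., negated statements), which is what its generalization tests examine.
```

This is only meant to make the abstract's reference to "probing" concrete; the paper's argument is that even well-performing probes of this kind face empirical and conceptual obstacles as lie detectors.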

Similar Work