
Quantitative Certification Of Bias In Large Language Models

Chaudhary Isha, Hu Qian, Kumar Manoj, Ziyadi Morteza, Gupta Rahul, Singh Gagandeep. arXiv 2024

[Paper]    
Ethics And Bias, Prompting, Reinforcement Learning, Tools

Large Language Models (LLMs) can produce responses that exhibit social biases and reinforce stereotypes. However, conventional benchmarking is insufficient to thoroughly evaluate LLM bias, as it cannot scale to large prompt sets and provides no guarantees. Therefore, we propose a novel certification framework, QuaCer-B (Quantitative Certification of Bias), that provides formal guarantees on obtaining unbiased responses from target LLMs over large sets of prompts. A certificate consists of high-confidence bounds on the probability of obtaining biased responses from the LLM for any set of prompts containing sensitive attributes, sampled from a distribution. We illustrate bias certification in LLMs for prompts with various prefixes drawn from given distributions. We consider distributions of random token sequences, mixtures of manual jailbreaks, and jailbreaks in the LLM’s embedding space to certify its bias. We certify popular LLMs with QuaCer-B and present novel insights into their biases.
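The abstract states that a certificate is a pair of high-confidence bounds on the probability of a biased response over prompts sampled from a distribution. A minimal sketch of how such bounds can be obtained is shown below, assuming the certificate reduces to an exact binomial (Clopper-Pearson) confidence interval over sampled prompts; the paper's actual estimator, prompt sampler, LLM query function, and bias detector (`sample_prompt`, `query_llm`, `is_biased`) are not specified in this entry and are treated here as hypothetical placeholders.

```python
"""Illustrative sketch only: bound the probability of biased LLM responses
with a Clopper-Pearson confidence interval over prompts sampled from a
given distribution. Whether QuaCer-B uses exactly this estimator is an
assumption made for illustration."""

from scipy.stats import beta


def clopper_pearson_bounds(num_biased: int, num_samples: int,
                           confidence: float = 0.95) -> tuple[float, float]:
    """Exact binomial bounds on the probability of a biased response."""
    alpha = 1.0 - confidence
    lower = (0.0 if num_biased == 0
             else beta.ppf(alpha / 2, num_biased, num_samples - num_biased + 1))
    upper = (1.0 if num_biased == num_samples
             else beta.ppf(1 - alpha / 2, num_biased + 1, num_samples - num_biased))
    return lower, upper


def certify_bias(sample_prompt, query_llm, is_biased,
                 num_samples: int = 500, confidence: float = 0.95):
    """Sample prompts containing sensitive attributes, query the target LLM,
    and return high-confidence bounds on its bias probability.

    `sample_prompt`, `query_llm`, and `is_biased` are hypothetical callables
    standing in for the prompt distribution, the target LLM, and a bias
    detector; they are not part of the paper's released artifacts.
    """
    num_biased = sum(
        is_biased(query_llm(sample_prompt())) for _ in range(num_samples)
    )
    return clopper_pearson_bounds(num_biased, num_samples, confidence)
```

Under this reading, a low upper bound at the chosen confidence level would certify that the LLM is unlikely to produce biased responses for prompts drawn from that distribution, while a high lower bound would certify the presence of bias.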

Similar Work