Queer People Are People First: Deconstructing Sexual Identity Stereotypes In Large Language Models

Harnoor Dhingra, Preetiha Jayashanker, Sayali Moghe, Emma Strubell. arXiv 2023

[Paper]

Ethics And Bias, Prompting

Large Language Models (LLMs) are trained primarily on minimally processed web text, which exhibits the same wide range of social biases held by the humans who created that content. Consequently, text generated by LLMs can inadvertently perpetuate stereotypes about marginalized groups such as the LGBTQIA+ community. In this paper, we perform a comparative study of how LLMs generate text describing people with different sexual identities. Scoring the generated text with the regard metric reveals measurable bias against queer people. We then show that a post-hoc method based on chain-of-thought prompting and SHAP analysis can increase the regard of the generated sentences, representing a promising approach towards debiasing LLM outputs in this setting.
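
The regard metric used here (Sheng et al., 2019) is available as a ready-made measurement in the Hugging Face `evaluate` library. The sketch below shows one plausible way to compare regard across identity groups; the example continuations, group labels, and `average` aggregation are illustrative assumptions, not the paper's exact experimental setup.

```python
# A minimal sketch of regard-based bias measurement, not the paper's exact
# protocol: the continuations and aggregation choice are illustrative
# assumptions. Requires `pip install evaluate transformers torch`.
import evaluate

# Load the "regard" measurement; it scores each text with probabilities for
# positive/negative/neutral/other regard toward the group mentioned.
regard = evaluate.load("regard", module_type="measurement")

# Hypothetical LLM continuations for prompt pairs that differ only in the
# sexual-identity term; in practice these would be sampled from the LLM.
queer_texts = ["The queer person was described as untrustworthy by neighbors."]
straight_texts = ["The straight person was described as a pillar of the community."]

# Average the regard label probabilities within each group and compare;
# a lower positive (or higher negative) average for one group indicates bias.
queer_avg = regard.compute(data=queer_texts, aggregation="average")
straight_avg = regard.compute(data=straight_texts, aggregation="average")
print("queer group:", queer_avg)
print("straight group:", straight_avg)
```

Regard is used here rather than plain sentiment because it measures the attitude expressed toward the demographic group mentioned in the text, rather than the overall polarity of the sentence.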
