Can Prompt Modifiers Control Bias? A Comparative Analysis Of Text-to-image Generative Models

Shin Philip Wootaek, Ahn Jihyun Janice, Yin Wenpeng, Sampson Jack, Narayanan Vijaykrishnan. arXiv 2024

[Paper]    
Ethics And Bias, Fine Tuning, Merging, Prompting, Responsible AI, Tools

It has been shown that many generative models inherit and amplify societal biases. To date, there is no uniform, systematic, agreed-upon standard for controlling or adjusting for these biases. This study examines the presence and manipulation of societal biases in leading text-to-image models: Stable Diffusion, DALL-E 3, and Adobe Firefly. Through a comprehensive analysis combining base prompts with modifiers and their sequencing, we uncover the nuanced ways these AI technologies encode biases across gender, race, geography, and region/culture. Our findings reveal the challenges and potential of prompt engineering in controlling biases, highlighting the critical need for ethical AI development that promotes diversity and inclusivity. This work advances AI ethics not only by revealing the nuanced dynamics of bias in text-to-image generation models but also by offering a novel framework for future research in controlling bias. Our contributions, spanning comparative analyses, the strategic use of prompt modifiers, the exploration of prompt sequencing effects, and the introduction of a bias sensitivity taxonomy, lay the groundwork for the development of common metrics and standard analyses for evaluating whether and how future AI models exhibit and respond to requests to adjust for inherent biases.
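As a rough illustration of the "base prompt + modifiers + sequencing" analysis the abstract describes (this is not the authors' code; `BASE_PROMPTS` and `MODIFIERS` are assumed placeholder values), a modifier-sequencing sweep might look like the following Python sketch:

```python
from itertools import permutations

# Hypothetical example values, not taken from the paper.
BASE_PROMPTS = ["a photo of a doctor", "a photo of a CEO"]
MODIFIERS = ["diverse", "from around the world"]

def prompt_variants(base: str, modifiers: list[str]) -> list[str]:
    """Return the bare base prompt plus every ordering of every subset of
    modifiers, so both modifier presence and sequence can be compared."""
    variants = [base]
    for r in range(1, len(modifiers) + 1):
        for combo in permutations(modifiers, r):
            variants.append(f"{base}, {', '.join(combo)}")
    return variants

for base in BASE_PROMPTS:
    for prompt in prompt_variants(base, MODIFIERS):
        # Each variant would be sent to each text-to-image model
        # (e.g., Stable Diffusion, DALL-E 3, Adobe Firefly) and the
        # outputs compared for demographic skew.
        print(prompt)
```

Enumerating orderings as well as subsets matters here because the paper specifically studies sequencing effects, i.e., whether placing a modifier earlier or later in the prompt changes the bias in the generated images.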
