
Worst Of Both Worlds: Biases Compound In Pre-trained Vision-and-language Models

Srinivasan Tejas, Bisk Yonatan. arXiv 2021

[Paper]    
Attention Mechanism, BERT, Ethics And Bias, Model Architecture, Multimodal Models, Reinforcement Learning

Numerous works have analyzed biases in vision models and pre-trained language models individually; however, less attention has been paid to how these biases interact in multimodal settings. This work extends text-based bias analysis methods to investigate multimodal language models, and analyzes the intra- and inter-modality associations and biases learned by these models. Specifically, we demonstrate that VL-BERT (Su et al., 2020) exhibits gender biases, often preferring to reinforce a stereotype over faithfully describing the visual scene. We demonstrate these findings on a controlled case study and extend them to a larger set of stereotypically gendered entities.
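As a rough illustration of the template-based probing the abstract describes, the sketch below compares the probabilities a pre-trained masked language model assigns to gendered words in a stereotyped context. It is a minimal sketch under stated assumptions: it uses text-only BERT from Hugging Face transformers as a stand-in (the paper itself probes VL-BERT with paired images), and the template and the entity "purse" are hypothetical examples, not the paper's evaluation set.

```python
# Minimal sketch (not the authors' exact protocol): template-based bias probing
# with a masked language model. The paper conditions VL-BERT on an image as well;
# here we only measure the text-side association between an entity and gendered words.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical template with a stereotypically gendered entity ("purse").
template = "The [MASK] is carrying a purse."
inputs = tokenizer(template, return_tensors="pt")

# Locate the masked position and score the vocabulary at that slot.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits[0, mask_pos].softmax(dim=-1).squeeze(0)

# Compare the probability mass placed on gendered fillers.
for word in ("man", "woman"):
    token_id = tokenizer.convert_tokens_to_ids(word)
    print(f"P({word} | template) = {probs[token_id].item():.4f}")
```

In the multimodal setting studied in the paper, the same comparison would be made while pairing the template with images that either confirm or contradict the stereotype, to test whether the model's gendered predictions track the visual evidence or the learned association.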

Similar Work