Taking A HINT: Leveraging Explanations To Make Vision And Language Models More Grounded

Ramprasaath R. Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, Devi Parikh. The IEEE International Conference on Computer Vision (ICCV), 2019

[Paper]
Tags: Applications, Attention Mechanism, Interpretability And Explainability, Model Architecture, RAG, Training Techniques

Many vision and language models suffer from poor visual grounding: they often fall back on easy-to-learn language priors rather than basing their decisions on visual concepts in the image. In this work, we propose a generic approach called Human Importance-aware Network Tuning (HINT) that effectively leverages human demonstrations to improve visual grounding. HINT encourages deep networks to be sensitive to the same input regions as humans. Our approach optimizes the alignment between human attention maps and gradient-based network importances, ensuring that models learn not just to look at, but to rely on, the visual concepts that humans found relevant for a task when making predictions. We apply HINT to Visual Question Answering and Image Captioning, outperforming top approaches on splits that penalize over-reliance on language priors (VQA-CP and robust captioning) while using human attention demonstrations for just 6% of the training data.
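To make the alignment idea concrete, below is a minimal PyTorch sketch of one way such an objective can be written: score each image region by the gradient of the ground-truth answer's score with respect to that region's features, then apply a pairwise ranking loss that penalizes the network whenever its importance ordering disagrees with the human one. The function name `hint_ranking_loss`, the tensor shapes, and the stand-in model head are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def hint_ranking_loss(answer_score, region_feats, human_scores):
    """Ranking loss aligning gradient-based region importances with human
    importance scores (a simplified sketch of the HINT-style objective).

    answer_score:  scalar score of the ground-truth answer, computed from
                   region_feats so a gradient path exists.
    region_feats:  (R, D) visual features, one row per image region.
    human_scores:  (R,) human importance per region; higher = more important.
    """
    # Gradient-based importance: sensitivity of the answer score to each
    # region's features (a Grad-CAM-style proxy, summed over feature dims).
    grads = torch.autograd.grad(answer_score, region_feats, create_graph=True)[0]
    net_importance = grads.sum(dim=1)  # (R,) one importance value per region

    # Pairwise hinge: wherever humans scored region i above region j,
    # penalize the network if its importance for i does not exceed j's.
    diff = net_importance.unsqueeze(1) - net_importance.unsqueeze(0)       # (R, R)
    human_prefers = (human_scores.unsqueeze(1) > human_scores.unsqueeze(0)).float()
    n_pairs = human_prefers.sum().clamp(min=1.0)
    return (F.relu(-diff) * human_prefers).sum() / n_pairs

# Toy usage with a hypothetical quadratic head standing in for a real model:
region_feats = torch.randn(36, 2048, requires_grad=True)   # 36 region proposals
answer_score = (region_feats.mean(dim=0) ** 2).sum()       # stand-in answer score
human_scores = torch.rand(36)                               # human attention per region
loss = hint_ranking_loss(answer_score, region_feats, human_scores)
loss.backward()  # create_graph=True above lets this term train the model
```

In the full method, a term like this would be added to the usual task loss (e.g. cross-entropy over answers) with a weighting coefficient, and applied only to the subset of training examples that carry human attention annotations.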

Similar Work