Explain Yourself! Leveraging Language Models For Commonsense Reasoning

Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, Richard Socher. In Proceedings of the Association for Computational Linguistics (ACL) 2019, Florence, Italy – 71 citations

[Paper]    
Tags: Training Techniques, RAG, Tools, Interpretability and Explainability, Reinforcement Learning

Deep learning models perform poorly on tasks that require commonsense reasoning, which often necessitates some form of world-knowledge or reasoning over information not immediately present in the input. We collect human explanations for commonsense reasoning in the form of natural language sequences and highlighted annotations in a new dataset called Common Sense Explanations (CoS-E). We use CoS-E to train language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation (CAGE) framework. CAGE improves the state-of-the-art by 10% on the challenging CommonsenseQA task. We further study commonsense reasoning in DNNs using both human and auto-generated explanations, including transfer to out-of-domain tasks. Empirical results indicate that we can effectively leverage language models for commonsense reasoning.
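
The CAGE framework described above has two stages: a language model fine-tuned on CoS-E generates an explanation conditioned on the question and answer choices, and a classifier then answers with that explanation appended to its input. The snippet below is a minimal sketch of the inference-time flow, not the authors' released code: the GPT-2 checkpoint stands in for the GPT language model used in the paper, and the prompt format and `score_choice` helper are illustrative assumptions.

```python
# Minimal sketch of CAGE-style inference.
# Assumptions: checkpoint names, prompt format, and the scoring stub are
# illustrative; the paper fine-tunes GPT for generation and BERT for scoring.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stage 1: a causal LM generates a commonsense explanation conditioned on
# the question and its answer choices.
gen_tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in for fine-tuned GPT
gen_lm = AutoModelForCausalLM.from_pretrained("gpt2")

question = "Where would you find a seafood restaurant?"
choices = ["city", "ocean", "grocery store"]

prompt = (
    f"{question} The choices are {', '.join(choices)}. "
    "My commonsense tells me that"
)
ids = gen_tok(prompt, return_tensors="pt").input_ids
out = gen_lm.generate(
    ids,
    max_new_tokens=20,
    do_sample=False,
    pad_token_id=gen_tok.eos_token_id,
)
explanation = gen_tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)

# Stage 2: a classifier scores each choice with the generated explanation
# appended to its input. Shown here as a dummy stub; the paper fine-tunes
# BERT on [question, explanation, choice] inputs.
def score_choice(question: str, choice: str, explanation: str) -> float:
    """Hypothetical scorer standing in for a fine-tuned BERT classifier."""
    text = f"{question} [SEP] {explanation} [SEP] {choice}"
    return float(len(set(text.split()) & set(choice.split())))  # dummy score

answer = max(choices, key=lambda c: score_choice(question, c, explanation))
print(explanation, "->", answer)
```

In the paper, both stages are trained: the language model is fine-tuned on CoS-E explanations, and the classifier is fine-tuned on CommonsenseQA with the auto-generated explanations appended, which is where the reported 10% gain over the prior state of the art comes from.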

Similar Work