[Paper]
Language models (LMs) such as GPT-4 are central to many AI applications, but their opaque decision-making processes reduce user trust, especially in safety-critical settings. We introduce LMExplainer, a novel knowledge-grounded explainer that clarifies the reasoning process of LMs through intuitive, human-understandable explanations. By coupling a graph attention network (GAT) with a large-scale knowledge graph (KG), LMExplainer not only narrows the reasoning space to the most relevant knowledge but also grounds its reasoning in structured, verifiable knowledge, reducing hallucinations and enhancing interpretability. LMExplainer generates human-understandable explanations that improve transparency and streamline the decision-making process. In addition, by incorporating debugging into the explanation, it offers expert suggestions that improve LMs from a development perspective. LMExplainer thus represents a step toward making LMs more accessible and understandable to users. We evaluate LMExplainer on the CommonsenseQA and OpenBookQA benchmarks, demonstrating that it outperforms most existing methods. Comparing the explanations generated by LMExplainer with those of other models, we show that our approach provides more comprehensive and clearer accounts of the reasoning process. LMExplainer offers a deeper understanding of the inner workings of LMs, advancing toward more reliable, transparent, and equitable AI.
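To make the GAT component more concrete, the minimal sketch below shows how single-head graph attention coefficients over a KG node's neighbors can be computed; softmax-normalized attention weights of this kind are the sort of signal an attention-based explainer can surface as the "most relevant knowledge." The function name, shapes, and plain-NumPy implementation are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def gat_attention_scores(node_feats, edges, W, a):
    """Compute single-head GAT attention coefficients (Velickovic et al., 2018).

    node_feats: (N, F) input features for KG entity nodes.
    edges:      list of (src, dst) index pairs, including self-loops.
    W:          (F, F') learnable projection matrix.
    a:          (2*F',) learnable attention vector.
    Returns a dict mapping dst -> {src: alpha}, the normalized attention
    weights over each destination node's neighborhood.
    """
    h = node_feats @ W                              # project node features
    raw = {}
    for src, dst in edges:
        # LeakyReLU(a^T [h_dst || h_src]) as in the original GAT formulation
        e = np.concatenate([h[dst], h[src]]) @ a
        raw.setdefault(dst, {})[src] = e if e > 0 else 0.2 * e
    # softmax-normalize scores per destination node's neighborhood
    alphas = {}
    for dst, nbrs in raw.items():
        scores = np.array(list(nbrs.values()))
        exp = np.exp(scores - scores.max())
        alphas[dst] = dict(zip(nbrs.keys(), exp / exp.sum()))
    return alphas

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, F, Fp = 4, 8, 8
    feats = rng.normal(size=(N, F))
    W = rng.normal(size=(F, Fp))
    a = rng.normal(size=(2 * Fp,))
    # toy KG neighborhood around node 0 (self-loop plus two neighbors)
    edges = [(0, 0), (1, 0), (2, 0)]
    print(gat_attention_scores(feats, edges, W, a)[0])
```

Because the weights are normalized per neighborhood, they provide a ranking of neighboring KG entities by contribution, which is one plausible way an explainer could select the knowledge it cites when explaining a prediction.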