Diff-explainer: Differentiable Convex Optimization For Explainable Multi-hop Inference

Mokanarangan Thayaparan, Marco Valentino, Deborah Ferreira, Julia Rozanova, André Freitas. arXiv 2021

[Paper]
Applications Efficiency And Optimization Fine Tuning Interpretability And Explainability Model Architecture Pretraining Methods Reinforcement Learning Tools Training Techniques Transformer

This paper presents Diff-Explainer, the first hybrid framework for explainable multi-hop inference that integrates explicit constraints with neural architectures through differentiable convex optimization. Specifically, Diff-Explainer allows for the fine-tuning of neural representations within a constrained optimization framework to answer and explain multi-hop questions in natural language. To demonstrate the efficacy of the hybrid framework, we combine existing ILP-based solvers for multi-hop Question Answering (QA) with Transformer-based representations. An extensive empirical evaluation on scientific and commonsense QA tasks demonstrates that the integration of explicit constraints in an end-to-end differentiable framework can significantly improve the performance of non-differentiable ILP solvers (8.91%-13.3%).
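
The core idea, embedding a convex relaxation of an ILP-style fact-selection problem as a differentiable layer so that gradients flow back into the neural scorer, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the `cvxpylayers` library, PyTorch, an illustrative quadratic smoothing term `gamma`, and placeholder scores standing in for Transformer-based relevance representations.

```python
# Hypothetical sketch of a differentiable convex relaxation for fact selection,
# in the spirit of Diff-Explainer (not the authors' code).
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n_facts, k = 10, 3   # candidate explanatory facts, max facts to select (assumed values)
gamma = 0.1          # smoothing term so the relaxed solution has useful gradients

# Convex relaxation of the ILP: 0 <= x <= 1 replaces the binary x in {0, 1}.
x = cp.Variable(n_facts)
s = cp.Parameter(n_facts)  # relevance scores produced by the neural encoder
objective = cp.Maximize(s @ x - gamma * cp.sum_squares(x))
constraints = [x >= 0, x <= 1, cp.sum(x) <= k]
problem = cp.Problem(objective, constraints)

# Wrap the solver as a differentiable layer: gradients flow from the
# selected-fact vector back into the scores s, and hence into the encoder.
layer = CvxpyLayer(problem, parameters=[s], variables=[x])

# Toy end-to-end pass with random scores standing in for Transformer outputs.
scores = torch.randn(n_facts, requires_grad=True)
selection, = layer(scores)      # soft selection in [0, 1]^n_facts
loss = -selection.sum()         # placeholder task loss
loss.backward()                 # gradients reach `scores` end-to-end
print(selection.detach(), scores.grad)
```

In the actual framework, the placeholder loss would be replaced by an answer/explanation objective, so fine-tuning the encoder and solving the constrained selection problem happen jointly rather than in separate pipeline stages.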

Similar Work