AbsInstruct: Eliciting Abstraction Ability from LLMs through Explanation Tuning with Plausibility Estimation

Wang Zhaowei, Fan Wei, Zong Qing, Zhang Hongming, Choi Sehyun, Fang Tianqing, Liu Xin, Song Yangqiu, Wong Ginny Y., See Simon. arXiv 2024

[Paper]

Interpretability And Explainability, Reinforcement Learning, Tools

Abstraction ability is crucial to human intelligence and can also benefit various tasks in NLP research. Existing work shows that LLMs are deficient in abstraction ability, and how to improve it remains unexplored. In this work, we design the framework AbsInstruct to enhance LLMs' abstraction ability through instruction tuning. The framework builds instructions with in-depth explanations to assist LLMs in capturing the underlying rationale of abstraction. Meanwhile, we introduce a plausibility estimator to select instructions that are more consistent with the abstraction knowledge of the LLMs being aligned. Our framework then combines abstraction instructions with general-purpose ones to build a hybrid dataset. Extensive experiments and analyses demonstrate that our framework considerably enhances LLMs' abstraction ability with strong generalization performance while preserving their general instruction-following abilities.
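The data-construction pipeline the abstract describes (score candidate abstraction instructions with a plausibility estimator, keep the high-scoring ones, then mix them with general-purpose instructions) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `score_fn` callback, the `threshold` value, and the dict-based example format are all assumptions made for the sketch.

```python
import random
from typing import Callable, Dict, List

Example = Dict[str, str]

def select_by_plausibility(
    candidates: List[Example],
    score_fn: Callable[[Example], float],
    threshold: float = 0.5,
) -> List[Example]:
    """Keep abstraction instructions whose estimated plausibility
    (per some external estimator, passed in as score_fn) meets the threshold."""
    return [ex for ex in candidates if score_fn(ex) >= threshold]

def build_hybrid_dataset(
    abstraction_candidates: List[Example],
    general_examples: List[Example],
    score_fn: Callable[[Example], float],
    threshold: float = 0.5,
    seed: int = 0,
) -> List[Example]:
    """Filter abstraction instructions, then shuffle them together with
    general-purpose instructions into one hybrid training set."""
    selected = select_by_plausibility(abstraction_candidates, score_fn, threshold)
    mixed = selected + list(general_examples)
    random.Random(seed).shuffle(mixed)  # fixed seed for a reproducible mix
    return mixed
```

For example, with a toy estimator that reads a precomputed `"plausibility"` field, a candidate scored 0.2 would be dropped while one scored 0.9 survives into the hybrid set; in the paper the estimator is learned, and filtering aims to keep instructions consistent with the base model's existing abstraction knowledge.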

Similar Work