Rainier: Reinforced Knowledge Introspector For Commonsense Question Answering

Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, Yejin Choi. arXiv 2022

[Paper]    
Applications, GPT, Model Architecture, Reinforcement Learning, Training Techniques

Knowledge underpins reasoning. Recent research demonstrates that when relevant knowledge is provided as additional context for commonsense question answering (QA), it can substantially enhance performance even on top of state-of-the-art models. The fundamental challenge is where and how to find knowledge that is high quality and on point with respect to the question: knowledge retrieved from knowledge bases is incomplete, and knowledge generated by language models is inconsistent. We present Rainier, or Reinforced Knowledge Introspector, which learns to generate contextually relevant knowledge in response to given questions. Our approach starts by imitating knowledge generated by GPT-3, then learns to generate its own knowledge via reinforcement learning, where rewards are shaped based on the increased performance of the resulting question answering. Rainier demonstrates substantial and consistent performance gains when tested over 9 different commonsense benchmarks, including 5 datasets that are seen during model training and 4 datasets that are kept unseen. Our work is the first to report that knowledge generated by models orders of magnitude smaller than GPT-3, even without direct supervision on the knowledge itself, can exceed the quality of commonsense knowledge elicited from GPT-3.
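The reward-shaping idea can be sketched in a few lines: the reward for a generated knowledge statement is the gain it gives the QA model's confidence in the gold answer when the knowledge is prepended to the question. The sketch below is a minimal illustration under that assumption; the names `qa_score` and `knowledge_reward`, and the simple score difference, are placeholders for exposition and not the paper's exact reward function.

```python
from typing import Callable, Sequence


def knowledge_reward(
    qa_score: Callable[[str, Sequence[str]], Sequence[float]],
    question: str,
    choices: Sequence[str],
    gold_index: int,
    knowledge: str,
) -> float:
    """Reward for one generated knowledge statement (illustrative, not the
    paper's exact formulation): the gain in the QA model's score for the
    gold answer when the knowledge is prepended to the question."""
    base = qa_score(question, choices)[gold_index]
    boosted = qa_score(f"{knowledge} {question}", choices)[gold_index]
    return boosted - base
```

A positive reward means the generated knowledge made the QA model more confident in the correct answer, which is the signal used to update the knowledge generator during reinforcement learning.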

Similar Work