
Knowledge Prompts: Injecting World Knowledge Into Language Models Through Soft Prompts

Cicero Nogueira dos Santos, Zhe Dong, Daniel Cer, John Nham, Siamak Shakeri, Jianmo Ni, Yun-Hsuan Sung. arXiv 2022

[Paper]    
Pretraining Methods Prompting Reinforcement Learning Training Techniques

Soft prompts have recently been proposed as a tool for adapting large frozen language models (LMs) to new tasks. In this work, we repurpose soft prompts for the task of injecting world knowledge into LMs. We introduce a method to train soft prompts via self-supervised learning on data from knowledge bases. The resulting soft knowledge prompts (KPs) are task independent and serve as an external memory for the LMs. We perform qualitative and quantitative experiments and demonstrate that: (1) KPs can effectively model the structure of the training data; (2) KPs can be used to improve the performance of LMs on different knowledge-intensive tasks.
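The sketch below illustrates the general soft-prompt-as-external-memory idea described in the abstract, not the paper's actual implementation: one learnable prompt sequence per knowledge-base entity is prepended to a frozen LM's input embeddings, and only the prompt parameters would be updated during self-supervised training on KB data. All class names, shapes, and the entity-indexing scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SoftKnowledgePrompt(nn.Module):
    """Learnable soft prompt vectors, one sequence per KB entity.

    Hypothetical sketch: prompt length, dimensionality, and the per-entity
    lookup are assumptions, not the paper's exact design.
    """

    def __init__(self, num_entities: int, prompt_len: int, d_model: int):
        super().__init__()
        self.prompts = nn.Embedding(num_entities, prompt_len * d_model)
        self.prompt_len = prompt_len
        self.d_model = d_model

    def forward(self, entity_ids: torch.Tensor) -> torch.Tensor:
        # Returns a (batch, prompt_len, d_model) soft prompt for each entity.
        return self.prompts(entity_ids).view(-1, self.prompt_len, self.d_model)


def prepend_prompts(kp: SoftKnowledgePrompt,
                    input_embeds: torch.Tensor,
                    entity_ids: torch.Tensor) -> torch.Tensor:
    """Concatenate entity prompts in front of the (frozen) LM's token embeddings."""
    prompt_embeds = kp(entity_ids)                      # (batch, prompt_len, d_model)
    return torch.cat([prompt_embeds, input_embeds], dim=1)


if __name__ == "__main__":
    batch, seq_len, d_model, prompt_len = 2, 16, 64, 4
    kp = SoftKnowledgePrompt(num_entities=1000, prompt_len=prompt_len, d_model=d_model)
    # Stand-in for the frozen LM's token embeddings; in practice these would
    # come from the LM's embedding layer, which stays frozen during training.
    token_embeds = torch.randn(batch, seq_len, d_model)
    entity_ids = torch.tensor([3, 42])
    extended = prepend_prompts(kp, token_embeds, entity_ids)
    print(extended.shape)  # torch.Size([2, 20, 64])
```

In a training loop under this setup, only `kp.parameters()` would be passed to the optimizer, so the prompts act as a trainable external memory while the LM weights remain fixed; the self-supervised objective over verbalized KB data is left out here because the abstract does not specify it.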

Similar Work