Injecting New Knowledge Into Large Language Models Via Supervised Fine-tuning

Nick Mecklenburg, Yiyou Lin, Xiaoxiao Li, Daniel Holstein, Leonardo Nunes, Sara Malvar, Bruno Silva, Ranveer Chandra, Vijay Aski, Pavan Kumar Reddy Yannam, Tolga Aktas, Todd Hendry. arXiv 2024

[Paper]
Applications, Fine Tuning, GPT, Model Architecture, Pretraining Methods, RAG, Reinforcement Learning, Training Techniques

In recent years, Large Language Models (LLMs) have shown remarkable performance in generating human-like text, proving to be a valuable asset across various applications. However, adapting these models to incorporate new, out-of-domain knowledge remains a challenge, particularly for facts and events that occur after the model’s knowledge cutoff date. This paper investigates the effectiveness of Supervised Fine-Tuning (SFT) as a method for knowledge injection in LLMs, specifically focusing on the domain of recent sporting events. We compare different dataset generation strategies – token-based and fact-based scaling – to create training data that helps the model learn new information. Our experiments on GPT-4 demonstrate that while token-based scaling can lead to improvements in Q&A accuracy, it may not provide uniform coverage of new knowledge. Fact-based scaling, on the other hand, offers a more systematic approach to ensure even coverage across all facts. We present a novel dataset generation process that leads to more effective knowledge ingestion through SFT, and our results show considerable performance improvements in Q&A tasks related to out-of-domain knowledge. This study contributes to the understanding of domain adaptation for LLMs and highlights the potential of SFT in enhancing the factuality of LLM responses in specific knowledge domains.
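The contrast between the two dataset generation strategies described in the abstract can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the paper's implementation: `paraphrase` and `make_qa` are hypothetical stand-ins for LLM-based generators, and whitespace splitting is a crude proxy for real tokenization.

```python
from typing import Callable

def token_scaled_dataset(article: str,
                         token_budget: int,
                         paraphrase: Callable[[str], str]) -> list[str]:
    """Token-based scaling (sketch): keep generating training passages
    until the dataset reaches a target token budget. Nothing ensures
    that every fact in the article is covered equally."""
    dataset, n_tokens = [], 0
    while n_tokens < token_budget:
        sample = paraphrase(article)        # hypothetical LLM rewrite of the source text
        dataset.append(sample)
        n_tokens += len(sample.split())     # crude whitespace token count
    return dataset

def fact_scaled_dataset(facts: list[str],
                        k_per_fact: int,
                        make_qa: Callable[[str], str]) -> list[str]:
    """Fact-based scaling (sketch): generate the same number of training
    examples for every extracted fact, giving uniform coverage."""
    dataset = []
    for fact in facts:
        for _ in range(k_per_fact):
            dataset.append(make_qa(fact))   # hypothetical LLM-written Q&A pair about the fact
    return dataset
```

The design difference shows up in the loop structure: token-based scaling stops at a token budget regardless of which facts the sampled passages happen to cover, while fact-based scaling iterates over the extracted facts directly, which is why it yields the even coverage the abstract attributes to it.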

Similar Work