
Leveraging Web-crawled Data For High-quality Fine-tuning

Zhou Jing, Jiang Chenglin, Shen Wei, Zhou Xiao, He Xiaonan. arXiv 2024

[Paper]    
Fine Tuning, GPT, Model Architecture, Pretraining Methods, RAG, Training Techniques

Most large language models are fine-tuned using either expensive human-annotated data or GPT-4-generated data, which cannot guarantee performance in certain domains. We argue that although web-crawled data often contains formatting errors that cause semantic inaccuracies, it can still serve as a valuable source for high-quality supervised fine-tuning in specific domains without relying on advanced models like GPT-4. To this end, we automatically create a paired training dataset by aligning web-crawled data with a smaller set of high-quality data. By training a language model on this dataset, we can convert web data with irregular formats into high-quality data. Our experiments show that training with the model-transformed data yields better results, surpassing training with only high-quality data by an average of 9.4% on Chinese math problems. Additionally, our 7B model outperforms several open-source models larger than 32B and surpasses well-known closed-source models such as GPT-3.5, highlighting the efficacy of our approach.
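The pipeline described in the abstract has two stages: align each noisy web-crawled example with its closest counterpart in a small high-quality set, then use the resulting (noisy, clean) pairs to fine-tune a model that rewrites irregular web data into clean training data. The sketch below illustrates only the pairing stage, under assumptions not taken from the paper: records are dicts with hypothetical `question` and `solution` fields, and alignment uses a simple character-level similarity, which may differ from the authors' actual alignment method.

```python
# Minimal sketch of building (noisy, clean) pairs from web-crawled and
# high-quality data. Field names and the similarity measure are assumptions
# for illustration, not the paper's implementation.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Cheap character-level similarity; the paper's alignment may differ."""
    return SequenceMatcher(None, a, b).ratio()


def build_paired_dataset(web_items, clean_items, threshold=0.6):
    """Align each web-crawled item with its closest high-quality counterpart."""
    pairs = []
    for web in web_items:
        best = max(clean_items, key=lambda c: similarity(web["question"], c["question"]))
        if similarity(web["question"], best["question"]) >= threshold:
            # (noisy input, clean target) pair for supervised fine-tuning
            # of a model that reformats irregular web data.
            pairs.append({
                "input": web["question"] + "\n" + web["solution"],
                "target": best["question"] + "\n" + best["solution"],
            })
    return pairs


if __name__ == "__main__":
    web_items = [{"question": "what is 2 +2??", "solution": "answer:4"}]
    clean_items = [{"question": "What is 2 + 2?", "solution": "2 + 2 = 4."}]
    print(build_paired_dataset(web_items, clean_items))
```

The resulting pairs would then serve as supervised examples for training the rewriting model, whose outputs replace the raw web data in the final fine-tuning set.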

Similar Work