Panda LLM: Training Data And Evaluation For Open-sourced Chinese Instruction-following Large Language Models

Jiao Fangkai, Ding Bosheng, Luo Tianze, Mo Zhanfeng. arXiv 2023

[Paper]
Tags: Fine Tuning, Training Techniques, Uncategorized

This project focuses on enhancing open-source large language models through instruction-tuning and on providing comprehensive evaluations of their performance. We explore how training-data factors such as quantity, quality, and linguistic distribution influence the performance of instruction-tuned models trained on publicly accessible, high-quality instruction datasets in both English and Chinese. Our goal is to supplement evaluation with quantitative analyses, providing valuable insights for the continued advancement of open-source chat models. Our model, data, and code are publicly available for others to use and build upon.
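
To make the instruction-tuning setup concrete, below is a minimal sketch of fine-tuning a causal LM on instruction-response pairs. The model name, prompt template, and toy data are illustrative assumptions, not the authors' actual code or dataset; real training would draw from the public English/Chinese instruction datasets the paper studies.

```python
# Minimal instruction-tuning sketch. Assumptions: bigscience/bloom-560m as a
# small open model that handles both English and Chinese, and a simple
# "### Instruction / ### Response" prompt template (not the paper's format).
import torch
from torch.utils.data import Dataset, DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "bigscience/bloom-560m"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.padding_side = "right"  # right-pad so label masking lines up with padding
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Hypothetical instruction-response pairs for illustration only.
pairs = [
    {"instruction": "Translate to Chinese: Hello, world.", "response": "你好，世界。"},
    {"instruction": "Summarize: Pandas eat bamboo.", "response": "Pandas mainly eat bamboo."},
]

class InstructionDataset(Dataset):
    def __init__(self, pairs, tokenizer, max_len=256):
        self.encodings = []
        for p in pairs:
            text = (f"### Instruction:\n{p['instruction']}\n"
                    f"### Response:\n{p['response']}{tokenizer.eos_token}")
            enc = tokenizer(text, truncation=True, max_length=max_len,
                            padding="max_length", return_tensors="pt")
            self.encodings.append(enc)

    def __len__(self):
        return len(self.encodings)

    def __getitem__(self, idx):
        input_ids = self.encodings[idx]["input_ids"].squeeze(0)
        attention_mask = self.encodings[idx]["attention_mask"].squeeze(0)
        # Standard causal-LM objective: predict the next token everywhere,
        # with padding positions masked out of the loss via -100.
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100
        return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}

loader = DataLoader(InstructionDataset(pairs, tokenizer), batch_size=2, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for batch in loader:  # a single pass, for illustration only
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"loss: {loss.item():.4f}")
```

Studying the data factors the abstract mentions then amounts to varying what goes into `pairs`: its size (quantity), its filtering (quality), and its English/Chinese mix (linguistic distribution), while holding the training recipe fixed.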
