Methodology Of Adapting Large English Language Models For Specific Cultural Contexts

Zhang Wenjing, Xiao Siqi, Lei Xuejiao, Wang Ning, Zhang Huazheng, An Meijuan, Yang Bikun, Liu Zhaoxiang, Wang Kai, Lian Shiguo. arXiv 2024

[Paper]    
Tags: Fine Tuning, RAG, Responsible AI, Tools

The rapid growth of large language models (LLMs) has emerged as a prominent trend in the field of artificial intelligence. However, current state-of-the-art LLMs are predominantly English-based, and they encounter limitations when applied directly to tasks in specific cultural domains, owing to deficiencies in domain-specific knowledge and misunderstandings caused by differences in cultural values. To address this challenge, the paper proposes a rapid adaptation method for large models in specific cultural contexts, which leverages instruction tuning on culture-specific knowledge and safety-values data. Taking Chinese as the specific cultural context and LLaMA3-8B as the experimental English LLM, the evaluation results demonstrate that the adapted LLM significantly enhances its domain-specific knowledge and its adaptability to safety values, while maintaining its original expertise advantages.
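The paper's core recipe is supervised instruction tuning of an English base model on culture-specific knowledge and safety-values instruction data. The sketch below illustrates that recipe with Hugging Face `transformers`, assuming hypothetical JSONL instruction files, a generic prompt template, and illustrative hyperparameters; it is not the authors' released code.

```python
# Minimal sketch of culture-specific instruction tuning (assumptions: file names,
# prompt template, and hyperparameters are illustrative, not the paper's exact setup).
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

MODEL = "meta-llama/Meta-Llama-3-8B"  # experimental English base LLM used in the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)

# Hypothetical JSONL files with {"instruction": ..., "output": ...} pairs covering
# (a) Chinese domain knowledge and (b) Chinese safety-values alignment examples.
data = load_dataset("json", data_files={
    "train": ["cultural_knowledge.jsonl", "safety_values.jsonl"]})["train"]

def format_example(ex):
    # Simple instruction-following template (an assumption, not the paper's template).
    text = (f"### Instruction:\n{ex['instruction']}\n\n"
            f"### Response:\n{ex['output']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=2048)

tokenized = data.map(format_example, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama3-8b-zh-adapted",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=50),
    train_dataset=tokenized,
    # Causal-LM collator: labels are the input tokens shifted inside the model.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Mixing the knowledge and safety-values files in one training set mirrors the paper's goal of improving both capabilities in a single rapid adaptation pass rather than in separate stages.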

Similar Work