SeaLLMs 3: Open Foundation and Chat Multilingual Large Language Models for Southeast Asian Languages

Zhang Wenxuan, Chan Hou Pong, Zhao Yiran, Aljunied Mahani, Wang Jianyu, Liu Chaoqun, Deng Yue, Hu Zhiqiang, Xu Weiwen, Chia Yew Ken, Li Xin, Bing Lidong. arXiv 2024


Large Language Models (LLMs) have shown remarkable abilities across various tasks, yet their development has predominantly centered on high-resource languages like English and Chinese, leaving low-resource languages underserved. To address this disparity, we present SeaLLMs 3, the latest iteration of the SeaLLMs model family, tailored for Southeast Asian languages. This region, characterized by its rich linguistic diversity, has lacked adequate language technology support. SeaLLMs 3 aims to bridge this gap by covering a comprehensive range of languages spoken in this region, including English, Chinese, Indonesian, Vietnamese, Thai, Tagalog, Malay, Burmese, Khmer, Lao, Tamil, and Javanese. Leveraging efficient language enhancement techniques and a specially constructed instruction tuning dataset, SeaLLMs 3 significantly reduces training costs while maintaining high performance and versatility. Our model excels in tasks such as world knowledge, mathematical reasoning, translation, and instruction following, achieving state-of-the-art performance among similarly sized models. Additionally, we prioritized safety and reliability by addressing both general and culture-specific considerations and incorporated mechanisms to reduce hallucinations. This work underscores the importance of inclusive AI, showing that advanced LLM capabilities can benefit underserved linguistic and cultural communities.