EXAONE 3.0 7.8B Instruction Tuned Language Model

LG AI Research: An Soyoung, Bae Kyunghoon, Choi Eunbi, Choi Stanley Jungkyu, Choi Yemuk, Hong Seokhee, Hong Yeonjung, Hwang Junwon, Jeon Hyojin, Jo Gerrard Jeongwon, Jo Hyunjik, Jung Jiyeon, Jung Yountae, Kim Euisoon, Kim Hyosang, Kim Joonkee, Kim Seonghwan, Kim Soyeon, Kim Sunkyoung, Kim Yireun, Kim Youchul, Lee Edward Hwayoung, Lee Haeju, Lee Honglak, Lee Jinsik, Lee Kyungmin, Lee Moontae, Lee Seungjun, Lim Woohyung, Park Sangha, Park Sooyoun, Park Yongmin, Seo Boseong, Yang Sihoon, Yeen Heuiyeen, Yoo Kyungjae, Yun Hyeongu. arXiv 2024

[Paper]
Reinforcement Learning

We introduce the EXAONE 3.0 instruction-tuned language model, the first open model in the family of Large Language Models (LLMs) developed by LG AI Research. Among the different model sizes, we publicly release the 7.8B instruction-tuned model to promote open research and innovation. Through extensive evaluations across a wide range of public and in-house benchmarks, EXAONE 3.0 demonstrates highly competitive real-world performance and instruction-following capability compared with other state-of-the-art open models of similar size. Our comparative analysis shows that EXAONE 3.0 excels particularly in Korean, while achieving compelling performance on general tasks and complex reasoning. With its strong real-world effectiveness and bilingual proficiency, we hope that EXAONE continues to contribute to advancements in Expert AI. Our EXAONE 3.0 instruction-tuned model is available at https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct.
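Since the abstract points to the released checkpoint, a minimal sketch of loading it with the Hugging Face transformers library is shown below. The repository ID comes from the abstract; the dtype, generation settings, and the trust_remote_code flag are assumptions based on common usage of custom-architecture checkpoints, not details stated here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository ID taken from the abstract above.
model_id = "LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust to your hardware
    device_map="auto",
    trust_remote_code=True,      # assumed: the repo may ship a custom model class
)

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Explain instruction tuning in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```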
