An Empirical Study On Cross-lingual Vocabulary Adaptation For Efficient Language Model Inference

Atsuki Yamaguchi, Aline Villavicencio, Nikolaos Aletras. arXiv 2024

[Paper]    
Applications, Efficiency And Optimization, Training Techniques

The development of state-of-the-art generative large language models (LLMs) disproportionately relies on English-centric tokenizers, vocabularies, and pre-training data. Although some LLMs have multilingual capabilities, recent studies have shown that their inference efficiency deteriorates when generating text in languages other than English, resulting in increased inference time and costs. Cross-lingual vocabulary adaptation (CVA) methods have been proposed to adapt models to a target language with the aim of improving downstream performance. However, the effectiveness of these methods in improving the inference efficiency of generative LLMs has yet to be explored. In this paper, we perform an empirical study of five CVA methods on four generative LLMs (including monolingual and multilingual models) across four typologically diverse languages and four natural language understanding tasks. We find that CVA substantially contributes to LLM inference speedups of up to 271.5%. We also show that adapting LLMs that have been pre-trained on more balanced multilingual data results in downstream performance comparable to that of the original models.
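To make the efficiency argument concrete, here is a minimal sketch (not from the paper; the `gpt2` tokenizer and the example sentences are illustrative assumptions) of how an English-centric BPE tokenizer over-segments non-English text. Since autoregressive decoding cost grows with sequence length, a higher token count per character means proportionally more decoding steps, which is the overhead CVA targets by swapping in a target-language vocabulary.

```python
# Sketch: compare tokenizer fertility (tokens per character) across
# languages with an English-centric tokenizer. Higher fertility for a
# language means longer sequences and slower, costlier generation.
from transformers import AutoTokenizer

# English-centric BPE vocabulary; chosen here purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Roughly parallel sentences (illustrative, not from the paper's data).
sentences = {
    "English":  "The cat sat on the mat.",
    "Japanese": "猫がマットの上に座っていた。",
    "Swahili":  "Paka alikaa juu ya mkeka.",
}

for lang, text in sentences.items():
    n_tokens = len(tokenizer(text)["input_ids"])
    # Fertility: tokens per character of the input string.
    print(f"{lang:8s} tokens={n_tokens:3d} tokens/char={n_tokens / len(text):.2f}")
```

Re-running this comparison with a tokenizer adapted to the target language in place of `gpt2` would show the fertility gap that CVA methods aim to close.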

Similar Work