Investigating The Translation Capabilities Of Large Language Models Trained On Parallel Data Only

Javier García Gilabert, Carlos Escolano, Aleix Sant Savall, Francesca De Luca Fornaciari, Audrey Mash, Xixian Liao, Maite Melero. arXiv 2024

[Paper]    
Applications, Fine Tuning, Model Architecture, Pretraining Methods, Prompting, Training Techniques

In recent years, Large Language Models (LLMs) have demonstrated exceptional proficiency across a broad spectrum of Natural Language Processing (NLP) tasks, including Machine Translation. However, previous methods predominantly relied on iterative processes such as instruction fine-tuning or continual pre-training, leaving unexplored the challenges of training LLMs solely on parallel data. In this work, we introduce PLUME (Parallel Language Model), a collection of three 2B LLMs featuring varying vocabulary sizes (32k, 128k, and 256k) trained exclusively on Catalan-centric parallel examples. These models perform comparably to previous encoder-decoder architectures on 16 supervised translation directions and 56 zero-shot ones. Utilizing this set of models, we conduct a thorough investigation into the translation capabilities of LLMs, probing their performance, the impact of the different elements of the prompt, and their cross-lingual representation space.
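
As a rough illustration of the training setup described above, the sketch below shows one plausible way a parallel sentence pair could be serialized into a single sequence for a decoder-only LM, with the translation prompted by source- and target-language tags. The tag format, language codes, and separator are illustrative assumptions, not the paper's published template.

```python
# A minimal sketch (assumed format, not PLUME's exact template) of
# serializing one parallel pair into a single training sequence for a
# decoder-only language model.

def format_parallel_example(src_lang: str, tgt_lang: str,
                            src_text: str, tgt_text: str) -> str:
    """Serialize one parallel pair as language tags + source + target."""
    return f"<{src_lang}> {src_text} <{tgt_lang}> {tgt_text}"

# Training sequence: the model learns to continue past the target tag
# with the translation.
example = format_parallel_example("cat", "eng", "Bon dia!", "Good morning!")
print(example)  # -> "<cat> Bon dia! <eng> Good morning!"

# Inference prompt: everything up to (and including) the target-language
# tag; the model generates the translation as the completion.
prompt = format_parallel_example("cat", "eng", "Bon dia!", "").rstrip()
print(prompt)   # -> "<cat> Bon dia! <eng>"
```

Under this kind of scheme, zero-shot directions correspond to tag pairs never seen together during training, which is one way to read the 56 zero-shot directions the abstract evaluates.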
