Machine Translation With Large Language Models: Prompt Engineering For Persian, English, And Russian Directions

Nooshin Pourkamali, Shler Ebrahim Sharifi. arXiv 2024

Applications, Fine Tuning, In Context Learning, Pretraining Methods, Prompting, Tools, Training Techniques

Generative large language models (LLMs) have demonstrated exceptional proficiency in a range of natural language processing (NLP) tasks, including machine translation, question answering, text summarization, and natural language understanding. To further enhance LLM performance in machine translation, we investigated two popular prompting methods and their combination, focusing on translation directions among Persian, English, and Russian. We employed n-shot feeding and tailored prompting frameworks. Our findings indicate that multilingual LLMs such as PaLM produce human-like machine translation output and make it possible to tune translation nuances to style guidelines and linguistic considerations. These models also excel at processing and applying prompts. However, the choice of language model, machine translation task, and specific source and target languages requires certain considerations when adopting prompting frameworks and using n-shot in-context learning. Furthermore, we identified errors and limitations of popular LLMs as machine translation tools and categorized them according to various linguistic metrics. This typology of errors offers insights for using LLMs effectively and informs the design of prompts for in-context learning. Our report aims to advance machine translation with LLMs by improving both the accuracy and reliability of evaluation metrics.
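To make the two prompting strategies concrete, the sketch below shows how a tailored instruction prompt and n-shot in-context examples might be combined into a single translation prompt. It is an illustrative reconstruction, not the paper's actual templates: the function name, the instruction wording, and the Persian/English example pairs are all invented placeholders.

```python
# Illustrative sketch: combining a tailored instruction (prompting
# framework) with n-shot examples for LLM-based machine translation.
# The template and sentence pairs are hypothetical, not from the paper.

def build_nshot_prompt(examples, source_sentence,
                       src_lang="Persian", tgt_lang="English"):
    """Assemble an n-shot in-context learning prompt from example pairs."""
    # Tailored instruction framing the task (the "prompting framework").
    lines = [f"Translate the following sentences from {src_lang} to {tgt_lang}."]
    # n-shot feeding: each example pair demonstrates the desired mapping.
    for src, tgt in examples:
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {tgt}")
    # The model is expected to complete the final target-language line.
    lines.append(f"{src_lang}: {source_sentence}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)

if __name__ == "__main__":
    shots = [
        ("سلام دنیا", "Hello, world"),
        ("کتاب روی میز است", "The book is on the table"),
    ]
    # 2-shot Persian -> English prompt for a new source sentence.
    print(build_nshot_prompt(shots, "هوا امروز خوب است"))
```

In this framing, zero-shot corresponds to passing an empty example list, and the style guidelines mentioned in the abstract would be expressed by extending the instruction line.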
