Data Augmentation For Neural Machine Translation Using Generative Language Model

Oh Seokjin, Lee Su Ah, Jung Woohwan. arXiv 2023

Tags: Applications, GPT, Model Architecture, Prompting, RAG, Tools, Training Techniques

Despite rapid advances in model architectures, the scarcity of large parallel corpora remains the main bottleneck in Neural Machine Translation. Data augmentation is a technique that enhances the performance of data-hungry models by generating synthetic data instead of collecting new data. We explore prompt-based data augmentation approaches that leverage large-scale language models such as ChatGPT. To create a synthetic parallel corpus, we compare three methods using different prompts, and we employ two assessment metrics to measure the diversity of the generated synthetic data. Unlike other augmentation methods such as back-translation, this approach requires no additional model training. The proposed method improves on the unaugmented baseline by 0.68 BLEU points.
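Below is a minimal sketch of what prompt-based synthetic parallel-corpus generation can look like, assuming the OpenAI Python client. The abstract does not specify the paper's exact prompts, model, language pair, or the two diversity metrics, so `PROMPT_TEMPLATE`, the `gpt-3.5-turbo` model name, the English→German direction, and the `distinct_n` metric are all illustrative assumptions rather than the authors' setup.

```python
# Sketch: generate synthetic (source, target) pairs from monolingual text
# by prompting a large language model, then score output diversity.
# Prompt wording, model, and language pair are assumptions, not the paper's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical translation prompt; the paper compares three prompt variants.
PROMPT_TEMPLATE = (
    "Translate the following English sentence into German. "
    "Return only the translation.\n\nEnglish: {src}\nGerman:"
)

def generate_synthetic_pair(src: str, model: str = "gpt-3.5-turbo") -> tuple[str, str]:
    """Produce one (source, synthetic target) pair via a translation prompt."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(src=src)}],
        temperature=0.7,  # sampling noise encourages more diverse outputs
    )
    return src, response.choices[0].message.content.strip()

def distinct_n(sentences: list[str], n: int = 2) -> float:
    """Ratio of unique n-grams to total n-grams, a common diversity proxy.
    (The two metrics used in the paper are not named in the abstract.)"""
    ngrams = []
    for s in sentences:
        toks = s.split()
        ngrams.extend(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(set(ngrams)) / max(len(ngrams), 1)

monolingual_corpus = [
    "The weather improved after the storm passed.",
    "She finished the report before the deadline.",
]

synthetic_corpus = [generate_synthetic_pair(s) for s in monolingual_corpus]
print(distinct_n([tgt for _, tgt in synthetic_corpus]))
```

Note that, in contrast to back-translation, this pipeline needs no reverse translation model: the only cost is API inference, which is why the abstract highlights the absence of additional training cost.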

Similar Work