
DeMPT: Decoding-enhanced Multi-phase Prompt Tuning for Making LLMs Be Better Context-aware Translators

Lyu Xinglin, Li Junhui, Zhao Yanqing, Zhang Min, Wei Daimeng, Tao Shimin, Yang Hao, Zhang Min. arXiv 2024

[Paper]    
Applications Prompting

Generally, decoder-only large language models (LLMs) are adapted to context-aware neural machine translation (NMT) in a concatenating way: the LLM takes the concatenation of the source sentence (i.e., intra-sentence context) and the inter-sentence context as input, and then generates the target tokens sequentially. This adaptation strategy, i.e., the concatenation mode, treats intra-sentence and inter-sentence contexts with the same priority, despite the apparent difference between the two kinds of context. In this paper, we propose an alternative adaptation approach, named Decoding-enhanced Multi-phase Prompt Tuning (DeMPT), to make LLMs discriminately model and utilize the inter- and intra-sentence context and to adapt LLMs to context-aware NMT more effectively. First, DeMPT divides the context-aware NMT process into three separate phases; during each phase, different continuous prompts are introduced to make the LLM discriminately model the different kinds of information. Second, DeMPT employs a heuristic method to further enhance the discriminate utilization of source-side inter- and intra-sentence information at the final decoding phase. Experiments show that our approach significantly outperforms the concatenation method and further improves the performance of LLMs in discourse modeling.
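The abstract describes two core ideas: phase-specific continuous (soft) prompts prepended to a frozen decoder-only LLM, and a heuristic combination of context-aware and source-only distributions at decoding time. The sketch below is a minimal illustration of those two ideas, not the authors' implementation: the backbone ("gpt2"), phase names, prompt length, and the simple linear interpolation weight `alpha` are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of (1) distinct learnable continuous
# prompts per phase over a frozen decoder-only LM, and (2) a heuristic
# interpolation of the with-context and source-only next-token distributions.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer


class PhasePrompts(nn.Module):
    """One learnable soft prompt per phase; the LLM backbone stays frozen."""

    def __init__(self, model_name="gpt2", phases=("inter", "intra", "decode"), prompt_len=16):
        super().__init__()
        self.lm = AutoModelForCausalLM.from_pretrained(model_name)
        for p in self.lm.parameters():  # freeze the backbone LLM
            p.requires_grad_(False)
        dim = self.lm.get_input_embeddings().embedding_dim
        self.prompts = nn.ParameterDict({
            name: nn.Parameter(torch.randn(prompt_len, dim) * 0.02) for name in phases
        })

    def next_token_logits(self, input_ids, phase):
        """Prepend the phase-specific soft prompt and return last-position logits."""
        tok_emb = self.lm.get_input_embeddings()(input_ids)            # (B, T, D)
        prompt = self.prompts[phase].unsqueeze(0).expand(input_ids.size(0), -1, -1)
        out = self.lm(inputs_embeds=torch.cat([prompt, tok_emb], dim=1))
        return out.logits[:, -1, :]


def heuristic_decode_step(model, ids_with_context, ids_source_only, alpha=0.5):
    """Hypothetical decoding enhancement: interpolate the distribution conditioned
    on inter+intra-sentence context with the intra-sentence-only distribution."""
    p_full = torch.softmax(model.next_token_logits(ids_with_context, "decode"), dim=-1)
    p_intra = torch.softmax(model.next_token_logits(ids_source_only, "intra"), dim=-1)
    return (alpha * p_full + (1 - alpha) * p_intra).argmax(dim=-1)


if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = PhasePrompts()
    ctx = tok("Previous sentence. Source sentence.", return_tensors="pt").input_ids
    src = tok("Source sentence.", return_tensors="pt").input_ids
    print(tok.decode(heuristic_decode_step(model, ctx, src)))
```

In this sketch only the `prompts` parameters would be trained, which is the defining property of prompt tuning; how DeMPT actually stages its three phases and weights the two distributions is specified in the paper itself.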

Similar Work