
Linguistically Informed ChatGPT Prompts to Enhance Japanese-Chinese Machine Translation: A Case Study on Attributive Clauses

Gu Wenshi. arXiv 2023

[Paper]    
Tags: Applications, GPT, Model Architecture, Prompting, RAG, Reinforcement Learning, Tools

In Japanese-Chinese translation, correctly translating attributive clauses has long proven challenging, and present-day machine translation tools often render them inaccurately. This paper investigates the linguistic problem underlying these difficulties, namely how the semantic role of the modified noun affects the choice of translation pattern for the attributive clause. To address these difficulties, a pre-edit scheme is proposed that aims to improve translation accuracy. Building on this scheme, we propose a novel two-step prompt strategy for ChatGPT, currently the most widely used large language model. This strategy optimizes the translation input in zero-shot scenarios and improves the average translation accuracy score by over 35%.
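The two-step strategy described above can be sketched as a simple prompt-chaining pipeline: first ask the model to pre-edit the Japanese sentence so the semantic role of the noun modified by the attributive clause is explicit, then ask it to translate the pre-edited sentence. This is a minimal illustration, not the paper's implementation; the prompt wording and the `llm` callable are assumptions.

```python
# Hypothetical sketch of a two-step prompt strategy for attributive clauses.
# The exact prompt wording below is an assumption, not taken from the paper.

def build_preedit_prompt(ja_sentence: str) -> str:
    """Step 1: pre-edit. Ask the model to make the semantic role of the
    noun modified by the attributive clause explicit."""
    return (
        "Identify the attributive clause in the following Japanese sentence "
        "and rewrite the sentence so that the semantic role of the modified "
        "noun is explicit, without changing the meaning.\n"
        f"Sentence: {ja_sentence}"
    )

def build_translation_prompt(preedited_ja: str) -> str:
    """Step 2: zero-shot translation of the pre-edited sentence."""
    return (
        "Translate the following Japanese sentence into Chinese, preserving "
        "the relationship between the attributive clause and its head noun.\n"
        f"Sentence: {preedited_ja}"
    )

def two_step_translate(ja_sentence: str, llm) -> str:
    """Chain the two prompts through `llm`, any callable that maps a
    prompt string to a completion string (e.g. a chat-model wrapper)."""
    preedited = llm(build_preedit_prompt(ja_sentence))
    return llm(build_translation_prompt(preedited))
```

In practice `llm` would wrap a chat-completion call to ChatGPT; keeping it as a plain prompt-to-string callable makes the pipeline easy to test offline.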

Similar Work