An Empirical Comparison On Imitation Learning And Reinforcement Learning For Paraphrase Generation

Du Wanyu, Ji Yangfeng. arXiv 2019

[Paper]

Tags: Agentic, Ethics And Bias, Reinforcement Learning

Generating paraphrases from given sentences involves decoding words step by step from a large vocabulary. Training a decoder with supervised learning, which maximizes the likelihood of gold tokens, suffers from exposure bias: the model is conditioned on gold prefixes during training but on its own predictions at inference time. Although both reinforcement learning (RL) and imitation learning (IL) have been widely used to alleviate this bias, the lack of a direct comparison leaves only a partial picture of their benefits. In this work, we present an empirical study of how RL and IL can help boost the performance of paraphrase generation, using the pointer-generator as the base model. Experiments on benchmark datasets show that (1) imitation learning is consistently better than reinforcement learning, and (2) pointer-generator models trained with imitation learning outperform the state-of-the-art methods by a large margin.
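To make the contrast concrete, here is a minimal, illustrative sketch (not the paper's code) of the two training signals the abstract refers to: teacher-forced maximum likelihood, where the decoder always conditions on gold prefixes (the source of exposure bias), versus a REINFORCE-style update, where the decoder conditions on its own samples and log-probabilities are weighted by a sequence-level reward. The toy GRU decoder, vocabulary size, BOS convention, and random reward are all assumptions for illustration; the paper's actual base model is a pointer-generator and the reward would be a metric such as BLEU.

```python
# Illustrative sketch only: contrasts teacher-forced MLE with a
# REINFORCE-style sequence-level update. All model details are toy assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, MAXLEN = 100, 32, 8  # assumed toy sizes

class ToyDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRUCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def step(self, tok, h):
        # One decoding step: embed previous token, update state, emit logits.
        h = self.rnn(self.embed(tok), h)
        return self.out(h), h

def mle_loss(model, gold):
    """Teacher forcing: every step conditions on the *gold* previous token,
    so the model never learns to recover from its own mistakes."""
    h = torch.zeros(gold.size(0), HIDDEN)
    prev = torch.zeros(gold.size(0), dtype=torch.long)  # BOS = 0 (assumed)
    loss = 0.0
    for t in range(gold.size(1)):
        logits, h = model.step(prev, h)
        loss = loss + F.cross_entropy(logits, gold[:, t])
        prev = gold[:, t]  # gold prefix: the source of exposure bias
    return loss / gold.size(1)

def reinforce_loss(model, reward_fn, batch=4):
    """RL-style update: decode from the model's own samples and weight
    log-probs by a sequence-level reward (e.g. BLEU against references)."""
    h = torch.zeros(batch, HIDDEN)
    prev = torch.zeros(batch, dtype=torch.long)
    logps, toks = [], []
    for _ in range(MAXLEN):
        logits, h = model.step(prev, h)
        dist = torch.distributions.Categorical(logits=logits)
        prev = dist.sample()               # condition on own prediction
        logps.append(dist.log_prob(prev))
        toks.append(prev)
    seq = torch.stack(toks, dim=1)
    reward = reward_fn(seq)                # shape: (batch,)
    return -(torch.stack(logps, dim=1).sum(1) * reward).mean()

model = ToyDecoder()
gold = torch.randint(1, VOCAB, (4, MAXLEN))
print(mle_loss(model, gold).item())
print(reinforce_loss(model, lambda s: torch.rand(s.size(0))).item())
```

Imitation-learning approaches sit between these two extremes: like the RL update they expose the decoder to its own predictions, but they supervise each step with an oracle action rather than a single delayed sequence reward.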

Similar Work