
Retrieval-enhanced Adversarial Training For Neural Response Generation

Zhu Qingfu, Cui Lei, Zhang Weinan, Wei Furu, Liu Ting. arXiv 2018

[Paper]    
Tags: Applications, RAG, Security, Tools, Training Techniques

Dialogue systems are usually built on either generation-based or retrieval-based approaches, yet neither approach benefits from the advantages of the other. In this paper, we propose a Retrieval-Enhanced Adversarial Training (REAT) method for neural response generation. Distinct from existing approaches, the REAT method trains an encoder-decoder framework under an adversarial training paradigm, while taking advantage of N-best response candidates from a retrieval-based system to construct the discriminator. An empirical study on a large-scale, publicly available benchmark dataset shows that the REAT method significantly outperforms the vanilla Seq2Seq model as well as the conventional adversarial training approach.
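
The central component described above, a discriminator that judges a response while conditioning on N-best candidates from a retrieval system, can be sketched roughly as follows. This is a minimal illustrative sketch assuming a PyTorch-style setup; the class name, module layout, and attention scheme are hypothetical and are not taken from the authors' implementation.

```python
# Hypothetical sketch of a retrieval-enhanced discriminator: it scores a
# (query, response) pair as human-written vs. generated, attending over the
# N-best responses returned by a retrieval-based system. Not the authors' code.
import torch
import torch.nn as nn


class RetrievalEnhancedDiscriminator(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.query_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.resp_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.cand_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.attn = nn.Linear(hid_dim, hid_dim)  # response attends over candidates
        self.classifier = nn.Sequential(
            nn.Linear(3 * hid_dim, hid_dim), nn.Tanh(), nn.Linear(hid_dim, 1)
        )

    def encode(self, rnn, tokens):
        # Return the final GRU hidden state as a fixed-size sentence vector.
        _, h = rnn(self.emb(tokens))
        return h[-1]  # (batch, hid_dim)

    def forward(self, query, response, candidates):
        # candidates: (batch, n_best, seq_len) -- the N-best retrieved responses
        b, n, t = candidates.shape
        q = self.encode(self.query_enc, query)
        r = self.encode(self.resp_enc, response)
        c = self.encode(self.cand_enc, candidates.reshape(b * n, t)).view(b, n, -1)

        # Attention of the response representation over the candidate set,
        # so the score reflects how the response relates to retrieved evidence.
        scores = torch.bmm(c, self.attn(r).unsqueeze(-1)).squeeze(-1)      # (b, n)
        ctx = torch.bmm(torch.softmax(scores, dim=-1).unsqueeze(1), c).squeeze(1)

        logit = self.classifier(torch.cat([q, r, ctx], dim=-1))
        return torch.sigmoid(logit).squeeze(-1)  # probability response is human-written
```

A toy forward pass, with random token ids standing in for real dialogue data; in an adversarial setup the generator (e.g. a Seq2Seq model) would be rewarded by this probability for its sampled responses:

```python
vocab = 1000
disc = RetrievalEnhancedDiscriminator(vocab)
query = torch.randint(1, vocab, (2, 10))
response = torch.randint(1, vocab, (2, 12))
candidates = torch.randint(1, vocab, (2, 3, 12))  # 3-best retrieved candidates
print(disc(query, response, candidates))          # two probabilities in (0, 1)
```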

Similar Work