Knowledge-grounded Response Generation With Deep Attentional Latent-variable Model

Ye Hao-tong, Lo Kai-ling, Su Shang-yu, Chen Yun-nung. arXiv 2019

[Paper]
Applications Attention Mechanism Model Architecture Reinforcement Learning Transformer

End-to-end dialogue generation has achieved promising results without using handcrafted features and attributes specific to each task and corpus. However, a fatal drawback of such approaches is that they are unable to generate informative utterances, which limits their use in real-world conversational applications. This paper attempts to generate diverse and informative responses with a variational generation model containing a joint attention mechanism that conditions on information from both the dialogue context and extra knowledge.
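
The core idea, a latent-variable decoder conditioned jointly on attention over the dialogue context and over external knowledge, can be illustrated with a minimal sketch. This is not the authors' implementation: the module names, layer sizes, dot-product attention, concatenation-based fusion, and the prior network are all illustrative assumptions, and the KL regularization term needed for variational training is omitted.

```python
# Minimal sketch (assumed architecture, not the paper's code) of one decoding
# step in a latent-variable model with joint attention over dialogue-context
# states and knowledge states.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointAttentionLatentDecoderStep(nn.Module):
    def __init__(self, hidden=256, latent=64, vocab=10000):
        super().__init__()
        # Prior network over the latent variable z, conditioned on the
        # decoder state (a recognition network would be added for training).
        self.prior = nn.Linear(hidden, 2 * latent)
        self.attn_ctx = nn.Linear(hidden, hidden)  # projects query for context attention
        self.attn_knw = nn.Linear(hidden, hidden)  # projects query for knowledge attention
        self.cell = nn.GRUCell(hidden * 2 + latent, hidden)
        self.out = nn.Linear(hidden, vocab)

    def attend(self, query, memory, proj):
        # Dot-product attention: score each memory slot against the
        # projected query, then return the weighted summary vector.
        scores = torch.einsum("bth,bh->bt", memory, proj(query))
        weights = F.softmax(scores, dim=-1)
        return torch.einsum("bt,bth->bh", weights, memory)

    def forward(self, state, ctx_states, knw_states):
        # Sample z via the reparameterization trick from the state-conditioned prior.
        mu, logvar = self.prior(state).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # Joint attention: separate summaries of context and knowledge,
        # fused by concatenation into one conditioning vector.
        c_ctx = self.attend(state, ctx_states, self.attn_ctx)
        c_knw = self.attend(state, knw_states, self.attn_knw)
        state = self.cell(torch.cat([c_ctx, c_knw, z], dim=-1), state)
        return F.log_softmax(self.out(state), dim=-1), state

# Usage with random tensors: batch of 2, 12 context states, 5 knowledge states.
step = JointAttentionLatentDecoderStep()
state = torch.zeros(2, 256)
log_probs, state = step(state, torch.randn(2, 12, 256), torch.randn(2, 5, 256))
```

Sampling z at each step is what injects the diversity the abstract aims for; attending over knowledge states alongside the context is what grounds the response in information beyond the dialogue history.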

Similar Work