Improving Neural Response Diversity With Frequency-aware Cross-entropy Loss

Shaojie Jiang, Pengjie Ren, Christof Monz, Maarten de Rijke. arXiv 2019


Sequence-to-Sequence (Seq2Seq) models have achieved encouraging performance on the dialogue response generation task. However, existing Seq2Seq-based response generation methods suffer from a low-diversity problem: they frequently generate generic responses, which make the conversation less interesting. In this paper, we address the low-diversity problem by investigating its connection with model over-confidence, as reflected in predicted distributions. Specifically, we first analyze the influence of the commonly used Cross-Entropy (CE) loss function and find that it prefers high-frequency tokens, which results in low-diversity responses. We then propose a Frequency-Aware Cross-Entropy (FACE) loss function that improves over CE by incorporating a weighting mechanism conditioned on token frequency. Extensive experiments on benchmark datasets show that the FACE loss function substantially improves the diversity of existing state-of-the-art Seq2Seq response generation methods, in terms of both automatic and human evaluations.
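The abstract does not include an implementation, but the core idea of frequency-conditioned weighting is easy to sketch. Below is a minimal, hypothetical PyTorch version: per-class weights decrease linearly with a token's relative corpus frequency and are renormalized to mean 1 so the loss scale stays comparable to plain CE. The paper explores several weighting variants (including frequencies updated from the model's own generations during training); the function name `face_loss`, the linear weighting form, and the small clamp are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def face_loss(logits, targets, token_freq, ignore_index=-100):
    """A sketch of a frequency-aware cross-entropy loss (not the paper's code).

    logits:     (N, vocab_size) unnormalized scores, one row per predicted token
    targets:    (N,) gold token ids
    token_freq: (vocab_size,) token counts, e.g. gathered from the training corpus
    """
    # Relative frequency of each vocabulary token.
    rel_freq = token_freq.float() / token_freq.float().sum()
    # Down-weight frequent tokens: higher frequency -> smaller weight.
    weights = 1.0 - rel_freq / rel_freq.max()
    # Pragmatic floor (an assumption) so the most frequent token
    # is not zeroed out of the gradient entirely.
    weights = weights.clamp(min=1e-3)
    # Renormalize to mean 1, keeping the loss magnitude comparable to plain CE.
    weights = weights / weights.mean()
    # Standard per-class weighted cross-entropy.
    return F.cross_entropy(logits, targets, weight=weights,
                           ignore_index=ignore_index)

# Toy usage with made-up counts: frequent tokens (id 0) contribute less.
vocab_size = 8
logits = torch.randn(4, vocab_size)                        # 4 predictions
targets = torch.tensor([0, 2, 2, 5])
token_freq = torch.tensor([100, 50, 30, 10, 5, 3, 2, 1])   # toy corpus counts
loss = face_loss(logits, targets, token_freq)
```

Because the weighting enters as a standard per-class weight on cross-entropy, a drop-in variant like this leaves the rest of the Seq2Seq training loop unchanged, which matches the paper's framing of FACE as a replacement for the CE loss rather than a new architecture.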
