A Copy-Augmented Sequence-to-Sequence Architecture Gives Good Performance on Task-Oriented Dialogue

Mihail Eric, Christopher D. Manning. arXiv 2017

[Paper]    
Tags: Agentic, Attention Mechanism, Model Architecture, Uncategorized

Task-oriented dialogue focuses on conversational agents that participate in user-initiated dialogues on domain-specific topics. In contrast to chatbots, which simply seek to sustain open-ended meaningful discourse, existing task-oriented agents usually explicitly model user intent and belief states. This paper examines bypassing such an explicit representation by relying on a latent neural embedding of state, learning selective attention to the dialogue history, and copying to incorporate relevant prior context. The authors complement recent work by showing the effectiveness of simple sequence-to-sequence neural architectures with a copy mechanism: their model outperforms more complex memory-augmented models by 7% in per-response generation and is on par with the current state of the art on DSTC2, a task-oriented dialogue dataset.
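
To make the idea concrete, the sketch below shows one decoder step of a copy-augmented seq2seq model in the pointer-generator style: attention weights over dialogue-history tokens are scattered onto the output vocabulary as a copy distribution, and a learned gate mixes it with the ordinary generation distribution. This is a minimal illustration under our own assumptions; the class name, layer shapes, and sigmoid gating are ours and are not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CopyAugmentedDecoderStep(nn.Module):
    """One decoder step mixing a generation distribution with a copy
    distribution over dialogue-history tokens (pointer-generator style).

    Illustrative sketch only; layer names/sizes are assumptions, not the
    paper's exact architecture.
    """

    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.cell = nn.LSTMCell(hidden_size, hidden_size)
        self.attn = nn.Linear(2 * hidden_size, 1)           # attention score per history token
        self.out = nn.Linear(2 * hidden_size, vocab_size)   # generation logits
        self.copy_gate = nn.Linear(2 * hidden_size, 1)      # p(copy) vs. p(generate)

    def forward(self, y_emb, state, enc_states, src_token_ids):
        # y_emb: (B, H) embedding of the previous output token
        # enc_states: (B, T, H) encoder states over the dialogue history
        # src_token_ids: (B, T) vocabulary ids of the history tokens
        h, c = self.cell(y_emb, state)

        # Selective attention over the dialogue history.
        scores = self.attn(torch.cat(
            [enc_states, h.unsqueeze(1).expand_as(enc_states)], dim=-1)).squeeze(-1)
        attn = F.softmax(scores, dim=-1)                     # (B, T)
        context = torch.bmm(attn.unsqueeze(1), enc_states).squeeze(1)

        feat = torch.cat([h, context], dim=-1)
        p_gen_dist = F.softmax(self.out(feat), dim=-1)       # (B, V)
        p_copy = torch.sigmoid(self.copy_gate(feat))         # (B, 1)

        # Scatter attention mass onto the vocabulary ids of the history
        # tokens, then mix the two distributions with the copy gate.
        copy_dist = torch.zeros_like(p_gen_dist).scatter_add(1, src_token_ids, attn)
        p_final = (1 - p_copy) * p_gen_dist + p_copy * copy_dist
        return p_final, (h, c)


if __name__ == "__main__":
    B, T, H, V = 2, 5, 16, 100
    step = CopyAugmentedDecoderStep(H, V)
    p, _ = step(
        torch.randn(B, H),
        (torch.zeros(B, H), torch.zeros(B, H)),
        torch.randn(B, T, H),
        torch.randint(0, V, (B, T)),
    )
    # The mixed output is still a valid probability distribution.
    assert torch.allclose(p.sum(-1), torch.ones(B), atol=1e-5)
```

Because the copy distribution is indexed by the history tokens' vocabulary ids, entities mentioned earlier in the dialogue (phone numbers, addresses, restaurant names) can be reproduced in the response even when they are rare or unseen in training, which is the main appeal of copying for task-oriented dialogue.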

Similar Work