Analysing The Potential Of Seq-to-seq Models For Incremental Interpretation In Task-oriented Dialogue

Dieuwke Hupkes, Sanne Bouwmeester, Raquel Fernández. arXiv 2018

[Paper]    
Attention Mechanism, Model Architecture, Reinforcement Learning

We investigate how encoder-decoder models trained on a synthetic dataset of task-oriented dialogues process disfluencies, such as hesitations and self-corrections. We find that, contrary to earlier results, disfluencies have very little impact on the task success of seq-to-seq models with attention. Using visualisation and diagnostic classifiers, we analyse the representations that are incrementally built by the model, and discover that models develop little to no awareness of the structure of disfluencies. However, adding disfluencies to the data appears to help the model create clearer representations overall, as evidenced by the attention patterns the different models exhibit.
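The analysis relies on diagnostic classifiers: simple probes trained to predict a linguistic property (here, disfluency structure) from the hidden states the encoder builds incrementally. The sketch below illustrates that idea under stated assumptions; the encoder states and labels are synthetic stand-ins, not the paper's actual data or model, and the probe is an ordinary scikit-learn logistic regression rather than the authors' exact setup.

```python
# Minimal sketch of a diagnostic classifier probing encoder hidden states.
# All data below is hypothetical: in the paper's setting, hidden_states would
# come from running the trained seq-to-seq encoder over dialogue utterances,
# and labels would mark whether each token belongs to a disfluency.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in per-token encoder states: (n_tokens, hidden_dim).
hidden_states = rng.normal(size=(5000, 128))

# Stand-in binary labels: 1 = token is part of a hesitation or self-correction.
labels = rng.integers(0, 2, size=5000)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, labels, test_size=0.2, random_state=0
)

# The diagnostic classifier is a linear probe: if it cannot recover disfluency
# structure from the hidden states much better than chance, the encoder has
# developed little explicit awareness of that structure.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```

With random stand-in data the probe accuracy stays near chance; the informative case is comparing probe accuracy on real encoder states against such a baseline.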

Similar Work