
Generating High-Quality and Informative Conversation Responses with Sequence-to-Sequence Models

Louis Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, Ray Kurzweil. arXiv 2017

[Paper]    
Attention Mechanism, Model Architecture, Reinforcement Learning, Transformer

Sequence-to-sequence models have been applied to the conversation response generation problem, where the source sequence is the conversation history and the target sequence is the response. Unlike translation, conversation responding is inherently creative; generating long, informative, coherent, and diverse responses remains a hard task. In this work, we focus on the single-turn setting. We add self-attention to the decoder to maintain coherence in longer responses, and we propose a practical approach, called the glimpse model, for scaling to large datasets. We introduce a stochastic beam-search algorithm with segment-by-segment reranking, which lets us inject diversity earlier in the generation process. We trained on a combined dataset of over 2.3B conversation messages mined from the web. In human evaluation studies, our method produces longer responses overall, with a higher proportion rated acceptable and excellent as length increases, compared to baseline sequence-to-sequence models with explicit length promotion. A back-off strategy produces better responses overall, across the full spectrum of lengths.
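The stochastic beam-search idea can be pictured as ordinary beam search whose expansion step samples tokens from the model's distribution and whose pruning step runs once per fixed-length segment rather than once per token. Below is a minimal, hypothetical Python sketch of that control flow. The helpers `step_fn` and `rerank_fn`, and all parameter names and defaults, are assumptions made for illustration; they are not the paper's implementation.

```python
import heapq
import math
import random

def stochastic_beam_search(step_fn, rerank_fn, bos_id, eos_id,
                           beam_size=8, samples_per_beam=4,
                           segment_len=5, max_len=40, temperature=1.0):
    """Sketch of stochastic beam search with segment-by-segment reranking.

    Assumed interfaces (not from the paper):
      step_fn(prefix)       -> dict {token_id: log_prob} for the next token
      rerank_fn(candidates) -> the same (prefix, score) pairs, rescored
    """
    beams = [([bos_id], 0.0)]
    while not all(p[-1] == eos_id or len(p) >= max_len for p, _ in beams):
        candidates = []
        for prefix, score in beams:
            if prefix[-1] == eos_id or len(prefix) >= max_len:
                candidates.append((prefix, score))  # carry finished hypotheses
                continue
            # Sample several continuations per beam: drawing each token from
            # the temperature-scaled distribution, instead of taking the
            # arg-max, is what injects diversity early in generation.
            for _ in range(samples_per_beam):
                hyp, hyp_score = list(prefix), score
                for _ in range(segment_len):
                    logprobs = step_fn(hyp)
                    tokens = list(logprobs)
                    weights = [math.exp(logprobs[t] / temperature) for t in tokens]
                    tok = random.choices(tokens, weights=weights)[0]
                    hyp_score += logprobs[tok]
                    hyp.append(tok)
                    if tok == eos_id or len(hyp) >= max_len:
                        break
                candidates.append((hyp, hyp_score))
        # Rerank at the segment boundary and prune back to beam_size, so a
        # scorer can discard generic continuations mid-generation rather
        # than only after full responses are produced.
        beams = heapq.nlargest(beam_size, rerank_fn(candidates),
                               key=lambda c: c[1])
    return max(beams, key=lambda c: c[1])[0]
```

The paper's actual reranking signal and sampling schedule differ; this sketch only shows the sample-a-segment, rerank, prune loop that distinguishes the method from standard token-level beam search.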

Similar Work