
Hierarchical Reinforcement Learning For Open-domain Dialog

Abdelrhman Saleh, Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Rosalind Picard. arXiv 2019

[Paper]    
Tags: Agentic, Ethics And Bias, Model Architecture, Pretraining Methods, Reinforcement Learning, Tools, Training Techniques, Transformer

Open-domain dialog generation is a challenging problem: maximum likelihood training can lead to repetitive outputs, models have difficulty tracking long-term conversational goals, and training on standard movie or online datasets may lead to the generation of inappropriate, biased, or offensive text. Reinforcement Learning (RL) is a powerful framework that could potentially address these issues, for example by allowing a dialog model to optimize for reduced toxicity and repetitiveness. However, previous approaches that apply RL to open-domain dialog generation do so at the word level, making it difficult for the model to learn proper credit assignment for long-term conversational rewards. In this paper, we propose a novel approach to hierarchical reinforcement learning, VHRL, which uses policy gradients to tune the utterance-level embedding of a variational sequence model. This hierarchical approach provides greater flexibility for learning long-term conversational rewards. We use self-play and RL to optimize for a set of human-centered conversation metrics, and show that our approach provides significant improvements, in terms of both human evaluation and automatic metrics, over state-of-the-art dialog models, including Transformers.
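
The core idea described in the abstract is to apply policy gradients at the utterance level rather than the word level. The snippet below is a minimal, self-contained sketch of that idea in PyTorch; the `UtterancePolicy` module, the toy reward, and all dimensions are hypothetical stand-ins for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

HIDDEN, EMB = 64, 32  # hypothetical context and embedding sizes

class UtterancePolicy(nn.Module):
    """Gaussian policy over utterance-level embeddings, conditioned on
    a dialog-context vector (a stand-in for the context-RNN state)."""
    def __init__(self):
        super().__init__()
        self.mu = nn.Linear(HIDDEN, EMB)
        self.log_sigma = nn.Linear(HIDDEN, EMB)

    def forward(self, context):
        return torch.distributions.Normal(self.mu(context),
                                          self.log_sigma(context).exp())

policy = UtterancePolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in for the context-RNN summary of the conversation so far.
context = torch.randn(8, HIDDEN)

dist = policy(context)
z = dist.sample()  # sampled utterance-level embedding fed to the decoder

# Toy reward; in the paper this would be a human-centered conversation
# metric (e.g. low toxicity, low repetition) of the decoded utterance.
reward = -z.pow(2).mean(dim=-1)

# REINFORCE at the utterance level: raise the log-probability of
# embeddings whose utterances earned high reward.
loss = -(dist.log_prob(z).sum(dim=-1) * reward.detach()).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

Because the gradient flows only through the utterance-level distribution, credit assignment for conversation-level rewards does not have to propagate through every decoded word, which is the flexibility the abstract claims over word-level RL.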
