The Second Conversational Intelligence Challenge (convai2)

Dinan Emily, Logacheva Varvara, Malykh Valentin, Miller Alexander, Shuster Kurt, Urbanek Jack, Kiela Douwe, Szlam Arthur, Serban Iulian, Lowe Ryan, Prabhumoye Shrimai, Black Alan W, Rudnicky Alexander, Williams Jason, Pineau Joelle, Burtsev Mikhail, Weston Jason. arXiv 2019

[Paper]
Tags: Model Architecture, Pretraining Methods, Transformer

We describe the setting and results of the ConvAI2 NeurIPS competition, which aims to further the state of the art in open-domain chatbots. Some key takeaways from the competition are: (i) pretrained Transformer variants are currently the best-performing models on this task, and (ii) to improve performance on multi-turn conversations with humans, future systems must go beyond single-word metrics like perplexity and measure performance across sequences of utterances (whole conversations), in terms of repetition, consistency, and balance of dialogue acts (e.g., how many questions are asked vs. answered).
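To make the distinction concrete, the sketch below (a minimal, hypothetical illustration, not code from the competition) contrasts a per-token metric (perplexity computed from token log-probabilities) with simple conversation-level metrics of the kind the abstract alludes to: verbatim repetition of earlier bot utterances and a rough question/answer balance. All function names and the toy conversation are assumptions for illustration only.

```python
# Hypothetical sketch: per-token perplexity vs. conversation-level metrics.
import math


def perplexity(token_logprobs):
    """Perplexity = exp(-mean log-probability) over an utterance's tokens."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))


def repetition_rate(bot_utterances):
    """Fraction of bot turns that repeat an earlier bot turn verbatim."""
    seen, repeats = set(), 0
    for utterance in bot_utterances:
        if utterance in seen:
            repeats += 1
        seen.add(utterance)
    return repeats / len(bot_utterances)


def question_balance(bot_utterances, human_utterances):
    """Rough proxy for dialogue-act balance: questions the bot asks
    divided by questions the human asks (counted via '?' marks)."""
    asked = sum(u.count("?") for u in bot_utterances)
    received = sum(u.count("?") for u in human_utterances)
    return asked / max(received, 1)


if __name__ == "__main__":
    # Toy multi-turn conversation (assumed data, for illustration only).
    bot = ["Hi! What do you do for fun?", "I love hiking too.", "Hi! What do you do for fun?"]
    human = ["Hello! I like hiking, and you?", "Nice, where do you hike?", "Up in the hills."]
    print("repetition rate:", repetition_rate(bot))              # 1/3 of bot turns repeated
    print("question balance:", question_balance(bot, human))     # asked vs. answered
    print("toy perplexity:", perplexity([-1.2, -0.7, -2.3]))      # per-token metric
```

Unlike perplexity, which scores each utterance in isolation, the repetition and balance measures only become meaningful over a sequence of turns, which is the gap the competition's takeaway points to.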

Similar Work