TuringAdvice: A Generative And Dynamic Evaluation Of Language Use

Rowan Zellers, Ari Holtzman, Elizabeth Clark, Lianhui Qin, Ali Farhadi, Yejin Choi. arXiv 2020

[Paper]
Tags: GPT, Model Architecture, Tools, Training Techniques

We propose TuringAdvice, a new challenge task and dataset for language understanding models. Given a written situation that a real person is currently facing, a model must generate helpful advice in natural language. Our evaluation framework tests a fundamental aspect of human language understanding: our ability to use language to resolve open-ended situations by communicating with each other. Empirical results show that today's models struggle at TuringAdvice, even multibillion-parameter models finetuned on 600k in-domain training examples. The best model, a finetuned T5, writes advice that is at least as helpful as human-written advice in only 14% of cases; a much larger non-finetunable GPT-3 model does even worse, at 4%. This low performance reveals language understanding errors that are hard to spot outside of a generative setting, showing much room for progress.
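The headline numbers (14% and 4%) are pairwise win rates: for each situation, a human judge compares the model's advice against the human-written advice, and the score is the fraction of cases where the model's advice was rated at least as helpful. A minimal sketch of that computation, with illustrative names not taken from the paper's code:

```python
def win_rate(judgments):
    """Fraction of pairwise comparisons the model 'wins'.

    judgments: list of booleans, True if the model's advice was rated
    at least as helpful as the human-written advice for that situation.
    """
    if not judgments:
        return 0.0
    return sum(judgments) / len(judgments)

# e.g. 14 wins out of 100 comparisons gives a win rate of 0.14,
# the level reported for the finetuned T5 model
score = win_rate([True] * 14 + [False] * 86)
print(score)  # 0.14
```

Because the metric is a direct human comparison rather than an automatic overlap score, it remains meaningful as the underlying pool of situations is refreshed over time, which is what makes the evaluation dynamic.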

Similar Work