Understanding The Effectiveness Of Very Large Language Models On Dialog Evaluation

Jessica Huynh, Cathy Jiao, Prakhar Gupta, Shikib Mehri, Payal Bajaj, Vishrav Chaudhary, Maxine Eskenazi. arXiv 2023

[Paper]
Applications GPT Model Architecture Prompting Training Techniques

Language models have steadily increased in size over the past few years and now achieve high performance on natural language processing (NLP) tasks such as question answering and summarization. Large language models (LLMs) have been used for generation and can produce human-like text, so downstream tasks in the dialog domain can now harness their language understanding capabilities. Dialog evaluation is the task this paper explores, concentrating on prompting with several LLMs: BLOOM, OPT, GPT-3, Flan-T5, InstructDial, and TNLGv2. The paper shows that the choice of datasets used to train a model affects both how well it performs on the task and how the prompt should be structured: specifically, the more diverse and relevant the group of training datasets, the better the model performs at dialog evaluation. The paper also investigates how the number of in-context examples in the prompt and the example-selection strategy affect the model's performance.
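The setup the paper studies is prompt-based evaluation: a dialog context and a candidate response are embedded in a template, optionally preceded by in-context examples, and the LLM's output is read as a quality judgment. The sketch below illustrates that idea in Python; the template wording, the rating question, and the commented-out `query_llm` wrapper are illustrative assumptions, not the paper's exact prompts or API.

```python
from typing import List, Tuple

def build_eval_prompt(
    context: str,
    response: str,
    examples: List[Tuple[str, str, str]] = (),
) -> str:
    """Assemble a few-shot prompt asking an LLM to rate a dialog response.

    `examples` holds (context, response, rating) triples used as in-context
    demonstrations; the paper studies how their number and how they are
    selected affect evaluation performance.
    """
    parts = []
    for ex_context, ex_response, ex_rating in examples:
        parts.append(
            f"Dialog context:\n{ex_context}\n"
            f"Response: {ex_response}\n"
            f"Is the response relevant and appropriate? {ex_rating}\n"
        )
    # The instance to be judged goes last, with the answer left blank
    # for the model to complete.
    parts.append(
        f"Dialog context:\n{context}\n"
        f"Response: {response}\n"
        "Is the response relevant and appropriate?"
    )
    return "\n".join(parts)

# Hypothetical usage: `query_llm` stands in for whichever model
# (BLOOM, OPT, GPT-3, Flan-T5, InstructDial, TNLGv2) is being probed.
prompt = build_eval_prompt(
    context="A: How was your weekend?\nB: Great, I went hiking.",
    response="That sounds fun! Where did you go?",
    examples=[
        ("A: Any dinner plans?", "Yes, pasta tonight.", "Yes"),
        ("A: Did you see the game?", "Bananas are yellow.", "No"),
    ],
)
# rating = query_llm(prompt)  # assumed model-call wrapper, not a real API
```

With zero examples this reduces to zero-shot evaluation; varying the number and selection of the demonstration triples corresponds to the prompt ablations the paper reports.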

Similar Work