
Making A Long Story Short In Conversation Modeling

Yufei Tao, Tiernan Mines, Ameeta Agrawal. arXiv 2024

[Paper]
Applications · Fine Tuning · GPT · Model Architecture

Conversation systems accommodate diverse users with unique personalities and distinct writing styles. Within the domain of multi-turn dialogue modeling, this work studies the impact of varied utterance lengths on the quality of subsequent responses generated by conversation models. Using GPT-3 as the base model, multiple dialogue datasets, and several metrics, we conduct a thorough exploration of this aspect of conversational models. Our analysis sheds light on the complex relationship between utterance lengths and the quality of follow-up responses generated by dialogue systems. Empirical findings suggest that, for certain types of conversations, utterance lengths can be reduced by up to 72% without any noticeable difference in the quality of follow-up responses.
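To make the kind of intervention studied here concrete, the sketch below shortens each utterance in a dialogue history before it would be passed to a response model. This is a minimal illustration, not the authors' procedure: the paper does not specify how utterances were shortened, and the `truncate_utterance` helper, its token-based truncation, and the `keep_ratio` parameter are all assumptions for illustration (a 72% reduction corresponds to keeping roughly 28% of the tokens).

```python
def truncate_utterance(utterance: str, keep_ratio: float = 0.28) -> str:
    """Keep roughly the first `keep_ratio` fraction of an utterance's
    whitespace-separated tokens. Always keeps at least one token.

    Note: this naive prefix truncation is only an illustrative stand-in
    for whatever length-reduction method the paper actually used.
    """
    tokens = utterance.split()
    keep = max(1, round(len(tokens) * keep_ratio))
    return " ".join(tokens[:keep])


# A toy two-turn dialogue history; the shortened turns would then be fed
# to a conversation model to generate the follow-up response.
dialogue = [
    "Hi, I was wondering if you could help me figure out why my order has not shipped yet.",
    "Sure, can you give me the order number so I can look it up for you?",
]
shortened = [truncate_utterance(u) for u in dialogue]
```

In an experiment along the paper's lines, one would generate follow-up responses from both the original and the shortened histories and compare them under the chosen quality metrics.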

Similar Work