
An Empirical Study On Context Length For Open-domain Dialog Generation

Shen Xinyi, Lin Zuoquan. arXiv 2024

[Paper]    
Attention Mechanism Model Architecture Pretraining Methods Reinforcement Learning Training Techniques Transformer

Transformer-based open-domain dialog models have become increasingly popular in recent years. These models typically represent the context as a concatenation of the dialog history. However, there is no established criterion for deciding how many utterances are adequate to keep in the context. We investigate how the choice of context length affects the model. We experiment on three questions, from coarse to fine: (i) Does a longer context help model training? (ii) Is it necessary to change the training context length when dealing with dialogs of different context lengths? (iii) Do different dialog samples have the same preference for context length? Our experimental results show that context length, an often overlooked setting, deserves attention when implementing Transformer-based dialog models.
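To make the setting under study concrete, below is a minimal sketch of how context length is typically applied in Transformer-based dialog models: only the most recent `context_length` utterances of the history are kept and concatenated into a single model input. The function name `build_context` and the `<sep>` separator token are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed setup, not the paper's code): truncate the
# dialog history to the last `context_length` utterances and join them
# with a separator token to form the model input.

from typing import List

SEP = " <sep> "  # assumed separator token between utterances


def build_context(history: List[str], context_length: int) -> str:
    """Concatenate the last `context_length` utterances of a dialog history."""
    kept = history[-context_length:] if context_length > 0 else history
    return SEP.join(kept)


if __name__ == "__main__":
    dialog = [
        "Hi, how are you?",
        "I'm good, thanks. Any plans for the weekend?",
        "Thinking about hiking. Want to join?",
        "Sure, which trail?",
    ]
    # Varying context_length changes how much history the model conditions on,
    # which is the design choice the paper's experiments examine.
    for n in (1, 2, 4):
        print(f"context_length={n}: {build_context(dialog, n)!r}")
```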

Similar Work