
Parrot: Enhancing Multi-turn Instruction Following For Large Language Models

Sun Yuchong, Liu Che, Zhou Kun, Huang Jinwen, Song Ruihua, Zhao Wayne Xin, Zhang Fuzheng, Zhang Di, Gai Kun. arXiv 2023

[Paper]    
Efficiency And Optimization, Reinforcement Learning, Training Techniques

Humans often interact with large language models (LLMs) over multiple turns to obtain desired answers or additional information. However, most existing studies overlook the multi-turn instruction-following ability of LLMs in terms of training datasets, training methods, and evaluation benchmarks. In this paper, we introduce Parrot, a solution aiming to enhance multi-turn instruction following for LLMs. First, we introduce an efficient yet effective method for collecting multi-turn instructions that feature human-like queries, such as anaphora and ellipsis. Second, we propose a context-aware preference optimization strategy to further enhance LLMs on complex queries in multi-turn interactions. Moreover, to quantitatively evaluate LLMs on multi-turn instruction following, we manually build a multi-turn benchmark derived from existing ones. Extensive experiments show that Parrot improves current LLMs by up to 7.2% in multi-turn instruction following. Our dataset and code will be open-sourced to facilitate future research.
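The abstract does not spell out the optimization objective behind "context-aware preference optimization." As a rough illustration only, the sketch below assumes a DPO-style pairwise loss in which the preferred response is conditioned on the full multi-turn dialogue context while the rejected one was produced from a context-stripped query (so the model is pushed to exploit the history when resolving anaphora and ellipsis). The function name, the pairing scheme, and the toy numbers are all assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def context_aware_preference_loss(
    policy_logp_chosen: torch.Tensor,    # log p_theta(y_w | full multi-turn context)
    policy_logp_rejected: torch.Tensor,  # log p_theta(y_l | full multi-turn context)
    ref_logp_chosen: torch.Tensor,       # same quantities under a frozen reference model
    ref_logp_rejected: torch.Tensor,
    beta: float = 0.1,
) -> torch.Tensor:
    """DPO-style pairwise loss. The "context-aware" aspect is assumed to
    live in the data: y_w answers the query with its dialogue context
    resolved, while y_l was generated from the context-stripped query."""
    chosen_margin = policy_logp_chosen - ref_logp_chosen
    rejected_margin = policy_logp_rejected - ref_logp_rejected
    # Maximize the log-odds that the context-aware response is preferred.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# Toy usage with per-example sequence log-probabilities (hypothetical values).
pol_w = torch.tensor([-12.3, -9.8])
pol_l = torch.tensor([-15.1, -11.0])
ref_w = torch.tensor([-13.0, -10.2])
ref_l = torch.tensor([-14.5, -10.9])
print(context_aware_preference_loss(pol_w, pol_l, ref_w, ref_l))
```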

Similar Work