The unparalleled performance of the closed-source ChatGPT has sparked efforts
towards its democratization, with notable strides made by leveraging dialogues
between real users and ChatGPT, as evidenced by Vicuna. However, because
dialogues involving human participation are difficult to gather, current
endeavors such as Baize and UltraChat rely on ChatGPT role-playing humans
according to instructions, resulting in overdependence on seeds, diminished
human-likeness, limited topic diversity, and an absence of genuine multi-round
conversational dynamics.
To address these issues, we propose a paradigm that better simulates human
behavior and explore the benefits of incorporating more human-like questions
in multi-turn conversations. Specifically, we directly target human questions
extracted from genuine human-machine conversations as the learning goal and
provide a novel user simulator called Socratic. The experimental results show
that our response model, PlatoLM, achieves SoTA performance among LLaMA-based
7B models on MT-Bench. Our findings further demonstrate that our method
introduces highly human-like questioning patterns and rich topic structures,
which teach the response model more effectively than previous works in
multi-round conversations.