
BianQue: Balancing The Questioning And Suggestion Ability Of Health LLMs With Multi-turn Health Conversations Polished By ChatGPT

Chen Yirong, Wang Zhenyu, Xing Xiaofen, Zheng Huimin, Xu Zhipei, Fang Kai, Wang Junhong, Li Sihang, Wu Jieling, Liu Qi, Xu Xiangmin. arXiv 2023

[Paper]    
GPT, Model Architecture, Reinforcement Learning

Large language models (LLMs) have performed well in providing general and extensive health suggestions in single-turn conversations, exemplified by systems such as ChatGPT, ChatGLM, ChatDoctor, and DoctorGLM. However, the limited information that users provide in a single turn leads to suggestions that lack personalization and targeting, leaving users to pick out the useful parts themselves. This is mainly caused by the models' missing ability to engage in multi-turn questioning. In real-world medical consultations, doctors usually employ a series of iterative inquiries to understand the patient's condition thoroughly, which enables them to provide effective and personalized suggestions; for LLMs, this process can be defined as a chain of questioning (CoQ). To improve the CoQ of LLMs, we propose BianQue, a ChatGLM-based LLM fine-tuned on the self-constructed health conversation dataset BianQueCorpus, which consists of multiple turns of questioning and health suggestions polished by ChatGPT. Experimental results demonstrate that BianQue balances the capabilities of questioning and health suggestion, which will help promote the research and application of LLMs in the field of proactive health.
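The abstract describes supervised fine-tuning of ChatGLM on multi-turn health conversations, where the model must learn both to ask follow-up questions and to give suggestions. The sketch below shows one plausible way to flatten such a conversation into an input/target pair for fine-tuning; the role labels, separator format, and field names are illustrative assumptions, not the released BianQueCorpus format.

```python
# Minimal sketch: turn a multi-turn health conversation into a single
# training sample for ChatGLM-style supervised fine-tuning.
# Role labels, separators, and field names are assumptions for illustration.

from typing import Dict, List


def build_training_sample(turns: List[Dict[str, str]]) -> Dict[str, str]:
    """Concatenate earlier turns into the input and use the final doctor
    turn (either a follow-up question or a health suggestion) as the target."""
    assert turns and turns[-1]["role"] == "doctor", "last turn must be the model's reply"

    history_lines = []
    for turn in turns[:-1]:
        speaker = "Patient" if turn["role"] == "patient" else "Doctor"
        history_lines.append(f"{speaker}: {turn['text']}")

    return {
        "input": "\n".join(history_lines) + "\nDoctor:",
        "target": turns[-1]["text"],
    }


if __name__ == "__main__":
    conversation = [
        {"role": "patient", "text": "I've had a headache for three days."},
        {"role": "doctor", "text": "Is the pain on one side or both? Any fever?"},
        {"role": "patient", "text": "Both sides, no fever, but I sleep poorly."},
        {"role": "doctor", "text": "Poor sleep often triggers tension headaches; "
                                   "try a regular sleep schedule and stay hydrated, "
                                   "and see a doctor if it lasts beyond a week."},
    ]
    sample = build_training_sample(conversation)
    print(sample["input"])
    print("---")
    print(sample["target"])
```

Because the target at intermediate turns is a follow-up question and at later turns a suggestion, training on samples cut at every doctor turn is one way such a corpus could balance the two abilities the paper highlights.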

Similar Work