LLM-based Medical Assistant Personalization with Short- and Long-term Memory Coordination

Kai Zhang, Yangyang Kang, Fubang Zhao, Xiaozhong Liu. arXiv 2023

[Paper]
Tags: Fine Tuning, GPT, Model Architecture, Pretraining Methods, Training Techniques

Large Language Models (LLMs), such as GPT-3.5, have exhibited remarkable proficiency in comprehending and generating natural language, and medical assistants hold the potential to offer substantial benefits to individuals. However, the exploration of LLM-based personalized medical assistants remains relatively scarce. Patients typically converse differently depending on their background and preferences, which necessitates building user-oriented medical assistants. While one could fully train an LLM for this objective, the resource cost is unaffordable. Prior research has explored memory-based methods that enhance responses to new queries by drawing on previous mistakes within a dialogue session. We contend that a mere memory module is inadequate and that fully training an LLM is excessively costly. In this study, we propose a novel computational bionic memory mechanism, equipped with a parameter-efficient fine-tuning (PEFT) schema, to personalize medical assistants.
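
The abstract does not include code, so the sketch below only illustrates the general shape of the idea: a per-user PEFT adapter (here a LoRA adapter via the Hugging Face `peft` library, one common PEFT schema) combined with a coordinated short-term (session) and long-term (persistent) memory. The `DialogueMemory` class, the `gpt2` placeholder model, and the LoRA hyperparameters are illustrative assumptions, not the paper's actual mechanism.

```python
# Minimal sketch (not the paper's implementation): a LoRA-style PEFT
# adapter plus a hypothetical short-/long-term memory coordinator.
from collections import deque

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model


class DialogueMemory:
    """Hypothetical store coordinating a short-term buffer (recent
    turns in the current session) with a long-term store of distilled,
    persistent user facts."""

    def __init__(self, short_term_size: int = 8):
        self.short_term = deque(maxlen=short_term_size)  # recent dialogue turns
        self.long_term = []                              # distilled user facts

    def add_turn(self, turn: str) -> None:
        self.short_term.append(turn)

    def consolidate(self, fact: str) -> None:
        # Promote a fact learned during the session into long-term memory.
        self.long_term.append(fact)

    def build_context(self, query: str) -> str:
        # Prepend long-term facts and recent turns to the new query.
        parts = ["[User profile] " + f for f in self.long_term]
        parts += list(self.short_term)
        parts.append(query)
        return "\n".join(parts)


# Wrap a base LLM with a LoRA adapter so only a small set of per-user
# parameters is trained, instead of fully fine-tuning the model.
base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained("gpt2")

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM"
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # confirms only adapter weights are trainable

memory = DialogueMemory()
memory.consolidate("Patient prefers concise, non-technical explanations.")
memory.add_turn("User: I've had mild headaches for a week.")
prompt = memory.build_context("User: Should I be worried?")

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

In this sketch the memory module shapes the prompt at inference time while the lightweight adapter carries the per-user parameters; the division of labor between the two is the assumption being illustrated, and the paper's own mechanism may differ.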
