WangLab at MEDIQA-Chat 2023: Clinical Note Generation from Doctor-Patient Conversations Using Large Language Models

Giorgi John, Toma Augustin, Xie Ronald, Chen Sondra S., An Kevin R., Zheng Grace X., Wang Bo. arXiv 2023

[Paper]
BERT, Few Shot, GPT, In Context Learning, Model Architecture, Prompting

This paper describes our submission to the MEDIQA-Chat 2023 shared task for automatic clinical note generation from doctor-patient conversations. We report results for two approaches: the first fine-tunes a pre-trained language model (PLM) on the shared task data, and the second uses few-shot in-context learning (ICL) with a large language model (LLM). Both approaches achieve high performance as measured by automatic metrics (e.g., ROUGE, BERTScore) and ranked second and first, respectively, among all submissions to the shared task. Expert human evaluation indicates that notes generated via the ICL-based approach with GPT-4 are preferred about as often as human-written notes, making it a promising path toward automated note generation from doctor-patient conversations.
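The few-shot ICL approach amounts to placing a handful of (dialogue, note) pairs from the shared task training data in the prompt, followed by the target conversation. The sketch below illustrates this idea; it assumes the OpenAI Python client (openai >= 1.0) and the "gpt-4" model, and the prompt wording, number of in-context examples, and example selection are illustrative assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch of few-shot in-context learning (ICL) for clinical note
# generation. Assumes OPENAI_API_KEY is set in the environment; the prompt
# format and system message are hypothetical, not the paper's exact prompt.
from openai import OpenAI

client = OpenAI()


def build_few_shot_prompt(examples, dialogue):
    """Concatenate (dialogue, note) pairs as in-context examples, then the target dialogue."""
    parts = []
    for ex_dialogue, ex_note in examples:
        parts.append(f"Dialogue:\n{ex_dialogue}\n\nClinical note:\n{ex_note}\n")
    parts.append(f"Dialogue:\n{dialogue}\n\nClinical note:\n")
    return "\n".join(parts)


def generate_note(examples, dialogue, model="gpt-4"):
    """Generate a clinical note for `dialogue`, conditioning on a few in-context examples."""
    prompt = build_few_shot_prompt(examples, dialogue)
    response = client.chat.completions.create(
        model=model,
        temperature=0.0,  # deterministic decoding for reproducible evaluation
        messages=[
            {"role": "system",
             "content": "You write concise clinical notes from doctor-patient conversations."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content
```

Usage would pass a few (dialogue, note) pairs from the training split as `examples` and a held-out conversation as `dialogue`; the returned note can then be scored with ROUGE or BERTScore against the reference note.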

Similar Work