
Leveraging ChatGPT in Pharmacovigilance Event Extraction: An Empirical Study

Sun Zhaoyue, Pergola Gabriele, Wallace Byron C., He Yulan. arXiv 2024

[Paper]    
Tags: Applications, Fine-Tuning, GPT, Model Architecture, Pretraining Methods, Prompting, RAG, Training Techniques

With the advent of large language models (LLMs), there has been growing interest in exploring their potential for medical applications. This research investigates the ability of LLMs, specifically ChatGPT, in pharmacovigilance event extraction, whose main goal is to identify and extract adverse events or potential therapeutic events from textual medical sources. We conduct extensive experiments to assess the performance of ChatGPT on the pharmacovigilance event extraction task, employing various prompts and demonstration selection strategies. The findings show that while ChatGPT achieves reasonable performance with appropriate demonstration selection strategies, it still falls short of fully fine-tuned small models. Additionally, we explore the potential of leveraging ChatGPT for data augmentation. However, our investigation reveals that incorporating synthesized data into fine-tuning may lead to a decrease in performance, possibly attributable to noise in the ChatGPT-generated labels. To mitigate this, we explore different filtering strategies and find that, with the proper approach, more stable performance can be achieved, although consistent improvement remains elusive.
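The abstract refers to demonstration selection strategies for prompting ChatGPT. The sketch below is a minimal illustration of one generic strategy of this kind (selecting few-shot demonstrations by lexical similarity to the query and assembling them into an extraction prompt); it is not the paper's specific method, and the example pool, Jaccard similarity metric, and prompt format are illustrative assumptions.

```python
# Sketch: similarity-based demonstration selection for few-shot event extraction.
# The labeled pool, similarity metric, and prompt template are assumptions for
# illustration only, not the strategies evaluated in the paper.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two texts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def select_demonstrations(query: str, pool: list[dict], k: int = 2) -> list[dict]:
    """Pick the k pool examples most lexically similar to the query."""
    return sorted(pool, key=lambda ex: jaccard(query, ex["text"]), reverse=True)[:k]

def build_prompt(query: str, demos: list[dict]) -> str:
    """Assemble instructions, selected demonstrations, and the query."""
    header = (
        "Extract adverse events and potential therapeutic events from the text.\n"
        "Return each event as 'drug -> event'.\n\n"
    )
    shots = "".join(f"Text: {ex['text']}\nEvents: {ex['events']}\n\n" for ex in demos)
    return header + shots + f"Text: {query}\nEvents:"

if __name__ == "__main__":
    # Hypothetical labeled pool; in practice demonstrations come from training data.
    pool = [
        {"text": "Patient developed a rash after starting amoxicillin.",
         "events": "amoxicillin -> rash"},
        {"text": "Severe nausea was reported following chemotherapy with cisplatin.",
         "events": "cisplatin -> nausea"},
        {"text": "Headache resolved after switching anticoagulants.",
         "events": "anticoagulant -> headache"},
    ]
    query = "The patient reported dizziness and nausea after taking metformin."
    prompt = build_prompt(query, select_demonstrations(query, pool, k=2))
    print(prompt)  # this prompt would then be sent to ChatGPT via an API client
```

In practice, demonstration selection is often done with dense embedding similarity rather than token overlap; the simpler metric is used here only to keep the sketch self-contained.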

Similar Work