
Tailoring Vaccine Messaging With Common-ground Opinions

Stureborg Rickard, Chen Sanxing, Xie Ruoyu, Patel Aayushi, Li Christopher, Zhu Chloe Qinyu, Hu Tingnan, Yang Jun, Dhingra Bhuwan. arXiv 2024

[Paper] [Code]
BERT GPT Has Code Model Architecture Reinforcement Learning

One way to personalize chatbot interactions is by establishing common ground with the intended reader. A domain where establishing mutual understanding could be particularly impactful is vaccine concerns and misinformation. Vaccine interventions are forms of messaging that aim to address concerns expressed about vaccination. Tailoring responses in this domain is difficult, since opinions often have seemingly little ideological overlap. We define the task of tailoring vaccine interventions to a Common-Ground Opinion (CGO): meaningfully improving a response by relating it to an opinion or belief the reader holds. In this paper we introduce TAILOR-CGO, a dataset for evaluating how well responses are tailored to provided CGOs. We benchmark several major LLMs on this task, finding that GPT-4-Turbo performs significantly better than the others. We also build automatic evaluation metrics, including an efficient and accurate BERT model that outperforms finetuned LLMs; investigate how to successfully tailor vaccine messaging to CGOs; and provide actionable recommendations from this investigation.

Code and model weights: https://github.com/rickardstureborg/tailor-cgo
Dataset: https://huggingface.co/datasets/DukeNLP/tailor-cgo
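For readers who want to inspect the data, below is a minimal sketch of loading the TAILOR-CGO dataset from the Hugging Face Hub with the `datasets` library. The split and column names are not specified on this page, so the snippet only prints the dataset structure and a single example rather than assuming a schema.

```python
# Minimal sketch (not from the paper's repository): load TAILOR-CGO from the
# Hugging Face Hub. See https://huggingface.co/datasets/DukeNLP/tailor-cgo
# for the actual splits and columns.
from datasets import load_dataset

dataset = load_dataset("DukeNLP/tailor-cgo")  # repo id taken from the dataset link above
print(dataset)  # shows available splits and their column names

# Peek at one example from whichever split is present.
first_split = next(iter(dataset.values()))
print(first_split[0])
```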

Similar Work