Keeping Users Engaged During Repeated Administration Of The Same Questionnaire: Using Large Language Models To Reliably Diversify Questions

Yun Hye Sun, Arjmand Mehdi, Sherlock Phillip, Paasche-Orlow Michael K., Griffith James W., Bickmore Timothy. arXiv 2023

[Paper]    
Tags: Agentic, Ethics And Bias, Tools

Standardized, validated questionnaires are vital tools in research and healthcare, offering dependable self-report data. Prior work has shown that virtual agent-administered questionnaires are nearly equivalent to self-administered electronic forms. Although agent administration is engaging, repeated use of the same questionnaire in longitudinal or pre-post studies can induce respondent fatigue, degrading data quality through response biases and decreased response rates. We propose using large language models (LLMs) to generate diverse questionnaire versions while retaining good psychometric properties. In a longitudinal study, participants interacted with our agent system and responded daily for two weeks to one of the following: a standardized depression questionnaire, question variants generated by LLMs, or question variants accompanied by LLM-generated small talk. The responses were compared to a validated depression questionnaire. Psychometric testing revealed consistent covariation between the external criterion and the focal measure across all three conditions, demonstrating the reliability and validity of the LLM-generated variants. Participants found the variants significantly less repetitive than repeated administrations of the same standardized questionnaire. Our findings highlight the potential of LLM-generated variants to invigorate agent-administered questionnaires and foster engagement and interest, without compromising their validity.
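As a rough illustration of the variant-generation step described in the abstract, the sketch below shows how one might prompt an LLM to paraphrase a questionnaire item while preserving its meaning, time frame, and response scale. The prompt wording, the `llm` callable, and the example item are assumptions made for illustration only; they are not the authors' actual prompts, items, or pipeline.

```python
# Illustrative sketch (not the authors' code): generate semantically equivalent
# variants of a questionnaire item with an LLM, so repeated administrations can
# rotate wording while keeping the underlying construct intact.

from typing import Callable, List


def build_variant_prompt(item: str, n_variants: int) -> str:
    """Ask the model to paraphrase an item without changing its clinical
    meaning, its time frame, or its compatibility with the response scale."""
    return (
        f"Rewrite the following self-report questionnaire item in {n_variants} "
        "different ways. Preserve the clinical meaning, the time frame, and "
        "compatibility with the original response scale. "
        "Return one variant per line.\n\n"
        f"Item: {item}"
    )


def generate_item_variants(
    item: str,
    llm: Callable[[str], str],  # any text-generation function, e.g. an API client wrapper
    n_variants: int = 5,
) -> List[str]:
    """Call the LLM and parse one variant per non-empty line."""
    response = llm(build_variant_prompt(item, n_variants))
    variants = [line.strip("-• ").strip() for line in response.splitlines() if line.strip()]
    return variants[:n_variants]


if __name__ == "__main__":
    # Hypothetical depression-questionnaire item, used only for illustration.
    example_item = (
        "Over the last two weeks, how often have you felt little interest "
        "or pleasure in doing things?"
    )

    # Stub LLM so the sketch runs without an API key; swap in a real client.
    def fake_llm(prompt: str) -> str:
        return (
            "Over the past two weeks, how frequently did you find it hard to "
            "enjoy your usual activities?\n"
            "In the last two weeks, how often did things you normally like "
            "feel unappealing?"
        )

    for variant in generate_item_variants(example_item, fake_llm, n_variants=2):
        print(variant)
```

In practice, `fake_llm` would be replaced with a call to an actual LLM API, and generated variants would still need the kind of psychometric validation against an external criterion that the study describes before being used in place of the standardized items.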

Similar Work