Facilitating Human-LLM Collaboration Through Factuality Scores And Source Attributions

Do Hyo Jin, Ostrand Rachel, Weisz Justin D., Dugan Casey, Sattigeri Prasanna, Wei Dennis, Murugesan Keerthiram, Geyer Werner. arXiv 2024

[Paper]    

While humans increasingly rely on large language models (LLMs), these models are susceptible to generating inaccurate or false information, also known as “hallucinations”. Technical advancements have been made in algorithms that detect hallucinated content by assessing the factuality of the model’s responses and attributing sections of those responses to specific source documents. However, there is limited research on how to effectively communicate this information to users in ways that help them appropriately calibrate their trust in LLMs. To address this issue, we conducted a scenario-based study (N=104) to systematically compare the impact of various design strategies for communicating factuality and source attribution on participants’ ratings of trust, preferences, and ease in validating response accuracy. Our findings reveal that participants preferred a design in which phrases within a response were color-coded based on the computed factuality scores. Additionally, participants gave higher trust ratings when relevant sections of the source material were highlighted or when responses were annotated with reference numbers corresponding to those sources, compared to when the source material carried no annotation. Our study offers practical design guidelines to facilitate human-LLM collaboration, and it promotes a new human role in which users carefully evaluate and take responsibility for their use of LLM outputs.
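As an illustration of the color-coding design that participants preferred, the sketch below shows one hypothetical way to render phrase-level factuality scores as color-coded HTML spans. The score thresholds, color palette, data format, and function names are assumptions for illustration only, not details taken from the paper.

```python
# Hypothetical sketch: render phrase-level factuality scores as color-coded HTML.
# Thresholds, colors, and the (phrase, score) input format are illustrative
# assumptions, not the specific design evaluated in the study.
from html import escape


def score_to_color(score: float) -> str:
    """Map a factuality score in [0, 1] to a background color (assumed thresholds)."""
    if score >= 0.8:
        return "#c8e6c9"  # green: likely supported by the sources
    if score >= 0.5:
        return "#fff9c4"  # yellow: uncertain
    return "#ffcdd2"      # red: likely unsupported / hallucinated


def render_response(phrases: list[tuple[str, float]]) -> str:
    """Wrap each (phrase, score) pair in a colored <span>, with the score as a tooltip."""
    spans = [
        f'<span style="background-color:{score_to_color(s)}" title="factuality: {s:.2f}">'
        f"{escape(p)}</span>"
        for p, s in phrases
    ]
    return " ".join(spans)


if __name__ == "__main__":
    demo = [
        ("The Eiffel Tower is in Paris.", 0.95),
        ("It was completed in 1889.", 0.78),
        ("It is made entirely of copper.", 0.12),
    ]
    print(render_response(demo))
```

A real interface would also need the annotations the study compared against color-coding, such as highlighting the supporting passage in the source document or attaching reference numbers that link each phrase to its source.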

Similar Work