TrustScore: Reference-Free Evaluation of LLM Response Trustworthiness

Zheng Danna, Liu Danyang, Lapata Mirella, Pan Jeff Z. arXiv 2024

[Paper]
Applications, Prompting, Reinforcement Learning, Tools

Large Language Models (LLMs) have demonstrated impressive capabilities across various domains, prompting a surge in their practical applications. However, concerns have arisen regarding the trustworthiness of LLM outputs, particularly in closed-book question-answering tasks, where non-experts may struggle to identify inaccuracies due to the absence of contextual or ground-truth information. This paper introduces TrustScore, a framework based on the concept of Behavioral Consistency, which evaluates whether an LLM's response aligns with its intrinsic knowledge. Additionally, TrustScore can seamlessly integrate with fact-checking methods, which assess alignment with external knowledge sources. Experimental results show that TrustScore correlates strongly with human judgments, surpassing existing reference-free metrics and achieving results on par with reference-based metrics.
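The Behavioral Consistency idea lends itself to a compact illustration: re-query the model about its own answer and measure how often it stands by it. The sketch below is an illustrative approximation under assumptions, not the paper's exact procedure; `query_llm`, the multiple-choice probing format, the distractor list, and the scoring rule are all placeholders introduced here.

```python
# Minimal sketch of a behavioral-consistency check in the spirit of TrustScore.
# ASSUMPTIONS: `query_llm` stands in for any text-in/text-out LLM call; the
# multiple-choice probing format and the agreement score are illustrative.
import random

def behavioral_consistency(question: str, response: str,
                           distractors: list[str],
                           query_llm, n_probes: int = 5) -> float:
    """Re-ask the model as a multiple-choice question n_probes times and
    return the fraction of probes in which it re-selects its own response."""
    labels = "ABCDEFGH"
    agreements = 0
    for _ in range(n_probes):
        options = distractors + [response]
        random.shuffle(options)  # vary option order across probes
        body = "\n".join(f"{labels[i]}. {opt}" for i, opt in enumerate(options))
        prompt = (f"{question}\n{body}\n"
                  "Reply with the letter of the correct option only.")
        choice = query_llm(prompt).strip()[:1].upper()
        if choice == labels[options.index(response)]:
            agreements += 1
    return agreements / n_probes  # 1.0 = fully self-consistent
```

Under this reading, a score near 1 indicates the response is consistent with the model's intrinsic knowledge; per the abstract, the full framework can additionally combine such a signal with fact-checking against external knowledge sources.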

Similar Work