Think Twice Before Trusting: Self-detection For Large Language Models Through Comprehensive Answer Reflection

Li Moxin, Wang Wenjie, Feng Fuli, Zhu Fengbin, Wang Qifan, Chua Tat-seng. arXiv 2024

[Paper]
RAG Tools

Self-detection for Large Language Models (LLMs) seeks to evaluate the trustworthiness of LLM outputs by leveraging the model's own capabilities, alleviating the hallucination issue. However, existing self-detection approaches only retrospectively evaluate answers generated by the LLM, typically leading to over-trust in incorrectly generated answers. To tackle this limitation, we propose a novel self-detection paradigm that considers the comprehensive answer space beyond LLM-generated answers. It thoroughly compares the trustworthiness of multiple candidate answers to mitigate over-trust in LLM-generated incorrect answers. Building on this paradigm, we introduce a two-step framework, which first instructs the LLM to reflect on and provide justifications for each candidate answer, and then aggregates the justifications into a comprehensive evaluation of the target answer. This framework can be seamlessly integrated with existing approaches for superior self-detection. Extensive experiments on six datasets spanning three tasks demonstrate the effectiveness of the proposed framework.
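The two-step framework can be sketched roughly as follows. This is a minimal illustration based only on the abstract, not the authors' implementation: `query_llm` is a hypothetical hook for any chat-completion backend, and the prompt wording and the trustworthy/untrustworthy verdict format are assumptions.

```python
from typing import Callable, List

# Hypothetical hook: plug in any chat-completion backend here.
QueryFn = Callable[[str], str]


def reflect_on_candidates(query_llm: QueryFn, question: str,
                          candidates: List[str]) -> List[str]:
    """Step 1: instruct the LLM to reflect on and justify each candidate answer."""
    justifications = []
    for answer in candidates:
        prompt = (
            f"Question: {question}\n"
            f"Candidate answer: {answer}\n"
            "Reflect on whether this candidate answer is correct "
            "and provide a brief justification."
        )
        justifications.append(query_llm(prompt))
    return justifications


def evaluate_target_answer(query_llm: QueryFn, question: str,
                           candidates: List[str], justifications: List[str],
                           target: str) -> str:
    """Step 2: aggregate the justifications to evaluate the target answer."""
    reflections = "\n".join(
        f"- Candidate: {a}\n  Justification: {j}"
        for a, j in zip(candidates, justifications)
    )
    prompt = (
        f"Question: {question}\n"
        f"Reflections on candidate answers:\n{reflections}\n"
        f"Target answer: {target}\n"
        "Considering all reflections, judge whether the target answer "
        "is trustworthy. Reply with 'trustworthy' or 'untrustworthy'."
    )
    return query_llm(prompt)
```

In this reading of the paradigm, the candidate set covers the answer space beyond the single LLM-generated answer, e.g. the options of a multiple-choice question or alternative sampled generations, so the evaluation is not anchored to one possibly incorrect output.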

Similar Work