Mirror: A Multiple-perspective Self-reflection Method For Knowledge-rich Reasoning

Yan Hanqi, Zhu Qinglin, Wang Xinyu, Gui Lin, He Yulan. arXiv 2024

[Paper]    
Agentic · Efficiency And Optimization · RAG · Reinforcement Learning · Uncategorized

While large language models (LLMs) have the capability to iteratively reflect on their own outputs, recent studies have observed their struggles with knowledge-rich problems when external resources are unavailable. Beyond LLMs' inefficiency in self-assessment, we also observe that they struggle to revise their predictions even after receiving explicit negative feedback. We therefore propose Mirror, a Multiple-perspective self-reflection method for knowledge-rich reasoning, which keeps the model from getting stuck at a particular reflection iteration. Mirror enables LLMs to reflect from multiple-perspective clues through a heuristic interaction between a Navigator and a Reasoner. Without access to ground truth, it guides the agents toward diverse yet plausibly reliable reasoning trajectories by encouraging (1) diversity among the directions generated by the Navigator and (2) agreement among strategically induced perturbations in the responses generated by the Reasoner. Experiments on five reasoning datasets demonstrate Mirror's superiority over several contemporary self-reflection approaches, and the ablation studies clearly indicate that our strategies alleviate the aforementioned challenges.
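The abstract describes a two-agent loop: the Navigator proposes diverse reflection directions, the Reasoner answers under perturbations, and agreement among the perturbed responses stands in for ground-truth feedback. Below is a minimal, illustrative sketch of one such iteration, not the paper's implementation: the `llm` callable, the prompts, and the majority-vote agreement score are all assumptions filled in for concreteness.

```python
from collections import Counter
from random import choice
from typing import Callable

# Hypothetical text-in/text-out model call: (prompt, temperature) -> completion.
LLM = Callable[[str, float], str]

def navigator_directions(llm: LLM, question: str, n_dirs: int = 3) -> list[str]:
    """Navigator: sample reflection directions (clues) at high temperature
    to encourage diversity."""
    prompt = f"Suggest one distinct line of reasoning for: {question}"
    return [llm(prompt, 1.2) for _ in range(n_dirs)]

def reasoner_answers(llm: LLM, question: str, direction: str,
                     n_perturb: int = 4) -> list[str]:
    """Reasoner: answer under perturbations (here, plain resampling)."""
    prompt = f"Hint: {direction}\nQuestion: {question}\nAnswer concisely:"
    return [llm(prompt, 0.9) for _ in range(n_perturb)]

def agreement(answers: list[str]) -> tuple[str, float]:
    """Agreement among perturbed answers acts as a ground-truth-free signal."""
    top, count = Counter(a.strip().lower() for a in answers).most_common(1)[0]
    return top, count / len(answers)

def mirror_step(llm: LLM, question: str) -> str:
    """One reflection iteration: keep the direction whose answers agree most."""
    best_answer, best_score = "", -1.0
    for direction in navigator_directions(llm, question):
        answer, score = agreement(reasoner_answers(llm, question, direction))
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer

if __name__ == "__main__":
    # Toy stand-in model so the sketch runs end-to-end without an API.
    toy: LLM = lambda prompt, temp: choice(["42", "forty-two", "42"])
    print(mirror_step(toy, "What is 6 x 7?"))
```

A fuller implementation would also score the diversity of the Navigator's directions explicitly (e.g., by penalizing near-duplicate directions) and iterate `mirror_step` until the agreement signal stops improving.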

Similar Work