Explicit Inductive Inference Using Large Language Models

Liu Tianyang, Li Tianyi, Cheng Liang, Steedman Mark. arXiv, 2024

[Paper]    
Ethics And Bias · RAG · Reinforcement Learning

Large Language Models (LLMs) are reported to exhibit an undesirable attestation bias on inference tasks: when asked to predict whether a premise P entails a hypothesis H, instead of evaluating H's conditional truth given P, LLMs tend to fall back on the out-of-context truth label of H as a fragile proxy. In this paper, we propose a pipeline that exploits this bias to perform explicit inductive inference. Our pipeline uses an LLM to transform a premise into a set of attested alternatives, and then aggregates the answers to the derived entailment inquiries to support the original inference prediction. On a directional predicate entailment benchmark, we demonstrate that this simple pipeline improves the overall performance of LLMs on inference and substantially alleviates the impact of their attestation bias.
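
The abstract's pipeline can be sketched in a few steps. Below is a minimal Python illustration, assuming a generic `query_llm` helper (hypothetical; not the authors' released code) and majority voting as an assumed aggregation rule; the paper's prompts and aggregation may differ.

```python
from collections import Counter


def query_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call (hypothetical)."""
    raise NotImplementedError


def derive_inquiries(premise: str, hypothesis: str, n: int = 5) -> list[tuple[str, str]]:
    # Ask the LLM to instantiate the premise's arguments with real-world
    # entities so each alternative premise is attested, applying the same
    # substitution to the hypothesis.
    prompt = (
        f"Give {n} versions of the sentence pair below, replacing the "
        "arguments with real-world entities so that each new premise is "
        "plausibly true. Output one pair per line as 'premise ||| hypothesis'.\n"
        f"Premise: {premise}\nHypothesis: {hypothesis}"
    )
    pairs = []
    for line in query_llm(prompt).splitlines():
        if "|||" in line:
            p, h = line.split("|||", 1)
            pairs.append((p.strip(), h.strip()))
    return pairs


def entails(premise: str, hypothesis: str) -> bool:
    # A single derived entailment inquiry.
    prompt = (
        f"Premise: {premise}\nHypothesis: {hypothesis}\n"
        "Does the premise entail the hypothesis? Answer yes or no."
    )
    return query_llm(prompt).strip().lower().startswith("yes")


def inductive_entailment(premise: str, hypothesis: str) -> bool:
    # Aggregate the answers over all attested alternatives
    # (simple majority vote assumed here).
    inquiries = derive_inquiries(premise, hypothesis)
    votes = Counter(entails(p, h) for p, h in inquiries)
    return votes[True] > votes[False]
```

The intuition is that because the LLM answers more reliably when the premise is attested (out-of-context true), rephrasing the original premise into attested instantiations turns the model's bias into a feature rather than a confound.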
