
Gendec: A Robust Generative Question-decomposition Method For Multi-hop Reasoning

Wu Jian, Yang Linyi, Ji Yuliang, Huang Wenhao, Karlsson Börje F., Okumura Manabu. arXiv 2024

[Paper]    
Applications, Fine Tuning, GPT, Model Architecture, RAG, Reinforcement Learning, Security

Multi-hop QA (MHQA) involves step-by-step reasoning to answer complex questions and to find multiple relevant supporting facts. However, the reasoning ability of existing large language models (LLMs) in multi-hop question answering remains underexplored and is often inadequate for answering multi-hop questions. Moreover, it is unclear whether LLMs follow a desired reasoning chain to reach the right final answer. In this paper, we propose a generative question decomposition method (GenDec) from the perspective of explainable QA, which generates independent and complete sub-questions based on additional extracted evidence to enhance LLMs’ reasoning ability in RAG. To demonstrate the impact, generalization, and robustness of GenDec, we conduct two experiments: first, we combine GenDec with small QA systems on paragraph retrieval and QA tasks; second, we examine the reasoning capabilities of various state-of-the-art LLMs, including GPT-4 and GPT-3.5, combined with GenDec. We experiment on the HotpotQA, 2WikihopMultiHopQA, MuSiQue, and PokeMQA datasets.
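
The abstract describes a pipeline in which sub-questions are generated from retrieved evidence and then answered before producing a final answer. The Python sketch below illustrates one way such a decompose-then-answer loop could be wired up; it is not the paper's released code, and the names `call_llm`, `decompose`, and `answer_with_decomposition` are hypothetical placeholders introduced here for illustration.

```python
# Illustrative sketch of a GenDec-style decompose-then-answer pipeline.
# `call_llm` is a hypothetical stand-in for any LLM backend (e.g., GPT-4 or GPT-3.5).

from typing import List


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API or local model."""
    raise NotImplementedError("plug in an LLM backend here")


def decompose(question: str, evidence: List[str]) -> List[str]:
    """Generate independent, complete sub-questions from the complex
    question and the extracted supporting evidence (the GenDec idea)."""
    prompt = (
        "Decompose the multi-hop question into independent sub-questions, "
        "one per line, using the evidence below.\n"
        f"Question: {question}\n"
        "Evidence:\n" + "\n".join(f"- {p}" for p in evidence)
    )
    return [q.strip() for q in call_llm(prompt).splitlines() if q.strip()]


def answer_with_decomposition(question: str, evidence: List[str]) -> str:
    """Answer each sub-question, then compose the final answer."""
    sub_questions = decompose(question, evidence)
    sub_answers = [call_llm(f"Answer briefly: {sq}") for sq in sub_questions]
    trace = "\n".join(f"{sq} -> {sa}" for sq, sa in zip(sub_questions, sub_answers))
    return call_llm(
        f"Given these sub-question answers:\n{trace}\n"
        f"Answer the original question: {question}"
    )
```

In this sketch the sub-questions serve both as an explainable reasoning chain (the trace can be inspected to check whether the model followed the intended hops) and as intermediate retrieval/QA targets for a downstream RAG system.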

Similar Work