LLM-Based Multi-Hop Question Answering with Knowledge Graph Integration in Evolving Environments

Ruirui Chen, Weifeng Jiang, Chengwei Qin, Ishaan Singh Rawal, Cheston Tan, Dongkyu Choi, Bo Xiong, Bo Ai. arXiv 2024

[Paper]
Applications, RAG, Reinforcement Learning, Tools

The rapid obsolescence of information in Large Language Models (LLMs) has driven the development of various techniques for incorporating new facts. However, existing methods for knowledge editing still struggle with multi-hop questions that require accurate fact identification and sequential logical reasoning, particularly when many facts have been updated. To tackle these challenges, this paper introduces Graph Memory-based Editing for Large Language Models (GMeLLo), a straightforward and effective method that merges the explicit knowledge representation of Knowledge Graphs (KGs) with the linguistic flexibility of LLMs. Beyond merely leveraging LLMs for question answering, GMeLLo uses them to convert free-form language into structured queries and fact triples, enabling seamless interaction with KGs for rapid updates and precise multi-hop reasoning. Our results show that GMeLLo significantly outperforms current state-of-the-art knowledge editing methods on the multi-hop question answering benchmark MQuAKE, especially in scenarios with extensive knowledge edits.
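The core mechanism the abstract describes is LLM-driven translation between free-form text and KG structures: edits become fact triples applied to the graph, and questions become structured queries over it. Below is a minimal Python sketch of that pipeline using rdflib as a toy triple store. The `llm_to_triple` and `llm_to_sparql` helpers, the example entities, and the `http://example.org/` vocabulary are all hypothetical stand-ins for the paper's actual prompted LLM calls and KG schema, not its implementation.

```python
# Minimal sketch of a GMeLLo-style edit-then-query loop, assuming rdflib as
# the knowledge graph backend. The two llm_* functions are placeholders for
# prompted LLM calls; their outputs are hard-coded for illustration.
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/")

def llm_to_triple(edit_sentence: str) -> tuple[URIRef, URIRef, URIRef]:
    """Placeholder: an LLM would parse a free-form fact edit into a
    (subject, relation, object) triple over the KG vocabulary."""
    return (EX.UnitedKingdom, EX.headOfGovernment, EX.KeirStarmer)

def llm_to_sparql(question: str) -> str:
    """Placeholder: an LLM would translate a multi-hop question into a
    structured SPARQL query instead of answering from parametric memory."""
    return """
        PREFIX ex: <http://example.org/>
        SELECT ?leader WHERE {
            ex:BorisJohnson ex:citizenOf ?country .
            ?country ex:headOfGovernment ?leader .
        }
    """

# Build a toy KG holding the (soon to be outdated) facts.
kg = Graph()
kg.add((EX.BorisJohnson, EX.citizenOf, EX.UnitedKingdom))
kg.add((EX.UnitedKingdom, EX.headOfGovernment, EX.BorisJohnson))

# Apply a knowledge edit: drop any stale triple for the same
# (subject, relation) pair, then insert the updated fact.
s, p, o = llm_to_triple("The UK's head of government is Keir Starmer.")
kg.remove((s, p, None))
kg.add((s, p, o))

# Answer a 2-hop question by querying the edited KG.
query = llm_to_sparql(
    "Who is the head of government of the country Boris Johnson is a citizen of?"
)
for row in kg.query(query):
    print(row.leader)  # -> http://example.org/KeirStarmer
```

The design point this illustrates: updates land in the external graph (remove the stale (subject, relation, *) triple, add the new one) rather than in the model's weights, so multi-hop answers are derived by structured queries over facts that are always current.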
