Conversational Question Answering With Reformulations Over Knowledge Graph

Lihui Liu, Blaine Hill, Boxin Du, Fei Wang, Hanghang Tong. arXiv 2023

[Paper]    
Agentic Applications · Model Architecture · Reinforcement Learning

Conversational question answering (ConvQA) over knowledge graphs (KGs) involves answering multi-turn natural language questions about information contained in a KG. State-of-the-art ConvQA methods often struggle with inexplicit question-answer pairs: inputs that are easy for humans to understand given the conversation history, but hard for a machine to interpret, which can degrade ConvQA performance. To address this problem, we propose a reinforcement learning (RL) based model, CornNet, which uses question reformulations generated by large language models (LLMs) to improve ConvQA performance. CornNet adopts a teacher-student architecture in which a teacher model learns question representations from human-written reformulations, and a student model learns to mimic the teacher's output using reformulations generated by LLMs. The learned question representation is then used by an RL model to locate the correct answer in the KG. Extensive experimental results show that CornNet outperforms state-of-the-art ConvQA models.
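The abstract describes the teacher-student distillation only at a high level. The sketch below illustrates the general idea under stated assumptions: it is not CornNet's actual implementation, and the encoder architecture, loss function, and all names (`QuestionEncoder`, the mean-pooling design, the MSE objective) are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

class QuestionEncoder(nn.Module):
    """Toy question encoder: embeds token ids and mean-pools to a fixed vector.
    A stand-in for whatever encoder the paper actually uses."""
    def __init__(self, vocab_size: int = 1000, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> question representation (batch, dim)
        return self.proj(self.embed(token_ids).mean(dim=1))

teacher = QuestionEncoder()  # trained on human-written reformulations
student = QuestionEncoder()  # sees only LLM-generated reformulations

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

# Dummy batch: parallel reformulations of the same underlying questions.
human_refs = torch.randint(0, 1000, (8, 12))
llm_refs = torch.randint(0, 1000, (8, 12))

for step in range(100):
    with torch.no_grad():
        target = teacher(human_refs)   # teacher representation (frozen here)
    pred = student(llm_refs)           # student mimics the teacher's output
    loss = mse(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the full model, the learned question representation then conditions an RL agent that traverses the KG to locate the answer; that component is omitted from this sketch.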

Similar Work