TTQA-RS: A Break-down Prompting Approach For Multi-hop Table-text Question Answering With Reasoning And Summarization

Bardhan Jayetri, Xiao Bushi, Wang Daisy Zhe. arXiv 2024

[Paper]    
Applications, GPT, Model Architecture, Prompting, Training Techniques

Question answering (QA) over tables and text has gained popularity in recent years. Multi-hop table-text QA requires multiple hops between the table and the text, making it a challenging QA task. Although several works have attempted to solve table-text QA, most involve training models and require labeled data. In this paper, we propose TTQA-RS, a break-down prompting approach for multi-hop table-text question answering with reasoning and summarization. Our model augments the prompt with a table-text summary and with decomposed sub-questions and their answers to support reasoning-based table-text QA. Using open-source language models, our model outperformed all existing prompting methods for table-text QA on existing datasets such as HybridQA and OTT-QA's development set. Our results are comparable with those of training-based state-of-the-art models, demonstrating the potential of prompt-based approaches using open-source LLMs. Additionally, by using GPT-4 with LLaMA3-70B, our model achieved state-of-the-art performance among prompting-based methods on multi-hop table-text QA.
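The abstract describes the pipeline only at a high level, so the sketch below shows one plausible way to wire up such a break-down prompting loop. It is a minimal illustration, not the authors' code: the `llm` callable, the prompt wording, and the sub-question loop are all hypothetical stand-ins for whatever prompts and models the paper actually uses.

```python
# Illustrative break-down prompting pipeline in the spirit of TTQA-RS.
# Assumptions: `llm` is any text-in/text-out language model wrapper,
# the table is pre-linearized, and prompts are simplified placeholders.
from typing import Callable, List


def ttqa_rs_answer(
    question: str,
    table: str,                 # linearized table, e.g. "col1: v1 | col2: v2"
    passages: List[str],        # text passages linked to the table
    llm: Callable[[str], str],  # hypothetical model interface
) -> str:
    context = table + "\n" + "\n".join(passages)

    # Step 1 (summarization): condense the table-text evidence so later
    # prompts reason over a compact summary instead of the raw inputs.
    summary = llm(f"Summarize the following table and text:\n{context}")

    # Step 2 (break-down): split the multi-hop question into sub-questions.
    decomposition = llm(
        "Decompose this multi-hop question into simpler sub-questions, "
        f"one per line:\n{question}"
    )
    sub_questions = [q.strip() for q in decomposition.splitlines() if q.strip()]

    # Step 3 (reasoning): answer each sub-question in turn, feeding earlier
    # sub-question/answer pairs back in as augmented knowledge.
    qa_trace = ""
    for sub_q in sub_questions:
        sub_a = llm(
            f"Summary:\n{summary}\n\nKnown facts:\n{qa_trace}\n"
            f"Answer concisely: {sub_q}"
        )
        qa_trace += f"Q: {sub_q}\nA: {sub_a}\n"

    # Step 4: answer the original question using the summary plus the
    # accumulated sub-question/answer pairs.
    return llm(
        f"Summary:\n{summary}\n\nSub-questions and answers:\n{qa_trace}\n"
        f"Using the above, answer: {question}"
    )
```

Keeping summarization, decomposition, and per-hop answering as separate prompts means each model call handles only one step of the multi-hop chain, which matches the abstract's claim that break-down prompting lets open-source models compete with training-based systems.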

Similar Work