
HRoT: Hybrid Prompt Strategy and Retrieval of Thought for Table-Text Hybrid Question Answering

Luo Tongxu, Lei Fangyu, Lei Jiahe, Liu Weihao, He Shizhu, Zhao Jun, Liu Kang. arXiv 2023

[Paper]    
Applications Attention Mechanism Few Shot In Context Learning Model Architecture Prompting Reinforcement Learning

Answering numerical questions over hybrid content drawn from tables and text (TextTableQA) is a challenging task. Large Language Models (LLMs) have recently gained significant attention in the NLP community, and In-Context Learning and Chain-of-Thought prompting have become two particularly active research topics in this area. In this paper, we introduce a new prompting strategy, Hybrid prompt strategy and Retrieval of Thought, for TextTableQA. Through In-Context Learning, we prompt the model to develop retrieval-of-thought reasoning when dealing with hybrid data. Our method outperforms the fully-supervised SOTA on the MultiHiertt dataset in the few-shot setting.
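
The sketch below illustrates the general retrieval-of-thought idea described above: retrieve the most similar chain-of-thought exemplar for a question and compose it with the linearized table and supporting text into a single hybrid prompt. This is a minimal illustration, not the authors' implementation; the exemplar pool, the string-similarity retriever, and the helper names (`EXEMPLARS`, `retrieve_exemplar`, `build_hybrid_prompt`) are all hypothetical stand-ins, and the paper's actual retrieval and prompt layout may differ.

```python
# Minimal sketch of retrieval-of-thought prompt construction for
# table-text QA. All names and data here are illustrative assumptions,
# not the HRoT paper's implementation.
from difflib import SequenceMatcher

# Hypothetical pool of (question, chain-of-thought, answer) exemplars.
EXEMPLARS = [
    {
        "question": "What was the change in revenue from 2019 to 2020?",
        "thought": "Revenue in 2020 is 120; revenue in 2019 is 100; 120 - 100 = 20.",
        "answer": "20",
    },
    {
        "question": "What is the average operating cost over 2018-2020?",
        "thought": "Costs are 30, 40, and 50; (30 + 40 + 50) / 3 = 40.",
        "answer": "40",
    },
]

def retrieve_exemplar(question: str) -> dict:
    """Pick the exemplar whose question is most similar to the input.
    A real system would use a learned retriever; simple string
    similarity is a stand-in here."""
    return max(
        EXEMPLARS,
        key=lambda ex: SequenceMatcher(None, question, ex["question"]).ratio(),
    )

def build_hybrid_prompt(table: str, text: str, question: str) -> str:
    """Compose a few-shot prompt that mixes one retrieved chain-of-thought
    demonstration with the linearized table and the supporting text."""
    ex = retrieve_exemplar(question)
    return (
        f"Question: {ex['question']}\n"
        f"Reasoning: {ex['thought']}\n"
        f"Answer: {ex['answer']}\n\n"
        f"Table:\n{table}\n"
        f"Text:\n{text}\n"
        f"Question: {question}\n"
        f"Reasoning:"
    )

if __name__ == "__main__":
    table = "year | revenue\n2019 | 100\n2020 | 120"
    text = "Revenue figures are reported in millions of dollars."
    print(build_hybrid_prompt(table, text, "How much did revenue grow in 2020?"))
```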
