Towards Zero-shot And Few-shot Table Question Answering Using GPT-3

Pragya Srivastava, Tanuja Ganu, Saikat Guha. arXiv 2022

[Paper]    
Applications, Few Shot, Fine Tuning, GPT, Model Architecture, Multimodal Models, Pretraining Methods, Prompting, Reinforcement Learning, Training Techniques

We present very early results on using GPT-3 to perform question answering on tabular data. We find that stock pre-trained GPT-3 can zero-shot learn the table structure from a serialized JSON array-of-arrays representation and answer lookup queries and simple comparison questions in natural language without any fine-tuning. We further find that simple prompt engineering that includes few-shot static Q&A examples significantly improves accuracy. Lastly, we find that intermixing passage text improves accuracy even further on heterogeneous data. We apply our approach to a novel dataset of simple tables in newspaper infographics with promising results. Overall, we find much cause for optimism in this basic approach.
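The abstract does not include code, but the described setup is straightforward to sketch. Below is a minimal Python illustration of the two ingredients named above: serializing a table as a JSON array-of-arrays and building zero-shot or few-shot prompts. The helper names (`serialize_table`, `build_prompt`), the example table, and the exact prompt wording are our own assumptions; the paper does not specify its prompt template in the abstract.

```python
import json

def serialize_table(header, rows):
    """Serialize a table as a JSON array-of-arrays, header row first,
    matching the representation described in the paper."""
    return json.dumps([header] + rows)

def build_prompt(table_json, question, examples=None):
    """Build a prompt: the serialized table, optional static few-shot
    Q&A examples, then the target question."""
    parts = ["Table: " + table_json]
    for q, a in (examples or []):
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

# Hypothetical example table (not from the paper's infographics dataset).
header = ["Country", "GDP (trillion USD)", "Population (millions)"]
rows = [["USA", 23.0, 331], ["India", 3.2, 1380], ["Japan", 4.9, 126]]
table_json = serialize_table(header, rows)

# Zero-shot lookup query.
print(build_prompt(table_json, "What is the GDP of Japan?"))

# Few-shot: prepend static Q&A examples, which the paper reports
# significantly improves accuracy.
examples = [("What is the population of India?", "1380 million")]
print(build_prompt(table_json, "Which country has the largest GDP?", examples))
```

In practice the resulting string would be sent to GPT-3 through a completions-style API call; the call is omitted here to keep the sketch dependency-free and model-version-agnostic.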

Similar Work