Measuring And Improving Attentiveness To Partial Inputs With Counterfactuals

Yanai Elazar, Bhargavi Paranjape, Hao Peng, Sarah Wiegreffe, Khyathi Raghavi, Vivek Srikumar, Sameer Singh, Noah A. Smith. Arxiv 2023

[Paper]    
GPT, In Context Learning, Model Architecture, Prompting, Training Techniques

The inevitable appearance of spurious correlations in training datasets hurts the generalization of NLP models on unseen data. Previous work has found that datasets with paired inputs are prone to correlations between a specific part of the input (e.g., the hypothesis in NLI) and the label; consequently, models trained on only that part of the input outperform chance. Are these correlations picked up by models trained on the full input data? To address this question, we propose a new evaluation method, the Counterfactual Attentiveness Test (CAT). CAT uses counterfactuals by replacing part of the input with its counterpart from a different example (subject to some restrictions), expecting an attentive model to change its prediction. Using CAT, we systematically investigate established supervised and in-context learning models on ten datasets spanning four tasks: natural language inference, reading comprehension, paraphrase detection, and visual & language reasoning. CAT reveals that reliance on such correlations is mainly data-dependent. Surprisingly, we find that GPT-3 becomes less attentive with an increased number of demonstrations, while its accuracy on the test data improves. Our results demonstrate that augmenting training or demonstration data with counterfactuals is effective in improving models’ attentiveness. We also show that measuring attentiveness with CAT leads to different conclusions than solely measuring correlations in the data.
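
Below is a minimal sketch of how a CAT-style attentiveness check could look for an NLI-style paired-input task. It is an illustration under assumptions, not the paper's exact protocol: the `model_predict` callable and the choice to swap the premise are placeholders, and the paper's restrictions on which counterparts are valid swaps are omitted.

```python
import random

def cat_attentiveness(examples, model_predict, seed=0):
    """Estimate attentiveness via counterfactual premise swaps.

    examples: list of dicts with keys 'premise' and 'hypothesis'.
    model_predict: callable (premise, hypothesis) -> predicted label.
    Returns the fraction of examples whose prediction changes when the
    premise is replaced with one drawn from a different example.
    """
    rng = random.Random(seed)
    flips = 0
    for i, ex in enumerate(examples):
        # Draw a counterfactual premise from a *different* example.
        j = rng.randrange(len(examples) - 1)
        if j >= i:
            j += 1
        counterfactual_premise = examples[j]["premise"]

        original = model_predict(ex["premise"], ex["hypothesis"])
        swapped = model_predict(counterfactual_premise, ex["hypothesis"])
        flips += int(original != swapped)

    # An attentive model should usually change its prediction after the
    # swap; a model relying on hypothesis-only shortcuts will not.
    return flips / len(examples)
```

Under this reading, a higher flip rate indicates greater attentiveness to the swapped part of the input, while a low flip rate suggests the model's prediction is driven by the unchanged part alone.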

Similar Work