Reframing Human-AI Collaboration for Generating Free-Text Explanations

Wiegreffe Sarah, Hessel Jack, Swayamdipta Swabha, Riedl Mark, Choi Yejin. arXiv 2021

[Paper]
Tags: Few Shot, GPT, Interpretability And Explainability, Model Architecture, Prompting, Reinforcement Learning, Uncategorized

Large language models are increasingly capable of generating fluent-appearing text with relatively little task-specific supervision. But can these models accurately explain classification decisions? We consider the task of generating free-text explanations using human-written examples in a few-shot manner. We find that (1) authoring higher quality prompts results in higher quality generations; and (2) surprisingly, in a head-to-head comparison, crowdworkers often prefer explanations generated by GPT-3 to crowdsourced explanations in existing datasets. Our human studies also show, however, that while models often produce factual, grammatical, and sufficient explanations, they have room to improve along axes such as providing novel information and supporting the label. We create a pipeline that combines GPT-3 with a supervised filter that incorporates binary acceptability judgments from humans in the loop. Despite the intrinsic subjectivity of acceptability judgments, we demonstrate that acceptability is partially correlated with various fine-grained attributes of explanations. Our approach is able to consistently filter GPT-3-generated explanations deemed acceptable by humans.
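The pipeline described in the abstract (overgenerate candidate explanations by few-shot prompting a GPT-3-style model, then keep only those a supervised filter trained on binary human acceptability judgments accepts) could look roughly like the sketch below. This is a minimal illustration, not the authors' code: `generate` and `accepts` are hypothetical stand-ins for a completion-endpoint call and the trained acceptability classifier, and only one in-context example is shown where the paper's prompts use several.

```python
# Sketch of an overgenerate-and-filter pipeline for free-text
# explanations. `generate` and `accepts` are placeholder stand-ins,
# not the paper's actual model or filter.

FEW_SHOT_PROMPT = """\
Premise: A dog is running through a field.
Hypothesis: An animal is outside.
Label: entailment
Explanation: A dog is an animal, and a field is an outside location.

Premise: {premise}
Hypothesis: {hypothesis}
Label: {label}
Explanation:"""


def generate(prompt: str) -> str:
    # Stand-in for a call to a GPT-3-style completion endpoint.
    return "A canned explanation, for illustration only."


def accepts(explanation: str) -> bool:
    # Stand-in for the supervised acceptability filter trained on
    # binary human judgments; here, a trivial length heuristic.
    return len(explanation.split()) >= 5


def explain(premise: str, hypothesis: str, label: str, n: int = 8) -> list[str]:
    """Overgenerate n candidate explanations and return the accepted ones."""
    prompt = FEW_SHOT_PROMPT.format(
        premise=premise, hypothesis=hypothesis, label=label
    )
    candidates = [generate(prompt) for _ in range(n)]
    return [c for c in candidates if accepts(c)]


if __name__ == "__main__":
    print(explain("Two kids play soccer.", "Children are outside.", "entailment"))
```

The key design point mirrored here is that generation quality and acceptability judgment are decoupled: the prompt controls what candidates look like, while the filter, trained on human labels, decides which ones survive.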

Similar Work