Characterizing Large Language Models As Rationalizers Of Knowledge-intensive Tasks

Aditi Mishra, Sajjadur Rahman, Hannah Kim, Kushan Mitra, Estevam Hruschka. arXiv 2023

[Paper]    

Large language models (LLMs) are proficient at generating fluent text with minimal task-specific supervision. Yet, their ability to provide well-grounded rationalizations for knowledge-intensive tasks remains under-explored. Such tasks, like commonsense multiple-choice question answering, require rationales based on world knowledge to support predictions and refute alternate options. We consider the task of generating knowledge-guided rationalization in natural language by using expert-written examples in a few-shot manner. Surprisingly, crowd workers preferred the LLM-generated, knowledge-grounded rationales over crowdsourced rationalizations, citing their factuality, sufficiency, and comprehensive refutations. Although LLM-generated rationales were preferred, further improvements in conciseness and novelty are required. In a second study, we show that rationalizing incorrect model predictions erodes humans’ trust in LLM-generated rationales. Motivated by these observations, we create a two-stage pipeline that reviews task predictions and eliminates potentially incorrect decisions before rationalization, enabling trustworthy rationale generation.
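
The closing sentence describes a two-stage "review, then rationalize" pipeline. The sketch below is only a minimal illustration of that idea under stated assumptions: `llm` stands in for any text-in/text-out completion call, and the prompt templates and helper names (`REVIEW_PROMPT`, `RATIONALE_PROMPT`, `review_prediction`, `rationalize`) are hypothetical, not the paper's implementation.

```python
# Hypothetical sketch of a two-stage "review then rationalize" pipeline.
# All prompt templates and function names here are illustrative assumptions,
# not the authors' actual implementation.

from dataclasses import dataclass
from typing import Callable, List, Optional

LLM = Callable[[str], str]  # any text-in / text-out completion function

REVIEW_PROMPT = (
    "Question: {question}\n"
    "Candidate answer: {prediction}\n"
    "Is the candidate answer correct? Reply YES or NO."
)

RATIONALE_PROMPT = (
    "Question: {question}\n"
    "Answer: {prediction}\n"
    "Using world knowledge, explain why this answer is correct and why the "
    "other options are not:\n{options}\n"
    "Rationale:"
)


@dataclass
class Example:
    question: str
    options: List[str]
    prediction: str


def review_prediction(llm: LLM, ex: Example) -> bool:
    """Stage 1: ask the model (or a separate verifier) to vet the prediction."""
    verdict = llm(REVIEW_PROMPT.format(question=ex.question, prediction=ex.prediction))
    return verdict.strip().upper().startswith("YES")


def rationalize(llm: LLM, ex: Example) -> Optional[str]:
    """Stage 2: generate a knowledge-grounded rationale only for vetted predictions."""
    if not review_prediction(llm, ex):
        return None  # abstain rather than rationalize a likely incorrect answer
    return llm(
        RATIONALE_PROMPT.format(
            question=ex.question,
            prediction=ex.prediction,
            options="\n".join(f"- {o}" for o in ex.options),
        )
    )
```

The design point illustrated is the ordering: the rationale generator is never invoked on predictions that fail the review step, which is what the abstract credits with preventing persuasive rationalizations of wrong answers from eroding user trust.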
