Structured, Flexible, And Robust: Benchmarking And Improving Large Language Models Towards More Human-like Behavior In Out-of-distribution Reasoning Tasks

Katherine M. Collins, Catherine Wong, Jiahai Feng, Megan Wei, Joshua B. Tenenbaum. arXiv 2022

[Paper]    
Interpretability And Explainability

Human language offers a powerful window into our thoughts – we tell stories, give explanations, and express our beliefs and goals through words. Abundant evidence also suggests that language plays a developmental role in structuring our learning. Here, we ask: how much of human-like thinking can be captured by learning statistical patterns in language alone? We first contribute a new challenge benchmark for comparing humans and distributional large language models (LLMs). Our benchmark contains two problem-solving domains (planning and explanation generation) and is designed to require generalization to new, out-of-distribution problems expressed in language. We find that humans are far more robust than LLMs on this benchmark. Next, we propose a hybrid Parse-and-Solve model, which augments distributional LLMs with a structured symbolic reasoning module. We find that this model shows more robust adaptation to out-of-distribution planning problems, demonstrating the promise of hybrid AI models for more human-like reasoning.
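The abstract's hybrid Parse-and-Solve architecture can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration, not the paper's implementation: the `parse_problem` function stands in for the LLM parsing step (a real system would prompt an LLM to emit the structured specification), and the symbolic module is a simple breadth-first search over an invented arithmetic planning domain.

```python
# Hypothetical Parse-and-Solve sketch: an LLM-like parser maps a
# natural-language planning problem to a symbolic spec, and a classical
# search module solves the spec deterministically. The toy domain (reach a
# goal number from a start number via +1 / *2 moves) is illustrative only.
from collections import deque


def parse_problem(text):
    """Stand-in for the LLM parsing step: pull the two integers out of
    the problem statement, in order of appearance (start, then goal)."""
    words = text.replace(",", "").replace(".", "").split()
    nums = [int(w) for w in words if w.isdigit()]
    return {"start": nums[0], "goal": nums[1]}


def solve(spec):
    """Symbolic reasoning module: breadth-first search from start to goal
    using the moves +1 and *2, returning a shortest plan."""
    start, goal = spec["start"], spec["goal"]
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for name, nxt in (("+1", state + 1), ("*2", state * 2)):
            if nxt not in seen and nxt <= 2 * goal:  # prune runaway states
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None


spec = parse_problem("Start at 2 and reach 9 using the moves +1 and *2")
plan = solve(spec)  # a shortest move sequence, e.g. ["*2", "*2", "+1"]
```

Because parsing and solving are decoupled, an out-of-distribution surface wording only stresses the parser; once a correct symbolic spec is produced, the solver's behavior is unchanged, which is the kind of robustness the abstract attributes to the hybrid model.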
