Symbolic Working Memory Enhances Language Models For Complex Rule Application

Wang Siyuan, Wei Zhongyu, Choi Yejin, Ren Xiang. arXiv 2024

[Paper] [Code]    
Tags: Has Code, Reinforcement Learning, Security, Tools, Uncategorized

Large Language Models (LLMs) have shown remarkable reasoning performance but struggle with multi-step deductive reasoning involving a series of rule application steps, especially when rules are presented non-sequentially. Our preliminary analysis shows that while LLMs excel in single-step rule application, their performance drops significantly in multi-step scenarios due to the challenge of rule grounding, which requires anchoring the applicable rule and supporting facts at each step amidst multiple input rules, facts, and inferred facts. To address this, we propose augmenting LLMs with an external working memory and introduce a neurosymbolic framework for rule application. The memory stores facts and rules in both natural language and symbolic forms, enabling precise tracking. Utilizing this memory, our framework iteratively performs symbolic rule grounding and LLM-based rule implementation. The former matches the predicates and variables of symbolic rules against stored facts to ground the applicable rules at each step. Experiments indicate our framework’s effectiveness in rule application and its robustness across various steps and settings. Code and data are available at https://github.com/SiyuanWangw/RuleApplication.
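The grounding step described above, matching the predicates and variables of symbolic rules against stored facts, is essentially a small unification procedure. The sketch below illustrates one plausible reading of it; it is not the authors' code: the `?`-prefixed variable convention and the `parent`/`grandparent` example are hypothetical, and the LLM-based rule implementation step is reduced to a comment.

```python
# Minimal illustrative sketch (not the authors' implementation) of a symbolic
# working memory plus the rule-grounding step the abstract describes.
# Facts are ground atoms like ("parent", ("alice", "bob")); a rule pairs a
# list of premise patterns with a conclusion, and strings starting with "?"
# are variables. The example facts and rule are hypothetical.

def unify(pattern, fact, binding):
    """Try to extend `binding` so that `pattern` matches the ground `fact`."""
    pred_p, args_p = pattern
    pred_f, args_f = fact
    if pred_p != pred_f or len(args_p) != len(args_f):
        return None
    binding = dict(binding)
    for a_p, a_f in zip(args_p, args_f):
        if a_p.startswith("?"):            # variable: bind it, or check consistency
            if binding.get(a_p, a_f) != a_f:
                return None
            binding[a_p] = a_f
        elif a_p != a_f:                   # constant mismatch
            return None
    return binding

def ground_rule(premises, facts):
    """Return every variable binding under which all premises match stored facts."""
    bindings = [{}]
    for premise in premises:
        bindings = [b2 for b in bindings for f in facts
                    if (b2 := unify(premise, f, b)) is not None]
    return bindings

def substitute(conclusion, binding):
    pred, args = conclusion
    return (pred, tuple(binding.get(a, a) for a in args))

# Hypothetical working memory and rule.
facts = {("parent", ("alice", "bob")), ("parent", ("bob", "carol"))}
rule = ([("parent", ("?x", "?y")), ("parent", ("?y", "?z"))],
        ("grandparent", ("?x", "?z")))

premises, conclusion = rule
for binding in ground_rule(premises, facts):
    # In the paper's framework, the grounded rule and matched facts would be
    # handed to an LLM for natural-language rule implementation; here the
    # symbolic conclusion is applied directly.
    new_fact = substitute(conclusion, binding)
    facts.add(new_fact)
    print("inferred:", new_fact)
```

Keeping facts in this symbolic form makes each grounding step an exact match rather than a fuzzy retrieval, which is the precise tracking the abstract credits to the external working memory.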

Similar Work