Rationale-enhanced Language Models Are Better Continual Relation Learners

Xiong Weimin, Song Yifan, Wang Peiyi, Li Sujian. arXiv 2023

[Paper]    
Tags: Interpretability And Explainability, Merging, Security, Uncategorized

Continual relation extraction (CRE) aims to solve the problem of catastrophic forgetting when learning a sequence of newly emerging relations. Recent CRE studies have found that catastrophic forgetting arises from the model’s lack of robustness against future analogous relations. To address this issue, we introduce rationales, i.e., explanations of relation classification results generated by large language models (LLMs), into the CRE task. Specifically, we design a multi-task rationale tuning strategy to help the model learn current relations robustly. We also conduct contrastive rationale replay to further distinguish analogous relations. Experimental results on two standard benchmarks demonstrate that our method outperforms state-of-the-art CRE models.
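The abstract does not give the exact formulation of the two objectives, but a minimal sketch might look as follows, assuming the multi-task strategy sums a relation classification loss with a token-level rationale-generation loss, and the contrastive replay uses an InfoNCE-style loss over rationale embeddings. The function names, the weight `alpha`, and the temperature `tau` are illustrative assumptions, not the paper's notation.

```python
import torch
import torch.nn.functional as F

def multi_task_rationale_loss(cls_logits, cls_labels,
                              rat_logits, rat_labels, alpha=0.5):
    """Illustrative multi-task rationale tuning objective (assumed form):
    relation classification plus generation of the LLM-written rationale."""
    # Standard cross-entropy over the current task's relation labels.
    cls_loss = F.cross_entropy(cls_logits, cls_labels)
    # Token-level loss for reproducing the rationale text; -100 masks padding.
    gen_loss = F.cross_entropy(rat_logits.view(-1, rat_logits.size(-1)),
                               rat_labels.view(-1), ignore_index=-100)
    return cls_loss + alpha * gen_loss  # alpha is an assumed trade-off weight

def contrastive_rationale_replay_loss(anchor, positive, negatives, tau=0.1):
    """Illustrative InfoNCE-style loss over rationale embeddings: pull a
    replayed sample toward a same-relation rationale and away from
    rationales of analogous relations."""
    anchor = F.normalize(anchor, dim=-1)        # (d,)
    positive = F.normalize(positive, dim=-1)    # (d,)
    negatives = F.normalize(negatives, dim=-1)  # (N, d)
    pos = torch.exp(anchor @ positive / tau)
    neg = torch.exp(negatives @ anchor / tau).sum()
    return -torch.log(pos / (pos + neg))
```

Under these assumptions, the first loss encourages robust learning of current relations, while the second, applied during replay, separates rationale representations of analogous relations.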

Similar Work