
Ungrammatical-syntax-based In-context Example Selection For Grammatical Error Correction

Tang Chenming, Qu Fanyi, Wu Yunfang. arXiv 2024

[Paper]    
Attention Mechanism · In Context Learning · Model Architecture · Prompting

In the era of large language models (LLMs), in-context learning (ICL) stands out as an effective prompting strategy that explores LLMs’ potency across various tasks. However, applying LLMs to grammatical error correction (GEC) remains a challenging task. In this paper, we propose a novel ungrammatical-syntax-based in-context example selection strategy for GEC. Specifically, we measure the similarity of sentences based on their syntactic structures using diverse algorithms, and identify optimal ICL examples that share the most similar ill-formed syntax with the test input. Additionally, we carry out a two-stage process to further improve the quality of selection results. On benchmark English GEC datasets, empirical results show that our proposed ungrammatical-syntax-based strategies outperform commonly used word-matching or semantics-based methods with multiple LLMs. This indicates that for a syntax-oriented task like GEC, paying more attention to syntactic information can effectively boost LLMs’ performance. Our code will be publicly available after the publication of this paper.
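To make the idea concrete, below is a minimal sketch of syntax-based example selection. It is an assumption-laden illustration, not the paper's method: syntax is approximated here by the dependency-label sequence of the ungrammatical source sentence (via spaCy), and similarity is measured with a simple sequence ratio, whereas the paper describes its own syntactic-similarity algorithms and a two-stage selection process that are not reproduced here.

```python
# Hypothetical sketch: select in-context (source, correction) pairs whose
# ungrammatical source is syntactically closest to the test input.
# Assumptions: dependency-label sequences stand in for syntactic structure,
# and difflib's SequenceMatcher ratio stands in for the paper's similarity
# algorithms. Requires: python -m spacy download en_core_web_sm

from difflib import SequenceMatcher
import spacy

nlp = spacy.load("en_core_web_sm")

def syntax_signature(sentence: str) -> list[str]:
    """Represent a sentence by its dependency-relation labels in linear order."""
    return [token.dep_ for token in nlp(sentence)]

def syntactic_similarity(sent_a: str, sent_b: str) -> float:
    """Similarity of two sentences' (ungrammatical) syntactic structures, in [0, 1]."""
    return SequenceMatcher(None, syntax_signature(sent_a), syntax_signature(sent_b)).ratio()

def select_icl_examples(test_input: str,
                        example_pool: list[tuple[str, str]],
                        k: int = 4) -> list[tuple[str, str]]:
    """Pick the k example pairs whose source sentence is syntactically
    most similar to the test input."""
    scored = [(syntactic_similarity(test_input, src), src, tgt)
              for src, tgt in example_pool]
    scored.sort(key=lambda item: item[0], reverse=True)
    return [(src, tgt) for _, src, tgt in scored[:k]]

if __name__ == "__main__":
    pool = [
        ("She go to school every day.", "She goes to school every day."),
        ("I am agree with you.", "I agree with you."),
        ("He have two cat.", "He has two cats."),
    ]
    for src, tgt in select_icl_examples("They goes to work by bus.", pool, k=2):
        print(f"{src} -> {tgt}")
```

The selected pairs would then be placed in the prompt as few-shot demonstrations before the test sentence; the key design choice is that similarity is computed over the ill-formed source sentences rather than over surface words or sentence embeddings.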

Similar Work