Enhancing Robustness In Large Language Models: Prompting For Mitigating The Impact Of Irrelevant Information

Jiang Ming, Huang Tingting, Guo Biao, Lu Yao, Zhang Feng. arXiv 2024

Tags: Attention Mechanism, Model Architecture, Prompting, Security, Uncategorized

In recent years, large language models (LLMs) have garnered significant attention for their strong performance on complex reasoning tasks. However, recent studies have shown that their reasoning capabilities can diminish markedly when problem descriptions contain irrelevant information, even with the use of advanced prompting techniques. To investigate this issue further, the authors construct GSMIR, a dataset of primary-school mathematics problems that contain irrelevant information. Testing prominent LLMs and prompting techniques on this dataset reveals that while LLMs can identify irrelevant information, they do not effectively mitigate the interference it causes once identified. To address this shortcoming, the paper proposes ATF, a novel prompting method that enhances the ability of LLMs to identify and self-mitigate the influence of irrelevant information. The method operates in two steps: first, analysis of the irrelevant information, followed by its filtering. Experimental results demonstrate that ATF significantly improves the reasoning performance of LLMs and prompting techniques on GSMIR, even in the presence of irrelevant information.
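The two-step analyze-then-filter structure lends itself to a simple prompting pipeline. Below is a minimal sketch of that flow in Python; `call_llm` is a hypothetical stand-in for any chat-completion API, and the prompt wording is illustrative rather than the paper's exact prompts.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's chat API."""
    raise NotImplementedError

def atf_answer(problem: str) -> str:
    # Step 1 (analysis): ask the model to identify sentences in the
    # problem that are irrelevant to solving it.
    analysis = call_llm(
        "Identify any sentences in the following math problem that are "
        f"irrelevant to solving it:\n{problem}"
    )
    # Step 2 (filtration): ask the model to restate the problem with the
    # irrelevant information it identified removed.
    filtered = call_llm(
        f"Problem:\n{problem}\n\nIrrelevant information:\n{analysis}\n\n"
        "Rewrite the problem with the irrelevant information removed."
    )
    # Finally, solve the filtered problem (e.g., with chain-of-thought
    # prompting, which can be layered on top of ATF).
    return call_llm(f"Solve step by step:\n{filtered}")
```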
