
Evaluating Robustness Of Generative Search Engine On Adversarial Factual Questions

Hu Xuming, Li Xiaochuan, Chen Junzhe, Li Yinghui, Li Yangning, Li Xiaoguang, Wang Yasheng, Liu Qun, Wen Lijie, Yu Philip S., Guo Zhijiang. arXiv 2024

[Paper]    
RAG, Responsible AI, Security

Generative search engines have the potential to transform how people seek information online, but responses generated by existing large language model (LLM)-backed generative search engines may not always be accurate. Furthermore, retrieval-augmented generation exacerbates safety concerns, since adversaries may successfully evade the entire system by subtly manipulating the most vulnerable part of a claim. To this end, we propose evaluating the robustness of generative search engines in a realistic and high-risk setting, where adversaries have only black-box system access and seek to deceive the model into returning incorrect responses. Through a comprehensive human evaluation of several generative search engines, such as Bing Chat, PerplexityAI, and YouChat, across diverse queries, we demonstrate the effectiveness of adversarial factual questions in inducing incorrect responses. Moreover, retrieval-augmented generation exhibits a higher susceptibility to factual errors than LLMs without retrieval. These findings highlight the potential security risks of these systems and emphasize the need for rigorous evaluation before deployment.
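To make the black-box setting concrete, the sketch below shows one way such an evaluation loop could look: the system under test is exposed only as a question-in, answer-out callable, each factual question is paired with a subtly perturbed adversarial variant, and the metric is the fraction of adversarial questions that induce an incorrect answer. This is not the authors' code; the `ask` callable, the `FactualQuestion` fields, and the containment check (standing in for the paper's human judgment) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class FactualQuestion:
    original: str      # benign phrasing of the factual query
    adversarial: str   # same query with its most vulnerable span subtly perturbed
    gold_answer: str   # ground-truth answer used to judge correctness


def evaluate_robustness(
    ask: Callable[[str], str],        # black-box access: question in, answer text out
    questions: List[FactualQuestion],
) -> float:
    """Return the fraction of adversarial questions that induce an incorrect answer."""
    fooled = 0
    for q in questions:
        answer = ask(q.adversarial)
        # Containment check is a crude proxy; the paper relies on human evaluation.
        if q.gold_answer.lower() not in answer.lower():
            fooled += 1
    return fooled / len(questions) if questions else 0.0


if __name__ == "__main__":
    # Toy stand-in for a generative search engine; the paper instead queries
    # deployed systems such as Bing Chat, PerplexityAI, and YouChat.
    def toy_engine(question: str) -> str:
        return "The Eiffel Tower was completed in 1899." if "1899" in question \
            else "The Eiffel Tower was completed in 1889."

    demo = [
        FactualQuestion(
            original="When was the Eiffel Tower completed?",
            adversarial="Given that the Eiffel Tower opened in 1899, when was it completed?",
            gold_answer="1889",
        )
    ]
    print(f"Attack success rate: {evaluate_robustness(toy_engine, demo):.0%}")
```

The same harness could wrap a retrieval-augmented pipeline or a plain LLM behind `ask`, which is how the paper's comparison between the two settings can be framed.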

Similar Work