
FMM-Attack: A Flow-Based Multi-Modal Adversarial Attack on Video-Based LLMs

Jinmin Li, Kuofeng Gao, Yang Bai, Jingyun Zhang, Shu-Tao Xia, Yisen Wang. arXiv 2024

[Paper] [Code]

Tags: Has Code, Prompting, Responsible AI, Security

Despite the remarkable performance of video-based large language models (LLMs), their adversarial threat remains unexplored. To fill this gap, we propose the first adversarial attack tailored for video-based LLMs: flow-based multi-modal adversarial perturbations crafted on a small fraction of frames within a video, dubbed FMM-Attack. Extensive experiments show that our attack can effectively induce video-based LLMs to generate incorrect answers when imperceptible adversarial perturbations are added to the input videos. Intriguingly, FMM-Attack can also garble the model output, prompting video-based LLMs to hallucinate. Overall, our observations inspire a further understanding of multi-modal robustness and safety-related feature alignment across modalities, which is of great importance for many large multi-modal models. Our code is available at https://github.com/THU-Kingmin/FMM-Attack.
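The abstract does not spell out the optimization, but the core idea (perturb only a small, motion-guided fraction of frames within an imperceptibility budget so the model's visual features drift) can be sketched roughly as below. This is a minimal toy, not the paper's implementation: a random linear map stands in for the video encoder, frame-to-frame differences stand in for optical flow, and the PGD-style loop, budget `eps`, and frame count `k` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a video-LLM's visual encoder: a fixed random linear map.
# Frames are tiny 8x8 grayscale images, flattened to D = 64 values.
D = 64
W = rng.standard_normal((D, 16)) / np.sqrt(D)


def encode(video):
    """Mean-pooled per-frame embeddings (stand-in for visual features)."""
    return (video.reshape(len(video), -1) @ W).mean(axis=0)


def feature_loss(video, clean_feat):
    """Squared distance from the clean features; the attack maximizes it."""
    diff = encode(video) - clean_feat
    return float(diff @ diff)


def select_frames_by_motion(video, k):
    """Pick the k frames with the largest temporal difference (a crude
    optical-flow proxy), so only a small fraction of frames is perturbed."""
    diffs = np.abs(np.diff(video, axis=0)).reshape(len(video) - 1, -1).sum(axis=1)
    motion = np.concatenate([[diffs[0]], diffs])  # pad the first frame
    return np.argsort(motion)[-k:]


def attack(video, eps, k, steps=10):
    """PGD-style signed-gradient ascent on the selected frames only,
    keeping each pixel within +/- eps of the clean video."""
    clean_feat = encode(video)
    idx = select_frames_by_motion(video, k)
    adv = video.copy()
    # Random start inside the eps-ball (the gradient is zero at the clean input).
    adv[idx] += rng.uniform(-eps, eps, size=adv[idx].shape)
    adv[idx] = np.clip(adv[idx], video[idx] - eps, video[idx] + eps)
    T = len(video)
    for _ in range(steps):
        f = encode(adv)
        # Analytic gradient of the loss w.r.t. any frame (encoder is linear).
        g_frame = (2.0 / T) * ((f - clean_feat) @ W.T)
        adv[idx] += (eps / steps) * np.sign(g_frame).reshape(video.shape[1:])
        adv[idx] = np.clip(adv[idx], video[idx] - eps, video[idx] + eps)
    return adv, idx
```

In a real attack the analytic gradient would come from backpropagation through the frozen video encoder, and the loss would target the LLM's generated tokens rather than raw features; the frame-sparsity and budget constraints carry over unchanged.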
