
MemeGuard: An LLM and VLM-based Framework for Advancing Content Moderation via Meme Intervention

Prince Jha, Raghav Jain, Konika Mandal, Aman Chadha, Sriparna Saha, Pushpak Bhattacharyya. arXiv 2024

[Paper]    
Multimodal Models, RAG, Reinforcement Learning, Tools

In the digital world, memes present a unique challenge for content moderation due to their potential to spread harmful content. Although detection methods have improved, proactive solutions such as intervention are still limited, with current research focusing mostly on text-based content and neglecting the widespread influence of multimodal content like memes. Addressing this gap, we present *MemeGuard*, a comprehensive framework leveraging Large Language Models (LLMs) and Visual Language Models (VLMs) for meme intervention. *MemeGuard* harnesses a specially fine-tuned VLM, *VLMeme*, for meme interpretation, and a multimodal knowledge selection and ranking mechanism (*MKS*) for distilling relevant knowledge. This knowledge is then employed by a general-purpose LLM to generate contextually appropriate interventions. Another key contribution of this work is the *Intervening Cyberbullying in Multimodal Memes (ICMM)* dataset, a high-quality, labeled dataset featuring toxic memes and their corresponding human-annotated interventions. We leverage *ICMM* to test *MemeGuard*, demonstrating its proficiency in generating relevant and effective responses to toxic memes.
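The abstract describes a three-stage pipeline: the fine-tuned VLM (*VLMeme*) interprets the meme, the *MKS* step selects and ranks the resulting knowledge, and a general-purpose LLM turns the selected knowledge into an intervention. The sketch below is a hypothetical Python outline of that control flow only; the function names (`interpret_meme`, `select_and_rank_knowledge`, `generate_intervention`), the stubbed model calls, and the word-overlap ranking heuristic are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a MemeGuard-style pipeline as described in the
# abstract: VLM interpretation -> knowledge selection/ranking (MKS) ->
# LLM intervention generation. All model calls are stubbed out; names
# and the overlap-based ranking heuristic are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Meme:
    image_path: str
    overlaid_text: str


def interpret_meme(meme: Meme) -> list[str]:
    """Stand-in for the fine-tuned VLM (VLMeme): produce candidate
    knowledge statements about the meme's content and implied message."""
    # A real system would query a vision-language model here.
    return [
        f"The meme pairs an image with the caption: '{meme.overlaid_text}'.",
        "The caption targets a group with a demeaning stereotype.",
        "The image is a widely reused template.",
    ]


def select_and_rank_knowledge(statements: list[str],
                              meme: Meme,
                              top_k: int = 2) -> list[str]:
    """Stand-in for MKS: score each statement for relevance and keep the
    top-k. Relevance is crudely approximated here by word overlap with
    the meme's overlaid text (an assumption, not the paper's method)."""
    caption_words = set(meme.overlaid_text.lower().split())

    def score(statement: str) -> int:
        return len(caption_words & set(statement.lower().split()))

    return sorted(statements, key=score, reverse=True)[:top_k]


def generate_intervention(knowledge: list[str]) -> str:
    """Stand-in for the general-purpose LLM: compose a context-aware
    intervention message from the selected knowledge."""
    context = " ".join(knowledge)
    return (
        "This meme may spread a harmful stereotype. "
        f"Context considered: {context} "
        "Consider how the targeted group would read this before sharing."
    )


if __name__ == "__main__":
    meme = Meme(image_path="meme.png", overlaid_text="they all act like that")
    statements = interpret_meme(meme)
    selected = select_and_rank_knowledge(statements, meme)
    print(generate_intervention(selected))
```

The stubs isolate the pipeline's structure: each stage consumes the previous stage's output, so any of the three components can be swapped for a real model without changing the surrounding control flow.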
