Enhance Modality Robustness In Text-centric Multimodal Alignment With Adversarial Prompting

Tsai Yun-da, Yen Ting-yu, Liao Keng-te, Lin Shou-de. arXiv 2024

[Paper]
Applications Multimodal Models Prompting RAG Reinforcement Learning Security Training Techniques

Converting different modalities into generalized text, which then serves as input prompts for large language models (LLMs), is a common approach for aligning multimodal models, particularly when pairwise data is limited. Text-centric alignment methods leverage the unique properties of text as a modality space, transforming diverse inputs into a unified textual representation and thereby enabling downstream models to effectively interpret various modal inputs. This study evaluates the quality and robustness of multimodal representations under noise imperfections, dynamic input-order permutations, and missing modalities, revealing that current text-centric alignment methods can compromise downstream robustness. To address this issue, we propose a new text-centric adversarial training approach that significantly enhances robustness compared to traditional robust-training methods and pre-trained multimodal foundation models. Our findings underscore the potential of this approach to improve the robustness and adaptability of multimodal representations, offering a promising solution for dynamic and real-world applications.
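The setup in the abstract can be illustrated with a minimal sketch: each modality is rendered as text, the per-modality texts are concatenated into one prompt, and the three stressors the paper evaluates (noise, input-order permutation, missing modality) are applied as text-level perturbations. All function names and the perturbation details below are hypothetical, not the authors' implementation.

```python
import random


def build_prompt(modality_texts):
    """Concatenate per-modality text renderings (captions, transcripts,
    serialized features) into a single unified text prompt for an LLM."""
    return "\n".join(f"[{name}] {text}" for name, text in modality_texts.items())


def perturb(modality_texts, rng, noise_prob=0.1):
    """Apply the three stress tests discussed in the abstract:
    input-order permutation, missing modality, and word-level noise."""
    items = list(modality_texts.items())
    rng.shuffle(items)                       # dynamic input-order permutation
    if len(items) > 1 and rng.random() < 0.5:
        items.pop()                          # simulate a missing modality
    noisy = []
    for name, text in items:
        # Randomly drop words as a crude proxy for noise imperfections.
        words = [w for w in text.split() if rng.random() > noise_prob]
        noisy.append((name, " ".join(words)))
    return dict(noisy)


rng = random.Random(0)
clean = {"image": "a dog running on grass", "audio": "barking and wind"}
print(build_prompt(perturb(clean, rng)))
```

An adversarial-training loop in this spirit would generate such perturbed prompts during fine-tuning and optimize the downstream model against the worst-case variants, rather than only the clean prompt.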

Similar Work