What Do MLLMs Hear? Examining Reasoning With Text And Sound Components In Multimodal Large Language Models

Enis Berk Çoban, Michael I. Mandel, Johanna Devaney. arXiv 2024

[Paper]    
Multimodal Models RAG

Large Language Models (LLMs) have demonstrated remarkable reasoning capabilities, notably in connecting ideas and adhering to logical rules to solve problems. These models have evolved to accommodate various data modalities, including sound and images; such multimodal LLMs (MLLMs) are capable of describing images or sound recordings. Previous work has demonstrated that when the LLM component in an MLLM is frozen, the audio or visual encoder serves to caption the sound or image input, facilitating text-based reasoning with the LLM component. We are interested in using the LLM's reasoning capabilities to facilitate classification. In this paper, we demonstrate through a captioning/classification experiment that an audio MLLM cannot fully leverage its LLM's text-based reasoning when generating audio captions. We also consider how this may be because MLLMs represent auditory and textual information separately, severing the reasoning pathway from the LLM to the audio encoder.
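The frozen-LLM pipeline the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the `audio_encoder` and `frozen_llm_classify` functions below are toy stand-ins meant only to show the structure in which the encoder effectively "captions" the audio and all downstream reasoning happens over that text, so classification quality is bounded by what the caption conveys.

```python
# Toy sketch of a frozen-LLM audio MLLM pipeline (illustrative only).

def audio_encoder(audio_events):
    """Toy encoder: maps detected sound events to a text caption.

    In a real MLLM this would be a learned audio encoder whose output
    is projected into the LLM's input space.
    """
    return "a recording of " + " and ".join(audio_events)

def frozen_llm_classify(caption, labels):
    """Toy 'frozen LLM': scores each label by word overlap with the caption.

    A real frozen LLM would reason over the caption text; the key point
    is that it only ever sees the caption, not the audio itself.
    """
    caption_words = set(caption.lower().split())
    scores = {label: len(caption_words & set(label.lower().split()))
              for label in labels}
    return max(scores, key=scores.get)

# Classification is only as good as the caption the encoder produces:
caption = audio_encoder(["dog barking", "rain"])
label = frozen_llm_classify(caption, ["dog barking", "music playing"])
```

If the encoder omits a sound from the caption, the LLM component has no way to recover it, which mirrors the severed reasoning pathway the paper discusses.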

Similar Work