AQUALLM: Audio Question Answering Data Generation Using Large Language Models

Swarup Ranjan Behera, Krishna Mohan Injeti, Jaya Sai Kiran Patibandla, Praveen Kumar Pokala, Balakrishna Reddy Pailla. arXiv 2023

[Paper] [Code]    
Applications · Attention Mechanism · Has Code · Model Architecture · Reinforcement Learning · Tools

Audio Question Answering (AQA) is a pivotal task in which machines analyze both audio signals and natural language questions to produce precise natural language answers. High-quality, diverse, and extensive AQA datasets are essential for building accurate AQA systems, yet while considerable effort has gone into developing accurate and efficient AQA models, the creation of such datasets has received far less attention. To address this gap, this work makes several contributions. We introduce a scalable AQA data generation pipeline, the AQUALLM framework, which relies on Large Language Models (LLMs). The framework uses existing audio-caption annotations and state-of-the-art LLMs to generate expansive, high-quality AQA datasets. We also present three extensive, high-quality benchmark datasets for AQA, contributing significantly to the progression of AQA research. AQA models trained on the proposed datasets set superior benchmarks compared to the existing state of the art, and models trained on our datasets show better generalizability than models trained on human-annotated AQA data. Code and datasets will be made available on GitHub: https://github.com/swarupbehera/AQUALLM.
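To make the caption-to-QA idea concrete, here is a minimal sketch of how an LLM could turn an audio-caption annotation into candidate (question, answer) pairs. This is an illustration only, not the authors' released pipeline: the prompt wording, the JSON output format, and the generic `llm` callable are all assumptions, and a real pipeline would add filtering and validation steps before accepting generated pairs.

```python
# Illustrative sketch (not the AQUALLM release): generating candidate AQA
# (question, answer) pairs from an audio-caption annotation with an LLM.
# The prompt text, JSON output contract, and `llm` interface are assumptions.
import json
from typing import Callable, Dict, List

PROMPT_TEMPLATE = (
    "You are given a caption describing an audio clip.\n"
    'Caption: "{caption}"\n'
    "Generate {n} diverse question-answer pairs that could be answered by "
    "listening to the clip. Respond as a JSON list of objects with keys "
    '"question" and "answer".'
)


def generate_aqa_pairs(
    caption: str,
    llm: Callable[[str], str],  # any text-in/text-out LLM interface
    n: int = 3,
) -> List[Dict[str, str]]:
    """Build the prompt, query the LLM, and parse the returned QA pairs."""
    prompt = PROMPT_TEMPLATE.format(caption=caption, n=n)
    raw = llm(prompt)
    try:
        pairs = json.loads(raw)
    except json.JSONDecodeError:
        return []  # discard malformed generations
    # Keep only well-formed pairs; a full pipeline would also validate
    # answers against the caption before adding them to the dataset.
    return [
        {"question": p["question"], "answer": p["answer"]}
        for p in pairs
        if isinstance(p, dict) and "question" in p and "answer" in p
    ]


if __name__ == "__main__":
    # Stub LLM so the sketch runs without any API access.
    def fake_llm(prompt: str) -> str:
        return json.dumps(
            [{"question": "What animal is making the sound?", "answer": "A dog"}]
        )

    print(generate_aqa_pairs("A dog barks while a car passes by.", fake_llm))
```

In practice the `llm` argument would wrap whichever hosted or local model is used, and the resulting pairs would be aggregated across an existing audio-captioning corpus to form the large-scale AQA datasets described above.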

Similar Work