LARA: Linguistic-Adaptive Retrieval-Augmented LLMs for Multi-Turn Intent Classification

Junhua Liu, Keat Tan Yong, Bin Fu. arXiv 2024

[Paper]    
In Context Learning Model Architecture Prompting RAG Training Techniques

Following the significant achievements of large language models (LLMs), researchers have employed in-context learning for text classification tasks. However, these studies have focused on monolingual, single-turn classification tasks. In this paper, we introduce LARA (Linguistic-Adaptive Retrieval-Augmented Language Models), designed to enhance accuracy in multi-turn classification tasks across six languages, accommodating numerous intents in chatbot interactions. Multi-turn intent classification is notably challenging due to the complexity and evolving nature of conversational contexts. LARA tackles these issues by combining a fine-tuned smaller model with a retrieval-augmented mechanism, integrated within the architecture of LLMs. This integration allows LARA to dynamically utilize past dialogues and relevant intents, thereby improving its understanding of the context. Furthermore, our adaptive retrieval techniques bolster the cross-lingual capabilities of LLMs without extensive retraining or fine-tuning. Comprehensive experiments demonstrate that LARA achieves state-of-the-art performance on multi-turn intent classification tasks, enhancing the average accuracy by 3.67% compared to existing methods.
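The retrieval-augmented step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bag-of-words embedding, the example index, and all intent labels and helper names here are hypothetical stand-ins (LARA presumably uses a learned encoder and a much larger labelled pool). The retrieved examples are formatted into an in-context-learning prompt for the LLM to complete.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a learned,
    # multilingual sentence encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical labelled examples serving as the retrieval index.
INDEX = [
    ("where is my parcel", "track_order"),
    ("i want my money back", "refund_request"),
    ("change my delivery address", "update_address"),
]

def retrieve(dialogue_turns, k=2):
    """Return the k labelled examples most similar to the dialogue so far."""
    query = embed(" ".join(dialogue_turns))
    ranked = sorted(INDEX, key=lambda ex: cosine(query, embed(ex[0])),
                    reverse=True)
    return ranked[:k]

def build_prompt(dialogue_turns, k=2):
    """Assemble an in-context-learning prompt from retrieved demonstrations."""
    demos = retrieve(dialogue_turns, k)
    lines = [f"Utterance: {u}\nIntent: {i}" for u, i in demos]
    lines.append("Dialogue: " + " | ".join(dialogue_turns) + "\nIntent:")
    return "\n\n".join(lines)
```

For example, `build_prompt(["hi", "where is my parcel now"])` would surface the `track_order` demonstration first, letting the LLM classify the multi-turn dialogue by analogy with the retrieved single-turn examples.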

Similar Work