
PANDA: Preference Adaptation For Enhancing Domain-specific Abilities Of Llms

Liu An, Yang Zonghan, Zhang Zhenhe, Hu Qingyuan, Li Peng, Yan Ming, Zhang Ji, Huang Fei, Liu Yang. arXiv 2024

[Paper]    
Fine Tuning Pretraining Methods RAG Reinforcement Learning Training Techniques

While large language models (LLMs) have demonstrated considerable capabilities across various natural language tasks, they often fall short of the performance achieved by domain-specific state-of-the-art models. One potential approach to enhancing the domain-specific capabilities of LLMs is to fine-tune them on corresponding datasets. However, this method can be both resource- and time-intensive, and it is not applicable to closed-source commercial LLMs. In this paper, we propose Preference Adaptation for Enhancing Domain-specific Abilities of LLMs (PANDA), a method designed to augment the domain-specific capabilities of LLMs by leveraging insights from the response preferences of expert models, without requiring fine-tuning. Our experimental results reveal that PANDA significantly enhances the domain-specific abilities of LLMs on text classification and interactive decision-making tasks. Moreover, an LLM with PANDA even outperforms the expert model it learns from on 4 tasks of ScienceWorld. This finding highlights the potential of exploring tuning-free approaches to achieve weak-to-strong generalization.
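The core idea sketched in the abstract, distilling textual insights from an expert model's response preferences and injecting them into the prompt rather than updating weights, can be illustrated with a minimal toy sketch. All function names and data below are illustrative assumptions, not the paper's actual API or pipeline:

```python
# Minimal sketch of tuning-free preference adaptation in the spirit of PANDA.
# All names and the toy preference rule are assumptions for illustration only.

def expert_prefers(response_a: str, response_b: str) -> str:
    """Stand-in for a domain expert model's preference judgment.
    This toy 'expert' prefers responses that give a justification."""
    return response_a if "because" in response_a else response_b

def extract_insight(preferred: str) -> str:
    """Turn one preference pair into a reusable textual insight.
    A real system would distill insights from many such pairs."""
    if "because" in preferred:
        return "Prefer answers that justify their claim"
    return "Prefer concise answers"

def adapt_prompt(task_prompt: str, insights: list[str]) -> str:
    """Augment the LLM prompt with distilled insights -- no fine-tuning."""
    guidance = "\n".join(f"- {tip}" for tip in insights)
    return f"Domain insights:\n{guidance}\n\nTask: {task_prompt}"

# Toy usage: one preference pair yields one insight that steers future prompts.
a = "The mixture is acidic because litmus turns red."
b = "The mixture is acidic."
preferred = expert_prefers(a, b)
insights = [extract_insight(preferred)]
print(adapt_prompt("Classify the solution's pH.", insights))
```

Because the adaptation lives entirely in the prompt, the same mechanism works for closed-source LLMs accessed only through an API, which is the constraint motivating the tuning-free design.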

Similar Work