Syncpkl: Harnessing Llms To Generate Synthetic Data For Commonsense Persona Knowledge Linking

Kuan-Yen Lin. arXiv 2024

[Paper] [Code]    
Tags: Applications, BERT, Has Code, Model Architecture, RAG, Security, Training Techniques

Understanding rich dialogues often requires NLP systems to access relevant commonsense persona knowledge, but retrieving this knowledge is challenging due to complex contexts and the implicit nature of commonsense. This paper presents our approach to the Commonsense Persona Knowledge Linking (CPKL) challenge, addressing the critical need to integrate persona and commonsense knowledge in open-domain dialogue systems. We introduce the SynCPKL Pipeline, which leverages Large Language Models to generate high-quality synthetic datasets for training commonsense persona knowledge linkers. To demonstrate the efficacy of our approach, we present SynCPKL, a new dataset specifically designed for this task. Our experiments validate the effectiveness of SynCPKL for training commonsense persona knowledge linkers. Our top-performing model, Derberta-SynCPKL, secured first place in the CPKL challenge with a 16% improvement in F1 score. We release both SynCPKL and Derberta-SynCPKL at https://github.com/irislin1006/CPKL.
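To make the pipeline idea concrete, here is a minimal sketch of how an LLM-based synthetic labeling step could produce training pairs for a knowledge linker. All names (`llm_judge`, `build_synthetic_dataset`) and the keyword-overlap heuristic standing in for the LLM call are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of a SynCPKL-style data generation step:
# an "LLM judge" decides whether a commonsense persona fact is
# relevant to a dialogue, yielding labeled (dialogue, fact) pairs
# for training a binary linker (e.g., a DeBERTa classifier).

def llm_judge(dialogue: str, persona_fact: str) -> bool:
    """Stand-in for an LLM relevance call; here a trivial
    token-overlap heuristic so the sketch runs offline."""
    dialogue_tokens = set(dialogue.lower().replace(".", "").split())
    fact_tokens = set(persona_fact.lower().split())
    return bool(dialogue_tokens & fact_tokens)

def build_synthetic_dataset(dialogues, persona_facts):
    """Pair each dialogue with each candidate fact; the judge's
    verdict becomes the synthetic binary label."""
    return [
        {"dialogue": d, "fact": f, "label": int(llm_judge(d, f))}
        for d in dialogues
        for f in persona_facts
    ]

dialogues = ["I just got back from walking my dog in the park."]
facts = ["PersonX walking the dog", "PersonX dislikes animals"]
dataset = build_synthetic_dataset(dialogues, facts)
```

In the real pipeline the judge would be an LLM prompted with the dialogue and the candidate commonsense triple, and the resulting dataset would train the linker model rather than be used directly.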

Similar Work