Step-back Profiling: Distilling User History For Personalized Scientific Writing

Xiangru Tang, Xingyao Zhang, Yanjun Shao, Jie Wu, Yilun Zhao, Arman Cohan, Ming Gong, Dongmei Zhang, Mark Gerstein. arXiv 2024

Large language models (LLMs) excel at a variety of natural language processing tasks, yet they struggle to generate personalized content for individuals, particularly in real-world scenarios like scientific writing. To address this challenge, we introduce STEP-BACK PROFILING, which personalizes LLMs by distilling user history into concise profiles that capture users' essential traits and preferences. To conduct our experiments, we construct a Personalized Scientific Writing (PSW) dataset to study multi-user personalization: PSW requires models to write scientific papers given specialized author groups with diverse academic backgrounds. Our results demonstrate the effectiveness of capturing user characteristics via STEP-BACK PROFILING for collaborative writing. Moreover, our approach outperforms the baselines by up to 3.6 points on the general personalization benchmark LaMP, which comprises 7 LLM personalization tasks. Our ablation studies validate the contributions of the different components of our method and provide insights into our task definition. Our dataset and code are available at https://github.com/gersteinlab/step-back-profiling.
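
The abstract only sketches the method at a high level. Below is a minimal illustrative sketch of the two-stage idea, assuming a generic chat-completion API; the prompt wording and the `call_llm`, `distill_profile`, and `personalized_write` helpers are hypothetical stand-ins, not the authors' released implementation (see the linked repository for that).

```python
# Sketch of the two-stage idea described in the abstract:
# (1) a "step-back" prompt distills a user's document history into a
# concise profile of traits and preferences, and (2) that profile is
# prepended as context when generating new text for that user (or, in
# the multi-author PSW setting, one profile per co-author).

from typing import List


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around any chat-completion API.

    Replace the body with a real client call (e.g. OpenAI's
    client.chat.completions.create) to run this end to end.
    """
    raise NotImplementedError("plug in an LLM client here")


# Illustrative "step-back" prompt: abstract away from individual
# documents to the author's stable, high-level characteristics.
STEP_BACK_PROMPT = (
    "Step back from the individual documents below and summarize the "
    "author's enduring traits: research interests, preferred methods, "
    "and writing style. Be concise.\n\nDocuments:\n{history}"
)


def distill_profile(user_history: List[str]) -> str:
    """Stage 1: compress raw user history into a short profile."""
    history = "\n---\n".join(user_history)
    return call_llm(STEP_BACK_PROMPT.format(history=history))


def personalized_write(profiles: List[str], task: str) -> str:
    """Stage 2: condition generation on one profile per co-author."""
    authors = "\n".join(f"Author {i + 1}: {p}" for i, p in enumerate(profiles))
    prompt = f"Co-author profiles:\n{authors}\n\nWriting task: {task}"
    return call_llm(prompt)
```

The design point this illustrates is that the distilled profiles are short enough to fit in the generation prompt, so the model conditions on stable user traits rather than on the full (and typically context-window-exceeding) raw history.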
