Personalized Large Language Models

Stanisław Woźniak, Bartłomiej Koptyra, Arkadiusz Janz, Przemysław Kazienko, Jan Kocoń. arXiv 2024

[Paper]    
Fine Tuning, Model Architecture, Pretraining Methods, Training Techniques

Large language models (LLMs) have significantly advanced natural language processing (NLP) in recent years. However, their universal, one-size-fits-all nature limits their usefulness in scenarios that require personalized responses, such as recommendation systems and chatbots. This paper investigates methods for personalizing LLMs, comparing fine-tuning and zero-shot reasoning approaches on subjective tasks. The results demonstrate that personalized fine-tuning improves model reasoning compared to non-personalized models. Experiments on emotion recognition and hate speech detection datasets show consistent performance gains from personalized methods across different LLM architectures. These findings underscore the importance of personalization for enhancing LLM capabilities on subjective text perception tasks.
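As a rough illustration of the zero-shot personalization idea described above, the sketch below builds a prompt conditioned on one user's prior annotations, so the model imitates that user's subjective judgments rather than an averaged one. The prompt template, function name, and data format are assumptions for illustration, not the paper's own.

```python
# Minimal sketch of personalized zero-shot prompting (illustrative only;
# the paper's actual prompt template and data format are not given here).

def personalized_prompt(text: str, user_history: list[tuple[str, str]]) -> str:
    """Build a zero-shot prompt that conditions the LLM on a single user's
    past annotations, e.g. for emotion recognition or hate speech detection."""
    examples = "\n".join(
        f'Text: "{t}" -> Label: {label}' for t, label in user_history
    )
    return (
        "The following texts were labeled by one specific user:\n"
        f"{examples}\n\n"
        "Label the next text the same way this user would.\n"
        f'Text: "{text}" -> Label:'
    )

if __name__ == "__main__":
    # Hypothetical per-user annotation history.
    history = [
        ("I can't believe they did that again!", "anger"),
        ("What a lovely surprise this morning.", "joy"),
    ]
    print(personalized_prompt("This is so typical of them...", history))
```

The fine-tuning variant compared in the paper would instead train on such user-conditioned examples, rather than supplying them only at inference time.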
