Fine-grained Affective Processing Capabilities Emerging From Large Language Models

Broekens Joost, Hilpert Bernhard, Verberne Suzan, Baraka Kim, Gebhard Patrick, Plaat Aske. arXiv 2023

[Paper]    
Agentic · Applications · GPT · Merging · Model Architecture · Pretraining Methods · Prompting · Transformer

Large language models, in particular generative pre-trained transformers (GPTs), show impressive results on a wide variety of language-related tasks. In this paper, we explore ChatGPT’s zero-shot ability to perform affective computing tasks using prompting alone. We show that ChatGPT a) performs meaningful sentiment analysis in the Valence, Arousal and Dominance dimensions, b) has meaningful emotion representations in terms of emotion categories and these affective dimensions, and c) can perform basic appraisal-based emotion elicitation of situations using a prompt-based computational implementation of the OCC appraisal model. These findings are highly relevant: First, they show that the ability to solve complex affect processing tasks emerges from language-based token prediction trained on extensive data sets. Second, they show the potential of large language models for simulating, processing and analyzing human emotions, which has important implications for various applications such as sentiment analysis, socially interactive agents, and social robotics.
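The abstract describes zero-shot affective ratings obtained through prompting alone. The sketch below illustrates one way such a Valence-Arousal-Dominance prompt could be issued to a chat model; it is an illustrative assumption, not the paper’s actual prompt: the model name, the 1–9 rating scale, and the output format are all hypothetical choices, and it assumes the `openai` Python package with an `OPENAI_API_KEY` set in the environment.

```python
# Minimal sketch: zero-shot VAD sentiment rating via prompting.
# The prompt wording, scale, and model name are assumptions for illustration,
# not the prompts used in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rate_vad(text: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the model for Valence, Arousal, and Dominance ratings of `text`."""
    prompt = (
        "Rate the affective content of the following text on three dimensions, "
        "each on a scale from 1 (very low) to 9 (very high): "
        "Valence (unpleasant to pleasant), Arousal (calm to excited), "
        "Dominance (submissive to dominant). "
        "Answer only as 'valence=<n>, arousal=<n>, dominance=<n>'.\n\n"
        f"Text: {text}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep ratings as stable as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(rate_vad("I finally passed my driving exam after three attempts!"))
```

The appraisal-based emotion elicitation described in point c) would follow the same prompting pattern, with the prompt instead asking the model to evaluate a situation against OCC appraisal variables.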

Similar Work