Evaluating Psychological Safety Of Large Language Models · The Large Language Model Bible Contribute to LLM-Bible

Evaluating Psychological Safety Of Large Language Models

Li Xingxuan, Li Yutong, Qiu Lin, Joty Shafiq, Bing Lidong. Arxiv 2022

[Paper]    
Efficiency And Optimization Ethics And Bias Fine Tuning GPT Model Architecture Pretraining Methods Prompting RAG Reinforcement Learning Responsible AI Training Techniques

In this work, we designed unbiased prompts to systematically evaluate the psychological safety of large language models (LLMs). First, we tested five different LLMs by using two personality tests: Short Dark Triad (SD-3) and Big Five Inventory (BFI). All models scored higher than the human average on SD-3, suggesting a relatively darker personality pattern. Despite being instruction fine-tuned with safety metrics to reduce toxicity, InstructGPT, GPT-3.5, and GPT-4 still showed dark personality patterns; these models scored higher than self-supervised GPT-3 on the Machiavellianism and narcissism traits on SD-3. Then, we evaluated the LLMs in the GPT series by using well-being tests to study the impact of fine-tuning with more training data. We observed a continuous increase in the well-being scores of GPT models. Following these observations, we showed that fine-tuning Llama-2-chat-7B with responses from BFI using direct preference optimization could effectively reduce the psychological toxicity of the model. Based on the findings, we recommended the application of systematic and comprehensive psychological metrics to further evaluate and improve the safety of LLMs.

Similar Work