
High-dimension Human Value Representation In Large Language Models

Cahyawijaya Samuel, Chen Delong, Bang Yejin, Khalatbari Leila, Wilie Bryan, Ji Ziwei, Ishii Etsuko, Fung Pascale. arXiv 2024

[Paper]    
Agentic · GPT · Language Modeling · Model Architecture · Reinforcement Learning · Training Techniques

The widespread application of Large Language Models (LLMs) across various tasks and fields has necessitated the alignment of these models with human values and preferences. Given the various approaches to human value alignment, ranging from Reinforcement Learning from Human Feedback (RLHF) to constitutional learning, there is an urgent need to understand the scope and nature of the human values injected into these models before their release. There is also a need for model alignment that does not require a costly, large-scale human annotation effort. We propose UniVaR, a high-dimensional representation of human value distributions in LLMs, orthogonal to model architecture and training data. Trained on the value-relevant outputs of eight multilingual LLMs and tested on the outputs of four multilingual LLMs, namely LLaMA2, ChatGPT, JAIS and Yi, we show that UniVaR is a powerful tool for comparing the distributions of human values embedded in different LLMs trained on different language sources. Through UniVaR, we explore how different LLMs prioritize various values across languages and cultures, shedding light on the complex interplay between human values and language modeling.
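
To make the kind of comparison UniVaR enables more concrete, here is a minimal illustrative sketch, not the paper's actual method: it assumes you have already embedded each model's answers to a shared set of value-eliciting questions with some sentence encoder, and it summarizes each model's "value cloud" by its mean embedding before comparing the two summaries with cosine similarity. The random arrays stand in for real embeddings, and the `embed` encoder mentioned in the comments is hypothetical.

```python
# Illustrative sketch only (not UniVaR itself): compare two LLMs'
# value-relevant outputs in embedding space.
import numpy as np

def mean_embedding(embeddings: np.ndarray) -> np.ndarray:
    """Summarize a cloud of answer embeddings by its centroid."""
    return embeddings.mean(axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two summary vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical inputs: in practice these would be embed(answers_a) and
# embed(answers_b), where both models answered the same value-eliciting
# questions (e.g. "Is it acceptable to lie to protect a friend?")
# posed in several languages. Random data is used here as a stand-in.
rng = np.random.default_rng(0)
answers_model_a = rng.normal(size=(100, 384))
answers_model_b = rng.normal(size=(100, 384))

sim = cosine_similarity(mean_embedding(answers_model_a),
                        mean_embedding(answers_model_b))
print(f"Value-representation similarity between models: {sim:.3f}")
```

A centroid comparison is deliberately crude; the paper learns the representation itself from value-relevant outputs, so richer distributional comparisons (e.g., per-language or per-value clusters) are what the learned UniVaR space is designed to support.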

Similar Work