
Modular Pluralism: Pluralistic Alignment via Multi-LLM Collaboration

Shangbin Feng, Taylor Sorensen, Yuhan Liu, Jillian Fisher, Chan Young Park, Yejin Choi, Yulia Tsvetkov. arXiv 2024

[Paper]    
Tags: RAG, Reinforcement Learning, Tools

While existing alignment paradigms have been integral in developing large language models (LLMs), LLMs often learn an averaged human preference and struggle to model diverse preferences across cultures, demographics, and communities. We propose Modular Pluralism, a modular framework based on multi-LLM collaboration for pluralistic alignment: it “plugs into” a base LLM a pool of smaller but specialized community LMs, where models collaborate in distinct modes to flexibly support three modes of pluralism: Overton, steerable, and distributional. Modular Pluralism is uniquely compatible with black-box LLMs and offers the modular control of adding new community LMs for previously underrepresented communities. We evaluate Modular Pluralism with six tasks and four datasets featuring questions/instructions with value-laden and perspective-informed responses. Extensive experiments demonstrate that Modular Pluralism advances the three pluralism objectives across six black-box and open-source LLMs. Further analysis reveals that LLMs are generally faithful to the inputs from smaller community LMs, allowing seamless patching by adding a new community LM to better cover previously underrepresented communities.
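To make the three collaboration modes concrete, here is a minimal, hypothetical Python sketch. The paper does not publish this interface; the community LMs and base LLM below are stand-in stub functions (all names are assumptions), and the real system would route prompts between actual language models. The sketch only illustrates the control flow: Overton mode aggregates all community perspectives, steerable mode conditions on one chosen community, and distributional mode weights community answers to match a target population distribution.

```python
# Hypothetical sketch of Modular Pluralism's three modes (not the authors' code).
# Community LMs and the base LLM are stubbed with plain functions; in practice
# each entry would wrap a smaller, community-specialized language model.

COMMUNITY_LMS = {
    "community_a": lambda q: f"Perspective A on: {q}",
    "community_b": lambda q: f"Perspective B on: {q}",
}

def base_llm(prompt):
    # Stub for a (possibly black-box) base LLM that conditions on the
    # community messages appended to the query.
    return f"[base LLM, conditioned on] {prompt}"

def overton_mode(query):
    """Overton: surface the full spectrum by feeding every community
    LM's perspective to the base LLM."""
    perspectives = [lm(query) for lm in COMMUNITY_LMS.values()]
    return base_llm(query + " | " + " ; ".join(perspectives))

def steerable_mode(query, community):
    """Steerable: align the response with one target community by
    passing only that community LM's message."""
    return base_llm(query + " | " + COMMUNITY_LMS[community](query))

def distributional_mode(query, weights):
    """Distributional: return community answers with normalized weights,
    approximating a target distribution over perspectives."""
    total = sum(weights.values())
    return {COMMUNITY_LMS[c](query): w / total for c, w in weights.items()}
```

Note how the modularity claim in the abstract maps onto the sketch: covering a previously underrepresented community amounts to adding one entry to `COMMUNITY_LMS`, with no change to the base LLM or the mode logic.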

Similar Work