CBF-LLM: Safe Control For LLM Alignment

Miyaoka Yuya, Inoue Masaki. arXiv 2024

[Paper] [Code]
Tags: Applications, BERT, Has Code, Language Modeling, Model Architecture, RAG, Responsible AI, Tools

This paper proposes a control-based framework for aligning large language models (LLMs) that leverages a control barrier function (CBF) to ensure user-desirable text generation. The framework applies a CBF-based safety filter to the output of the baseline LLM, i.e., the generated token sequence, intervening in the text as it is produced. The overall text-generation system is implemented with Llama 3 and a RoBERTa model, and the source code is available at https://github.com/Mya-Mya/CBF-LLM. Experiments demonstrate the framework's control ability and its effectiveness in reducing the number of interventions needed for user-specified alignment tasks.
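As a rough illustration of the idea (not the authors' actual implementation), the sketch below applies a classifier-based safety filter at each decoding step: candidate next tokens are tried in order of likelihood, and a candidate is accepted only if the resulting continuation stays above a safety threshold. The names `safety_score`, `cbf_filtered_step`, and the threshold `alpha` are hypothetical, `roberta-base` is a stand-in for the paper's fine-tuned RoBERTa scorer, and the simple threshold check only approximates a true CBF condition.

```python
# Minimal sketch of a CBF-style token filter, under the assumptions stated above.
import torch
from transformers import (AutoModelForCausalLM, AutoModelForSequenceClassification,
                          AutoTokenizer)

LM_NAME = "meta-llama/Meta-Llama-3-8B"   # baseline LLM used in the paper
CLF_NAME = "roberta-base"                # stand-in for the paper's safety scorer

tok = AutoTokenizer.from_pretrained(LM_NAME)
lm = AutoModelForCausalLM.from_pretrained(LM_NAME)
clf_tok = AutoTokenizer.from_pretrained(CLF_NAME)
clf = AutoModelForSequenceClassification.from_pretrained(CLF_NAME, num_labels=2)


def safety_score(text: str) -> float:
    """Probability that `text` is user-desirable, according to the classifier."""
    inputs = clf_tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = clf(**inputs).logits
    return logits.softmax(-1)[0, 1].item()


def cbf_filtered_step(ids: torch.Tensor, alpha: float = 0.5, top_k: int = 20):
    """One generation step: return ids extended by the most likely candidate
    token whose continuation stays in the safe set {x : safety_score(x) >= alpha}."""
    with torch.no_grad():
        next_logits = lm(ids).logits[0, -1]
    for cand in torch.topk(next_logits, top_k).indices:
        new_ids = torch.cat([ids, cand.view(1, 1)], dim=-1)
        if safety_score(tok.decode(new_ids[0])) >= alpha:  # CBF-like constraint
            return new_ids
    return ids  # no safe candidate among the top-k: intervene / stop generation
```

Each rejected candidate corresponds to an intervention by the filter; the experiments in the paper measure and aim to reduce the number of such interventions for a given alignment task.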

Similar Work