
Ensuring Safe And High-quality Outputs: A Guideline Library Approach For Language Models

Luo Yi, Lin Zhenghao, Zhang Yuhao, Sun Jiashuo, Lin Chen, Xu Chengjin, Su Xiangdong, Shen Yelong, Guo Jian, Gong Yeyun. arXiv 2024

[Paper]    
Ethics And Bias Fine Tuning GPT Model Architecture Pretraining Methods Responsible AI Security Tools Training Techniques

Large Language Models (LLMs) exhibit impressive capabilities but also present risks such as biased content generation and privacy issues. Current alignment techniques include principle-driven integration, but this approach faces challenges arising from the imprecision of manually crafted rules and inadequate risk perception in models that lack safety training. To address these issues, we introduce Guide-Align, a two-stage approach. First, a safety-trained model identifies potential risks and formulates specific guidelines for various inputs, establishing a comprehensive guideline library and a model for input-to-guideline retrieval. Second, for each new input, the retrieval model fetches the relevant guidelines, which steer the LLM's response generation toward safe and high-quality outputs aligned with human values. An optional third stage fine-tunes a model on well-aligned datasets generated by the second-stage process. Our method tailors guidelines to diverse inputs, improving the granularity and coverage of the guideline library, and it transfers safety expertise from a safety-trained LLM through a lightweight retrieval model. We evaluate our approach on three benchmarks, demonstrating significant improvements in LLM security and quality. Notably, our fine-tuned model, Labrador, despite having only 13 billion parameters, outperforms GPT-3.5-turbo and surpasses GPT-4 in alignment capabilities.
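To make the second-stage inference flow concrete, the sketch below illustrates the general pattern the abstract describes: retrieve the guidelines most relevant to an input, then prepend them to the prompt before generation. It is a minimal illustration, not the paper's implementation; the `embed` and `generate` functions are placeholders for the lightweight retrieval model and the base LLM, and the toy guideline library stands in for the library built in stage one.

```python
import numpy as np

# Hypothetical stand-ins for the paper's components: a lightweight text
# encoder (the retrieval model) and a generator LLM. Names are illustrative.
def embed(text: str) -> np.ndarray:
    """Return a unit-norm embedding for `text` (placeholder)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def generate(prompt: str) -> str:
    """Call the base LLM on `prompt` (placeholder)."""
    return f"[model response to a prompt of {len(prompt)} characters]"

# Stage-1 output in miniature: a guideline library with a few hand-written entries.
GUIDELINES = [
    "Do not reveal personal or private information about individuals.",
    "Refuse requests that facilitate illegal activity and briefly explain why.",
    "Avoid stereotypes; describe groups of people neutrally and factually.",
]
GUIDELINE_VECS = np.stack([embed(g) for g in GUIDELINES])

def retrieve_guidelines(user_input: str, top_k: int = 2) -> list[str]:
    """Stage 2a: match the input against the guideline library."""
    q = embed(user_input)
    scores = GUIDELINE_VECS @ q          # cosine similarity (unit vectors)
    best = np.argsort(scores)[::-1][:top_k]
    return [GUIDELINES[i] for i in best]

def guided_response(user_input: str) -> str:
    """Stage 2b: prepend the retrieved guidelines to steer generation."""
    guidelines = retrieve_guidelines(user_input)
    prompt = (
        "Follow these guidelines when answering:\n"
        + "\n".join(f"- {g}" for g in guidelines)
        + f"\n\nUser: {user_input}\nAssistant:"
    )
    return generate(prompt)

print(guided_response("Tell me where my neighbor works."))
```

In the paper's full pipeline, the input-guideline pairs and guided responses produced this way can also be collected into a dataset for the optional fine-tuning stage.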

Similar Work