Does Alignment Tuning Really Break LLMs' Internal Confidence?

Hongseok Oh, Wonseok Hwang. arXiv 2024

[Paper]    
Reinforcement Learning

Large Language Models (LLMs) have shown remarkable progress, but their real-world application requires reliable calibration. This study conducts a comprehensive analysis of calibration degradation in LLMs across four dimensions: models, calibration metrics, tasks, and confidence extraction methods. An initial analysis suggests that the relationship between alignment and calibration is not always a trade-off; however, under stricter analysis conditions, the alignment process is found to consistently harm calibration. This highlights the need for (1) a careful approach when measuring model confidences and calibration errors and (2) future research into algorithms that help LLMs achieve both instruction-following and calibration without sacrificing either.
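For context on the calibration metrics the abstract refers to, the standard quantity in this line of work is Expected Calibration Error (ECE): the weighted gap between a model's stated confidence and its actual accuracy, binned over confidence levels. The paper does not publish code here, so the following is a minimal sketch, assuming per-answer confidences in [0, 1] (e.g., extracted from answer-token probabilities or verbalized confidence) and 0/1 correctness labels:

```python
import numpy as np

def expected_calibration_error(confidences, correctness, n_bins=10):
    """ECE: bin predictions by confidence, then average the per-bin
    |mean confidence - accuracy| gap, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correctness = np.asarray(correctness, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Half-open bins (lo, hi]; the first bin also includes 0.
        mask = (confidences > lo) & (confidences <= hi)
        if lo == 0.0:
            mask |= confidences == 0.0
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - correctness[mask].mean())
        ece += mask.mean() * gap
    return ece

# Hypothetical usage: a well-calibrated model's 80%-confidence answers
# should be correct about 80% of the time, giving a small ECE.
conf = np.array([0.9, 0.8, 0.7, 0.95, 0.6])
correct = np.array([1, 1, 0, 1, 1])
print(expected_calibration_error(conf, correct))
```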
