
Can We Trust The Evaluation On ChatGPT?

Aiyappa Rachith, An Jisun, Kwak Haewoon, Ahn Yong-Yeol. Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)

[Paper]    
Tags: Agentic, GPT, Model Architecture, Reinforcement Learning

ChatGPT, the first large language model (LLM) to reach mass adoption, has demonstrated remarkable performance on numerous natural language tasks. Despite its evident usefulness, evaluating ChatGPT’s performance across diverse problem domains remains challenging because the model is closed and continuously updated via Reinforcement Learning from Human Feedback (RLHF). We highlight the issue of data contamination in ChatGPT evaluations, using stance detection as a case study, and discuss the challenge of preventing data contamination and ensuring fair model evaluation in the age of closed and continuously trained models.
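The listing includes no code, and the paper does not prescribe a specific detection procedure here; as a minimal sketch of one common kind of contamination check (not the authors' method), the snippet below flags evaluation examples that share a long word n-gram with a reference corpus. The names `ngrams` and `contamination_rate` and the 8-gram window are illustrative assumptions.

```python
from typing import Iterable, Set

def ngrams(text: str, n: int = 8) -> Set[str]:
    """Return the set of word-level n-grams in `text` (lowercased)."""
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def contamination_rate(eval_examples: Iterable[str],
                       reference_corpus: Iterable[str],
                       n: int = 8) -> float:
    """Fraction of evaluation examples that share at least one n-gram
    with any reference document -- a rough proxy for test-set leakage."""
    corpus_grams: Set[str] = set()
    for doc in reference_corpus:
        corpus_grams |= ngrams(doc, n)
    examples = list(eval_examples)
    flagged = sum(1 for ex in examples if ngrams(ex, n) & corpus_grams)
    return flagged / len(examples) if examples else 0.0

if __name__ == "__main__":
    corpus = ["the quick brown fox jumps over the lazy dog near the river bank"]
    evals = ["the quick brown fox jumps over the lazy dog near the river",
             "an entirely unrelated sentence about stance detection benchmarks"]
    print(f"contamination rate: {contamination_rate(evals, corpus):.2f}")
```

Note that a check like this presupposes access to the training corpus; for a closed, continuously updated model such as ChatGPT that corpus is unavailable, which is precisely the evaluation difficulty the paper raises.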

Similar Work