
ChatGPT Rates Natural Language Explanation Quality Like Humans: But On Which Scales?

Huang Fan, Kwak Haewoon, Park Kunwoo, An Jisun. arXiv 2024

Tags: Ethics And Bias · GPT · Interpretability And Explainability · Model Architecture · Prompting · Responsible AI

As AI becomes more integral to our lives, the need for transparency and accountability grows. While natural language explanations (NLEs) are vital for clarifying the reasoning behind AI decisions, evaluating them through human judgments is complex and resource-intensive, owing to their subjectivity and the need for fine-grained ratings. This study explores how well ChatGPT's assessments align with human ones across multiple rating scales (binary, ternary, and 7-point Likert). We sample 300 instances from three NLE datasets and collect 900 human annotations of both informativeness and clarity as measures of text quality. We further conduct paired-comparison experiments under different ranges of subjectivity scores, with a baseline drawn from 8,346 human annotations. Our results show that ChatGPT aligns better with humans on coarser-grained scales. Paired comparisons and dynamic prompting (i.e., including semantically similar examples in the prompt) also improve the alignment. This research advances our understanding of large language models' ability to assess text explanation quality under different configurations, supporting responsible AI development.
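To make the dynamic-prompting idea concrete, the sketch below retrieves the rated examples most semantically similar to a query explanation and inserts them as few-shot demonstrations in a ternary-scale rating prompt. The embedding model, prompt wording, example pool, and the `dynamic_prompt`/`rate_explanation` helpers are all illustrative assumptions for this sketch, not the authors' implementation.

```python
# Illustrative sketch of dynamic prompting for NLE quality rating.
# Model names, prompt wording, and the example pool are assumptions
# for demonstration, not the paper's exact configuration.
import numpy as np
from sentence_transformers import SentenceTransformer
from openai import OpenAI

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A small pool of human-rated explanations (hypothetical data).
rated_pool = [
    {"explanation": "The premise says the man is asleep, so he cannot be jogging.",
     "informativeness": 3},
    {"explanation": "Because it just is.", "informativeness": 1},
    {"explanation": "Dogs are animals, and the sentence mentions a dog outdoors.",
     "informativeness": 2},
]
pool_embs = embedder.encode([ex["explanation"] for ex in rated_pool])

def dynamic_prompt(query: str, k: int = 2) -> str:
    """Build a rating prompt whose few-shot examples are the k pool
    entries most similar (by cosine similarity) to the query explanation."""
    q = embedder.encode([query])[0]
    sims = pool_embs @ q / (np.linalg.norm(pool_embs, axis=1) * np.linalg.norm(q))
    top = np.argsort(sims)[::-1][:k]
    shots = "\n".join(
        f"Explanation: {rated_pool[i]['explanation']}\n"
        f"Informativeness (1-3): {rated_pool[i]['informativeness']}"
        for i in top
    )
    return (
        "Rate the informativeness of the explanation on a ternary scale "
        "(1 = low, 2 = medium, 3 = high).\n\n"
        f"{shots}\n\nExplanation: {query}\nInformativeness (1-3):"
    )

def rate_explanation(query: str) -> str:
    """Send the dynamically built prompt to ChatGPT and return its rating."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": dynamic_prompt(query)}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

print(rate_explanation("The second sentence contradicts the first, so the label is contradiction."))
```

The ternary scale here mirrors one of the coarse-grained configurations studied in the paper; swapping the instruction and the demonstrations' labels would yield the binary or 7-point Likert variants.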
