
The Behavior Of Large Language Models When Prompted To Generate Code Explanations

Oli Priti, Banjade Rabin, Chapagain Jeevan, Rus Vasile. arXiv 2023

[Paper]    
Interpretability And Explainability, Prompting

This paper systematically investigates the code explanations that Large Language Models (LLMs) generate for code examples commonly encountered in introductory programming courses. Our findings reveal significant variation in the nature of the generated explanations, influenced by factors such as the wording of the prompt, the specific code example under consideration, the programming language involved, the temperature parameter, and the version of the LLM. However, a consistent pattern emerges for Java and Python: explanations exhibit a Flesch-Kincaid readability grade level of approximately 7-8 and a consistent lexical density, i.e., proportion of meaningful words relative to the total length of the explanation. Additionally, the generated explanations consistently score high on correctness but lower on three other metrics: completeness, conciseness, and specificity.
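Since the abstract reports explanations clustering at a Flesch-Kincaid grade level of 7-8 with stable lexical density, a minimal sketch of how these two metrics are typically computed may be useful. The paper does not specify its tooling, so the sketch below uses the standard Flesch-Kincaid grade-level formula; the syllable counter and the function-word list are simplified, illustrative stand-ins.

```python
import re

# Tiny illustrative function-word list; a real analysis would use a
# full stopword/POS-based definition of "content words".
STOPWORDS = {
    "the", "a", "an", "and", "or", "but", "if", "of", "to", "in",
    "on", "for", "with", "is", "are", "was", "were", "be", "it",
    "this", "that", "as", "by", "at", "from", "we", "each", "which",
}

def count_syllables(word: str) -> int:
    """Rough syllable estimate: vowel groups, minus a trailing silent 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Standard formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / max(len(sentences), 1))
            + 11.8 * (syllables / max(len(words), 1)) - 15.59)

def lexical_density(text: str) -> float:
    """Proportion of content-bearing words among all word tokens."""
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    content = [w for w in words if w not in STOPWORDS]
    return len(content) / max(len(words), 1)

# Hypothetical LLM-generated code explanation, for illustration only.
explanation = ("This loop iterates over the list and adds each element "
               "to the running total, which is returned at the end.")
print(f"FK grade level:  {flesch_kincaid_grade(explanation):.1f}")
print(f"Lexical density: {lexical_density(explanation):.2f}")
```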

Similar Work