
The Challenges of Evaluating LLM Applications: An Analysis of Automated, Human, and LLM-Based Approaches

Abeysinghe Bhashithe, Circi Ruhan. arXiv 2024

[Paper]    
Applications Model Architecture Pretraining Methods Tools Transformer

Chatbots have been an interesting application of natural language generation since the field's inception. With novel transformer-based generative AI methods, building a chatbot has become trivial, and chatbots targeted at specific domains, for example medicine and psychology, can now be implemented rapidly. This, however, should not distract from the need to evaluate chatbot responses, especially because the natural language generation community has not fully agreed on how to evaluate such applications effectively. In this work we discuss this issue, focusing on the increasingly popular LLM-based evaluations and how they correlate with human evaluations. Additionally, we introduce a comprehensive factored evaluation mechanism that can be used with both human and LLM-based evaluations. We present the results of an experimental evaluation conducted with this scheme on one of our chatbot implementations, which consumes educational reports, and compare automated evaluation, traditional human evaluation, factored human evaluation, and factored LLM evaluation. Results show that factored evaluation produces better insights into which aspects of an LLM application need improvement, and further strengthens the argument for using human evaluation in critical settings where the main functionality is not direct retrieval.
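The abstract does not spell out the factors the paper uses, so the following is only an illustrative sketch of what a factored LLM-based evaluation could look like: a judge model scores each chatbot response along several hypothetical factors (accuracy, completeness, clarity, tone), assuming an OpenAI-style chat completion API. The factor names, prompt, and model choice are assumptions, not the paper's actual scheme.

```python
# Illustrative sketch only: factored LLM-as-judge evaluation.
# Factor names, prompt wording, and model are hypothetical.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FACTORS = ["accuracy", "completeness", "clarity", "tone"]  # hypothetical factors


def factored_llm_eval(question: str, response: str) -> dict:
    """Ask a judge model to score a chatbot response on each factor from 1 to 5."""
    prompt = (
        "Rate the following chatbot response on each factor from 1 (poor) to 5 (excellent).\n"
        f"Factors: {', '.join(FACTORS)}\n"
        f"Question: {question}\n"
        f"Response: {response}\n"
        'Reply with only a JSON object, e.g. {"accuracy": 4, "completeness": 3, ...}.'
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # any judge model; an assumption, not the paper's choice
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(completion.choices[0].message.content)
```

Averaging such per-factor scores over a test set is one way to see which aspects of the application need improvement, which is the kind of insight the paper attributes to factored evaluation.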

Similar Work