Evaluation Of RAG Metrics For Question Answering In The Telecom Domain

Sujoy Roychowdhury, Sumit Soman, H G Ranjani, Neeraj Gunda, Vansh Chhabra, Sai Krishna Bala. arXiv 2024

[Paper]    
Tags: Applications, Fine Tuning, Pretraining Methods, Prompting, RAG, Tools, Training Techniques

Retrieval Augmented Generation (RAG) is widely used to enable Large Language Models (LLMs) to perform Question Answering (QA) tasks across various domains. However, RAG based on open-source LLMs in specialized domains poses challenges for evaluating the generated responses. A popular framework in the literature is RAG Assessment (RAGAS), a publicly available library that uses LLMs for evaluation. One disadvantage of RAGAS is the lack of detail on how the numerical values of its evaluation metrics are derived. One outcome of this work is a modified version of this package for a few metrics (faithfulness, context relevance, answer relevance, answer correctness, answer similarity and factual correctness), which exposes the intermediate outputs of the prompts and works with any LLM. Next, we analyse expert evaluations of the output of the modified RAGAS package and observe the challenges of using it in the telecom domain. We also study the behaviour of the metrics under correct vs. wrong retrieval and observe that a few of the metrics take higher values for correct retrieval. We further study differences in metrics between base embeddings and those domain-adapted via pre-training and fine-tuning. Finally, we comment on the suitability and challenges of using these metrics for an in-the-wild telecom QA task.
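For context, below is a minimal sketch of how the stock RAGAS library is typically invoked to score a RAG output on metrics like those studied here. It assumes a RAGAS 0.1.x-style API (exact metric names such as `context_relevancy` and the `ground_truth` column name vary across releases), and the telecom question, answer, and contexts are illustrative placeholders; note that stock RAGAS returns only final scores, whereas the modified package described in the paper additionally surfaces the intermediate prompt outputs.

```python
# Sketch of scoring a RAG output with RAGAS (API as in ~0.1.x releases;
# metric names and dataset column names may differ in other versions).
# The sample question/answer/contexts are illustrative placeholders.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    faithfulness,        # is the answer grounded in the retrieved contexts?
    answer_relevancy,    # does the answer address the question?
    context_relevancy,   # are the retrieved contexts relevant to the question?
    answer_correctness,  # agreement with the ground-truth answer
    answer_similarity,   # embedding similarity to the ground-truth answer
)

ds = Dataset.from_dict({
    "question": ["What does 5G NR stand for?"],
    "answer": ["5G NR stands for 5G New Radio, the global 5G air interface."],
    "contexts": [[
        "NR (New Radio) is the air interface developed by 3GPP for 5G networks."
    ]],
    # Older RAGAS releases expect "ground_truths" (a list) instead.
    "ground_truth": ["5G New Radio, the 3GPP air interface standard for 5G."],
})

# Each metric is computed by prompting a judge LLM (OpenAI by default,
# requiring an API key; other LLMs are configurable). Stock RAGAS returns
# only the final scores in [0, 1], not the intermediate prompt outputs.
result = evaluate(
    ds,
    metrics=[faithfulness, answer_relevancy, context_relevancy,
             answer_correctness, answer_similarity],
)
print(result)  # e.g. {'faithfulness': 1.0, 'answer_relevancy': 0.98, ...}
```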
