Evaluating The Efficacy Of Foundational Models: Advancing Benchmarking Practices To Enhance Fine-tuning Decision-making

Amujo Oluyemi Enoch, Yang Shanchieh Jay. arXiv 2024

[Paper]    
Fine Tuning, Pretraining Methods, Prompting, Security, Tools, Training Techniques

Large language models (LLMs) have recently expanded into various domains, yet there remains a need to evaluate how they perform on commonplace queries compared with domain-specific queries, a comparison that is useful for benchmarking prior to fine-tuning for domain-specific downstream tasks. This study evaluates two LLMs, Gemma-2B and Gemma-7B, across diverse domains, including cybersecurity, medicine, and finance, and contrasts their behavior on common-knowledge queries. The methodology spans problem formulation, data analysis, and the development of ThroughCut, a novel outlier detection technique that automatically identifies response-throughput outliers based on their conciseness; this methodological rigor strengthens the credibility of the presented evaluation framework. The evaluation measures inference time, response length, throughput, response quality, and resource utilization, and investigates the correlations among these factors. The results indicate that model size and the type of prompt used for inference significantly influence response length and quality. Common prompts, which span varied query types, generate diverse and inconsistent responses at irregular intervals, whereas domain-specific prompts consistently generate concise responses within a reasonable time. Overall, the study underscores the need for comprehensive evaluation frameworks to enhance the reliability of benchmarking procedures in multidomain AI research.
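
The abstract names several per-prompt measurements (inference time, response length, tokens-per-second throughput) and the ThroughCut filter for throughput outliers. The minimal Python sketch below shows how such measurements might be recorded and screened; the record fields, the example prompts, and the interquartile-range cutoff are all assumptions, since the abstract does not describe ThroughCut's actual rule.

```python
import statistics
from dataclasses import dataclass


@dataclass
class InferenceRecord:
    """One benchmarked prompt/response pair (hypothetical schema)."""
    prompt: str
    response_tokens: int      # length of the generated response, in tokens
    inference_seconds: float  # wall-clock time for the generation


def throughput(record: InferenceRecord) -> float:
    """Tokens generated per second for a single inference."""
    return record.response_tokens / record.inference_seconds


def flag_throughput_outliers(records: list[InferenceRecord], k: float = 1.5) -> list[InferenceRecord]:
    """Return records whose throughput lies outside k * IQR of the quartiles.

    ThroughCut ties outlier detection to response conciseness; its exact
    criterion is not given in the abstract, so a standard interquartile-range
    cutoff is used here purely as a placeholder.
    """
    rates = sorted(throughput(r) for r in records)
    q1, _, q3 = statistics.quantiles(rates, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [r for r in records if not lo <= throughput(r) <= hi]


if __name__ == "__main__":
    # Hypothetical measurements for a mix of domain-specific and common prompts.
    runs = [
        InferenceRecord("What is DNS?", 120, 2.4),
        InferenceRecord("Explain CVE-2021-44228 mitigation steps.", 95, 2.1),
        InferenceRecord("Summarise the causes of inflation.", 110, 2.3),
        InferenceRecord("Define myocardial infarction.", 80, 1.8),
        InferenceRecord("Describe how TLS handshakes work.", 97, 2.0),
        InferenceRecord("Tell me a story about a dragon.", 900, 6.0),  # verbose, anomalous run
    ]
    for r in flag_throughput_outliers(runs):
        print(f"Outlier: {r.prompt!r} at {throughput(r):.1f} tokens/s")
```

In a faithful reproduction, the paper's conciseness-based criterion would replace the IQR stand-in in flag_throughput_outliers; the surrounding bookkeeping (per-prompt timing and token counts) is the part the abstract clearly implies.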

Similar Work