Benchmarking Open-source Language Models For Efficient Question Answering In Industrial Applications

Mahaman Sanoussi Yahaya Alassan, Jessica López Espejel, Merieme Bouhandi, Walid Dahhane, El Hassane Ettifouri. arXiv 2024

[Paper]
Tags: Applications, Efficiency And Optimization, Reinforcement Learning, Tools

In the rapidly evolving landscape of Natural Language Processing (NLP), Large Language Models (LLMs) have demonstrated remarkable capabilities in tasks such as question answering (QA). However, the accessibility and practicality of utilizing these models for industrial applications pose significant challenges, particularly concerning cost-effectiveness, inference speed, and resource efficiency. This paper presents a comprehensive benchmarking study comparing open-source LLMs with their non-open-source counterparts on the task of question answering. Our objective is to identify open-source alternatives capable of delivering comparable performance to proprietary models while being lightweight in terms of resource requirements and suitable for Central Processing Unit (CPU)-based inference. Through rigorous evaluation across various metrics including accuracy, inference speed, and resource consumption, we aim to provide insights into selecting efficient LLMs for real-world applications. Our findings shed light on viable open-source alternatives that offer acceptable performance and efficiency, addressing the pressing need for accessible and efficient NLP solutions in industry settings.
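The evaluation protocol the abstract outlines, timing inference and tracking resource use for CPU-based question answering, can be sketched minimally with Python's standard library. The `benchmark_qa` helper and the `dummy_answer` model stub below are illustrative assumptions, not the paper's actual harness; in practice the stub would be replaced by a call into a real QA model.

```python
import time
import tracemalloc
from statistics import mean

def benchmark_qa(answer_fn, questions, runs=3):
    """Measure mean per-question latency (seconds) and peak traced
    memory (bytes) for a question-answering callable on CPU."""
    latencies = []
    tracemalloc.start()
    for _ in range(runs):
        for q in questions:
            t0 = time.perf_counter()
            answer_fn(q)                       # the model call under test
            latencies.append(time.perf_counter() - t0)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"mean_latency_s": mean(latencies), "peak_mem_bytes": peak}

# Hypothetical stand-in for a real open-source LLM call
# (e.g., a local CPU inference pipeline); purely illustrative.
def dummy_answer(question: str) -> str:
    return "42" if "answer" in question else "unknown"

stats = benchmark_qa(dummy_answer, ["What is the answer?", "Who wrote it?"])
print(stats["mean_latency_s"] >= 0.0)
```

Accuracy would be scored separately against gold answers; this sketch covers only the latency and memory dimensions the abstract mentions.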

Similar Work