Are Small Language Models Ready To Compete With Large Language Models For Practical Applications?

Sinha Neelabh, Jain Vinija, Chadha Aman. arXiv 2024

[Paper]    
Applications, GPT, Merging, Model Architecture, Prompting, RAG, Tools

The rapid rise of Language Models (LMs) has expanded their use in several applications. Yet, due to constraints of model size, associated cost, or proprietary restrictions, utilizing state-of-the-art (SOTA) LLMs is not always feasible. With open, smaller LMs emerging, more applications can leverage their capabilities, but selecting the right LM can be challenging, as smaller LMs do not perform well universally. This work bridges that gap by proposing a framework to experimentally evaluate small, open LMs in practical settings, measuring the semantic correctness of outputs across three practical aspects: task types, application domains, and reasoning types, using diverse prompt styles. Using this framework, it conducts an in-depth comparison of 10 small, open LMs to identify the best LM and prompt style for a given application requirement. The authors also show that, if selected appropriately, these small LMs can outperform SOTA LLMs like DeepSeek-v2, GPT-4o-mini, and Gemini-1.5-Pro, and even compete with GPT-4o.
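
As a rough illustration of the evaluation loop the abstract describes, the sketch below runs several small LMs over a faceted dataset with multiple prompt styles and scores each output for semantic correctness. Everything concrete here is an assumption for illustration, not the authors' code: the model names and prompt templates are placeholders, and sentence-embedding cosine similarity stands in for whatever semantic-correctness measure the paper actually uses.

```python
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

# Illustrative choices only -- the paper compares 10 small, open LMs and
# diverse prompt styles; the models and templates below are placeholders.
SMALL_LMS = ["Qwen/Qwen2-0.5B-Instruct", "microsoft/Phi-3-mini-4k-instruct"]
PROMPT_STYLES = {
    "direct": "{question}",
    "role": "You are a domain expert. Answer concisely.\n{question}",
    "cot": "{question}\nThink step by step, then give the final answer.",
}

# Stand-in for the paper's semantic-correctness measure (assumption): cosine
# similarity between embeddings of the model output and the gold answer.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_correctness(output: str, reference: str) -> float:
    emb = embedder.encode([output, reference], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

def evaluate(dataset):
    """dataset: dicts with 'question', 'answer', and facet labels
    ('task_type', 'domain', 'reasoning_type') used to slice results."""
    scores = {}
    for model_name in SMALL_LMS:
        generate = pipeline("text-generation", model=model_name)
        for style, template in PROMPT_STYLES.items():
            for ex in dataset:
                prompt = template.format(question=ex["question"])
                text = generate(prompt, max_new_tokens=128,
                                return_full_text=False)[0]["generated_text"]
                key = (model_name, style,
                       ex["task_type"], ex["domain"], ex["reasoning_type"])
                scores.setdefault(key, []).append(
                    semantic_correctness(text, ex["answer"]))
    # Mean score per (model, prompt style, facet) supports picking the best
    # LM and prompt style for a given application requirement.
    return {k: sum(v) / len(v) for k, v in scores.items()}
```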
