
Better Question-answering Models On A Budget

Yudhanjaya Wijeratne, Ishan Marikar. arXiv 2023

[Paper]    
Tags: Fine-Tuning, GPT, Model Architecture, Prompting

Low-rank adaptation (LoRA) and question-answer datasets generated by large language models have made it far easier to fine-tune much smaller models to the point where they display sophisticated conversational abilities. In this paper, we present Eluwa, a family of LoRA models that use the Stanford Alpaca dataset to massively improve the capabilities of Facebook's OPT 1.3B, 2.7B, and 6.7B models. We benchmark these models in multiple ways, including having GPT-4 judge their answers to prompts spanning general knowledge, writing, programming, and other tasks. We show that smaller models can be fine-tuned to be as performant as models 3x larger, all for as little as 40 USD in compute.
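The paper's exact training code is not reproduced here, but the general recipe it describes (LoRA fine-tuning of an OPT model on the Stanford Alpaca dataset) can be sketched with the Hugging Face transformers, datasets, and peft libraries. The hyperparameters (LoRA rank, learning rate, batch size) and the prompt template below are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: LoRA fine-tuning of facebook/opt-1.3b on Alpaca-style data.
# Hyperparameters and prompt format are assumptions, not the paper's config.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "facebook/opt-1.3b"  # the paper also covers 2.7B and 6.7B
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name,
                                             torch_dtype=torch.float16)

# LoRA injects small low-rank adapter matrices into the attention
# projections; only these adapters are trained, which keeps compute cheap.
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Alpaca records carry instruction / input / output fields.
dataset = load_dataset("tatsu-lab/alpaca", split="train")

def to_example(record):
    # Assumed instruction-following prompt template.
    prompt = f"### Instruction:\n{record['instruction']}\n"
    if record["input"]:
        prompt += f"### Input:\n{record['input']}\n"
    prompt += f"### Response:\n{record['output']}"
    return tokenizer(prompt, truncation=True, max_length=512)

tokenized = dataset.map(to_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="opt-1.3b-alpaca-lora",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-4,
        fp16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("opt-1.3b-alpaca-lora")  # saves only the adapter weights
```

Because only the adapter weights are saved, the resulting artifact is a few megabytes rather than a full model checkpoint, which is what keeps the approach within the small compute budget the abstract cites. Evaluation in the paper then compares the adapted models' answers, with GPT-4 acting as judge.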

Similar Work