
L3 Ensembles: Lifelong Learning Approach For Ensemble Of Foundational Language Models

Aidin Shiri, Kaushik Roy, Amit Sheth, Manas Gaur. arXiv 2023

[Paper]    
Tags: Efficiency And Optimization, Fine Tuning, Merging, Pretraining Methods, Tools, Training Techniques

Fine-tuning pre-trained foundational language models (FLMs) for specific tasks is often impractical, especially on resource-constrained devices. This necessitates a Lifelong Learning (L3) framework that continuously and efficiently adapts to a stream of Natural Language Processing (NLP) tasks. We propose an approach that focuses on extracting meaningful representations from unseen data, constructing a structured knowledge base, and improving task performance incrementally. We conducted experiments on various NLP tasks, including benchmarks such as GLUE and SuperGLUE, to validate its effectiveness, measuring performance on accuracy, training efficiency, and knowledge-transfer metrics. Initial experimental results show that the proposed L3 ensemble method increases model accuracy by 4% to 36% compared to a fine-tuned FLM. Furthermore, the L3 model outperforms naive fine-tuning approaches while achieving competitive or superior performance (up to a 15.4% increase in accuracy) compared to a state-of-the-art language model (T5) on the given task, the STS benchmark.
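The core idea of a lifelong ensemble over frozen foundational models can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the `L3Ensemble` class, the nearest-representation routing rule, and all names are assumptions made purely to show how an ensemble might grow one member per task and dispatch inputs at inference time.

```python
# Illustrative sketch only (NOT the paper's actual method): a lifelong
# ensemble that adds one member model per incoming task and routes each
# input to the member whose stored task representation is closest.
from dataclasses import dataclass, field

@dataclass
class L3Ensemble:
    # Each member is a (task_representation, predict_fn) pair.
    members: list = field(default_factory=list)

    def add_task(self, task_repr, predict_fn):
        """Register a model adapted to a new task, keyed by a
        representative feature vector extracted from its data."""
        self.members.append((task_repr, predict_fn))

    def predict(self, x_repr, x):
        """Route the input to the member with the nearest stored task
        representation (squared Euclidean distance)."""
        def dist(r):
            return sum((a - b) ** 2 for a, b in zip(r, x_repr))
        _, best_fn = min(self.members, key=lambda m: dist(m[0]))
        return best_fn(x)

# Toy usage: two "tasks" with distinct feature centroids.
ens = L3Ensemble()
ens.add_task([0.0, 0.0], lambda x: f"task-A:{x}")
ens.add_task([1.0, 1.0], lambda x: f"task-B:{x}")
print(ens.predict([0.1, 0.2], "hello"))  # routed to the task-A model
```

The design choice illustrated here is that the base models are never retrained: lifelong adaptation happens only by growing the ensemble and its routing index, which is what makes the approach attractive for resource-constrained settings.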

Similar Work