
Pushing The Limit Of LLM Capacity For Text Classification

Zhang Yazhou, Wang Mengyao, Ren Chenyu, Li Qiuchi, Tiwari Prayag, Wang Benyou, Qin Jing. arXiv 2024

[Paper]    
Fine Tuning GPT Language Modeling Model Architecture Pretraining Methods RAG Tools Training Techniques

The future research value of text classification has become uncertain, given the extraordinary efficacy demonstrated by large language models (LLMs) across numerous downstream NLP tasks. In this era of open-ended language modeling, where task boundaries are gradually fading, an urgent question emerges: have we made significant advances in text classification by taking full advantage of LLMs? To answer this question, we propose RGPT, an adaptive boosting framework tailored to produce a specialized text classification LLM by recurrently ensembling a pool of strong base learners. The base learners are constructed by adaptively adjusting the distribution of training samples and iteratively fine-tuning LLMs on them. These base learners are then combined into a specialized text classification LLM by recurrently incorporating the historical predictions of the previous learners. Through a comprehensive empirical comparison, we show that RGPT significantly outperforms 8 SOTA PLMs and 7 SOTA LLMs on four benchmarks, by 1.36% on average. Further evaluation experiments show that RGPT clearly surpasses human classification performance.
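To make the adaptive boosting idea concrete, here is a minimal, hypothetical sketch of such a loop: base learners are fit on a reweighted sample distribution, misclassified samples gain weight for the next round, and predictions are combined by a weighted vote. The names `StubLearner`, `boost`, and `ensemble_predict` are illustrative assumptions, not the paper's implementation; the actual RGPT base learners are fine-tuned LLMs that also receive the previous learners' predictions.

```python
# Illustrative AdaBoost-style loop (assumed sketch, not the RGPT codebase).
import numpy as np

class StubLearner:
    """Stand-in for an LLM fine-tuned on the weighted sample distribution."""
    def fit(self, texts, labels, sample_weights):
        # A real base learner would fine-tune an LLM here, emphasizing
        # high-weight (previously misclassified) examples.
        self.majority = int(round(np.average(labels, weights=sample_weights)))
        return self

    def predict(self, texts):
        return np.full(len(texts), self.majority)

def boost(texts, labels, num_rounds=3):
    labels = np.asarray(labels)
    n = len(texts)
    weights = np.full(n, 1.0 / n)            # uniform initial distribution
    learners, alphas = [], []
    for _ in range(num_rounds):
        learner = StubLearner().fit(texts, labels, weights)
        preds = learner.predict(texts)
        err = np.clip(np.average(preds != labels, weights=weights), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # learner weight
        # Increase the weight of misclassified samples for the next round.
        weights *= np.exp(alpha * (preds != labels))
        weights /= weights.sum()
        learners.append(learner)
        alphas.append(alpha)
    return learners, alphas

def ensemble_predict(learners, alphas, texts):
    # Weighted vote over base learners (binary 0/1 labels assumed).
    votes = sum(a * (2 * l.predict(texts) - 1) for l, a in zip(learners, alphas))
    return (votes > 0).astype(int)
```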

Similar Work