ICE-GRT: Instruction Context Enhancement By Generative Reinforcement Based Transformers

Chen Zheng, Ke Sun, Da Tang, Yukun Ma, Yuyu Zhang, Chenguang Xi, Xun Zhou. arXiv 2024

[Paper]
Tags: Efficiency And Optimization, Fine Tuning, GPT, Model Architecture, Pretraining Methods, Reinforcement Learning, Training Techniques, Transformer

Large Language Models (LLMs) such as ChatGPT and LLaMA encounter limitations in domain-specific tasks: these models often lack depth and accuracy in specialized areas and exhibit a decrease in general capabilities when fine-tuned, particularly in analysis ability for small-sized models. To address these gaps, we introduce ICE-GRT, which utilizes Reinforcement Learning from Human Feedback (RLHF) grounded in Proximal Policy Optimization (PPO) and demonstrates remarkable ability in in-domain scenarios without compromising general task performance. Our exploration of ICE-GRT highlights its understanding and reasoning ability: it not only generates robust answers but also provides detailed analyses of the reasons behind them. This capability marks a significant progression beyond the scope of Supervised Fine-Tuning models. The success of ICE-GRT depends on several crucial factors, including Appropriate Data, Reward Size Scaling, KL-Control, and Advantage Normalization, among others. The ICE-GRT model exhibits state-of-the-art performance in domain-specific tasks and across 12 general language tasks against LLMs of equivalent and even larger size, highlighting the effectiveness of our approach. We provide a comprehensive analysis of ICE-GRT, underscoring the significant advancements it brings to the field of LLMs.
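
The abstract names KL-Control and Advantage Normalization among the ingredients behind ICE-GRT's PPO-based RLHF training. Below is a minimal sketch of how these two pieces are commonly implemented in PPO-style RLHF pipelines; the function names, the `kl_coef` value, and the reward-shaping details are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def kl_penalized_rewards(rm_score, logp_policy, logp_ref, kl_coef=0.1):
    # "KL-Control": a per-token KL penalty keeps the policy close to the
    # frozen reference (SFT) model. kl_coef = 0.1 is a placeholder value,
    # not a setting reported in the paper.
    kl = logp_policy - logp_ref        # (batch, seq_len) per-token KL estimate
    rewards = -kl_coef * kl            # penalize drift from the reference
    rewards[:, -1] += rm_score         # sequence-level reward-model score,
    return rewards                     # added at the final token

def normalize_advantages(adv, eps=1e-8):
    # "Advantage Normalization": rescale advantages to zero mean and unit
    # variance across the batch before the PPO policy update.
    return (adv - adv.mean()) / (adv.std() + eps)

# Toy usage with random tensors standing in for real model outputs.
logp_policy = torch.randn(4, 16)
logp_ref = torch.randn(4, 16)
rm_score = torch.randn(4)
rewards = kl_penalized_rewards(rm_score, logp_policy, logp_ref)
adv = normalize_advantages(torch.randn(4, 16))
```

In typical pipelines the normalized advantages then feed the clipped PPO objective; which of these choices ICE-GRT adopts verbatim is detailed in the paper itself.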

Similar Work