NEFTune: Noisy Embeddings Improve Instruction Finetuning

Jain Neel, Chiang Ping-yeh, Wen Yuxin, Kirchenbauer John, Chu Hong-min, Somepalli Gowthami, Bartoldson Brian R., Kailkhura Bhavya, Schwarzschild Avi, Saha Aniruddha, Goldblum Micah, Geiping Jonas, Goldstein Tom. arXiv 2023

[Paper]    
GPT, Model Architecture, Reinforcement Learning, Training Techniques, Uncategorized

We show that language model finetuning can be improved, sometimes dramatically, with a simple augmentation. NEFTune adds noise to the embedding vectors during training. Standard finetuning of LLaMA-2-7B using Alpaca achieves 29.79% on AlpacaEval, which rises to 64.69% using noisy embeddings. NEFTune also improves over strong baselines on modern instruction datasets. Models trained with Evol-Instruct see a 10% improvement, with ShareGPT an 8% improvement, and with OpenPlatypus an 8% improvement. Even powerful models further refined with RLHF such as LLaMA-2-Chat benefit from additional training with NEFTune.
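The mechanism is simple enough to sketch in code. Below is a minimal, illustrative PyTorch hook (not the authors' reference implementation) that adds uniform noise scaled by alpha / sqrt(L * d) to the output of a model's input-embedding layer during training only; the function name `neftune_hook` and the setting `alpha=5.0` are illustrative choices (the paper sweeps alpha over values such as 5, 10, and 15).

```python
import torch

def neftune_hook(module, inputs, output, alpha=5.0):
    """Add uniform noise to embedding outputs during training (NEFTune-style).

    Noise is drawn from Uniform(-scale, scale) with scale = alpha / sqrt(L * d),
    where L is the sequence length and d is the embedding dimension.
    """
    if module.training:  # leave inference untouched
        seq_len, hidden_dim = output.size(1), output.size(2)  # (batch, L, d)
        scale = alpha / (seq_len * hidden_dim) ** 0.5
        noise = torch.empty_like(output).uniform_(-scale, scale)
        output = output + noise
    return output  # returning a value from a forward hook replaces the output


# Usage sketch, assuming `model` is a Hugging Face transformers model
# (e.g. LlamaForCausalLM), which exposes its embedding layer via
# get_input_embeddings():
# embedding_layer = model.get_input_embeddings()
# handle = embedding_layer.register_forward_hook(
#     lambda mod, inp, out: neftune_hook(mod, inp, out, alpha=5.0)
# )
# ... run standard finetuning ...
# handle.remove()  # detach the hook so evaluation uses clean embeddings
```

Because the noise is injected through a forward hook on the embedding layer, the rest of the finetuning pipeline (loss, optimizer, data) is unchanged, and removing the hook restores the original model behavior at inference time.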

Similar Work