
Predicting Fine-tuning Performance With Probing

Zining Zhu, Soroosh Shahtalebi, Frank Rudzicz. arXiv 2022

[Paper]    
Attention Mechanism Fine Tuning Model Architecture Pretraining Methods Training Techniques

Large NLP models have recently shown impressive performance on language understanding tasks, typically evaluated by their fine-tuned performance. Alternatively, probing has received increasing attention as a lightweight method for interpreting the intrinsic mechanisms of large NLP models. In probing, post-hoc classifiers are trained on “out-of-domain” datasets that diagnose specific abilities. While probing language models has led to insightful findings, these analyses appear disjointed from the development of the models themselves. This paper explores the utility of probing deep NLP models to extract a proxy signal widely used in model development: the fine-tuning performance. We find that the accuracies of only three probing tests suffice to predict the fine-tuning performance with errors \(40\%\)–\(80\%\) smaller than baselines. We further discuss possible avenues where probing can empower the development of deep NLP models.
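The core idea above — mapping the accuracies of a few probing tests to a predicted fine-tuning score — can be sketched as a simple regression. The data below are hypothetical placeholders, not results from the paper, and the paper's actual predictor may differ; this is a minimal sketch assuming a linear fit from three probe accuracies per checkpoint to observed fine-tuning accuracy.

```python
import numpy as np

# Hypothetical data: for five pretrained checkpoints, the accuracies of
# three probing tests (rows = checkpoints, cols = probes) ...
probe_acc = np.array([
    [0.72, 0.65, 0.80],
    [0.75, 0.70, 0.82],
    [0.68, 0.60, 0.78],
    [0.80, 0.74, 0.85],
    [0.77, 0.71, 0.84],
])
# ... and each checkpoint's observed fine-tuning accuracy (illustrative values).
finetune_acc = np.array([0.83, 0.86, 0.80, 0.90, 0.88])

# Least-squares fit: fine-tuning accuracy ~ w . probe_acc + b.
X = np.hstack([probe_acc, np.ones((len(probe_acc), 1))])  # append bias column
coef, *_ = np.linalg.lstsq(X, finetune_acc, rcond=None)

# Predict fine-tuning performance for a new checkpoint from its probe scores,
# without running the (expensive) fine-tuning itself.
new_probe = np.array([0.74, 0.68, 0.81, 1.0])  # three probe accuracies + bias
pred = float(new_probe @ coef)
```

The appeal is cost: each probe is a small post-hoc classifier, so estimating `pred` is far cheaper than fine-tuning the full model to measure `finetune_acc` directly.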

Similar Work