Fine-tuned Network Relies On Generic Representation To Solve Unseen Cognitive Task

Lin Dongyan. arXiv 2024

[Paper]

Tags: Fine Tuning · GPT · Model Architecture · Pretraining Methods · Reinforcement Learning · Training Techniques

Fine-tuning pretrained language models has shown promising results on a wide range of tasks, but when encountering a novel task, do they rely more on generic pretrained representations, or do they develop brand-new task-specific solutions? Here, we fine-tuned GPT-2 on a context-dependent decision-making task that was novel to the model but adapted from the neuroscience literature. We compared its performance and internal mechanisms to a version of GPT-2 trained from scratch on the same task. Our results show that fine-tuned models depend heavily on pretrained representations, particularly in later layers, while models trained from scratch develop different, more task-specific mechanisms. These findings highlight the advantages and limitations of pretraining for task generalization and underscore the need for further investigation into the mechanisms underpinning task-specific fine-tuning in LLMs.
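
The core comparison is between a pretrained GPT-2 that is fine-tuned on the task and an identically sized GPT-2 trained from scratch. Below is a minimal sketch of that setup using the Hugging Face `transformers` API; the text encoding of the context-dependent decision task (context cue plus motion/color features), the example prompts, and the hyperparameters are illustrative assumptions, not the paper's actual code.

```python
# Illustrative sketch (not the paper's implementation): compare a pretrained
# GPT-2 fine-tuned on a toy context-dependent decision task against an
# identically sized GPT-2 trained from scratch.
import torch
from torch.optim import AdamW
from transformers import GPT2Config, GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Hypothetical prompt format: the "context" cue indicates which feature
# (motion or color) determines the correct response, mirroring the
# context-dependent decision-making tasks studied in neuroscience.
examples = [
    "context: motion | motion: left | color: red -> left",
    "context: motion | motion: right | color: green -> right",
    "context: color | motion: left | color: green -> green",
    "context: color | motion: right | color: red -> red",
]
batch = tokenizer(examples, return_tensors="pt", padding=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss

# Model A: start from pretrained weights and fine-tune.
finetuned = GPT2LMHeadModel.from_pretrained("gpt2")

# Model B: same architecture, randomly initialized (trained from scratch).
scratch = GPT2LMHeadModel(GPT2Config())

for name, model in [("fine-tuned", finetuned), ("from-scratch", scratch)]:
    model.train()
    optimizer = AdamW(model.parameters(), lr=5e-5)
    out = model(**batch, labels=labels)  # one illustrative gradient step
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"{name}: loss = {out.loss.item():.3f}")
```

In a full experiment both models would be trained to convergence on the task and their internal representations compared layer by layer; the single gradient step above only shows how the two starting points (pretrained versus randomly initialized weights) differ in code.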

Similar Work