
Bitune: Bidirectional Instruction-tuning

Kopiczko Dawid J., Blankevoort Tijmen, Asano Yuki M. arXiv 2024

[Paper]    
Attention Mechanism Fine Tuning Model Architecture Prompting RAG

We introduce Bitune, a method that improves instruction-tuning of pretrained decoder-only large language models, leading to consistent gains on downstream tasks. Bitune applies both causal and bidirectional attention to the prompt, to obtain a better representation of the query or instruction. We realize this by introducing two sets of parameters, for which we apply parameter-efficient finetuning techniques. These causal and bidirectional features are then combined into a weighted average with trainable coefficients, which is subsequently used to generate new tokens. We demonstrate significant improvements in zero-shot performance on commonsense reasoning, arithmetic, and language understanding tasks, while extensive ablation studies validate the role of each component and demonstrate the method’s agnosticism to different PEFT techniques.
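The core mechanism described above is a learned weighted average between two prompt representations: one computed with the usual causal attention mask and one with a bidirectional (full) mask. The sketch below illustrates only that mixing step; the class name `BituneMixer`, the per-layer sigmoid-parameterized coefficients, and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BituneMixer(nn.Module):
    """Minimal sketch: combine causal and bidirectional prompt features.

    `causal_feats` and `bidir_feats` are per-layer prompt representations
    (e.g. hidden or key/value states) obtained with a causal and a full
    attention mask respectively; the mixing coefficients are trainable.
    """

    def __init__(self, num_layers: int):
        super().__init__()
        # One trainable mixing coefficient per layer; raw value 0.0 gives
        # an initial weight of 0.5 after the sigmoid (assumed init scheme).
        self.mix_logits = nn.Parameter(torch.zeros(num_layers))

    def forward(self, causal_feats, bidir_feats):
        # Each input: list of tensors [batch, prompt_len, hidden], one per layer.
        mixed = []
        for layer_idx, (c, b) in enumerate(zip(causal_feats, bidir_feats)):
            alpha = torch.sigmoid(self.mix_logits[layer_idx])
            mixed.append(alpha * b + (1.0 - alpha) * c)  # weighted average
        return mixed


# Toy usage: 2 layers, batch of 1, prompt of 4 tokens, hidden size 8.
causal = [torch.randn(1, 4, 8) for _ in range(2)]
bidir = [torch.randn(1, 4, 8) for _ in range(2)]
mixer = BituneMixer(num_layers=2)
prompt_feats = mixer(causal, bidir)  # these features would condition generation
print(prompt_feats[0].shape)  # torch.Size([1, 4, 8])
```

In the paper, the two sets of prompt features are produced with separate parameter-efficient finetuning parameters; the sigmoid parameterization here is just one simple way to keep the mixing weights bounded.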

Similar Work