FiDO: Fusion-in-Decoder Optimized for Stronger Performance and Faster Inference

Michiel de Jong, Yury Zemlyanskiy, Joshua Ainslie, Nicholas FitzGerald, Sumit Sanghai, Fei Sha, William Cohen. arXiv 2022

[Paper]    
Tags: Merging, Model Architecture, RAG

Fusion-in-Decoder (FiD) is a powerful retrieval-augmented language model that sets the state-of-the-art on many knowledge-intensive NLP tasks. However, the architecture used for FiD was chosen by making minimal modifications to a standard T5 model, which our analysis shows to be highly suboptimal for a retrieval-augmented model. In particular, FiD allocates the bulk of FLOPs to the encoder, while the majority of inference time results from memory bandwidth constraints in the decoder. We propose two simple changes to the FiD architecture to alleviate memory bandwidth constraints, and speed up inference by 7x. This allows us to use a much larger decoder at modest cost. We denote FiD with the above modifications as FiDO, and show that it strongly improves performance over existing FiD models for a wide range of inference budgets. For example, FiDO-Large-XXL performs faster inference than FiD-Base and achieves better performance than FiD-Large.
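The efficiency argument in the abstract is easiest to see in the data flow of vanilla FiD. Below is a minimal PyTorch sketch of that pattern; the toy dimensions and generic `nn.Transformer*` modules are illustrative stand-ins, not the paper's T5-based implementation. Each retrieved passage is encoded independently, then all encoder outputs are concatenated into one long sequence that the decoder cross-attends over at every generation step.

```python
import torch
import torch.nn as nn

d_model, n_passages, passage_len = 256, 20, 128

enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
dec_layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True)
decoder = nn.TransformerDecoder(dec_layer, num_layers=2)

# FiD encodes each (question + retrieved passage) pair independently,
# so encoder compute grows linearly with the number of passages.
passages = torch.randn(n_passages, passage_len, d_model)  # stand-in embeddings
encoded = encoder(passages)  # (n_passages, passage_len, d_model)

# "Fusion": all encoder outputs are concatenated into one long memory
# sequence that the decoder cross-attends over.
memory = encoded.reshape(1, n_passages * passage_len, d_model)

# Every autoregressive decoding step re-reads this 2,560-token memory
# (tens of thousands of tokens at FiD's typical ~100 passages), which is
# why inference time is dominated by decoder memory bandwidth even
# though the encoder consumes most of the FLOPs.
target = torch.randn(1, 16, d_model)  # stand-in for decoder-side inputs
output = decoder(target, memory)
print(output.shape)  # torch.Size([1, 16, 256])
```

Per the abstract, FiDO's two architectural changes target exactly this per-step memory traffic in the decoder; cutting it is what makes a much larger decoder affordable at inference time.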
