
Task Formulation Matters When Learning Continually: A Case Study In Visual Question Answering

Mavina Nikandrou, Lu Yu, Alessandro Suglia, Ioannis Konstas, Verena Rieser. arXiv 2022

[Paper]    
Applications · Model Architecture · Pretraining Methods · Security · Transformer

Continual learning aims to train a model incrementally on a sequence of tasks without forgetting previous knowledge. Although continual learning has been widely studied in computer vision, its application to Vision+Language tasks is not straightforward, as settings can be parameterized in multiple ways according to their input modalities. In this paper, we present a detailed study of how different settings affect performance for Visual Question Answering. We first propose three plausible task formulations and demonstrate their impact on the performance of continual learning algorithms. We break down several factors of task similarity, showing that performance and sensitivity to task order depend strongly on the shift of the output distribution. We also investigate the potential of pretrained models and compare the robustness of transformer models with different visual embeddings. Finally, we provide an analysis interpreting model representations and their impact on forgetting. Our results highlight the importance of stabilizing visual representations in deeper layers.
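The evaluation setup the abstract describes (training on a sequence of tasks and measuring forgetting of earlier ones) is commonly summarized with an accuracy matrix. Below is a minimal sketch of two standard continual-learning metrics; the matrix values are purely illustrative, not results from the paper, and the function names are our own.

```python
# Sketch of standard continual-learning metrics, assuming an accuracy
# matrix acc[i][j] = accuracy on task j after training on task i
# (tasks are trained in order 0, 1, 2, ...). Values are illustrative.

def average_accuracy(acc):
    """Mean accuracy over all tasks after training on the final task."""
    final = acc[-1]
    return sum(final) / len(final)

def average_forgetting(acc):
    """For each earlier task, the drop from its best accuracy at any
    previous stage to its final accuracy, averaged over those tasks."""
    n = len(acc)
    drops = []
    for j in range(n - 1):
        best = max(acc[i][j] for i in range(n - 1))
        drops.append(best - acc[-1][j])
    return sum(drops) / len(drops)

acc = [
    [0.70, 0.00, 0.00],
    [0.55, 0.72, 0.00],
    [0.50, 0.60, 0.68],
]
print(round(average_accuracy(acc), 3))    # 0.593
print(round(average_forgetting(acc), 3))  # 0.16
```

Task-order sensitivity, which the paper analyzes, is typically assessed by recomputing these metrics over different permutations of the task sequence.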

Similar Work