Synth²: Boosting Visual-Language Models With Synthetic Captions And Image Embeddings

Sharifzadeh Sahand, Kaplanis Christos, Pathak Shreya, Kumaran Dharshan, Ilic Anastasija, Mitrovic Jovana, Blundell Charles, Banino Andrea. arXiv 2024

[Paper]    
Multimodal Models RAG Training Techniques

The creation of high-quality human-labeled image-caption datasets presents a significant bottleneck in the development of Visual-Language Models (VLMs). In this work, we investigate an approach that leverages the strengths of Large Language Models (LLMs) and image generation models to create synthetic image-text pairs for efficient and effective VLM training. Our method employs a pretrained text-to-image model to synthesize image embeddings from captions generated by an LLM. Despite the text-to-image model and VLM initially being trained on the same data, our approach leverages the image generator’s ability to create novel compositions, resulting in synthetic image embeddings that expand beyond the limitations of the original dataset. Extensive experiments demonstrate that our VLM, finetuned on synthetic data, achieves performance comparable to models trained solely on human-annotated data, while requiring significantly less data. Furthermore, we perform a set of analyses of the captions, which reveal that semantic diversity and balance are key to better downstream performance. Finally, we show that synthesizing images in the image embedding space is 25% faster than in pixel space. We believe our work not only addresses a significant challenge in VLM training but also opens up promising avenues for the development of self-improving multi-modal models.
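The abstract describes a pipeline of LLM-generated captions passed through a text-to-image model that stops at the image-embedding stage, with the resulting (embedding, caption) pairs used to train the VLM. The sketch below illustrates that flow only at a schematic level; all names (`llm`, `text_to_image.encode_to_image_embedding`, `vlm`) are hypothetical placeholders, not the paper's actual code or API.

```python
# Hypothetical sketch of the Synth^2-style synthetic-pair pipeline.
# Function and class names are illustrative assumptions, not the paper's code.
import torch
from torch import nn


def generate_synthetic_pairs(llm, text_to_image, concepts, n_per_concept=4):
    """Generate (image_embedding, caption) pairs without rendering any pixels.

    llm:           callable returning a list of caption strings for a prompt
    text_to_image: generator whose encoder stage maps a caption to an image
                   embedding; decoding to pixel space is skipped (the paper
                   reports embedding-space synthesis is ~25% faster)
    """
    pairs = []
    for concept in concepts:
        # 1) LLM proposes diverse, balanced captions for the concept.
        captions = llm(f"Write {n_per_concept} varied captions describing {concept}.")
        for caption in captions:
            # 2) Text-to-image model is run only up to its embedding stage.
            with torch.no_grad():
                img_emb = text_to_image.encode_to_image_embedding(caption)
            pairs.append((img_emb, caption))
    return pairs


def vlm_training_step(vlm, optimizer, batch):
    """One captioning-loss step on synthetic (embedding, caption) pairs."""
    img_embs, captions = batch
    logits, targets = vlm(image_embeddings=img_embs, captions=captions)
    loss = nn.functional.cross_entropy(logits.flatten(0, 1), targets.flatten())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```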

Similar Work