
CogView: Mastering Text-to-Image Generation via Transformers

Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, Jie Tang. arXiv 2021

Tags: Model Architecture, Multimodal Models, Pretraining Methods, Training Techniques, Transformer

Text-to-image generation in the general domain has long been an open problem: it requires both a powerful generative model and cross-modal understanding. We propose CogView, a 4-billion-parameter Transformer with a VQ-VAE tokenizer, to advance this problem. We also demonstrate finetuning strategies for various downstream tasks (e.g., style learning, super-resolution, text-image ranking, and fashion design) and methods to stabilize pretraining (e.g., eliminating NaN losses). CogView achieves the state-of-the-art FID on the blurred MS COCO dataset, outperforming previous GAN-based models and DALL-E, a recent similar work.
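The abstract describes a two-stage design: a VQ-VAE compresses an image into a grid of discrete tokens, and a decoder-only Transformer autoregressively models the concatenated text-and-image token sequence. The PyTorch sketch below illustrates that idea under stated assumptions; all class names, hyperparameters, and vocabulary sizes are illustrative toy values, not the paper's actual 4-billion-parameter configuration.

```python
# Minimal sketch of a CogView-style text-to-image pipeline.
# Toy sizes and hypothetical names; not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQQuantizer(nn.Module):
    """Stage 1 (sketch): map continuous image features to discrete codebook ids."""
    def __init__(self, num_codes=8192, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                              # z: (B, N, dim) encoder features
        flat = z.reshape(-1, z.size(-1))               # (B*N, dim)
        d = torch.cdist(flat, self.codebook.weight)    # distance to every code
        return d.argmin(dim=-1).view(z.shape[:-1])     # discrete image tokens (B, N)

class TextToImageLM(nn.Module):
    """Stage 2 (sketch): decoder-only Transformer over [text tokens; image tokens]."""
    def __init__(self, text_vocab=50_000, image_vocab=8192,
                 dim=512, layers=6, max_len=1024):
        super().__init__()
        vocab = text_vocab + image_vocab               # image ids offset past text ids
        self.embed = nn.Embedding(vocab, dim)
        self.pos = nn.Embedding(max_len, dim)          # learned positional embeddings
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=layers)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens):                         # tokens: (B, T)
        T = tokens.size(1)
        # Causal mask so each position attends only to earlier tokens.
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        pos = torch.arange(T, device=tokens.device)
        h = self.blocks(self.embed(tokens) + self.pos(pos), mask=mask)
        return self.head(h)                            # next-token logits (B, T, vocab)

# Training objective: next-token cross-entropy over the joint sequence, so the
# model learns p(image tokens | text tokens) and can sample an image token by token.
model = TextToImageLM()
text = torch.randint(0, 50_000, (2, 16))               # toy caption tokens
image = torch.randint(50_000, 50_000 + 8192, (2, 64))  # toy VQ-VAE image tokens
seq = torch.cat([text, image], dim=1)
logits = model(seq[:, :-1])
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), seq[:, 1:].reshape(-1))
```

The pretraining-stabilization methods the abstract mentions (its fix for NaN losses) operate inside the Transformer's normalization and attention numerics and are orthogonal to this sketch.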

Similar Work