T2i-compbench: A Comprehensive Benchmark For Open-world Compositional Text-to-image Generation

Kaiyi Huang, Kaiyue Sun, Enze Xie, Zhenguo Li, Xihui Liu. arXiv 2023

[Paper]    
Tags: Fine Tuning, Multimodal Models, Pretraining Methods, Prompting, Reinforcement Learning, Training Techniques

Despite the stunning ability of recent text-to-image models to generate high-quality images, current approaches often struggle to compose objects with different attributes and relationships into a complex, coherent scene. We propose T2I-CompBench, a comprehensive benchmark for open-world compositional text-to-image generation, consisting of 6,000 compositional text prompts from 3 categories (attribute binding, object relationships, and complex compositions) and 6 sub-categories (color binding, shape binding, texture binding, spatial relationships, non-spatial relationships, and complex compositions). We further propose several evaluation metrics specifically designed for compositional text-to-image generation, and explore the potential and limitations of multimodal LLMs as evaluators. We introduce a new approach, Generative mOdel fine-tuning with Reward-driven Sample selection (GORS), to boost the compositional abilities of pretrained text-to-image models. Extensive experiments and evaluations are conducted to benchmark previous methods on T2I-CompBench, and to validate the effectiveness of our proposed evaluation metrics and the GORS approach. Project page is available at https://karine-h.github.io/T2I-CompBench/.
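The core idea behind GORS, as described in the abstract, is to select generated samples by a reward signal and weight the fine-tuning loss accordingly. The following is a minimal illustrative sketch of that selection-and-weighting pattern, not the authors' implementation; the function names, the threshold value, and the toy reward scores are all assumptions for illustration.

```python
def select_by_reward(samples, rewards, threshold=0.5):
    """Keep only samples whose reward (e.g. a compositional
    alignment score) exceeds `threshold`.

    Returns a list of (sample, reward) pairs; hypothetical
    helper, not from the paper's codebase."""
    return [(s, r) for s, r in zip(samples, rewards) if r > threshold]


def reward_weighted_loss(per_sample_losses, weights):
    """Reward-weighted fine-tuning objective: each selected
    sample's loss is scaled by its reward before averaging,
    so higher-reward samples contribute more to the update."""
    total = sum(l * w for l, w in zip(per_sample_losses, weights))
    return total / sum(weights)


# Toy usage: four generated images with assumed alignment scores.
samples = ["img_a", "img_b", "img_c", "img_d"]
rewards = [0.9, 0.3, 0.7, 0.2]

kept = select_by_reward(samples, rewards, threshold=0.5)
# Only img_a and img_c survive the reward cutoff.
loss = reward_weighted_loss([1.2, 0.8], [w for _, w in kept])
```

In a real pipeline, `per_sample_losses` would come from the text-to-image model's training objective on the selected samples, and `rewards` from one of the benchmark's evaluation metrics.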

Similar Work