
VisionGPT: Vision-Language Understanding Agent Using Generalized Multimodal Framework

Kelly Chris, Hu Luhui, Yang Bang, Tian Yu, Yang Deshun, Yang Cindy, Huang Zaoshan, Li Zihao, Hu Jiayin, Zou Yuexian. arXiv 2024

[Paper]    
Agentic, Applications, Efficiency And Optimization, GPT, Model Architecture, Multimodal Models, Reinforcement Learning, Tools

With the emergence of large language models (LLMs) and vision foundation models, how to combine the intelligence and capacity of these open-sourced or API-available models to achieve open-world visual perception remains an open question. In this paper, we introduce VisionGPT to consolidate and automate the integration of state-of-the-art foundation models, thereby facilitating vision-language understanding and the development of vision-oriented AI. VisionGPT builds upon a generalized multimodal framework that distinguishes itself through three key features: (1) utilizing LLMs (e.g., LLaMA-2) as the pivot to break down users' requests into detailed action proposals that call suitable foundation models; (2) automatically integrating multi-source outputs from foundation models and generating comprehensive responses for users; (3) adapting to a wide range of applications such as text-conditioned image understanding/generation/editing and visual question answering. This paper outlines the architecture and capabilities of VisionGPT, demonstrating its potential to revolutionize the field of computer vision through enhanced efficiency, versatility, generalization, and performance. Our code and models will be made publicly available.

Keywords: VisionGPT, Open-world visual perception, Vision-language understanding, Large language model, Foundation model
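The abstract describes a three-stage pipeline: an LLM pivot decomposes the user's request into action proposals, each proposal is routed to a suitable vision foundation model, and the multi-source outputs are merged into a single response. The sketch below illustrates that flow under stated assumptions; all class, function, and model names (e.g., `plan_actions`, `FOUNDATION_MODELS`) are hypothetical placeholders, not the authors' actual API, and the LLM planner is stubbed out rather than calling a real model.

```python
# Minimal sketch of a VisionGPT-style pipeline as described in the abstract.
# All names here are illustrative placeholders, not the paper's implementation.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Action:
    model: str                 # which foundation model to call
    arguments: Dict[str, str]  # arguments the planner fills in


def plan_actions(user_request: str) -> List[Action]:
    """Stand-in for the LLM pivot (e.g., LLaMA-2) that breaks a request
    into detailed action proposals. A real system would prompt the LLM here."""
    if "find" in user_request.lower():
        return [Action(model="open_vocab_detector",
                       arguments={"query": user_request, "image": "input.jpg"})]
    return [Action(model="captioner", arguments={"image": "input.jpg"})]


# Registry of callable foundation models; stubs in this sketch.
FOUNDATION_MODELS: Dict[str, Callable[[Dict[str, str]], str]] = {
    "open_vocab_detector": lambda args: f"bounding boxes for '{args['query']}'",
    "captioner": lambda args: f"caption of {args['image']}",
}


def integrate(outputs: List[str]) -> str:
    """Merge multi-source foundation-model outputs into one user-facing reply."""
    return "; ".join(outputs)


def vision_gpt(user_request: str) -> str:
    actions = plan_actions(user_request)
    outputs = [FOUNDATION_MODELS[a.model](a.arguments) for a in actions]
    return integrate(outputs)


if __name__ == "__main__":
    print(vision_gpt("Find the red car in the photo"))
```

In a full system, the lambda stubs would be replaced by detectors, segmenters, captioners, or image editors, and `integrate` could itself invoke the LLM to compose a natural-language answer from the model outputs.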

Similar Work