
ILuvUI: Instruction-tuned Language-Vision Modeling of UIs from Machine Conversations

Yue Jiang, Eldon Schoop, Amanda Swearngin, Jeffrey Nichols. arXiv 2023

Tags: Applications, Multimodal Models, Reinforcement Learning, Training Techniques

Multimodal Vision-Language Models (VLMs) enable powerful applications from their fused understanding of images and language, but many perform poorly on UI tasks due to the lack of UI training data. In this paper, we adapt a recipe for generating paired text-image training data for VLMs to the UI domain by combining existing pixel-based methods with a Large Language Model (LLM). Unlike prior art, our method requires no human-provided annotations, and it can be applied to any dataset of UI screenshots. We generate a dataset of 335K conversational examples paired with UIs that cover Q&A, UI descriptions, and planning, and use it to fine-tune a conversational VLM for UI tasks. To assess the performance of our model, we benchmark it on UI element detection tasks, evaluate response quality, and showcase its applicability to multi-step UI navigation and planning.
