
AMEX: Android Multi-annotation Expo Dataset For Mobile GUI Agents

Yuxiang Chai, Siyuan Huang, Yazhe Niu, Han Xiao, Liang Liu, Dingyu Zhang, Peng Gao, Shuai Ren, Hongsheng Li. arXiv 2024

[Paper]    
Agentic Applications Attention Mechanism Model Architecture RAG Tools Uncategorized

AI agents have drawn increasing attention, largely for their ability to perceive environments, understand tasks, and autonomously achieve goals. To advance research on AI agents in mobile scenarios, we introduce the Android Multi-annotation EXpo (AMEX), a comprehensive, large-scale dataset designed for generalist mobile GUI-control agents. The dataset is used to train and evaluate agents on their ability to complete complex tasks by interacting directly with the graphical user interface (GUI) on mobile devices. AMEX comprises over 104K high-resolution screenshots from 110 popular mobile applications, annotated at multiple levels. Unlike existing mobile device-control datasets such as MoTIF and AitW, AMEX includes three levels of annotation: GUI interactive element grounding, GUI screen and element functionality descriptions, and complex natural language instructions with stepwise GUI-action chains, averaging 13 steps per instruction. We develop this dataset from a more instructive and detailed perspective, complementing the general settings of existing datasets. Additionally, we develop a baseline model, SPHINX Agent, and compare its performance against that of state-of-the-art agents trained on other datasets. To facilitate further research, we open-source our dataset, models, and relevant evaluation tools. The project is available at https://yuxiangchai.github.io/AMEX/
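The abstract does not specify AMEX's annotation format, so the sketch below is a hypothetical Python illustration of how a single record carrying all three annotation levels might be organized. Every field name here (`elements`, `screen_description`, `action_chain`, and so on) is an assumption made for illustration, not the dataset's actual schema.

```python
import json

# Hypothetical AMEX-style record illustrating the three annotation levels
# described in the abstract. Field names and values are illustrative only,
# not the dataset's real format.
sample = {
    "screenshot": "screenshots/example_0001.png",  # high-resolution screen capture
    # Level 1: interactive element grounding (regions of clickable UI elements)
    "elements": [
        {"bbox": [120, 860, 960, 1020], "type": "button"},
    ],
    # Level 2: functionality descriptions for the screen and its elements
    "screen_description": "Search results page of a booking app.",
    "element_descriptions": [
        {
            "bbox": [120, 860, 960, 1020],
            "description": "Opens the detail page for the selected result.",
        },
    ],
    # Level 3: a complex natural language instruction paired with a
    # stepwise GUI-action chain (the paper reports ~13 steps on average)
    "instruction": "Book a hotel in Paris for next weekend.",
    "action_chain": [
        {"step": 1, "action": "CLICK", "target_bbox": [40, 200, 1040, 300]},
        {"step": 2, "action": "TYPE", "text": "Paris"},
    ],
}

# Pretty-print the record to show the nested annotation structure
print(json.dumps(sample, indent=2))
```

A multi-level layout like this lets the same screenshot serve grounding, description, and instruction-following training signals without duplicating the image data.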
