
LLM4Vis: Explainable Visualization Recommendation Using ChatGPT

Lei Wang, Songheng Zhang, Yun Wang, Ee-Peng Lim, Yong Wang. arXiv 2023

[Paper] [Code]    
Tags: Few Shot, GPT, Has Code, Interpretability and Explainability, Model Architecture, Prompting, Reinforcement Learning, Training Techniques

Data visualization is a powerful tool for exploring and communicating insights across domains. To automate visualization choice for datasets, the task of visualization recommendation has been proposed. Various machine-learning-based approaches have been developed for this purpose, but they often require a large corpus of dataset-visualization pairs for training and lack natural explanations for their results. To address this research gap, we propose LLM4Vis, a novel ChatGPT-based prompting approach that performs visualization recommendation and returns human-like explanations using very few demonstration examples. Our approach involves feature description, demonstration example selection, explanation generation, demonstration example construction, and inference steps. To obtain demonstration examples with high-quality explanations, we propose a new explanation generation bootstrapping method that iteratively refines generated explanations by considering the previous generation and a template-based hint. Evaluations on the VizML dataset show that LLM4Vis outperforms or performs comparably to supervised learning models such as Random Forest, Decision Tree, and MLP in both few-shot and zero-shot settings. A qualitative evaluation further shows the effectiveness of the explanations generated by LLM4Vis. Our code is publicly available at https://github.com/demoleiwang/LLM4Vis.
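To make the pipeline concrete, here is a minimal Python sketch of the steps the abstract names: serializing data features into text, bootstrapping explanations for demonstration examples, and few-shot inference via ChatGPT. It is not the paper's implementation (see the linked repo for that); the prompt wording, feature set, chart-type label space, model choice (`gpt-3.5-turbo`), and all helper names are illustrative assumptions.

```python
# Hypothetical sketch of an LLM4Vis-style prompting pipeline.
# Prompts, features, and function names are illustrative, not the paper's.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CHART_TYPES = ["bar", "line", "scatter", "pie"]  # illustrative label space


def ask_chatgpt(prompt: str) -> str:
    """Single ChatGPT call; the model choice is an assumption."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def describe_features(column_stats: dict) -> str:
    """Step 1: serialize tabular features into a natural-language description."""
    return "; ".join(f"{name} = {value}" for name, value in column_stats.items())


def bootstrap_explanation(description: str, label: str, rounds: int = 2) -> str:
    """Step 3: iteratively refine an explanation, feeding back the previous
    generation together with a template-based hint, per the abstract."""
    explanation = ask_chatgpt(
        f"Data features: {description}\n"
        f"The recommended visualization is a {label} chart. Explain why."
    )
    for _ in range(rounds):
        explanation = ask_chatgpt(
            f"Data features: {description}\n"
            f"Previous explanation: {explanation}\n"
            f"Hint: a {label} chart suits these features. "
            "Refine the explanation to be more specific and faithful."
        )
    return explanation


def recommend(demos: list[tuple[str, str, str]], test_description: str) -> str:
    """Step 5: few-shot inference with (description, label, explanation) demos."""
    demo_text = "\n\n".join(
        f"Data features: {d}\nVisualization: {l}\nExplanation: {e}"
        for d, l, e in demos
    )
    return ask_chatgpt(
        f"{demo_text}\n\nData features: {test_description}\n"
        f"Choose one visualization from {CHART_TYPES} and explain your choice."
    )
```

In this sketch, `bootstrap_explanation` would be run offline over the selected demonstration examples, and `recommend` would then assemble those (description, label, explanation) triples into the few-shot prompt for each test dataset.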

Similar Work