DataVisT5: A Pre-trained Language Model For Jointly Understanding Text And Data Visualization

Wan Zhuoyue, Song Yuanfeng, Li Shuaimin, Zhang Chen Jason, Wong Raymond Chi-wing. arXiv 2024

[Paper]    
Applications · BERT · Efficiency And Optimization · Fine Tuning · Interpretability And Explainability · Model Architecture · Multimodal Models · Pretraining Methods · Reinforcement Learning · Training Techniques

Data visualization (DV) is a fundamental tool for efficiently conveying the insights behind big data and has been widely adopted in today's data-driven world. Task automation in DV, such as converting natural language queries to visualizations (i.e., text-to-vis), generating explanations from visualizations (i.e., vis-to-text), answering DV-related questions in free form (i.e., FeVisQA), and explicating tabular data (i.e., table-to-text), is vital for advancing the field. Despite their potential, the application of pre-trained language models (PLMs) like T5 and BERT in DV has been limited by high costs and challenges in handling cross-modal information, leading to few studies on PLMs for DV. We introduce DataVisT5, a novel PLM tailored for DV that enhances the T5 architecture through a hybrid-objective pre-training and multi-task fine-tuning strategy, integrating text and DV datasets to effectively interpret cross-modal semantics. Extensive evaluations on public datasets show that DataVisT5 consistently outperforms current state-of-the-art models on various DV-related tasks. We anticipate that DataVisT5 will not only inspire further research on vertical PLMs but also expand the range of applications for PLMs.
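
As a rough illustration of the multi-task fine-tuning idea described above, the sketch below tunes a generic `t5-base` checkpoint from Hugging Face Transformers with task prefixes so that a single seq2seq model covers several DV tasks. The prefixes, example pairs, and hyperparameters are illustrative assumptions, not the authors' released setup.

```python
# Minimal sketch (not the authors' code): multi-task fine-tuning of T5 for DV tasks
# such as text-to-vis and vis-to-text, distinguished by task prefixes.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Each (source, target) pair is tagged with a task prefix; the prefixes and
# the DV query syntax below are hypothetical examples.
examples = [
    ("text-to-vis: Show total sales per region from table sales",
     "Visualize BAR SELECT region, SUM(sales) FROM sales GROUP BY region"),
    ("vis-to-text: Visualize BAR SELECT region, SUM(sales) FROM sales GROUP BY region",
     "A bar chart showing total sales for each region."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for source, target in examples:
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In this framing, mixing task-prefixed examples from different DV datasets in one training loop is what lets a single encoder-decoder model serve text-to-vis, vis-to-text, FeVisQA, and table-to-text at once.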

Similar Work