DePlot: One-Shot Visual Language Reasoning by Plot-to-Table Translation

Liu Fangyu, Eisenschlos Julian Martin, Piccinno Francesco, Krichene Syrine, Pang Chenxi, Lee Kenton, Joshi Mandar, Chen Wenhu, Collier Nigel, Altun Yasemin. arXiv 2022

[Paper]    
Tags: Few Shot Prompting · Reinforcement Learning · Training Techniques · Uncategorized

Visual language such as charts and plots is ubiquitous in the human world, and comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples, and their reasoning capabilities remain quite limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key to this method is a modality conversion module, named DePlot, which translates the image of a plot or chart into a linearized table. The output of DePlot can then be used directly to prompt a pretrained large language model (LLM), exploiting its few-shot reasoning capabilities. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off the shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than 28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over the finetuned SOTA on human-written queries from the chart QA task.
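
To make the plug-and-play pipeline concrete, here is a minimal sketch of the two stages, assuming the publicly released `google/deplot` checkpoint in Hugging Face Transformers. The chart URL, the one-shot exemplar, and the final question are illustrative placeholders, not taken from the paper.

```python
# Sketch of the DePlot+LLM pipeline: (1) plot-to-table translation,
# (2) one-shot prompting of an LLM over the linearized table.
import requests
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Stage 1: modality conversion -- translate the chart image into a
# linearized text table with the released DePlot checkpoint.
processor = Pix2StructProcessor.from_pretrained("google/deplot")
model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot")

url = "https://example.com/chart.png"  # hypothetical chart image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    images=image,
    text="Generate underlying data table of the figure below:",
    return_tensors="pt",
)
table_ids = model.generate(**inputs, max_new_tokens=512)
linearized_table = processor.decode(table_ids[0], skip_special_tokens=True)

# Stage 2: one-shot reasoning -- prepend a single worked example and let
# a pretrained LLM answer over the linearized table. The exemplar below
# is a made-up placeholder; DePlot emits "<0x0A>" as its row separator.
one_shot_exemplar = (
    "Table: year | sales <0x0A> 2020 | 10 <0x0A> 2021 | 15\n"
    "Question: How much did sales grow from 2020 to 2021?\n"
    "Answer: 15 - 10 = 5\n\n"
)
prompt = (
    one_shot_exemplar
    + f"Table: {linearized_table}\n"
    + "Question: Which category has the highest value?\nAnswer:"
)
print(prompt)  # send this prompt to any few-shot-capable LLM
```

In the paper, the second stage pairs the DePlot output with LLMs such as FlanPaLM and Codex, optionally combined with chain-of-thought prompting and self-consistency; any LLM with few-shot reasoning ability can consume the prompt constructed above.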

Similar Work