Helm: Highlighted Evidence Augmented Language Model for Enhanced Table-to-Text Generation

Bian Junyi, Qin Xiaolei, Zou Wuhe, Huang Mengzuo, Luo Congyi, Zhang Ke, Zhang Weidong. arXiv, 2023

[Paper]    
Applications Fine Tuning Interpretability And Explainability Language Modeling Pretraining Methods Prompting Reinforcement Learning Tools Training Techniques

Large language models have demonstrated significant progress across various domains, particularly in text generation tasks. In table-to-text generation, many current LLM-based methods rely on modifying prompts to invoke public APIs, which incurs cost and risks information leakage. With the advent of open-source large models, fine-tuning LLMs has become feasible. In this study, we conducted parameter-efficient fine-tuning on the LLaMA2 model. Unlike previous fine-tuning-based table-to-text methods, our approach injects reasoning information into the input by emphasizing table-specific row data. Our model consists of two modules: 1) a table reasoner that identifies relevant row evidence, and 2) a table summarizer that generates sentences based on the highlighted table. To support training the table reasoner, we propose a search strategy for constructing reasoning labels. Our approach achieves state-of-the-art results on both the FetaQA and QTSumm datasets. We further observed that highlighting input tables significantly enhances the model's performance and provides valuable interpretability.
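The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative mock-up, not the authors' code: the function and tag names (`linearize_table`, `toy_reasoner`, `[HL]`) are assumptions, and the toy keyword-overlap reasoner merely stands in for the fine-tuned LLaMA2 reasoner module.

```python
# Hypothetical sketch of the reasoner -> highlight -> summarizer flow.
# All names and the [HL] marker format are illustrative assumptions.

def linearize_table(header, rows, highlighted):
    """Serialize a table to text, marking reasoner-selected rows with [HL] tags
    so the downstream summarizer sees the evidence emphasized in its input."""
    lines = [" | ".join(header)]
    for i, row in enumerate(rows):
        cells = " | ".join(str(c) for c in row)
        lines.append(f"[HL] {cells} [/HL]" if i in highlighted else cells)
    return "\n".join(lines)

def toy_reasoner(question, rows):
    """Stand-in for the fine-tuned table reasoner: select rows whose cells
    share at least one token with the question (keyword overlap only)."""
    q_tokens = set(question.lower().split())
    return {i for i, row in enumerate(rows)
            if q_tokens & set(" ".join(map(str, row)).lower().split())}

header = ["Player", "Team", "Goals"]
rows = [["Alice", "Reds", 12], ["Bob", "Blues", 7], ["Carol", "Reds", 9]]
question = "How many goals did Bob score"

evidence = toy_reasoner(question, rows)   # row indices chosen as evidence
prompt = linearize_table(header, rows, evidence)
print(prompt)
```

In the paper's setting, both modules are parameter-efficiently fine-tuned LLaMA2 models; here the highlighted serialization simply shows how row-level evidence can be injected into the summarizer's input.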

Similar Work