Text-tuple-table: Towards Information Integration In Text-to-table Generation Via Global Tuple Extraction

Deng Zheye, Chan Chunkit, Wang Weiqi, Sun Yuxi, Fan Wei, Zheng Tianshi, Yim Yauwai, Song Yangqiu. arXiv 2024

[Paper] [Code]
Applications Attention Mechanism Fine Tuning Has Code Model Architecture Pretraining Methods Training Techniques

The task of condensing large chunks of textual information into concise and structured tables has gained attention recently due to the emergence of Large Language Models (LLMs) and their potential benefit for downstream tasks, such as text summarization and text mining. Previous approaches often generate tables that directly replicate information from the text, limiting their applicability in broader contexts, as text-to-table generation in real-life scenarios necessitates information extraction, reasoning, and integration. However, both datasets and methodologies for this task are lacking. In this paper, we introduce LiveSum, a new benchmark dataset for generating summary tables of competitions based on real-time commentary texts. We evaluate the performance of state-of-the-art LLMs on this task in both fine-tuning and zero-shot settings, and additionally propose a novel pipeline called \(T^3\) (Text-Tuple-Table) to improve their performance. Extensive experimental results demonstrate that LLMs still struggle with this task even after fine-tuning, while our approach can offer substantial performance gains without explicit training. Further analyses demonstrate that our method exhibits strong generalization abilities, surpassing previous approaches on several other text-to-table datasets. Our code and data can be found at https://github.com/HKUST-KnowComp/LiveSum-TTT.
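The abstract describes a two-stage idea: first extract atomic tuples from the commentary text, then aggregate them globally into a summary table. The sketch below is only a minimal illustration of that tuple-to-table aggregation step, not the paper's \(T^3\) implementation; the `(entity, event)` tuple schema and the hard-coded tuples are invented for the example (in the actual pipeline an LLM would produce the tuples from commentary).

```python
from collections import Counter

# Hypothetical (entity, event) tuples extracted from commentary lines;
# hard-coded here, whereas T^3 would obtain them from an LLM.
extracted = [
    ("Smith", "goal"),
    ("Smith", "shot"),
    ("Jones", "shot"),
    ("Smith", "shot"),
    ("Jones", "foul"),
]

def tuples_to_table(tuples):
    """Aggregate (entity, event) tuples into a count table:
    one row per entity, one column per event type."""
    events = sorted({e for _, e in tuples})      # column headers
    counts = Counter(tuples)                     # global tally of tuples
    rows = {
        entity: [counts[(entity, e)] for e in events]
        for entity in sorted({ent for ent, _ in tuples})
    }
    return events, rows

header, rows = tuples_to_table(extracted)
# header -> ['foul', 'goal', 'shot']
# rows   -> {'Jones': [1, 0, 1], 'Smith': [0, 1, 2]}
```

The key point the example mirrors is that counting happens over all tuples at once (global extraction), so the resulting table integrates information scattered across many commentary lines rather than copying any single sentence.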

Similar Work