
Facts-and-feelings: Capturing Both Objectivity And Subjectivity In Table-to-text Generation

Tathagata Dey, Pushpak Bhattacharyya. arXiv 2024

[Paper]    
Tags: Applications, BERT, Fine Tuning, Language Modeling, Model Architecture, Pretraining Methods, Prompting, Training Techniques

Table-to-text generation, a long-standing challenge in natural language generation, has so far remained unexplored through the lens of subjectivity. Subjectivity here refers to information that can be inferred from a table but cannot be stated in terms of the objective data alone. Given the absence of a suitable dataset, we introduce the Ta2TS dataset with 3,849 instances. We fine-tune sequence-to-sequence models on linearized tables and prompt popular large language models, then analyze the outputs quantitatively and qualitatively to verify that they capture subjectivity while remaining factually consistent. The analysis shows that the fine-tuned LMs perform close to the prompted LLMs; both capture the tabular data, generating text with an 85.15% BERTScore and a 26.28% METEOR score. To the best of our knowledge, this is the first dataset of its kind, covering tables from multiple genres with subjectivity included, and the first comprehensive analysis and comparison of different LLMs on this task.
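The abstract does not spell out the linearization scheme used before fine-tuning. Below is a minimal sketch of one common approach, flattening each row into `header: value` pairs with separators; the function name, title handling, and `[SEP]` token are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of table linearization for a seq2seq encoder.
# The "header: value" format and [SEP] separator are assumed choices,
# not the scheme used in the Ta2TS experiments.

def linearize_table(headers, rows, title=None):
    """Flatten a table into a single input string for a seq2seq model."""
    parts = []
    if title:
        parts.append(f"title: {title}")
    for row in rows:
        cells = " | ".join(f"{h}: {v}" for h, v in zip(headers, row))
        parts.append(f"row: {cells}")
    return " [SEP] ".join(parts)

# Hypothetical example table
print(linearize_table(
    headers=["Team", "Wins", "Losses"],
    rows=[["Alpha FC", 10, 2], ["Beta United", 7, 5]],
    title="League standings",
))
```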

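For context on the reported numbers, here is a minimal sketch of computing BERTScore and METEOR with the widely used `bert-score` and `nltk` packages; the example strings are hypothetical, and the paper's exact evaluation setup may differ.

```python
# Sketch of the two reported metrics, assuming
# `pip install bert-score nltk`. Example texts are hypothetical.
from bert_score import score as bert_score
from nltk.translate.meteor_score import meteor_score
import nltk

nltk.download("wordnet", quiet=True)  # METEOR uses WordNet synonym matching

candidates = ["Alpha FC lead the table with an impressive 10 wins."]
references = ["Alpha FC top the standings with 10 wins and only 2 losses."]

# BERTScore: token-level similarity of contextual embeddings;
# F1 is the usual headline number.
P, R, F1 = bert_score(candidates, references, lang="en")
print(f"BERTScore F1: {F1.mean().item():.4f}")

# METEOR: unigram matching with stemming and synonyms;
# recent NLTK versions expect pre-tokenized inputs.
m = meteor_score([references[0].split()], candidates[0].split())
print(f"METEOR: {m:.4f}")
```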
Similar Work