Ontology-free General-domain Knowledge Graph-to-text Generation Dataset Synthesis Using Large Language Model

Kim Daehee, Kang Deokhyung, Ryu Sangwon, Lee Gary Geunbae. arXiv 2024

[Paper]    
Applications Language Modeling RAG

Knowledge Graph-to-Text (G2T) generation involves verbalizing structured knowledge graphs into natural language text. Recent advancements in Pretrained Language Models (PLMs) have improved G2T performance, but their effectiveness depends on datasets with precise graph-text alignment. However, the scarcity of high-quality, general-domain G2T datasets restricts progress in general-domain G2T research. To address this issue, we introduce the Wikipedia Ontology-Free Graph-text dataset (WikiOFGraph), a new large-scale G2T dataset generated using a novel method that leverages a Large Language Model (LLM) and Data-QuestEval. Our new dataset, which contains 5.85M general-domain graph-text pairs, offers high graph-text consistency without relying on external ontologies. Experimental results demonstrate that PLMs fine-tuned on WikiOFGraph outperform those trained on other datasets across various evaluation metrics. Our method proves to be a scalable and effective solution for generating high-quality G2T data, significantly advancing the field of G2T generation.
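To make the G2T task concrete, the sketch below shows one common way a set of knowledge-graph triples can be linearized into a prompt for an LLM verbalizer. This is an illustrative assumption, not the paper's actual pipeline: the `<S>/<P>/<O>` marker format, the example triples, and the prompt wording are all hypothetical, and the paper additionally filters generated pairs with Data-QuestEval, which is omitted here.

```python
# Illustrative sketch only. The triple markers, prompt wording, and example
# data are assumptions; the paper's pipeline also applies Data-QuestEval
# filtering to the LLM outputs, which is not shown.

def linearize_triples(triples):
    """Flatten (subject, predicate, object) triples into a single string."""
    return " ".join(f"<S> {s} <P> {p} <O> {o}" for s, p, o in triples)

def build_g2t_prompt(triples):
    """Wrap linearized triples in a simple instruction for an LLM verbalizer."""
    return ("Verbalize the following knowledge graph as fluent text:\n"
            + linearize_triples(triples))

# Hypothetical example triples for a single graph-text pair.
triples = [
    ("Alan Turing", "birthPlace", "London"),
    ("Alan Turing", "field", "computer science"),
]
prompt = build_g2t_prompt(triples)
print(prompt)
```

A target verbalization for this graph would be a sentence such as "Alan Turing, born in London, worked in computer science"; the dataset pairs each linearized graph with text like this.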

Similar Work