
Seq2RDF: An End-to-End Application for Deriving Triples from Natural Language Text

Yue Liu, Tongtao Zhang, Zhicheng Liang, Heng Ji, Deborah L. McGuinness. arXiv 2018

[Paper]    
Applications Attention Mechanism Model Architecture RAG Tools Transformer

We present an end-to-end approach that takes unstructured textual input and generates structured output compliant with a given vocabulary. Inspired by recent successes in neural machine translation, we treat the triples within a given knowledge graph as an independent graph language and propose an encoder-decoder framework with an attention mechanism that leverages knowledge graph embeddings. Our model learns the mapping from natural language text to triple representations in subject-predicate-object form using the selected knowledge graph vocabulary. Experiments on three different datasets show that this simple yet effective approach achieves F1 measures competitive with the baselines. A demo video is included.
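To make the abstract's architecture concrete, here is a minimal sketch of the attention step inside such an encoder-decoder: at each decoding step, the decoder state is compared against all encoder states, and the resulting context vector informs which subject, predicate, or object token to emit next. This is not the authors' code; the function name, shapes, and dot-product scoring are illustrative assumptions.

```python
# Hypothetical sketch of dot-product attention in an encoder-decoder
# that emits subject-predicate-object tokens (not the paper's implementation).
import numpy as np

def attention(decoder_state, encoder_states):
    """Weight each encoder state by its similarity to the current
    decoder state; return the context vector and attention weights."""
    scores = encoder_states @ decoder_state      # (seq_len,) similarity scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over input positions
    context = weights @ encoder_states           # (hidden,) weighted sum
    return context, weights

# Toy example: 4 encoded input tokens, hidden size 3.
rng = np.random.default_rng(0)
enc = rng.normal(size=(4, 3))
dec = rng.normal(size=3)
ctx, w = attention(dec, enc)
# A real decoder would combine `ctx` with its hidden state to score
# the next vocabulary item (here, the next slot of the s-p-o triple).
```

In the paper's setting, the decoder vocabulary would additionally be tied to knowledge graph embeddings so that emitted tokens correspond to entities and predicates in the target vocabulary.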

Similar Work