Assessing LLMs Suitability for Knowledge Graph Completion

Iga Vasile Ionut Remus, Silaghi Gheorghe Cosmin. arXiv 2024

[Paper]    
Applications, Few Shot, GPT, Model Architecture, Prompting, Reinforcement Learning

Recent work has shown the capability of Large Language Models (LLMs) to solve tasks related to Knowledge Graphs, such as Knowledge Graph Completion, even in Zero- or Few-Shot paradigms. However, they are known to hallucinate answers or to produce non-deterministic outputs, leading to wrongly reasoned responses even when these appear to satisfy the user's demands. To highlight opportunities and challenges in knowledge graph-related tasks, we experiment with three distinct LLMs, namely Mixtral-8x7b-Instruct-v0.1, GPT-3.5-Turbo-0125 and GPT-4o, on Knowledge Graph Completion for static knowledge graphs, using prompts constructed following the TELeR taxonomy in Zero- and One-Shot settings, on a Task-Oriented Dialogue system use case. When evaluated with both strict and flexible metric measurement approaches, our results show that LLMs could be suitable for such a task if prompts encapsulate sufficient information and relevant examples.
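
To make the zero- vs. one-shot prompting setup more concrete, below is a minimal Python sketch of how a Knowledge Graph Completion query (predicting the tail entity of a triple) might be phrased to an LLM. The paper's exact TELeR-level prompt templates and its dialogue-system knowledge graph are not reproduced here; the entity and relation names, the candidate list, and the `build_kg_completion_prompt` helper are illustrative assumptions only.

```python
def build_kg_completion_prompt(head, relation, candidates, example=None):
    """Build a simple prompt asking an LLM to choose the missing tail entity of a triple.

    `example` is an optional (head, relation, tail) triple; supplying it turns the
    zero-shot prompt into a one-shot prompt, loosely mirroring the two settings
    described in the abstract. All names below are hypothetical.
    """
    lines = [
        "Task: complete the knowledge graph triple by choosing the correct tail entity.",
        f"Allowed tail entities: {', '.join(candidates)}.",
        "Answer with exactly one entity name from the list.",
    ]
    if example is not None:  # one-shot: prepend a worked in-context example
        ex_head, ex_rel, ex_tail = example
        lines.append(f"Example: ({ex_head}, {ex_rel}, ?) -> {ex_tail}")
    lines.append(f"Query: ({head}, {relation}, ?) ->")
    return "\n".join(lines)


if __name__ == "__main__":
    # Zero-shot variant
    print(build_kg_completion_prompt(
        head="book_flight", relation="requires_slot",
        candidates=["departure_city", "pizza_size", "movie_genre"]))
    print()
    # One-shot variant with an illustrative in-context example
    print(build_kg_completion_prompt(
        head="book_flight", relation="requires_slot",
        candidates=["departure_city", "pizza_size", "movie_genre"],
        example=("book_hotel", "requires_slot", "check_in_date")))
```

The resulting prompt string would then be sent to any of the evaluated models (e.g., GPT-4o or Mixtral-8x7b-Instruct-v0.1) through the respective API; how the answer is parsed and scored under strict vs. flexible matching is left out of this sketch.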

Similar Work