Repository for experiments assessing the suitability of LLMs for the Knowledge Graph Completion task, using Mixtral-8x7b, GPT-3.5-turbo, and GPT-4o.

LLMs-for-KGC

Large Language Models (LLMs) have shown remarkable abilities in solving diverse tasks formulated in natural language. Recent work has demonstrated their capacity to solve tasks related to Knowledge Graphs (KGs), such as KG Completion (KGC), even in Zero- or Few-Shot paradigms. However, they are known to hallucinate answers and to produce non-deterministic outputs, leading to incorrectly reasoned responses, even when those responses satisfy the user's demands. To highlight opportunities and challenges in KG-related tasks, we experiment with three distinct LLMs, namely Mixtral-8x7B-Instruct-v0.1, GPT-3.5-turbo-0125, and GPT-4o, on KGC for Static KGs, using prompts constructed following the TELeR taxonomy, in Zero- and One-Shot contexts, on a Task-Oriented Dialogue System use case. When evaluated using both strict and flexible metric-measurement schemes, our results show that LLMs may be fit for such a task if prompts encapsulate sufficient information and relevant examples.
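To illustrate the Zero- and One-Shot prompting setup described above, the sketch below shows how an incomplete triple might be sent to GPT-3.5-turbo-0125 for tail-entity prediction. This is a minimal, hypothetical example: the prompt wording, the `complete_triple` helper, and the example triple are illustrative assumptions and do not reproduce the repository's actual TELeR-style prompts (see the notebook for the real setup).

```python
# Minimal, hypothetical sketch of Zero-/One-Shot prompting for KGC.
# The prompt text and the example triple are illustrative assumptions,
# not the repository's actual TELeR-taxonomy prompts.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def complete_triple(head: str, relation: str, example: str | None = None) -> str:
    """Ask the model to predict the tail entity of an incomplete triple.

    Passing `example` switches from Zero-Shot to One-Shot prompting.
    """
    prompt = (
        "You are completing a Knowledge Graph. "
        "Given the head entity and the relation, answer with the tail entity only.\n"
    )
    if example:  # One-Shot: prepend a single solved example
        prompt += f"Example: {example}\n"
    prompt += f"Head: {head}\nRelation: {relation}\nTail:"

    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0125",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce the non-determinism noted above
    )
    return response.choices[0].message.content.strip()


# Zero-Shot call
print(complete_triple("Berlin", "capitalOf"))
# One-Shot call with an illustrative solved triple
print(complete_triple("Berlin", "capitalOf",
                      example="Head: Paris | Relation: capitalOf | Tail: France"))
```

In the actual experiments, the same task is also run against Mixtral-8x7B-Instruct-v0.1 and GPT-4o, with prompts built at varying TELeR detail levels.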

To reproduce the experiments, follow the guidelines in the notebook. References to our previous related work can be found on GitHub.
