Lavrinovics, E., Biswas, R., Bjerva, J., & Hose, K. (2025). Knowledge graphs, large language models, and hallucinations: an NLP perspective. Journal of Web Semantics, 85, Article 100844. https://doi.org/10.1016/j.websem.2024.100844
E192-02 - Research Unit Databases and Artificial Intelligence
-
Journal:
Journal of Web Semantics
-
ISSN:
1570-8268
-
Date (published):
May 2025
-
Number of Pages:
7
-
Publisher:
Elsevier
-
Peer reviewed:
Yes
-
Keywords:
LLM; Factuality; Knowledge Graphs; Hallucinations
Abstract:
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) applications, including automated text generation, question answering, chatbots, and others. However, they face a significant challenge: hallucinations, where models produce plausible-sounding but factually incorrect responses. This undermines trust and limits the applicability of LLMs across domains. Knowledge Graphs (KGs), on the other hand, provide a structured collection of interconnected facts represented as entities (nodes) and their relationships (edges). In recent research, KGs have been leveraged to provide context that fills gaps in an LLM’s understanding of certain topics, offering a promising approach to mitigating hallucinations and enhancing the reliability and accuracy of LLMs while preserving their wide applicability. Nonetheless, this remains a very active area of research with various unresolved open problems. In this paper, we discuss these open challenges, covering state-of-the-art datasets and benchmarks as well as methods for knowledge integration and for evaluating hallucinations. In our discussion, we consider the current use of KGs in LLM systems and identify future directions within each of these challenges.
Research Areas:
Logic and Computation: 70%; Information Systems Engineering: 30%