Orsa, M. (2023). Using Human Computation for Ontology evaluation [Diploma Thesis, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2023.97601
E194 - Institut für Information Systems Engineering
-
Date (published):
2023
-
Number of Pages:
80
-
Keywords:
Knowledge base; Ontology evaluation; Ontology engineering; Human Computation; Crowdsourcing; VeriCoM
Abstract:
Applications rely on ontologies as knowledge bases, and since wrongly represented information can lead to false outcomes, the quality of an ontology can be a deciding factor in the success of the system using it. Because the process of ontology engineering is prone to errors, the need for ontology evaluation arises. While automated approaches exist, some errors still require background information and human knowledge. For such cases, Human Computation and Crowdsourcing can be applied, so that with the help of crowds, the knowledge base for the ontology can be enhanced and the existing one validated.

While common errors are introduced in the literature, an analysis of typical engineering errors occurring in beginners' ontologies in practice is missing. Moreover, the need arises for a methodology to solve ontology engineering problems with Human Computation, and it is not clear, with respect to error classification, to what extent the overall performance is influenced when a single Human Computation task contains multiple error types and when the variety of errors increases from three to five.

This thesis first collects information on common errors through a literature review and uses data comparison between beginners' ontologies to gather practical evidence. To find a methodology for solving ontology engineering problems with Human Computation, an existing approach called VeriCoM was extended to an Ontology Engineering use case.
Lastly, through an experiment that follows the principles of designing the experimental process, the thesis shows how the performance of workers verifying ontology engineering errors changes when the number of error types and the variety of errors in a Human Computation task increase.

Based on the results, we conclude that: (i) the most common errors in beginners' ontologies are readability issues, missing disjointness of classes, undeclared inverse relationships, and confusion between logical "and" and "or"; (ii) with the VeriCoM approach, the average performance of the workers is 78% and the average time to complete a task is 55 seconds, showing that the approach achieves high performance in verifying specific defects in ontologies; (iii) compared with a previous, similar experiment, the experiment in this thesis increases the number of error types to five and additionally includes tasks with multiple errors; the actual performance decreased from 92.58% to 78%, while the average response time remained slightly below one minute. Even though performance dropped by 14.58 percentage points, the overall performance is still high and did not change drastically. This shows that Human Computation remains reliable even when the ontologies to be verified contain multiple and different errors, and is a viable approach to Ontology verification in general.
Additional information:
Thesis not yet received at the library - data not verified. Deviating title according to the author's own translation.