Title: Human-Centric Ontology Evaluation
Language: English
Authors: Tsaneva, Stefani Stoynova
Qualification level: Diploma
Advisor: Biffl, Stefan
Assisting Advisor: Sabou, Reka Marta
Issue Date: 2021
Number of Pages: 117

Abstract:
Ontologies present a conceptual view of a domain of interest and are essential for systems requiring real-world knowledge. The correctness and quality of ontologies are of high importance: incorrectly represented information or controversial concepts modeled from a single viewpoint can lead to invalid application outputs and biased systems. Several ontology quality issues can be detected automatically, such as syntax errors or hierarchy cycles; others, however, require human involvement, e.g., identifying incorrectly modeled statements or discovering concepts that do not comply with how humans think. Such human-centric ontology evaluation tasks (HOETs), typically performed manually by domain experts or knowledge engineers, can be expensive, time-intensive, and of limited scalability. Human Computation (HC) techniques have emerged as a promising approach to outsource HOETs to human contributors at a lower cost.

Despite the importance of human-centric ontology evaluation, a systematic understanding of the types of HOETs is still missing. Moreover, it is not clear which HOETs have already been addressed with HC methods, nor how to use HC to realise those HOETs that have not yet been investigated.

This thesis addresses this research gap by following a Design Science methodology. First, systematic literature review methods are used to investigate human-centric ontology evaluation, providing a structured and unbiased review of HOETs, their characteristics, and the solution approaches used. We also identify a list of HOETs for which no HC approach has yet been presented. Second, from this list, we select the task of ontology restriction verification and propose a corresponding HC task design.
Third, an experimental evaluation of the proposed HC task design is performed with a student crowd in the context of distance-learning approaches at Vienna University of Technology.

Based on the evaluation data we conclude that: (i) over 90% of the collected responses were correct; (ii) with the proposed evaluation method, 100% accuracy of the verifications can be reached using majority-vote aggregation; (iii) the knowledge representation formalism in which an ontology is presented to the contributors can influence the quality of their assessments; (iv) which formalism leads to the highest-quality verifications depends on the ontology axiom structure and the defect type; (v) prior modeling knowledge of the participants is a good predictor of their verification performance.

With the proposed HC method, high-quality evaluations were achieved when the contributors were novice ontology engineers. In the future, experimental investigations are needed in which the solution is also explored with layman crowds. Several HOETs identified in this work still lack an HC approach; the proposed HC task can therefore be extended to support them and thus enable the verification of multiple ontology aspects.
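Finding (ii) relies on aggregating redundant contributor answers per verification question by majority vote. A minimal sketch of such an aggregation step is shown below; the label names and vote data are hypothetical illustrations, not taken from the thesis.

```python
from collections import Counter

def majority_vote(responses):
    """Return the label chosen by the most contributors for one axiom.

    Ties are broken by first occurrence in the response list
    (Counter.most_common preserves insertion order for equal counts).
    """
    counts = Counter(responses)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical verifications of one ontology restriction by five contributors
votes = ["correct", "correct", "defective", "correct", "correct"]
print(majority_vote(votes))  # → correct
```

Redundancy of this kind is a standard quality-assurance mechanism in human computation: individual contributors may err, but the aggregated answer is far more reliable.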
Keywords: Ontology Evaluation; Human Computation; Crowdsourcing; Human-in-the-loop; Ontology Restrictions
URI: https://doi.org/10.34726/hss.2021.79389
DOI: 10.34726/hss.2021.79389
Library ID: AC16189434
Organisation: E194 - Institute of Information Systems Engineering
Publication Type: Thesis
Appears in Collections: Thesis
Items in reposiTUm are protected by copyright, with all rights reserved, unless otherwise indicated.