Ontologies explicitly capture domain knowledge in machine-readable formats and act as semantically rich knowledge sources for information systems. Detecting misrepresented knowledge through ontology verification is crucial for avoiding malfunctioning systems, as their decisions rely on correct knowledge. While certain classes of ontology errors can be detected automatically through reasoning, other error classes require human involvement to be resolved and are therefore addressed by human-centred ontology verification. Human Computation (HC) is a resource-effective approach to human-centred ontology verification because it avoids employing highly skilled domain experts and engineers. However, a systematic mapping study in this area shows (i) that the process of using HC for human-centred ontology verification is not well understood, as there is no widely used reference process, and (ii) that no widely accepted tool support is available; most authors rely on the ad-hoc use of a handful of very diverse tools. To address these gaps, this thesis contributes to a better understanding of the typical process performed during human-centred ontology verification and of how it can be supported by a tool. To that end, a design science methodology is used to make the following contributions. First, an iterative approach, comprising a systematic literature review, semi-structured interviews and a focus group, defines “VeriCoM 2.0”, a set of three process models describing human-centred ontology verification. Second, a reference architecture featuring four viewpoints is established to enable the implementation of an end-to-end process support platform for “VeriCoM 2.0”. Third, a prototypical implementation of an extensible platform based on the reference architecture is provided. Fourth, a case study evaluates the created artefacts to assess to what extent the preparation of human-centred ontology verification can be supported.
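The distinction between errors a reasoner can find automatically and those needing human judgement can be illustrated with a minimal, self-contained sketch of one automatically detectable error class: a class asserted to be a subclass of two disjoint classes is unsatisfiable. The class names and the simplified data model below are hypothetical illustrations, not drawn from the thesis or any particular OWL reasoner:

```python
# Toy ontology: named classes with asserted superclasses and one
# disjointness axiom. A real reasoner (e.g. over OWL) does far more;
# this only sketches the unsatisfiability check for illustration.
subclass_of = {
    "Car": {"Vehicle"},
    "Boat": {"Vehicle"},
    "Bicycle": {"Vehicle"},
    "AmphibiousCar": {"Car", "Boat"},  # subclass of two disjoint classes
}
disjoint_pairs = {frozenset({"Car", "Boat"})}

def superclasses(cls):
    """Transitive closure of subclass_of, including cls itself."""
    seen, stack = {cls}, [cls]
    while stack:
        for parent in subclass_of.get(stack.pop(), set()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def unsatisfiable_classes():
    """Classes whose superclass set contains a disjoint pair."""
    return [cls for cls in subclass_of
            if any(pair <= superclasses(cls) for pair in disjoint_pairs)]

print(unsatisfiable_classes())
```

Errors like the misuse of a correctly modelled class to describe the wrong real-world entity, by contrast, are invisible to such a purely logical check; detecting them requires the human judgement that human-centred ontology verification provides.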
The evaluation in the case study shows that the process models are a helpful tool for planning, conducting and communicating a human-centred ontology verification. Furthermore, the prototypical implementation can support eleven out of nineteen preparation activities of the verification process. A comparison of the time effort of implementing the prototypical platform, and thus automating the preparation of the verification, with the time effort of preparing the same verification manually shows that the implementation requires 29.47% less effort. An additional comparison reveals that by reusing the prototypical platform and solely customising it to the same verification task, the time effort can be reduced by 85.33% compared to a manual preparation of the verification task.