Kusa, W. (2024). Automated eligibility screening and its evaluation in the medical domain [Dissertation, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2024.124620
E194 - Institut für Information Systems Engineering
Date (published):
2024
Number of Pages:
207
Keywords:
information retrieval; natural language processing; evaluation; domain-specific search; systematic reviews; citation screening; clinical trial matching; eligibility screening; living literature reviews
Abstract:
Eligibility screening in medical fields involves assessing data against predefined criteria and is vital for both research and clinical applications. However, the task is complicated by the vast amount of data, its complexity, and the lack of standardised formats, which impede efficient access to the information needed for informed decision-making. This thesis explores the challenges of screening for clinical trial recruitment and for systematic literature reviews. Clinical trials are essential for medical advancement, but matching patients to trials is intricate and laborious. We examine methods to improve the accuracy of matching patients with trials based on their eligibility criteria. The thesis further addresses systematic literature reviews, which are crucial for evidence-based medicine but labour-intensive because numerous studies must be screened. We explore automation techniques that streamline citation screening, saving researchers time and effort. Our contributions in this domain focus on three key factors: datasets, evaluation measures, and automation approaches. First, in terms of datasets, we extensively evaluate the available citation screening resources and, to address their limitations, introduce two comprehensive citation screening datasets: CSMeD and CSMeD-ft. Next, the thesis proposes new evaluation measures and experimental designs that enable a more rigorous and standardised assessment of automated citation screening systems. Additionally, we present an evaluation approach that shifts the focus from recall to systematic review outcomes, showing that evaluation based on the impact of individual publications changes the ranking of compared models. Finally, in terms of automation approaches, this work focuses on techniques based on neural networks and large language models to enhance the efficiency and accuracy of eligibility screening. We demonstrate how eligibility criteria can be used to model screening as a question-answering task.
To showcase how our findings can be applied in practice, we introduce CRUISE–Screening, a tool that combines search and screening capabilities, helping researchers conduct literature reviews more systematically.