Hannibal, G. (2023). The Trust-Vulnerability Relation - A Theory-driven and Multidisciplinary Approach to the Study of Interpersonal Trust in Human-Robot Interaction [Dissertation, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2023.108560
Trust in robots and their trustworthiness have been studied and promoted in the literature on human-robot interaction (HRI) as essential factors in how willing people are to interact and collaborate with robots. Robots deliberately designed to have apparent agency have motivated ongoing research on trust in HRI to move beyond an understanding of trust as mere reliance toward interpersonal trust, similar to the way it has been characterized in relationships between humans. However, applying the concept of interpersonal trust in the context of HRI requires that the relationship between trust and vulnerability be carefully understood and explored. In my dissertation, I present a systematic investigation of how an emphasis on vulnerability as a precondition of trust can advance the current understanding and analysis of interpersonal trust in HRI. Using conceptual analysis, I argue that the analysis and inclusion of the trust-vulnerability relation in the specific context of HRI must adopt an event approach, as the relation is then located in the interaction between humans and robots. With this theoretical outset, I developed and conducted two exploratory HRI studies to examine how the trust-vulnerability relation can be studied empirically, focusing on the human experience of vulnerability and the perception of robots as benevolent. I also conducted interviews with leading robotics and HRI experts to explore the possible vulnerabilities of robots. Building on these theoretical perspectives and empirical work, I discuss how to transfer and integrate the knowledge gained from a theory-driven and multidisciplinary approach to interpersonal trust in HRI into the design and engineering practice aimed at developing trustworthy robots.
With my dissertation, I contribute fundamental knowledge about how to conceptualize interpersonal trust in HRI, what conceptual knowledge needs to be considered and applied to guide empirical HRI studies on trust or the development of trustworthy robots, and how the trust-vulnerability relation can be studied empirically through interaction scenarios between humans and robots in the everyday situation of clothes shopping. As such, the work I undertook and present in my dissertation can help advance future research on interpersonal trust in HRI by: (i) drawing attention to the importance of the trust-vulnerability relation for studies on trust in HRI, (ii) providing first steps toward exploring the trust-vulnerability relation empirically from both a human- and a robot-centered perspective, and (iii) discussing what value interpersonal trust in HRI might have in supporting successful HRI in more commercial domains of application.