Papagni, G., & Köszegi, S. T. (2022). Challenges and solutions for trustworthy explainable robots. In S. T. Köszegi & M. Vincze (Eds.), Trust in Robots (pp. 57–79). TU Wien Academic Press. https://doi.org/10.34727/2022/isbn.978-3-85448-052-5_3
For robots to be accepted within society, non-expert users must deem them not only useful (and usable) but also
trustworthy. Designing robots that can explain their decisions and actions in terms that everyone can understand is
crucial to their trustworthiness and successful integration into our society. This paper, written as part of a doctoral
dissertation, draws on interdisciplinary research from the social sciences and from work on explainable robots (and AI) to address the
challenges of making robots explainable and trustworthy. Particular attention is paid to non-expert
users’ perspectives within the context of everyday interactions. We claim that, since perfect explanations do not
exist, an explanation’s success in triggering understanding and fostering trust is determined by its plausibility. Furthermore,
we maintain that plausible explanations are the result of contextual negotiations between the parties involved. As a
result, the paper presents strategies, formalized into a model of explanatory interactions, for maximizing users’ understanding
and supporting trust development.