<div class="csl-bib-body">
<div class="csl-entry">Papagni, G., de Pagter, J., Zafari, S., Filzmoser, M., & Koeszegi, S. T. (2022). Artificial agents’ explainability to support trust: considerations on timing and context. <i>AI & Society</i>. https://doi.org/10.1007/s00146-022-01462-7</div>
</div>
-
dc.identifier.issn
0951-5666
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/142018
-
dc.description.abstract
Strategies for improving the explainability of artificial agents are a key approach to supporting the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations do not lend themselves to standardization, finding solutions that fit the algorithm-based decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception of artificial agents’ reliability. In particular, this paper focuses on non-expert users’ perspectives, since users with little technical knowledge are likely to benefit the most from “post-hoc”, everyday explanations. Drawing upon the explainable AI and social sciences literature, this paper investigates how artificial agents’ explainability and trust are interrelated at different stages of an interaction. Specifically, it investigates the possibility of implementing explainability as a trust-building, trust maintenance, and trust restoration strategy. To this end, the paper identifies and discusses the intrinsic limits and fundamental features of explanations, such as structural qualities and communication strategies. Accordingly, this paper contributes to the debate by providing recommendations on how to maximize the effectiveness of explanations in supporting non-expert users’ understanding and trust.
en
-
dc.language.iso
en
-
dc.publisher
Springer Nature
-
dc.relation.ispartof
AI & Society
-
dc.rights.uri
http://creativecommons.org/licenses/by/4.0/
-
dc.subject
Trust
en
dc.subject
Explainability
en
dc.subject
Artificial Intelligence
en
dc.subject
Explainable Artificial Agents
en
-
dc.title
Artificial agents’ explainability to support trust: considerations on timing and context