Zafari, S., de Pagter, J., Papagni, G., Rosenstein, A., Filzmoser, M., & Köszegi, S. T. (2024). Trust Development and Explainability: A Longitudinal Study with a Personalized Assistive System. Multimodal Technologies and Interaction, 8(3), 1–20. https://doi.org/10.3390/mti8030020
This article reports on a longitudinal experiment in which the influence of an assistive system’s malfunctioning and transparency on trust was examined over a period of seven days. To this end, we simulated the system’s personalized recommendation features to support participants with the task of learning new texts and taking quizzes. Using a 2 × 2 mixed design, the system’s malfunctioning (correct vs. faulty) and transparency (with vs. without explanation) were manipulated as between-subjects variables, whereas exposure time was used as a repeated-measures variable. A combined qualitative and quantitative methodological approach was used to analyze the data from 171 participants. Our results show that participants perceived the system making a faulty recommendation as a trust violation. Additionally, a trend emerged from both the quantitative and qualitative analyses regarding how the availability of explanations (even when not accessed) increased the perception of a trustworthy system.
Research Areas:
Information Systems Engineering: 40%
Automation and Robotics: 40%
Computational System Design: 20%