Cech, F. (2022). Algorithmic accountability: Transparency, agency and literacy in the Age of the Algorithm [Dissertation, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2023.108760
The continuing digital transformation of society has given rise to what some scholars provocatively call the Age of the Algorithm: a time in which human endeavours are increasingly mediated, supported, regulated, determined, structured and even replaced by algorithms and algorithmic systems. The promises of algorithmic technologies are as lofty as they are profound: unprecedented efficiency and speed, intricate insights into complex problems with hitherto insurmountably large data sets, and improved fairness and objectivity of previously human decision-making processes burdened with personal bias and discrimination are just some of the benefits promised to us. At the same time, a growing number of critical issues arising from the use of algorithmic systems have led to calls for improved accountability and transparency of algorithmic technologies. As we delegate power to technology, we must also find new ways of holding those employing these systems to account for their conduct in order to face the various challenges presented by complex, opaque and black-boxed socio-technical assemblages. While the academic communities of the various related disciplines, including political science and governance studies, agree by and large on the importance of accountability, a coherent and agreed-upon definition of algorithmic accountability, transparency and the related issues of algorithmic literacy and meaningful human agency has yet to emerge. Likewise, concrete guidelines and frameworks to support algorithmic accountability across various application contexts and a wide range of technologies, including but not limited to Automated Decision-Making (ADM), are still scarce.

To address these issues, I present this dissertation on algorithmic accountability and transparency, situated within the emerging, inter-disciplinary field of Critical Algorithm Studies (CAS). Building on the theoretical foundations of the term algorithm from various perspectives, the related issues of bias, discrimination and transparency, and prior work on public accountability, I then appropriate and adapt Bovens’ widely used definition of accountability for algorithmic systems, and introduce the notion of procedural micro-accountability as an important and often overlooked perspective. To apply these theoretical considerations, I subsequently present two case studies: (1) the EnerCoach energy accounting system, and (2) the Arbeitsmarkt-Assistenz-System (AMAS), an unemployment profiling system used by the Austrian Public Employment Service (AMS).

Through a situated algorithmic (auto-)ethnography of the EnerCoach system based on qualitative interviews, code reviews and other auxiliary data sources, I identify crucial challenges related to system-level transparency and ex-post explainability. Following this analysis and employing an interventionist approach founded in participatory design methodologies, I describe how stakeholders co-designed concrete measures to address the previously identified issues, and evaluate both the use of participatory approaches and the success of these measures.

For AMAS, I present the results of a collaborative research project founded in a qualitative document analysis of more than 134 internal and public documents concerning the AMAS system. After describing the system’s socio-technical configuration, its stakeholders and its organisational embedding within the AMS to arrive at a thick description, I summarize the critical issues of bias and discrimination as manifested by the system.
The core of this case study analysis focuses on the system’s lack of system-level transparency and ex-post explainability, and on the relation of these issues to algorithmic accountability.

By synthesizing the insights gained in the two case studies in the form of a comparative case study, I introduce the Algorithmic Accountability Agency Framework (A3 framework) as an analytic lens that structures accountability processes through a set of guiding questions. Building on Bandura’s Social Cognitive Theory of emergent interactive agency, the framework models both micro- and macro-accountability processes through the lens of human agency. In applying the framework to the two case studies, I then showcase its suitability as a widely applicable toolset for both the evaluation and assessment of algorithmic accountability processes, as well as its potential to support a critical discourse that encourages the ideation of concrete socio-technical measures to improve these processes.

With this dissertation, I aim to contribute to the nascent field of Critical Algorithm Studies in both a theoretical and conceptual as well as a practical manner. The summary of theoretical foundations leading to the conceptualization of algorithmic accountability as a wicked problem is meant to support future efforts in addressing this critical issue from a broad, inter-disciplinary perspective. The case studies showcase the value of in-depth, qualitative and quantitative analyses of algorithmic systems as complex socio-technical assemblages, and provide concrete best-practice examples for the use of participatory design methodologies to involve all stakeholders in addressing these critical issues. Finally, the A3 framework is a directly applicable, practical tool to evaluate algorithmic accountability processes, and further explicates the complicated relationship between algorithmic accountability, transparency, algorithmic literacy and human agency in this ‘Age of the Algorithm’.