Singer, T. (2024). Visual Generative AI in Warfare and Terrorism: Risk Mitigation through Technical Requirements and Regulatory Insights [Diploma Thesis, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2024.126126
E193 - Institute of Visual Computing and Human-Centered Technology
Date (published):
2024
Number of Pages:
126
Keywords:
generative AI; radicalization; AI technologies
Abstract:
The nature of modern terrorism and warfare has evolved significantly, with technological advancements enabling the capture and dissemination of uncensored graphic propaganda across social media platforms. These visuals, often in HD quality, can be used to train generative AI models, raising concerns about the misuse of such technologies to fuel violence, radicalization, and polarization, with profound psychological consequences at both the micro- and macro-level. This thesis examines whether current AI regulations, particularly the EU AI Act, adequately address these risks, and identifies relevant technical solutions to mitigate them. We built a working corpus using the PRISMA framework, drawing on research addressing AI-powered radicalization and online terrorist activities. Through a socio-technical lens, we explored how exposure to violent content triggers radicalization pathways, studying radicalization models and the interplay between structured online and offline terrorist activities. We also examined the role of internet infrastructure and core algorithms in facilitating radicalization, and how extremist groups exploit these social and technical components to achieve their goals, leading to a broad range of direct and indirect consequences. Following a risk-based approach, our analysis of the risk landscape identified multiple risks, including propaganda-driven dehumanization, the amplification of the “othering” phenomenon, the normalization of violence, and widespread psychological harm. We conducted a gap assessment of the EU AI Act, finding that while the Act broadly covers these risks, it addresses key challenges such as bias, privacy, transparency, and explainability only in abstract terms, without explicit technology-focused requirements. Additionally, there is insufficient focus on extremist groups and terror organizations as malicious actors, limited technological standardization, and no national education programs to build resilience against the misuse of generative AI. We recommend incorporating systematic human moderation, advanced machine learning algorithms to detect extremist inputs and violent outputs, and the anonymization of individual visual attributes using generative adversarial networks (GANs). Furthermore, we propose a set of standards for watermarking techniques to support global regulatory efforts and research. These gaps highlight the need for active collaboration among regulators and other stakeholders to ensure the responsible development and deployment of AI technologies that mitigate the risks identified in this work.
Additional information:
Thesis not yet received at the library - data not verified. Deviating title according to the author's translation.