Dasovic, I. (2026). Aggregation Techniques in Federated Learning Under Differential Privacy Constraints [Diploma Thesis, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2026.131806
Machine learning (ML) is increasingly integrated into critical domains such as healthcare, finance, and mobile applications, where data privacy presents significant challenges. Federated Learning (FL) enables model training on decentralized data without requiring raw data to be centrally collected. Although involving many heterogeneous clients enlarges the attack surface and the number of attack vectors, particularly when some clients enforce weaker protections, the decentralization also means that privacy incidents typically remain confined to individual clients rather than compromising the entire dataset.

However, FL alone does not fully guarantee privacy: model updates can still reveal sensitive information through inference or reconstruction attacks, so further measures are necessary to protect sensitive data during training. Differential Privacy (DP) has emerged as one of the most practical techniques for providing formal privacy guarantees and can be integrated into FL training to strengthen privacy protection. Yet applying DP typically involves a trade-off, reducing model effectiveness. Moreover, the choice of aggregation method and optimization strategy determines how local model updates are combined, affecting both privacy and performance outcomes.

This thesis investigates how aggregation strategies, privacy budgets, and non-independent and identically distributed (non-IID) data interact in differentially private federated learning, shedding light on their combined effect on the performance-privacy trade-off. Instead of analyzing these factors in isolation, the study offers new perspectives on how aggregation methods interact with privacy mechanisms and data heterogeneity in decentralized environments. Through comprehensive evaluation and analysis, the thesis provides practical guidelines for designing differentially private federated learning models applicable to real-world scenarios. The findings contribute to advancing privacy-aware ML practice, balancing data protection with model utility in decentralized learning environments.
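The interaction between aggregation and DP described above can be made concrete with a minimal sketch of the standard clip-and-noise aggregation recipe (in the style of DP-FedAvg): each client update is clipped to a fixed L2 norm, the clipped updates are averaged, and calibrated Gaussian noise is added by the server. The function name and the clip_norm and noise_multiplier parameters below are illustrative assumptions for this sketch, not the specific method evaluated in the thesis.

```python
import numpy as np

def dp_fedavg_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Illustrative clip-and-noise aggregation (DP-FedAvg style), not the thesis's method.

    client_updates: list of 1-D numpy arrays, one flattened model update per client.
    clip_norm: L2 bound applied to each client's update before averaging.
    noise_multiplier: Gaussian noise scale relative to the clipping bound.
    """
    rng = rng or np.random.default_rng()

    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale each update so its L2 norm is at most clip_norm.
        clipped.append(update * min(1.0, clip_norm / max(norm, 1e-12)))

    # Average the clipped updates across the participating clients.
    mean_update = np.mean(clipped, axis=0)

    # Add Gaussian noise calibrated to the clipping bound and cohort size;
    # larger noise_multiplier means stronger privacy but lower utility.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean_update + rng.normal(0.0, sigma, size=mean_update.shape)

# Example: three clients, each contributing a 4-parameter model update.
updates = [np.array([0.5, -1.2, 0.3, 0.8]),
           np.array([2.0, 0.1, -0.7, 0.4]),
           np.array([-0.3, 0.9, 1.1, -0.5])]
print(dp_fedavg_aggregate(updates, clip_norm=1.0, noise_multiplier=1.0))
```

The two knobs in this sketch embody the trade-off the abstract refers to: a tighter clip_norm and larger noise_multiplier strengthen the privacy guarantee but distort the aggregated update, reducing model utility.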
Additional information:
Thesis not yet received by the library - data not verified