<div class="csl-bib-body">
<div class="csl-entry">Dasovic, I. (2026). <i>Aggregation Techniques in Federated Learning Under Differential Privacy Constraints</i> [Diploma Thesis, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2026.131806</div>
</div>
-
dc.identifier.uri
https://doi.org/10.34726/hss.2026.131806
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/227193
-
dc.description
Thesis not yet received by the library - data not verified
-
dc.description.abstract
Machine learning (ML) is increasingly integrated into critical domains such as healthcare, finance, and mobile applications, where data privacy presents significant challenges. Federated Learning (FL) enables model training on decentralized data without requiring raw data to be collected centrally. Although involving many different clients enlarges the attack surface and the number of attack vectors, especially when some clients apply weaker privacy protection, decentralization also means that privacy incidents are typically limited to individual clients rather than compromising the entire dataset. However, FL alone does not fully guarantee privacy: model updates can still reveal sensitive information through inference or reconstruction attacks, so further measures are needed to protect sensitive data during training. Differential Privacy (DP) has emerged as one of the most practical techniques for providing formal privacy guarantees, and it can be integrated into FL models to strengthen privacy protection. Yet applying DP typically involves a trade-off that reduces model effectiveness. Moreover, aggregation methods and optimization strategies significantly influence how local model updates are combined, affecting both privacy and performance outcomes. This thesis investigates how aggregation strategies, privacy budgets, and non-independent and identically distributed (non-IID) data interact in differentially private federated learning, shedding light on their combined effect on the performance-privacy trade-off. Instead of analyzing these factors in isolation, the study offers new perspectives on how aggregation methods interact with privacy mechanisms and data heterogeneity in decentralized environments. Through comprehensive evaluation and analysis, the thesis provides practical guidelines for designing differentially private federated learning models applicable to real-world scenarios.
The findings contribute to advancing privacy-aware ML practices, balancing data protection with model utility in decentralized learning environments.
en
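To make the aggregation-under-DP setting described in the abstract concrete, below is a minimal sketch of a central-DP, FedAvg-style aggregation round: each client update is L2-clipped and Gaussian noise calibrated to the clipping bound is added before averaging. This is an illustrative example only, not the thesis's actual implementation; all function names and parameters (`clip_bound`, `noise_multiplier`) are assumptions.

```python
import math
import random

def clip(update, clip_bound):
    # Scale the update vector so its L2 norm is at most clip_bound.
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_bound / norm) if norm > 0 else 1.0
    return [x * scale for x in update]

def dp_fedavg(client_updates, clip_bound=1.0, noise_multiplier=1.0, seed=0):
    # One aggregation round: clip each client's update, sum them,
    # add Gaussian noise scaled to the clipping bound, then average.
    rng = random.Random(seed)
    n = len(client_updates)
    dim = len(client_updates[0])
    clipped = [clip(u, clip_bound) for u in client_updates]
    summed = [sum(u[i] for u in clipped) for i in range(dim)]
    sigma = noise_multiplier * clip_bound
    return [(summed[i] + rng.gauss(0.0, sigma)) / n for i in range(dim)]

# Toy local model deltas from three clients; the second exceeds the bound.
updates = [[0.5, -0.2], [3.0, 4.0], [-0.1, 0.1]]
print(dp_fedavg(updates, clip_bound=1.0, noise_multiplier=0.5))
```

Clipping bounds each client's influence on the aggregate (the sensitivity), which is what lets the added noise yield a formal DP guarantee; the noise-versus-utility tension it creates is exactly the trade-off the abstract describes.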
dc.language
English
-
dc.language.iso
en
-
dc.rights.uri
http://rightsstatements.org/vocab/InC/1.0/
-
dc.subject
federated learning
en
dc.subject
differential privacy
en
dc.subject
aggregation methods
en
dc.subject
privacy-preserving machine learning
en
dc.title
Aggregation Techniques in Federated Learning Under Differential Privacy Constraints
en
dc.type
Thesis
en
dc.type
Hochschulschrift
de
dc.rights.license
In Copyright
en
dc.rights.license
Urheberrechtsschutz
de
dc.identifier.doi
10.34726/hss.2026.131806
-
dc.contributor.affiliation
TU Wien, Österreich
-
dc.rights.holder
Ivana Dasovic
-
dc.publisher.place
Wien
-
tuw.version
vor
-
tuw.thesisinformation
Technische Universität Wien
-
dc.contributor.assistant
Mayer, Rudolf
-
tuw.publication.orgunit
E194 - Institut für Information Systems Engineering