Li, Y., Wang, X., Li, H., Donta, P. K., Huang, M., & Dustdar, S. (2025). Communication-Efficient Federated Learning for Heterogeneous Clients. ACM Transactions on Internet Technology, 25(2), 1–37. https://doi.org/10.1145/3716870
-
dc.identifier.issn
1533-5399
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/216692
-
dc.description.abstract
Federated learning stands out as a promising approach within the domain of edge computing, providing a framework for collaborative training on distributed datasets without requiring data sharing. However, federated learning involves frequent transmission of machine learning model updates between the server and clients, resulting in high communication costs. Additionally, heterogeneous clients can further complicate the federated learning process and degrade performance. To address these challenges, we propose Adaptive Self-Knowledge Distillation-based Quality- and Reputation-Aware Cross-Device Federated Learning (ASDQR), a communication- and inference-efficient framework designed for heterogeneous clients. ASDQR initiates the process by selecting high-reputation and high-quality clients to participate in federated learning, which significantly improves communication efficiency and inference effectiveness. ASDQR also introduces an adaptive local self-knowledge distillation model that incorporates multiple levels of local personalized historical knowledge for more accurate inference, allowing the amount of historical knowledge used to be adjusted dynamically over time. Finally, we present an inference-effective aggregation scheme that assigns higher weights to important and reliable local model updates, based on clients' contribution degrees, when performing global model aggregation. ASDQR consistently outperforms baseline methods across all datasets and communication rounds, achieving 9.0% higher accuracy than FedAvg, 6.59% higher than MOON, 0.29% higher than FedProx, 0.2% higher than PFedSD, and 0.08% higher than FedMD on the MNIST dataset at 100 communication rounds. Similar improvements are observed on the CIFAR, HAR, and WISDM datasets, demonstrating the robustness and efficiency of ASDQR in federated learning with non-IID data.
en
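As a rough illustration of the contribution-weighted aggregation step described in the abstract, the following minimal Python sketch averages per-layer client updates using normalized contribution scores. The names (aggregate_weighted, contribution_scores) are hypothetical and do not reproduce the paper's actual weighting scheme.

    import numpy as np

    def aggregate_weighted(client_updates, contribution_scores):
        # client_updates: one list of per-layer parameter arrays per client
        # contribution_scores: hypothetical per-client importance/reliability scores
        scores = np.asarray(contribution_scores, dtype=float)
        weights = scores / scores.sum()  # higher contribution -> larger aggregation weight
        # weighted average of each layer across all clients
        return [sum(w * layer for w, layer in zip(weights, layers))
                for layers in zip(*client_updates)]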
dc.language.iso
en
-
dc.publisher
Association for Computing Machinery
-
dc.relation.ispartof
ACM Transactions on Internet Technology
-
dc.subject
Computing methodologies
en
dc.subject
Learning paradigms
en
dc.subject
Computer systems organization
en
dc.subject
Cloud computing
en
dc.title
Communication-Efficient Federated Learning for Heterogeneous Clients