<div class="csl-bib-body">
<div class="csl-entry">Zhang, Q., Zhu, Z., Zhou, A., Sun, Q., Dustdar, S., & Wang, S. (2024). Energy-efficient federated training on mobile device. <i>IEEE Network</i>, <i>38</i>(1), 180–186. https://doi.org/10.1109/MNET.130.2200471</div>
</div>
-
dc.identifier.issn
0890-8044
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/200964
-
dc.description.abstract
On-device deep learning technology has attracted increasing interest recently. CPUs are the most common commercial hardware on devices, and many training libraries have been developed and optimized for them. However, CPUs still suffer from poor training performance (i.e., long training time) due to their asymmetric multiprocessor architecture. Moreover, energy constraints impose restrictions on battery-powered devices. With federated training, we expect the local training to complete rapidly so that the global model converges fast. At the same time, energy consumption should be minimized to avoid compromising the user experience. To this end, we consider both energy and training time and propose a novel framework with a machine learning-based adaptive configuration allocation strategy, which chooses optimal configuration combinations for efficient on-device training. We carry out experiments on the popular library MNN, and the experimental results show that the adaptive allocation algorithm substantially reduces energy consumption compared with running all batches under fixed configurations on off-the-shelf CPUs.
en
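The abstract above describes a machine learning-based strategy that picks, per training batch, a CPU configuration balancing energy and training time. A minimal sketch of that idea follows, assuming the configuration space covers big/LITTLE core counts and frequency levels; all names (`Config`, `predict_cost`, the weights) are illustrative assumptions, not the paper's actual API, and the cost model is a toy stand-in for a learned predictor.

```python
# Hypothetical sketch of adaptive configuration allocation: before each
# batch, a predictor estimates (energy, time) for every candidate CPU
# configuration, and the scheduler picks the one minimizing a weighted
# cost. The analytic formulas below merely stand in for a trained model.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Config:
    big_cores: int      # number of big (performance) cores used
    little_cores: int   # number of LITTLE (efficiency) cores used
    freq_level: int     # CPU frequency step, 0 = lowest

def predict_cost(cfg: Config) -> tuple[float, float]:
    """Toy stand-in for a learned model: returns (energy in J, time in s)."""
    compute = cfg.big_cores * 2.0 + cfg.little_cores * 0.8
    speed = compute * (1 + 0.5 * cfg.freq_level)
    time = 10.0 / speed                               # fixed batch workload
    power = compute * (1 + cfg.freq_level) ** 2 * 0.3  # superlinear in freq
    return power * time, time

def choose_config(candidates, w_energy=0.7, w_time=0.3):
    """Pick the configuration minimizing a weighted energy/time cost."""
    def cost(cfg):
        energy, time = predict_cost(cfg)
        return w_energy * energy + w_time * time
    return min(candidates, key=cost)

# Enumerate a small configuration space and select one per batch.
candidates = [Config(b, l, f)
              for b, l, f in product(range(1, 5), range(0, 5), range(3))]
best = choose_config(candidates)
print(best)
```

Re-running `choose_config` per batch (with batch-specific predictor inputs) would yield the per-batch adaptive behavior the abstract contrasts with fixed-configuration training.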
-
dc.language.iso
en
-
dc.publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
-
dc.relation.ispartof
IEEE Network
-
dc.subject
energy consumption
en
dc.subject
algorithm
en
dc.subject
mobile device
en
dc.subject
machine learning
en
dc.subject
CPU
en
dc.title
Energy-efficient federated training on mobile device