<div class="csl-bib-body">
<div class="csl-entry">Li, K., Wang, X., He, Q., Yi, B., Morichetta, A., & Huang, M. (2022). Cooperative Multiagent Deep Reinforcement Learning for Computation Offloading: A Mobile Network Operator Perspective. <i>IEEE Internet of Things Journal</i>, <i>9</i>(23), 24161–24173. https://doi.org/10.1109/JIOT.2022.3189445</div>
</div>
dc.identifier.issn: 2327-4662
dc.identifier.uri: http://hdl.handle.net/20.500.12708/139745
dc.description.abstract: Computation offloading decisions play a crucial role in implementing mobile-edge computing (MEC) technology for Internet of Things (IoT) services. Mobile network operators (MNOs) can employ computation offloading to reduce task completion delay and improve users' Quality of Service (QoS) by optimizing the system's processing delay and energy consumption. However, from the MNO perspective, different IoT applications (e.g., entertainment and autonomous driving) impose different delay tolerances and yield different benefits for computational tasks. Simply minimizing the delay of all tasks therefore does not satisfy each user's QoS; the system architecture should account for the significance of users and the heterogeneity of tasks. Unfortunately, little work has addressed this practical issue. In this article, we investigate, from the MNO perspective, the computation offloading optimization problem for multiuser delay-sensitive tasks. First, we propose a new optimization model that assigns separate objectives to the cost and the revenue of tasks. Then, we transform the problem into a Markov decision process (MDP), which leads to a multiagent iterative optimization framework. For each agent's policy optimization, we further propose a cooperative multiagent deep reinforcement learning (CMDRL) algorithm that optimizes the two objectives simultaneously. Two agents are integrated into the CMDRL framework so that they can collaborate and converge to the global optimum in a distributed manner. In addition, the prioritized experience replay method is introduced to improve the utilization of informative samples and the learning efficiency of the algorithm. Experimental results show that the proposed method achieves a significantly higher profit than a state-of-the-art alternative and exhibits more favorable computational performance than benchmark deep reinforcement learning methods.
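The abstract attributes part of the learning efficiency to prioritized experience replay. As an illustrative aside only, not the authors' implementation, the sketch below shows a minimal proportional prioritized replay buffer of the kind introduced by Schaul et al. (2016); the class name and the hyperparameters `alpha`, `beta`, and `eps` are generic assumptions rather than values from the paper.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch).

    Transitions are sampled with probability p_i^alpha / sum_k p_k^alpha,
    where p_i is the magnitude of the last observed TD error (plus eps).
    Importance-sampling weights correct the bias this sampling introduces.
    """

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha        # how strongly priorities skew sampling
        self.eps = eps            # keeps every priority strictly positive
        self.data = []            # stored transitions (ring buffer)
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0              # next write index

    def add(self, transition):
        # New samples get the current maximum priority so they are
        # replayed at least once before their TD error is known.
        max_prio = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[:len(self.data)] ** self.alpha
        probs = prios / prios.sum()
        idxs = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights, normalized by the largest weight.
        weights = (len(self.data) * probs[idxs]) ** (-beta)
        weights /= weights.max()
        batch = [self.data[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors):
        # Priority is the TD-error magnitude plus eps.
        self.priorities[idxs] = np.abs(td_errors) + self.eps
```

In a CMDRL-style setup, each agent would draw batches from such a buffer, scale its loss by the returned importance weights, and call `update_priorities` after every learning step, so transitions with large TD errors are replayed more often.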
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.ispartof: IEEE Internet of Things Journal
dc.subject: Computation offloading
dc.subject: deep reinforcement learning (DRL)
dc.subject: delay bounds
dc.subject: mobile-edge computing (MEC)
dc.subject: task revenue
dc.title: Cooperative Multiagent Deep Reinforcement Learning for Computation Offloading: A Mobile Network Operator Perspective