<div class="csl-bib-body">
<div class="csl-entry">Li, K., Wang, X., He, Q., Yang, M., Huang, M., & Dustdar, S. (2023). Task Computation Offloading for Multi-Access Edge Computing via Attention Communication Deep Reinforcement Learning. <i>IEEE Transactions on Services Computing</i>, <i>16</i>(4), 2985–2999. https://doi.org/10.1109/TSC.2022.3225473</div>
</div>
-
dc.identifier.issn
1939-1374
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/188045
-
dc.description.abstract
This article investigates how to enhance the performance of Multi-access Edge Computing (MEC) systems with the aid of device-to-device (D2D) communication for computation offloading. By exploiting a novel computation offloading mechanism based on D2D collaboration, users can efficiently share computational resources with one another. However, it is challenging to distinguish the valuable information that truly promotes collaborative decisions, as worthless information can hinder collaboration among users. In addition, transmitting large volumes of information requires high bandwidth and incurs significant latency and computational complexity, resulting in unacceptable costs. In this article, we propose an efficient D2D-assisted MEC computation offloading framework based on Attention Communication Deep Reinforcement Learning (ACDRL). First, the framework models the interactions between related entities, including device-to-device collaboration in the horizontal dimension and device-to-edge offloading in the vertical dimension. Second, we develop a distributed cooperative reinforcement learning algorithm with an attention mechanism that skews computational resources towards active users, avoiding unnecessary resource wastage in large-scale MEC systems. Finally, to improve the effectiveness and rationality of cooperation among users, we introduce a communication channel that integrates information from all users in a communication group, thus facilitating cooperative decision-making. Experimental results show that, compared with other baseline approaches, the proposed framework effectively reduces latency and provides valuable insights for practical design.
en
dc.language.iso
en
-
dc.publisher
IEEE Computer Society
-
dc.relation.ispartof
IEEE Transactions on Services Computing
-
dc.subject
Multi-access edge computing
en
dc.subject
reinforcement learning
en
dc.subject
task computation offloading
en
dc.subject
user cooperation
en
dc.title
Task Computation Offloading for Multi-Access Edge Computing via Attention Communication Deep Reinforcement Learning