Karagiannis, V. (2022). Self-organizing fog computing systems [Dissertation, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2022.103763
E194 - Institut für Information Systems Engineering
Date (published): 2022
Number of Pages: 159
Keywords: Edge computing; Fog computing; Cloud computing; Reinforcement learning; Internet of Things; Distributed systems; Self-organizing systems; Overlay networks
Language: en
Abstract:
Fog computing is a novel computing paradigm that enables the execution of applications on compute nodes residing both in the cloud and at the edge of the network. Performance benefits such as low communication latency and high network bandwidth have turned this paradigm into a well-accepted extension of cloud computing. Many fog computing systems have been proposed so far, consisting of distributed compute nodes that are often organized hierarchically in layers. Such systems commonly rely on the assumption that nodes in adjacent layers reside close to each other, thereby achieving low-latency computations. However, this assumption may not hold in fog computing systems that span large geographical areas, due to the wide distribution of the nodes. In addition, most proposed fog computing systems route data along a path that starts at the data source and passes through various edge and cloud nodes. Each node on this path may accept the data if it has resources available to process the data locally; otherwise, the data is forwarded to the next node on the path. Notably, whenever the data is forwarded rather than accepted, the communication latency increases by the delay to reach the next node.

This thesis aims to tackle these problems by proposing distributed algorithms whereby the compute nodes measure their network proximity to each other and self-organize accordingly. These algorithms are implemented on geographically distributed compute nodes, considering image processing and smart city use cases, and are thoroughly evaluated, showing significant latency- and bandwidth-related performance benefits. Furthermore, we analyze the communication latency of sending data to distributed edge and cloud compute nodes, and we propose two novel routing approaches: i) a context-aware routing mechanism that maintains a history of previous transmissions and uses this history to find nearby nodes with available resources, and ii) edgeRouting, which leverages the high bandwidth between nodes of cloud providers in order to select network paths with low communication latency. Both mechanisms are evaluated in real-world settings and are shown to lower the communication latency of fog computing systems significantly compared to alternative methods.
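The accept-or-forward routing model described in the abstract can be illustrated with a minimal sketch. This is not code from the thesis; the `Node` class, field names, and `route` function are hypothetical, chosen only to show how per-hop forwarding delays accumulate when upstream nodes lack free resources.

```python
# Hypothetical sketch of accept-or-forward routing: each node on the path
# accepts the data if it has free resources; otherwise the data is forwarded
# to the next node, and the hop delay is added to the communication latency.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_slots: int          # available processing capacity (illustrative)
    delay_to_next_ms: float  # latency to reach the next node on the path

def route(path):
    """Return (accepting node's name, accumulated latency in ms),
    or (None, total latency) if no node on the path had resources."""
    latency = 0.0
    for node in path:
        if node.free_slots > 0:
            return node.name, latency     # data accepted and processed here
        latency += node.delay_to_next_ms  # forwarded: latency grows per hop
    return None, latency

path = [Node("edge-1", 0, 5.0), Node("edge-2", 0, 12.0), Node("cloud", 8, 0.0)]
print(route(path))  # ('cloud', 17.0)
```

The example shows the problem the thesis targets: with both edge nodes busy, the data reaches the cloud only after paying every intermediate forwarding delay.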
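The context-aware routing idea, as summarized in the abstract, can likewise be sketched: keep a history of previously observed transmission latencies per node and prefer the nearest node that currently has resources available. All names here (`history`, `record`, `pick_node`) are illustrative assumptions, not the mechanism's actual implementation.

```python
# Hypothetical sketch of context-aware routing: a history of previous
# transmissions is used to find nearby nodes with available resources.
history = {}  # node name -> list of observed transmission latencies (ms)

def record(node, latency_ms):
    """Log one observed transmission latency for a node."""
    history.setdefault(node, []).append(latency_ms)

def pick_node(candidates):
    """candidates: {node name: free resource slots}. Choose the node with
    the lowest average observed latency among those with free resources."""
    available = [n for n, free in candidates.items()
                 if free > 0 and n in history]
    if not available:
        return None  # no known node with resources; caller must fall back
    return min(available, key=lambda n: sum(history[n]) / len(history[n]))

record("edge-a", 8.0); record("edge-a", 10.0)
record("edge-b", 3.0)
record("cloud", 40.0)
print(pick_node({"edge-a": 1, "edge-b": 0, "cloud": 5}))  # 'edge-a'
```

Note that `edge-b` is closest by history but is skipped because it has no free resources, so the history steers the data to the nearest node that can actually accept it.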