<div class="csl-bib-body">
<div class="csl-entry">Gackstatter, P., Frangoudis, P., & Dustdar, S. (2022). Pushing Serverless to the Edge with WebAssembly Runtimes. In M. Fazio, D. K. Panda, R. Prodan, V. Cardellini, B. Kantarci, O. Rana, & M. Villari (Eds.), <i>Proceedings of the 22nd IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGrid 2022)</i> (pp. 140–149). IEEE. https://doi.org/10.1109/CCGrid54584.2022.00023</div>
</div>
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/80548
-
dc.description.abstract
Serverless computing has become a popular part of the cloud computing model, thanks to abstracting away infrastructure management and enabling developers to write functions that auto-scale in a polyglot environment, while paying only for the compute time used. While this model is ideal for handling unpredictable and bursty workloads, cold-start latencies of hundreds of milliseconds or more still hinder its support for latency-critical IoT services, and may cancel out the latency benefits of proximity when serverless functions are deployed at the edge. Moreover, the CPU power and memory limitations that often characterize edge hosts drive latencies even higher. The root of the problem lies in the de facto runtime environments for serverless functions, namely container technologies such as Docker. A radical approach is thus to replace them with a more lightweight alternative. For this purpose, we examine WebAssembly's suitability for use as a serverless container runtime, with a focus on edge computing settings, and present the design and implementation of a WebAssembly-based runtime environment for serverless edge computing. WOW, our prototype for WebAssembly execution in Apache OpenWhisk, reduces cold-start latency by up to 99.5%, improves memory consumption by more than 5×, and increases function execution throughput by up to 4.2× on low-end edge computing equipment, compared to the standard Docker-based container runtime, across various serverless workloads.