Marchisio, A. (2023). Cross-Layer Optimizations for Energy-Efficiency and Robustness of Advanced Machine Learning Architectures [Dissertation, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2023.117496
Deep Neural Networks; Capsule Networks; Spiking Neural Networks; Neuromorphic Devices; Deep Learning Efficiency; Deep Learning Robustness; Hardware-Aware NAS
Abstract:
Machine Learning (ML) algorithms achieve high accuracy on many tasks, and ML-based applications are therefore widely deployed across systems and platforms. However, developing efficient ML-based systems requires addressing two fundamental research problems: energy efficiency and robustness. Current trends show the community's growing interest in complex ML models, such as Deep Neural Networks (DNNs), Capsule Networks (CapsNets), and Spiking Neural Networks (SNNs). Besides their high learning capabilities, their complexity poses several research challenges. State-of-the-art DNN accelerators typically optimize the execution of the most common layers and operations, but they fall short when executing more advanced ML architectures, such as CapsNets, which involve complex operations, or SNNs, which target a different computational substrate known as neuromorphic hardware. Moreover, multiple vulnerability aspects threaten the correct functionality of ML systems. It is therefore crucial to investigate security-oriented techniques for enhancing the robustness of such advanced ML architectures, which may offer distinctive resiliency properties under adverse conditions compared to traditional DNNs. Another critical limitation of state-of-the-art techniques is that they typically optimize for a single objective or a limited set of goals. In this regard, this thesis tackles the above challenges by exploiting the unique features of advanced ML models and investigates cross-layer concepts and techniques that combine hardware- and software-level methods to build robust and energy-efficient architectures for these advanced ML networks.
More specifically, this research improves the energy efficiency of complex models such as CapsNets through a specialized flow of hardware-level designs and software-level optimizations that exploits application-driven knowledge of these systems and their error tolerance via approximation and quantization. It also improves the robustness of ML models, in particular SNNs executed on neuromorphic hardware, leveraging their inherently cost-effective features. Moreover, it integrates multiple optimization objectives into specialized frameworks that jointly optimize the robustness and energy efficiency of these systems.
Further information:
Thesis not yet received at the library - data not verified. Title differs per the author's own translation.