Putra, R. V. W., & Shafique, M. (2023). TopSpark: A Timestep Optimization Methodology for Energy-Efficient Spiking Neural Networks on Autonomous Mobile Agents. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3561–3567). https://doi.org/10.1109/IROS55552.2023.10342499
E191-02 - Research Unit of Embedded Computing Systems
-
Published in:
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
-
ISBN:
978-1-6654-9190-7
-
Date (published):
2023
-
Event name:
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Event date:
1-Oct-2023 - 5-Oct-2023
-
Event place:
United States of America
-
Number of Pages:
7
-
Peer reviewed:
Yes
-
Keywords:
training; energy consumption; machine learning algorithms; mobile agents; neurons; energy efficiency; object recognition
Abstract:
Autonomous mobile agents (e.g., mobile ground robots and UAVs) typically require low-power/energy-efficient machine learning (ML) algorithms to complete their ML-based tasks (e.g., object recognition) while adapting to diverse environments, as mobile agents are usually powered by batteries. These requirements can be fulfilled by Spiking Neural Networks (SNNs), as they offer low-power/energy processing due to their sparse computations and efficient online learning with bio-inspired learning mechanisms for adapting to different environments. Recent works have shown that the energy consumption of SNNs can be optimized by reducing the computation time of each neuron for processing a sequence of spikes (i.e., the timestep). However, state-of-the-art techniques rely on intensive design searches to determine fixed timestep settings for the inference phase only, thereby hindering SNN systems from achieving further energy efficiency gains in both the training and inference phases. These techniques also restrict SNN systems from performing efficient online learning at run time. To this end, we propose TopSpark, a novel methodology that leverages adaptive timestep reduction to enable energy-efficient SNN processing in both the training and inference phases, while keeping accuracy close to that of SNNs without timestep reduction. The key ideas of TopSpark include: (1) analyzing the impact of different timestep settings on accuracy; (2) identifying neuron parameters that have a significant impact on accuracy across different timesteps; (3) employing parameter enhancements that enable SNNs to effectively perform learning and inference with the lower spiking activity caused by reduced timesteps; and (4) developing a strategy to trade off accuracy, latency, and energy to meet the design requirements.
The experimental results show that TopSpark reduces SNN latency by 3.9x and energy consumption by 3.5x for training and 3.3x for inference on average, across different network sizes, learning rules, and workloads, while maintaining accuracy within 2% of that of SNNs without timestep reduction. In this manner, TopSpark enables low-power/energy-efficient SNN processing for autonomous mobile agents.
Research Areas:
Computer Engineering and Software-Intensive Systems: 100%