TY - JOUR
T1 - H2Learn
T2 - High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks
AU - Liang, Ling
AU - Qu, Zheng
AU - Chen, Zhaodong
AU - Tu, Fengbin
AU - Wu, Yujie
AU - Deng, Lei
AU - Li, Guoqi
AU - Li, Peng
AU - Xie, Yuan
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2022/11/1
Y1 - 2022/11/1
N2 - Although spiking neural networks (SNNs) benefit from bioplausible neural modeling, their low accuracy under the common local synaptic plasticity learning rules limits their application in many practical tasks. Recently, an emerging SNN supervised learning algorithm inspired by backpropagation through time (BPTT) from the domain of artificial neural networks (ANNs) has successfully boosted the accuracy of SNNs and helped improve their practicability. However, current general-purpose processors suffer from low efficiency when performing BPTT for SNNs due to their ANN-tailored optimizations. On the other hand, current neuromorphic chips cannot support BPTT because they mainly adopt local synaptic plasticity rules for simplified implementation. In this work, we propose H2Learn, a novel architecture that achieves high efficiency for BPTT-based SNN learning while ensuring high SNN accuracy. We begin by characterizing the behaviors of BPTT-based SNN learning. First, benefiting from the binary spike-based computation in the forward pass and weight update, we design look-up table (LUT)-based processing elements in the forward engine and weight update engine to make accumulations implicit and to fuse the computations of multiple input points. Second, benefiting from the rich sparsity in the backward pass, we design a dual-sparsity-aware backward engine that exploits both input and output sparsity. Finally, we apply a pipeline optimization across the engines to build an end-to-end solution for BPTT-based SNN learning. Compared with the modern NVIDIA V100 GPU, H2Learn achieves 7.38× area saving, 5.74-10.20× speedup, and 5.25-7.12× energy saving on several benchmark datasets.
KW - Neuromorphic device
KW - spiking neural network (SNN)
KW - supervised training
UR - http://www.scopus.com/inward/record.url?scp=85122067441&partnerID=8YFLogxK
U2 - 10.1109/TCAD.2021.3138347
DO - 10.1109/TCAD.2021.3138347
M3 - Article
AN - SCOPUS:85122067441
SN - 0278-0070
VL - 41
SP - 4782
EP - 4796
JO - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
JF - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
IS - 11
ER -