TY - GEN
T1 - Neuromorphic hardware in the loop: Training a deep spiking network on the BrainScaleS wafer-scale system
T2 - 2017 International Joint Conference on Neural Networks
AU - Schmitt, Sebastian
AU - Klähn, Johann
AU - Bellec, Guillaume
AU - Grübl, Andreas
AU - Güttler, Maurice
AU - Hartel, Andreas
AU - Hartmann, Stephan
AU - Husmann, Dan
AU - Husmann, Kai
AU - Jeltsch, Sebastian
AU - Karasenko, Vitali
AU - Kleider, Mitja
AU - Koke, Christoph
AU - Kononov, Alexander
AU - Mauch, Christian
AU - Müller, Eric
AU - Müller, Paul
AU - Partzsch, Johannes
AU - Petrovici, Mihai A.
AU - Schiefer, Stefan
AU - Scholze, Stefan
AU - Thanasoulis, Vasilis
AU - Vogginger, Bernhard
AU - Legenstein, Robert
AU - Maass, Wolfgang
AU - Mayr, Christian
AU - Schüffny, René
AU - Schemmel, Johannes
AU - Meier, Karlheinz
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/6/30
Y1 - 2017/6/30
AB - Emulating spiking neural networks on analog neuromorphic hardware offers several advantages over simulating them on conventional computers, particularly in terms of speed and energy consumption. However, this usually comes at the cost of reduced control over the dynamics of the emulated networks. In this paper, we demonstrate how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate. We first convert a deep neural network trained in software to a spiking network on the BrainScaleS wafer-scale neuromorphic system, thereby enabling an acceleration factor of 10,000 compared to the biological time domain. This mapping is followed by in-the-loop training, where in each training step the network activity is first recorded on hardware and then used to compute the parameter updates in software via backpropagation. An essential finding is that the parameter updates do not have to be precise; they only need to approximately follow the correct gradient, which simplifies their computation. Using this approach, after only several tens of iterations the spiking network reaches an accuracy close to that of the ideal software-simulated prototype. The presented techniques show that deep spiking networks emulated on analog neuromorphic devices can attain good computational performance despite the inherent variations of the analog substrate.
UR - http://www.scopus.com/inward/record.url?scp=85031023416&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2017.7966125
DO - 10.1109/IJCNN.2017.7966125
M3 - Conference paper
AN - SCOPUS:85031023416
T3 - Proceedings of the International Joint Conference on Neural Networks
SP - 2227
EP - 2234
BT - 2017 International Joint Conference on Neural Networks, IJCNN 2017 - Proceedings
PB - Institute of Electrical and Electronics Engineers
Y2 - 14 May 2017 through 19 May 2017
ER -