Neuromorphic Hardware In The Loop: Training a Deep Spiking Network on the BrainScaleS Wafer-Scale System

Sebastian Schmitt, Johann Klähn, Guillaume Emmanuel Fernand Bellec, Andreas Grübl, Maurice Güttler, Andreas Hartl, Stephan Hartmann, Dan Husmann, Kai Husmann, Sebastian Jeltsch, Vitali Karasenko, Mitja Kleider, Christoph Koke, Alexander Kononov, Christian Mauch, Eric Müller, Paul Müller, Johannes Partzsch, Mihai Petrovici, Stefan Schiefer, Stefan Scholze, Vasilis Thanasoulis, Bernhard Vogginger, Robert Legenstein, Wolfgang Maass, Christian Mayr, Rene Schüffny, Johannes Schemmel & Karlheinz Meier

Research output: Contribution to journal › Article › Research

Abstract

Emulating spiking neural networks on analog neuromorphic hardware offers several advantages over simulating them on conventional computers, particularly in terms of speed and energy consumption. However, this usually comes at the cost of reduced control over the dynamics of the emulated networks. In this paper, we demonstrate how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate. We first convert a deep neural network trained in software to a spiking network on the BrainScaleS wafer-scale neuromorphic system, thereby enabling an acceleration factor of 10 000 compared to the biological time domain. This mapping is followed by in-the-loop training, in which, at each training step, the network activity is first recorded on hardware and then used to compute the parameter updates in software via backpropagation. An essential finding is that the parameter updates do not have to be precise, but only need to approximately follow the correct gradient, which simplifies their computation. Using this approach, after only several tens of iterations, the spiking network reaches an accuracy close to that of the ideal software prototype. The presented techniques show that deep spiking networks emulated on analog neuromorphic devices can attain good computational performance despite the inherent variations of the analog substrate.
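The in-the-loop scheme described in the abstract can be sketched in a few lines. The toy model below is a hypothetical illustration, not the paper's implementation: a forward pass with fixed multiplicative weight distortions stands in for the analog hardware, and the software side backpropagates through the recorded activity while remaining blind to those distortions, so its updates only approximately follow the true gradient. All names, shapes, and constants are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer rate network; shapes and constants are illustrative.
W1 = rng.normal(0.0, 0.5, (4, 8))
W2 = rng.normal(0.0, 0.5, (8, 3))

# Fixed multiplicative distortions stand in for analog device variations.
# The training loop never observes them directly.
D1 = 1.0 + 0.2 * rng.normal(size=W1.shape)
D2 = 1.0 + 0.2 * rng.normal(size=W2.shape)

def hardware_forward(x):
    """Stand-in for the hardware emulation: runs with distorted weights
    and returns the recorded activity of each layer."""
    h = np.maximum(0.0, x @ (W1 * D1))  # recorded hidden activity
    y = h @ (W2 * D2)                    # recorded output activity
    return h, y

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy classification data.
X = rng.normal(size=(64, 4))
labels = rng.integers(0, 3, size=64)
T = np.eye(3)[labels]

def train_step(lr=0.1):
    global W1, W2
    # 1) Run the network on the "hardware" and record its activity.
    h, y = hardware_forward(X)
    p = softmax(y)
    # 2) Backpropagate in software through the recorded activity.
    #    The distortions are unknown here, so these gradients only
    #    approximately follow the true gradient of the emulated network.
    d2 = (p - T) / len(X)
    d1 = (d2 @ W2.T) * (h > 0)
    # 3) Update the undistorted software parameters (to be written back).
    W2 -= lr * (h.T @ d2)
    W1 -= lr * (X.T @ d1)
    # Cross-entropy loss of the recorded hardware output.
    return -np.mean(np.log(p[np.arange(len(X)), labels] + 1e-12))

losses = [train_step() for _ in range(50)]
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Even though the update direction ignores the distortions entirely, it stays roughly aligned with the true gradient (the distortions are positive and close to one), so the loss still decreases over a few tens of iterations, mirroring the paper's central observation that approximate gradients suffice.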
Original language: English
Number of pages: 8
Journal: arXiv.org e-Print archive
Volume: arXiv:1703.01909
Publication status: Published - 17 Mar 2017

Fingerprint

Hardware
Substrates
Backpropagation
Energy utilization
Neural networks
Deep neural networks

Cite this

Schmitt, S., Klähn, J., Bellec, G. E. F., Grübl, A., Güttler, M., Hartl, A., ... Meier, K. (2017). Neuromorphic Hardware In The Loop: Training a Deep Spiking Network on the BrainScaleS Wafer-Scale System. arXiv.org e-Print archive, arXiv:1703.01909.

@article{4692cc42b3a840f0b8e35258fda73804,
title = "Neuromorphic Hardware In The Loop: Training a Deep Spiking Network on the BrainScaleS Wafer-Scale System",
author = "Sebastian Schmitt and Johann Kl{\"a}hn and Bellec, {Guillaume Emmanuel Fernand} and Andreas Gr{\"u}bl and Maurice G{\"u}ttler and Andreas Hartl and Stephan Hartmann and Dan Husmann and Kai Husmann and Sebastian Jeltsch and Vitali Karasenko and Mitja Kleider and Christoph Koke and Alexander Kononov and Christian Mauch and Eric M{\"u}ller and Paul M{\"u}ller and Johannes Partzsch and Mihai Petrovici and Stefan Schiefer and Stefan Scholze and Vasilis Thanasoulis and Bernhard Vogginger and Robert Legenstein and Wolfgang Maass and Christian Mayr and Rene Sch{\"u}ffny and Johannes Schemmel and Karlheinz Meier",
year = "2017",
month = "3",
day = "17",
language = "English",
volume = "arXiv:1703.01909",
journal = "arXiv.org e-Print archive",
publisher = "Cornell University Library",

}
