A solution to the learning dilemma for recurrent networks of spiking neurons

Guillaume Emmanuel Fernand Bellec, Franz Scherr, Anand Subramoney, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.
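The core idea behind e-prop, factorizing the BPTT gradient into locally computable eligibility traces multiplied by an online learning signal, can be illustrated with a deliberately simplified sketch. The sketch below is an assumption-laden toy, not the paper's implementation: it replaces spiking dynamics with a tanh nonlinearity, omits recurrent weights, uses a fixed random readout as the source of the learning signal, and all names and the toy task are illustrative.

```python
import numpy as np

# Toy sketch of online learning with eligibility traces in the spirit of
# e-prop. Spiking neurons are replaced by leaky tanh units and recurrent
# connections are omitted, so this is a simplification, not the paper's model.

rng = np.random.default_rng(0)
n_in, n_rec, n_out, T = 4, 8, 1, 50
alpha = 0.9                                   # membrane leak factor

W_in = rng.normal(0.0, 0.5, (n_rec, n_in))    # learned input weights
W_out = rng.normal(0.0, 0.5, (n_out, n_rec))  # fixed readout weights

def trial(x, y_target):
    """One trial: a forward pass that accumulates an e-prop-style
    gradient online from eligibility traces and a learning signal."""
    v = np.zeros(n_rec)           # membrane potentials
    eps = np.zeros(n_in)          # low-pass filtered presynaptic input
    grad = np.zeros_like(W_in)
    loss = 0.0
    for t in range(T):
        v = alpha * v + W_in @ x[t]   # leaky integration
        z = np.tanh(v)                # smooth stand-in for spiking output
        psi = 1.0 - z ** 2            # pseudo-derivative dz/dv
        eps = alpha * eps + x[t]      # per-synapse input trace
        e = np.outer(psi, eps)        # eligibility trace, local in time
        err = W_out @ z - y_target[t]
        L = W_out.T @ err             # learning signal broadcast to neurons
        grad += L[:, None] * e        # online gradient accumulation
        loss += 0.5 * float(err @ err)
    return grad, loss

# Toy usage: drive the readout toward zero on a fixed random input.
x = rng.normal(size=(T, n_in))
y_target = np.zeros((T, n_out))
losses = []
for _ in range(30):
    grad, loss = trial(x, y_target)
    W_in -= 1e-3 * grad
    losses.append(loss)
```

Because this toy network has no recurrent weights, the trace-based gradient here coincides with the true gradient; in the full recurrent spiking setting of the paper, the analogous factorization yields an approximation to the BPTT gradient that can be computed online.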

Original language: English
Article number: 3625
Journal: Nature Communications
Volume: 11
Issue number: 1
DOIs
Publication status: Published - 1 Dec 2020

ASJC Scopus subject areas

  • Physics and Astronomy (all)
  • Chemistry (all)
  • Biochemistry, Genetics and Molecular Biology (all)

