A solution to the learning dilemma for recurrent networks of spiking neurons

Research output: Contribution to journal › Article › Research

Abstract

Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. But in spite of extensive research, it has remained open how they can learn through synaptic plasticity to carry out complex network computations. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A new mathematical insight tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This new learning method -- called e-prop -- approaches the performance of BPTT (backpropagation through time), the best known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in novel energy-efficient spike-based hardware for AI.
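The paper's central result is that the loss gradient of a recurrent network factorizes into a learning signal broadcast to each neuron and an eligibility trace each synapse can compute forward in time. The following Python sketch illustrates that factorization for a single layer of leaky units. It is a minimal sketch only, not the authors' reference implementation: the tanh surrogate, the choice of feedback weights as the readout transpose (the "symmetric" variant), and all names and constants are illustrative assumptions.

import numpy as np

# Illustrative e-prop-style update (a sketch, not the authors' code):
# the gradient w.r.t. an input weight factorizes as
#   dE/dW_ji ~ sum_t L_j^t * e_ji^t,
# where e_ji^t = h_j^t * xbar_i^t is a local eligibility trace and
# L_j^t is a learning signal broadcast from the output error.

rng = np.random.default_rng(0)
n_in, n_rec, n_out, T = 5, 10, 2, 100
alpha, lr = 0.9, 1e-3                        # membrane decay, learning rate

W_in = rng.normal(0.0, 0.3, (n_rec, n_in))   # trained input weights
W_out = rng.normal(0.0, 0.3, (n_out, n_rec)) # fixed readout weights
B = W_out.T                                  # feedback ("symmetric" variant)

v = np.zeros(n_rec)                          # membrane potentials
xbar = np.zeros(n_in)                        # filtered presynaptic activity
grad = np.zeros_like(W_in)

x = rng.random((T, n_in))                    # toy input
y_target = rng.random((T, n_out))            # toy regression target

for t in range(T):
    v = alpha * v + W_in @ x[t]              # leaky integration
    z = np.tanh(v)                           # smooth stand-in for spikes
    h = 1.0 - z**2                           # pseudo-derivative dz/dv
    xbar = alpha * xbar + x[t]               # low-pass filter of inputs
    e = np.outer(h, xbar)                    # eligibility traces e_ji^t
    L = B @ (W_out @ z - y_target[t])        # learning signal L_j^t
    grad += L[:, None] * e                   # accumulate gradient online

W_in -= lr * grad                            # one gradient-descent step

Everything the update needs is either local and computed forward in time (the trace) or arrives as a broadcast error (the signal), which is what makes the rule a candidate for biologically plausible and on-chip learning, in contrast to BPTT's backward pass through the full activity history.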
Original language: English
Number of pages: 31
Journal: bioRxiv - the Preprint Server for Biology
Volume: 2019
DOI: 10.1101/738385
Publication status: Published - 9 Dec 2019

Fingerprint

Recurrent neural networks
Reinforcement learning
Complex networks
Backpropagation
Neurons
Plasticity
Learning systems
Brain
Hardware

Cite this

@article{3764fa80ddb74dad9c9bbe0d84be2976,
title = "A solution to the learning dilemma for recurrent networks of spiking neurons",
abstract = "Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. But in spite of extensive research, it has remained open how they can learn through synaptic plasticity to carry out complex network computations. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A new mathematical insight tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This new learning method -- called e-prop -- approaches the performance of BPTT (backpropagation through time), the best known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in novel energy-efficient spike-based hardware for AI.",
author = "Guillaume Bellec and Franz Scherr and Anand Subramoney and Elias Hajek and Darjan Salaj and Robert Legenstein and Wolfgang Maass",
year = "2019",
month = "12",
day = "9",
doi = "10.1101/738385",
language = "English",
volume = "2019",
journal = "bioRxiv - the Preprint Server for Biology",
publisher = "Cold Spring Harbor Laboratory Press",
}
