Distributed Bayesian computation and self-organized learning in sheets of spiking neurons with local lateral inhibition

Johannes Bill, Lars Holger Büsing, Stefan Habenschuss, Bernhard Nessler, Wolfgang Maass, Robert Legenstein

Research output: Contribution to journal › Article › peer-review

Abstract

During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning, and learning in mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model that accommodates both local competitive and large-scale associative aspects of neural information processing under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity at the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which correspond to Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.
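The abstract describes inference as sampling under local lateral inhibition and learning rules that reduce to Hebbian synaptic and homeostatic intrinsic plasticity. As a rough illustration of that flavour only, the Python sketch below uses a soft winner-take-all circuit (a softmax standing in for local lateral inhibition) with a simplified competitive Hebbian weight update and a homeostatic bias update. All sizes, learning rates, and rule details are illustrative assumptions and do not reproduce the variational derivation in the paper.

```python
import numpy as np

# Toy sketch only: a soft winner-take-all circuit of n_k "local experts"
# driven by n_in afferent inputs, with a simplified Hebbian weight update
# and a homeostatic bias update. Sizes, rates, and rule details are
# illustrative assumptions, not the model derived in the paper.
rng = np.random.default_rng(0)
n_in, n_k = 20, 5
eta_w, eta_b = 0.05, 0.01                     # learning rates (assumed)

W = rng.normal(0.0, 0.1, size=(n_k, n_in))    # afferent weights
b = np.zeros(n_k)                             # intrinsic excitabilities

def sample_expert(W, b, x, rng):
    """Sample one spiking expert; softmax stands in for local lateral inhibition."""
    u = b + W @ x                             # membrane potentials
    p = np.exp(u - u.max())
    p /= p.sum()
    return rng.choice(len(b), p=p)

def plasticity(W, b, x, k):
    """Hebbian-like update for the winner; homeostatic drift of all biases."""
    W[k] += eta_w * (x - W[k])                # move winner's weights toward the input
    z = np.zeros(len(b)); z[k] = 1.0
    b += eta_b * (1.0 / len(b) - z)           # keep long-run firing rates balanced

# Drive the circuit with two noisy binary input prototypes
prototypes = rng.integers(0, 2, size=(2, n_in)).astype(float)
for _ in range(2000):
    x = np.clip(prototypes[rng.integers(2)] + rng.normal(0.0, 0.2, n_in), 0.0, 1.0)
    k = sample_expert(W, b, x, rng)
    plasticity(W, b, x, k)

print(np.round(W, 2))                         # weight rows roughly match the prototypes
```

In this toy setting, each expert's weight row drifts toward one of the input prototypes while the bias update keeps activations balanced, loosely mirroring how the abstract's probabilistic local experts become tuned to salient input features.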
Original language: English
Pages (from-to): e0134356-e0134356
Journal: PLoS ONE
Volume: 10
Issue number: 8
DOIs
Publication status: Published - 2015

Fields of Expertise

  • Information, Communication & Computing

Treatment code (detailed classification)

  • Theoretical
