Reservoirs learn to learn

Anand Subramoney, Franz Scherr, Wolfgang Maass

Publication: Working paper › Preprint

Abstract

We consider reservoirs in the form of liquid state machines, i.e., recurrently connected networks of spiking neurons with randomly chosen weights. So far only the weights of a linear readout were adapted for a specific task. We wondered whether the performance of liquid state machines can be improved if the recurrent weights are chosen with a purpose, rather than randomly. After all, weights of recurrent connections in the brain are also not assumed to be randomly chosen. Rather, these weights were probably optimized during evolution, development, and prior learning experiences for specific task domains. In order to examine the benefits of choosing recurrent weights within a liquid with a purpose, we applied the Learning-to-Learn (L2L) paradigm to our model: We optimized the weights of the recurrent connections -- and hence the dynamics of the liquid state machine -- for a large family of potential learning tasks, which the network might have to learn later through modification of the weights of readout neurons. We found that this two-tiered process substantially improves the learning speed of liquid state machines for specific tasks. In fact, this learning speed increases further if one does not train the weights of linear readouts at all, and relies instead on the internal dynamics and fading memory of the network for remembering salient information that it could extract from preceding examples for the current learning task. This second type of learning has recently been proposed to underlie fast learning in the prefrontal cortex and motor cortex, and hence it is of interest to explore its performance also in models. Since liquid state machines share many properties with other types of reservoirs, our results raise the question whether L2L conveys similar benefits also to these other reservoirs.
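The two-tiered scheme described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: it substitutes a rate-based (tanh) reservoir for the spiking liquid state machine, a ridge-regression readout for the inner-loop learner, a toy family of linear regression tasks, and plain random search for the outer-loop optimization of the recurrent weights. All names and parameters are hypothetical.

# Hypothetical sketch of the two-tiered Learning-to-Learn setup described in the
# abstract. Assumptions (not from the paper): rate-based reservoir instead of
# spiking neurons, ridge-regression readout as the inner-loop learner, a toy
# family of linear regression tasks, and random search as the outer-loop optimizer.
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 50          # reservoir size, time steps per task episode

def run_reservoir(W_rec, W_in, inputs):
    """Simulate the reservoir and return its state at every time step."""
    x = np.zeros(N)
    states = []
    for u in inputs:
        x = np.tanh(W_rec @ x + W_in @ u)   # spiking dynamics simplified to tanh units
        states.append(x.copy())
    return np.stack(states)

def sample_task():
    """Draw one task from the family: here, a random linear target on the input."""
    w_target = rng.normal(size=2)
    inputs = rng.normal(size=(T, 2))
    targets = inputs @ w_target
    return inputs, targets

def inner_loop_loss(W_rec, W_in, n_tasks=10):
    """Inner loop: for each task, fit only a linear readout and measure test error."""
    losses = []
    for _ in range(n_tasks):
        inputs, targets = sample_task()
        S = run_reservoir(W_rec, W_in, inputs)
        half = T // 2
        # Fit the readout on the first half of the episode (ridge regression) ...
        w_out = np.linalg.solve(S[:half].T @ S[:half] + 1e-3 * np.eye(N),
                                S[:half].T @ targets[:half])
        # ... and evaluate it on the second half.
        losses.append(np.mean((S[half:] @ w_out - targets[half:]) ** 2))
    return np.mean(losses)

# Outer loop: adapt the recurrent weights for the whole task family, so that the
# inner-loop readout learns each individual task faster. Random search here is a
# stand-in for whatever optimizer one would actually use.
W_in = rng.normal(size=(N, 2)) / np.sqrt(2)
W_rec = rng.normal(size=(N, N)) / np.sqrt(N)
best = inner_loop_loss(W_rec, W_in)
for step in range(200):
    candidate = W_rec + 0.02 * rng.normal(size=(N, N)) / np.sqrt(N)
    loss = inner_loop_loss(candidate, W_in)
    if loss < best:
        W_rec, best = candidate, loss
print("outer-loop loss after search:", best)

The second learning mode mentioned in the abstract would correspond to dropping the readout fit entirely and letting the optimized recurrent dynamics carry task information across examples; that variant is not shown here.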
Original language: English
Pages: 1-14
Number of pages: 14
Publication status: Published - 16 Sept 2019

Publication series

Name: arXiv.org e-Print archive
Publisher: Cornell University Library
