Scaling up liquid state machines to predict over address events from dynamic vision sensors

Jacques Kaiser, Rainer Stal, Anand Subramoney, Arne Roennau, Rüdiger Dillmann

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently. In theoretical neuroscience, liquid state machines have been proposed as a biologically inspired method to perform asynchronous prediction without a model. However, they have so far only been demonstrated in simulation or on small-scale, pre-processed camera images. In this paper, we use a liquid state machine to predict over the whole $128\times 128$ event stream provided by a real dynamic vision sensor (DVS, or silicon retina). Thanks to the event-based nature of the DVS, the liquid is constantly fed with data when an object is in motion, fully embracing the asynchronicity of spiking neural networks. We propose a smooth continuous representation of the event stream for the short-term visual prediction task. Moreover, compared to previous works (2002 Neural Comput. 2525 282–93 and Burgsteiner H et al 2007 Appl. Intell. 26 99–109), we scale the input dimensionality that the liquid operates on by two orders of magnitude. We also expose the current limits of our method by running experiments in a challenging environment where multiple objects are in motion. This paper is a step towards integrating biologically inspired algorithms derived in theoretical neuroscience into real-world robotic setups. We believe that liquid state machines could complement current prediction algorithms used in robotics, especially when dealing with asynchronous sensors.
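The "smooth continuous representation of the event stream" mentioned in the abstract is not detailed on this record page. One common way to build such a representation from DVS address events is an exponentially decaying event frame, in which each event's contribution fades with its age. The sketch below is illustrative only; the function name, the `tau` time constant, and the toy event tuples are assumptions for demonstration, not the paper's implementation:

```python
import numpy as np

def decayed_event_frame(events, t_now, tau=0.1, size=(128, 128)):
    """Accumulate DVS address events (x, y, t, polarity) into a frame.

    Each event contributes +1 (ON) or -1 (OFF) at its pixel, scaled by
    an exponential decay exp(-age / tau), so recent events dominate and
    the frame varies smoothly as events stream in.
    """
    frame = np.zeros(size)
    for x, y, t, polarity in events:
        age = t_now - t
        if age >= 0:  # ignore events from the future
            frame[y, x] += (1.0 if polarity else -1.0) * np.exp(-age / tau)
    return frame

# Hypothetical toy stream: two ON events at one pixel, one OFF event elsewhere.
events = [(10, 20, 0.00, 1), (10, 20, 0.05, 1), (64, 64, 0.08, 0)]
frame = decayed_event_frame(events, t_now=0.10)
```

A dense, continuously varying frame like this can then serve as the target signal that a linear readout of the liquid is trained to predict a short time into the future.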
Original language: English
Journal: Bioinspiration & Biomimetics
Volume: 12
Issue number: 5
DOI: 10.1088/1748-3190/aa7663
Publication status: Published - 1 Sep 2017

Fingerprint

Robotics
Neurosciences
Sensors
Liquids
Silicon
Retina
Cameras
Neural networks
Experiments

Cite this

Scaling up liquid state machines to predict over address events from dynamic vision sensors. / Kaiser, Jacques; Stal, Rainer; Subramoney, Anand; Roennau, Arne; Dillmann, Rüdiger.

In: Bioinspiration & Biomimetics, Vol. 12, No. 5, 01.09.2017.


Kaiser, Jacques; Stal, Rainer; Subramoney, Anand; Roennau, Arne; Dillmann, Rüdiger. Scaling up liquid state machines to predict over address events from dynamic vision sensors. In: Bioinspiration & Biomimetics. 2017; Vol. 12, No. 5.
@article{9fe807e027474cdbaab6a2d708dd7ef6,
title = "Scaling up liquid state machines to predict over address events from dynamic vision sensors",
abstract = "Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently. In theoretical neuroscience, liquid state machines have been proposed as a biologically inspired method to perform asynchronous prediction without a model. However, they have so far only been demonstrated in simulation or on small-scale, pre-processed camera images. In this paper, we use a liquid state machine to predict over the whole $128\times 128$ event stream provided by a real dynamic vision sensor (DVS, or silicon retina). Thanks to the event-based nature of the DVS, the liquid is constantly fed with data when an object is in motion, fully embracing the asynchronicity of spiking neural networks. We propose a smooth continuous representation of the event stream for the short-term visual prediction task. Moreover, compared to previous works (2002 Neural Comput. 2525 282–93 and Burgsteiner H et al 2007 Appl. Intell. 26 99–109), we scale the input dimensionality that the liquid operates on by two orders of magnitude. We also expose the current limits of our method by running experiments in a challenging environment where multiple objects are in motion. This paper is a step towards integrating biologically inspired algorithms derived in theoretical neuroscience into real-world robotic setups. We believe that liquid state machines could complement current prediction algorithms used in robotics, especially when dealing with asynchronous sensors.",
author = "Jacques Kaiser and Rainer Stal and Anand Subramoney and Arne Roennau and R{\"u}diger Dillmann",
year = "2017",
month = "9",
day = "1",
doi = "10.1088/1748-3190/aa7663",
language = "English",
volume = "12",
journal = "Bioinspiration & Biomimetics",
issn = "1748-3182",
publisher = "IOP Publishing Ltd.",
number = "5",

}

TY - JOUR

T1 - Scaling up liquid state machines to predict over address events from dynamic vision sensors

AU - Kaiser, Jacques

AU - Stal, Rainer

AU - Subramoney, Anand

AU - Roennau, Arne

AU - Dillmann, Rüdiger

PY - 2017/9/1

Y1 - 2017/9/1

N2 - Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently. In theoretical neuroscience, liquid state machines have been proposed as a biologically inspired method to perform asynchronous prediction without a model. However, they have so far only been demonstrated in simulation or on small-scale, pre-processed camera images. In this paper, we use a liquid state machine to predict over the whole $128\times 128$ event stream provided by a real dynamic vision sensor (DVS, or silicon retina). Thanks to the event-based nature of the DVS, the liquid is constantly fed with data when an object is in motion, fully embracing the asynchronicity of spiking neural networks. We propose a smooth continuous representation of the event stream for the short-term visual prediction task. Moreover, compared to previous works (2002 Neural Comput. 2525 282–93 and Burgsteiner H et al 2007 Appl. Intell. 26 99–109), we scale the input dimensionality that the liquid operates on by two orders of magnitude. We also expose the current limits of our method by running experiments in a challenging environment where multiple objects are in motion. This paper is a step towards integrating biologically inspired algorithms derived in theoretical neuroscience into real-world robotic setups. We believe that liquid state machines could complement current prediction algorithms used in robotics, especially when dealing with asynchronous sensors.

AB - Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently. In theoretical neuroscience, liquid state machines have been proposed as a biologically inspired method to perform asynchronous prediction without a model. However, they have so far only been demonstrated in simulation or on small-scale, pre-processed camera images. In this paper, we use a liquid state machine to predict over the whole $128\times 128$ event stream provided by a real dynamic vision sensor (DVS, or silicon retina). Thanks to the event-based nature of the DVS, the liquid is constantly fed with data when an object is in motion, fully embracing the asynchronicity of spiking neural networks. We propose a smooth continuous representation of the event stream for the short-term visual prediction task. Moreover, compared to previous works (2002 Neural Comput. 2525 282–93 and Burgsteiner H et al 2007 Appl. Intell. 26 99–109), we scale the input dimensionality that the liquid operates on by two orders of magnitude. We also expose the current limits of our method by running experiments in a challenging environment where multiple objects are in motion. This paper is a step towards integrating biologically inspired algorithms derived in theoretical neuroscience into real-world robotic setups. We believe that liquid state machines could complement current prediction algorithms used in robotics, especially when dealing with asynchronous sensors.

U2 - 10.1088/1748-3190/aa7663

DO - 10.1088/1748-3190/aa7663

M3 - Article

VL - 12

JO - Bioinspiration & Biomimetics

JF - Bioinspiration & Biomimetics

SN - 1748-3182

IS - 5

ER -