Spike-based neuromorphic hardware is a promising option for reducing the energy consumption of image classification, and more generally of inference in large neural networks that have been trained by deep learning. A drastic reduction of this energy consumption is especially needed for deploying state-of-the-art deep learning models on edge devices. However, direct training of deep feedforward spiking neural networks is difficult, and previous methods for converting trained artificial neural networks to spiking neurons required too many spikes. We show that a substantially more efficient conversion from artificial neural networks to spike-based networks is possible if one optimizes the spiking neuron model for that purpose and enables it to use the timing of spikes to encode information. This method allows us to significantly advance the accuracy that can be achieved for image classification with spiking neurons, and the resulting networks need on average just two spikes per neuron to classify an image. In addition, our new conversion method drastically improves the latency and throughput of the resulting spiking networks.
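The abstract does not spell out the coding scheme, but the idea of encoding an activation value in spike timing can be illustrated with a hypothetical binary temporal code: a spike at time step t contributes weight 2^-(t+1), so a few well-timed spikes suffice to approximate a continuous activation. This is only a sketch of the general principle, not the paper's actual neuron model; the function names and parameters below are illustrative assumptions.

```python
import numpy as np

def encode_few_spikes(value, num_steps=8):
    """Hypothetical temporal code: greedily place spikes so that a
    spike at step t contributes 2^-(t+1) to the decoded value.
    Values in [0, 1) are approximated with few spikes."""
    spikes = np.zeros(num_steps, dtype=bool)
    remainder = value
    for t in range(num_steps):
        w = 2.0 ** -(t + 1)
        if remainder >= w:
            spikes[t] = True
            remainder -= w
    return spikes

def decode_few_spikes(spikes):
    """Recover the encoded value from the spike train."""
    weights = 2.0 ** -(np.arange(len(spikes)) + 1.0)
    return float(np.dot(spikes, weights))

# 0.625 = 1/2 + 1/8, so exactly two spikes are needed.
s = encode_few_spikes(0.625)
print(int(s.sum()), decode_few_spikes(s))
```

With such a code, an activation that happens to be a short dyadic sum is transmitted with only a couple of spikes, which mirrors the abstract's point that timing-based codes need far fewer spikes than rate-based ones.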
Publication status: Published - 31 Jan 2020
Name: arXiv.org e-Print archive
Publisher: Cornell University Library