Analog neuromorphic hardware enables highly accelerated and, in terms of dissipated power, efficient emulation of spiking neural networks (SNNs) compared to simulations on conventional computers. This thesis presents a supervised learning algorithm for hardware-emulated feedforward SNNs whose synaptic efficacies have limited resolution. Training proceeds in conjunction with an abstract software model of the network, which computes the parameter updates from the recorded network activity of the SNN. Despite the noise inherent in analog circuitry, the algorithm was successfully tested on two classification problems, the classical XOR problem and a subset of the MNIST dataset, achieving good classification performance in both cases. As shown in a software simulation of the SNNs, the performance could be improved further if the hardware variations were on a smaller scale.
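The in-the-loop scheme summarized above can be sketched as follows. This is a minimal illustration only, not the thesis's actual implementation: the rate-based stand-in for the analog substrate, the noise magnitude, the 4-bit weight resolution, and the gradient rule are all assumptions introduced here. The key idea it demonstrates is keeping high-precision shadow weights in software, writing quantized copies to the (noisy) hardware, and computing updates from the recorded activity.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits=4, w_max=1.0):
    """Clip and round weights to a limited synaptic resolution (assumed 4 bits)."""
    levels = 2 ** bits - 1
    w = np.clip(w, -w_max, w_max)
    return np.round((w + w_max) / (2 * w_max) * levels) / levels * (2 * w_max) - w_max

def hardware_forward(x, w_q, noise=0.05):
    """Hypothetical stand-in for the analog substrate: a noisy rate response."""
    return 1.0 / (1.0 + np.exp(-(x @ w_q + noise * rng.standard_normal(w_q.shape[1]))))

# XOR, one of the two tasks mentioned in the abstract (with a bias column)
X = np.hstack([np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float), np.ones((4, 1))])
y = np.array([0, 1, 1, 0], float)

# high-precision shadow weights live in software; the hardware sees quantized copies
W1 = rng.standard_normal((3, 4)) * 0.5
W2 = rng.standard_normal((4, 1)) * 0.5

for step in range(5000):
    h = hardware_forward(X, quantize(W1))            # recorded hidden activity
    out = hardware_forward(h, quantize(W2))[:, 0]    # recorded output activity
    err = out - y
    # the software model computes gradient-style updates from the recorded activity
    d_out = err * out * (1 - out)
    d_h = (d_out[:, None] @ quantize(W2).T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out[:, None]
    W1 -= 0.5 * X.T @ d_h

pred = hardware_forward(hardware_forward(X, quantize(W1)), quantize(W2))[:, 0]
```

The design point is that updates accumulate in the full-precision shadow weights, so the limited resolution only affects the forward (hardware) pass; the same structure applies when the forward pass runs on the actual chip instead of the noisy software stand-in used here.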
Publication status: Published - 22 Feb 2017