Deep Neural Networks have been shown to be beneficial for a variety of tasks, in particular allowing for end-to-end learning and reducing the requirement for manual design decisions. However, many parameters still have to be chosen manually in advance, raising the need to optimize them. One important, but often ignored, parameter is the selection of a proper activation function. In this paper, we tackle this problem by learning task-specific activation functions using ideas from genetic programming. We propose to construct piece-wise activation functions (for the negative and the positive part) and introduce new genetic operators to combine functions more efficiently. The experimental results for multi-class classification demonstrate that specific activation functions are learned for different tasks, also outperforming widely used generic baselines.
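The piece-wise construction described in the abstract can be illustrated with a minimal sketch: one candidate function is applied to negative inputs and another to positive inputs. The particular choices below (tanh on the negative part, identity on the positive part) are purely illustrative assumptions, not the functions learned in the paper.

```python
import math

def piecewise_activation(x, neg_fn=math.tanh, pos_fn=lambda v: v):
    """Piece-wise activation: apply neg_fn to the negative part of the
    input and pos_fn to the non-negative part. The defaults (tanh /
    identity) are only an illustrative example, not the learned result."""
    return neg_fn(x) if x < 0 else pos_fn(x)

# With these example choices, the result resembles a smoothed ReLU-like unit.
outputs = [piecewise_activation(v) for v in [-2.0, -0.5, 0.0, 1.0, 3.0]]
```

In a genetic-programming setting, `neg_fn` and `pos_fn` would be candidate expressions evolved and recombined per task rather than fixed choices.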
|Title of host publication||Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications|
|Publication status||Published - 2019|
|Event||14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications: VISIGRAPP 2019 - Prague, Czech Republic|
Duration: 25 Feb 2019 → 27 Feb 2019
|Conference||14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications|
|Abbreviated title||VISAPP 2019|
|Period||25/02/19 → 27/02/19|