Learning task-specific activation functions using genetic programming

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review

Abstract

Deep Neural Networks have been shown to be beneficial for a variety of tasks, in particular allowing for end-to-end learning and reducing the requirement for manual design decisions. However, many parameters still have to be chosen manually in advance, which also raises the need to optimize them. One important, but often ignored, parameter is the selection of a proper activation function. In this paper, we tackle this problem by learning task-specific activation functions using ideas from genetic programming. We propose to construct piece-wise activation functions (for the negative and the positive part) and introduce new genetic operators to combine functions in a more efficient way. The experimental results for multi-class classification demonstrate that specific activation functions are learned for different tasks, also outperforming widely used generic baselines.
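The abstract's core idea of assembling a piece-wise activation from separate candidate functions for the negative and positive input ranges can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the candidate pool and function names are hypothetical, and the genetic search over combinations is omitted.

```python
import numpy as np

# Hypothetical pool of elementary unary functions that a genetic-programming
# search might draw from when composing activation functions.
CANDIDATES = {
    "identity": lambda x: x,
    "tanh": np.tanh,
    "relu": lambda x: np.maximum(x, 0.0),
    "exp_minus_one": lambda x: np.exp(x) - 1.0,
}

def piecewise_activation(neg_name, pos_name):
    """Build f(x) applying one candidate for x < 0 and another for x >= 0."""
    f_neg, f_pos = CANDIDATES[neg_name], CANDIDATES[pos_name]
    def f(x):
        x = np.asarray(x, dtype=float)
        # np.where evaluates both branches, then selects element-wise.
        return np.where(x < 0, f_neg(x), f_pos(x))
    return f

# Example: an ELU-like activation from (exp(x) - 1) on the negative side
# and the identity on the positive side.
elu_like = piecewise_activation("exp_minus_one", "identity")
print(elu_like([-1.0, 0.0, 2.0]))
```

A GP search would then mutate and recombine such (negative, positive) pairs and score each candidate by validation accuracy on the target task.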
Original language: English
Title of host publication: Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Publisher: SciTePress
Pages: 533-540
Volume: 5
ISBN (Print): 978-989-758-354-4
DOIs: 10.5220/0007408205330540
Publication status: Published - 2019
Event: 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Prague, Czech Republic
Duration: 25 Feb 2019 - 27 Feb 2019

Conference

Conference: 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Abbreviated title: VISAPP 2019
Country: Czech Republic
City: Prague
Period: 25/02/19 - 27/02/19


Cite this

Basirat, M., & Roth, P. M. (2019). Learning task-specific activation functions using genetic programming. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Vol. 5, pp. 533-540). SciTePress. https://doi.org/10.5220/0007408205330540

Learning task-specific activation functions using genetic programming. / Basirat, Mina; Roth, Peter M.

Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. Vol. 5. SciTePress, 2019. p. 533-540.


Basirat, M & Roth, PM 2019, Learning task-specific activation functions using genetic programming. in Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. vol. 5, SciTePress, pp. 533-540, 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, 25/02/19. https://doi.org/10.5220/0007408205330540
Basirat M, Roth PM. Learning task-specific activation functions using genetic programming. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. Vol. 5. SciTePress. 2019. p. 533-540. https://doi.org/10.5220/0007408205330540
Basirat, Mina ; Roth, Peter M. / Learning task-specific activation functions using genetic programming. Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. Vol. 5. SciTePress, 2019. pp. 533-540.
@inproceedings{446a87ecfdec41f481afbddf3f45fa1b,
title = "Learning task-specific activation functions using genetic programming",
abstract = "Deep Neural Networks have been shown to be beneficial for a variety of tasks, in particular allowing for end-to-end learning and reducing the requirement for manual design decisions. However, many parameters still have to be chosen manually in advance, which also raises the need to optimize them. One important, but often ignored, parameter is the selection of a proper activation function. In this paper, we tackle this problem by learning task-specific activation functions using ideas from genetic programming. We propose to construct piece-wise activation functions (for the negative and the positive part) and introduce new genetic operators to combine functions in a more efficient way. The experimental results for multi-class classification demonstrate that specific activation functions are learned for different tasks, also outperforming widely used generic baselines.",
author = "Mina Basirat and Roth, {Peter M.}",
year = "2019",
doi = "10.5220/0007408205330540",
language = "English",
isbn = "978-989-758-354-4",
volume = "5",
pages = "533--540",
booktitle = "Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications",
publisher = "SciTePress",
address = "Portugal",

}

TY - GEN

T1 - Learning task-specific activation functions using genetic programming

AU - Basirat, Mina

AU - Roth, Peter M.

PY - 2019

Y1 - 2019

N2 - Deep Neural Networks have been shown to be beneficial for a variety of tasks, in particular allowing for end-to-end learning and reducing the requirement for manual design decisions. However, many parameters still have to be chosen manually in advance, which also raises the need to optimize them. One important, but often ignored, parameter is the selection of a proper activation function. In this paper, we tackle this problem by learning task-specific activation functions using ideas from genetic programming. We propose to construct piece-wise activation functions (for the negative and the positive part) and introduce new genetic operators to combine functions in a more efficient way. The experimental results for multi-class classification demonstrate that specific activation functions are learned for different tasks, also outperforming widely used generic baselines.

AB - Deep Neural Networks have been shown to be beneficial for a variety of tasks, in particular allowing for end-to-end learning and reducing the requirement for manual design decisions. However, many parameters still have to be chosen manually in advance, which also raises the need to optimize them. One important, but often ignored, parameter is the selection of a proper activation function. In this paper, we tackle this problem by learning task-specific activation functions using ideas from genetic programming. We propose to construct piece-wise activation functions (for the negative and the positive part) and introduce new genetic operators to combine functions in a more efficient way. The experimental results for multi-class classification demonstrate that specific activation functions are learned for different tasks, also outperforming widely used generic baselines.

U2 - 10.5220/0007408205330540

DO - 10.5220/0007408205330540

M3 - Conference contribution

SN - 978-989-758-354-4

VL - 5

SP - 533

EP - 540

BT - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications

PB - SciTePress

ER -