Learning task-specific activation functions using genetic programming

Publication: Contribution to book/report/conference proceedings › Conference paper › Research › Peer-reviewed

Abstract

Deep Neural Networks have been shown to be beneficial for a variety of tasks, in particular allowing for end-to-end learning and reducing the need for manual design decisions. However, many parameters still have to be chosen manually in advance, which also raises the need to optimize them. One important but often ignored parameter is the selection of a proper activation function. In this paper, we tackle this problem by learning task-specific activation functions using ideas from genetic programming. We propose to construct piece-wise activation functions (for the negative and the positive part) and introduce new genetic operators to combine functions in a more efficient way. The experimental results for multi-class classification demonstrate that specific activation functions are learned for different tasks, also outperforming widely used generic baselines.
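The abstract only sketches the approach, so a small, self-contained Python toy may help illustrate the general idea: a piece-wise activation function built from one primitive for negative inputs and one for positive inputs, evolved with simple crossover and mutation operators. The primitive set, the operators, and the proxy fitness below are illustrative assumptions for this sketch, not the authors' actual method, where fitness would presumably be the validation performance of a network trained with the candidate activation.

# Toy sketch of evolving piece-wise activation functions (illustrative only).
import random
import numpy as np

# Candidate unary primitives used to build each half of the activation function.
# This primitive set is an assumption for the sketch, not the paper's set.
PRIMITIVES = {
    "identity": lambda x: x,
    "tanh": np.tanh,
    "elu": lambda x: np.where(x > 0, x, np.exp(x) - 1.0),
    "sin": np.sin,
    "zero": lambda x: np.zeros_like(x),
}

class PiecewiseActivation:
    """Activation defined by one primitive for x < 0 and another for x >= 0."""
    def __init__(self, neg_name, pos_name):
        self.neg_name, self.pos_name = neg_name, pos_name

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        return np.where(x < 0,
                        PRIMITIVES[self.neg_name](x),
                        PRIMITIVES[self.pos_name](x))

def crossover(a, b):
    # Combine two parents by swapping their negative/positive halves.
    return PiecewiseActivation(a.neg_name, b.pos_name)

def mutate(a, rate=0.3):
    # Randomly replace either half with another primitive.
    neg = random.choice(list(PRIMITIVES)) if random.random() < rate else a.neg_name
    pos = random.choice(list(PRIMITIVES)) if random.random() < rate else a.pos_name
    return PiecewiseActivation(neg, pos)

def fitness(act):
    # Placeholder fitness for the sketch: similarity to ReLU on a grid.
    # In the paper's setting this would instead involve training/evaluating a network.
    x = np.linspace(-3, 3, 201)
    return -np.mean((act(x) - np.maximum(x, 0)) ** 2)

def evolve(pop_size=10, generations=20):
    pop = [PiecewiseActivation(random.choice(list(PRIMITIVES)),
                               random.choice(list(PRIMITIVES)))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("Best piece-wise activation:", best.neg_name, "/", best.pos_name)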
Original language: English
Title: Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Publisher: SciTePress
Pages: 533-540
Volume: 5
ISBN (Print): 978-989-758-354-4
DOI: 10.5220/0007408205330540
Publication status: Published - 2019
Event: 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Prague, Czech Republic
Duration: 25 Feb 2019 - 27 Feb 2019

Conference

Conference: 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Short title: VISAPP 2019
Country: Czech Republic
City: Prague
Period: 25/02/19 - 27/02/19

Cite this

Basirat, M., & Roth, P. M. (2019). Learning task-specific activation functions using genetic programming. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Vol. 5, pp. 533-540). SciTePress. https://doi.org/10.5220/0007408205330540

Learning task-specific activation functions using genetic programming. / Basirat, Mina; Roth, Peter M.

Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. Vol. 5, SciTePress, 2019. pp. 533-540.

Basirat, M & Roth, PM 2019, Learning task-specific activation functions using genetic programming. in Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. vol. 5, SciTePress, pp. 533-540, Prague, Czech Republic, 25/02/19. https://doi.org/10.5220/0007408205330540
Basirat M, Roth PM. Learning task-specific activation functions using genetic programming. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. Vol. 5. SciTePress. 2019. p. 533-540. https://doi.org/10.5220/0007408205330540
Basirat, Mina ; Roth, Peter M. / Learning task-specific activation functions using genetic programming. Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. Vol. 5, SciTePress, 2019. pp. 533-540
@inproceedings{446a87ecfdec41f481afbddf3f45fa1b,
title = "Learning task-specific activation functions using genetic programming",
abstract = "Deep Neural Networks have been shown to be beneficial for a variety of tasks, in particular allowing for end-to-end learning and reducing the need for manual design decisions. However, many parameters still have to be chosen manually in advance, which also raises the need to optimize them. One important but often ignored parameter is the selection of a proper activation function. In this paper, we tackle this problem by learning task-specific activation functions using ideas from genetic programming. We propose to construct piece-wise activation functions (for the negative and the positive part) and introduce new genetic operators to combine functions in a more efficient way. The experimental results for multi-class classification demonstrate that specific activation functions are learned for different tasks, also outperforming widely used generic baselines.",
author = "Mina Basirat and Roth, {Peter M.}",
year = "2019",
doi = "10.5220/0007408205330540",
language = "English",
isbn = "978-989-758-354-4",
volume = "5",
pages = "533--540",
booktitle = "Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications",
publisher = "SciTePress",
address = "Portugal",

}

TY - GEN

T1 - Learning task-specific activation functions using genetic programming

AU - Basirat, Mina

AU - Roth, Peter M.

PY - 2019

Y1 - 2019

N2 - Deep Neural Networks have been shown to be beneficial for a variety of tasks, in particular allowing for end-to-end learning and reducing the need for manual design decisions. However, many parameters still have to be chosen manually in advance, which also raises the need to optimize them. One important but often ignored parameter is the selection of a proper activation function. In this paper, we tackle this problem by learning task-specific activation functions using ideas from genetic programming. We propose to construct piece-wise activation functions (for the negative and the positive part) and introduce new genetic operators to combine functions in a more efficient way. The experimental results for multi-class classification demonstrate that specific activation functions are learned for different tasks, also outperforming widely used generic baselines.

AB - Deep Neural Networks have been shown to be beneficial for a variety of tasks, in particular allowing for end-to-end learning and reducing the need for manual design decisions. However, many parameters still have to be chosen manually in advance, which also raises the need to optimize them. One important but often ignored parameter is the selection of a proper activation function. In this paper, we tackle this problem by learning task-specific activation functions using ideas from genetic programming. We propose to construct piece-wise activation functions (for the negative and the positive part) and introduce new genetic operators to combine functions in a more efficient way. The experimental results for multi-class classification demonstrate that specific activation functions are learned for different tasks, also outperforming widely used generic baselines.

U2 - 10.5220/0007408205330540

DO - 10.5220/0007408205330540

M3 - Conference contribution

SN - 978-989-758-354-4

VL - 5

SP - 533

EP - 540

BT - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications

PB - SciTePress

ER -