Recently, it was shown that using a properly parametrized Leaky ReLU (LReLU) as the activation function yields significantly better results for a variety of image classification tasks. However, such methods are often impractical: either the sole parameter (i.e., the slope of the negative part) must be set manually (L*ReLU), or the approach is vulnerable to the gradient-based optimization and thus highly dependent on a proper initialization (PReLU). In this paper, we exploit the benefits of piecewise linear activation functions while avoiding these problems. To this end, we propose a fully automatic approach that estimates the slope parameter of LReLU from the data. We realize this via stochastic optimization, namely Particle Swarm Optimization (PSO): S*ReLU. In this way, we show that, compared to widely used activation functions (including PReLU), we obtain better results on seven different benchmark datasets while also drastically reducing the computational effort.
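The core idea described above can be sketched as follows: PSO searches over the negative-part slope of a Leaky ReLU, scoring each candidate by a validation objective. This is a minimal illustrative sketch, not the paper's actual training setup; the objective `validation_loss` is a synthetic stand-in (in practice it would be the validation error of a network trained with the candidate slope), and all names and hyperparameters are assumptions.

```python
# Hypothetical sketch: selecting the negative-part slope `a` of a Leaky ReLU
# via Particle Swarm Optimization (PSO). Pure-stdlib, single scalar parameter.
import random

def lrelu(x, a):
    """Leaky ReLU with negative-part slope a."""
    return x if x >= 0.0 else a * x

def validation_loss(a):
    # Placeholder objective: in the paper's setting the fitness would be the
    # validation error of a network trained with slope `a`; here we use a
    # smooth proxy with its minimum at a = 0.25.
    return (a - 0.25) ** 2

def pso(objective, lo=0.0, hi=1.0, n_particles=10, n_iters=50,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                        # each particle's best position so far
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]  # swarm-wide best
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # Standard PSO velocity update: inertia + cognitive + social terms.
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], lo), hi)  # clamp to search range
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i], f
    return gbest

slope = pso(validation_loss)  # estimated slope, close to the proxy optimum 0.25
```

Because only a single scalar is optimized and no gradients are required, the search is cheap and insensitive to initialization, which is the practical advantage claimed over PReLU.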