Verifying the correctness of implementations of machine learning algorithms such as neural networks has become a major research topic due to, for example, their increasing use in safety-critical systems like automated or autonomous vehicles. In contrast to evaluating the learning capabilities of such machine learning algorithms, verification, and testing in particular, is concerned with finding critical scenarios and with providing some sort of guarantees with respect to the underlying tests. In this paper, we contribute to the area of testing machine learning algorithms and investigate the effectiveness of traditional mutation tools in the context of testing Deep Neural Networks. In particular, we address the question of whether mutated neural networks can be identified by comparing their learning capabilities with those of the original network. To answer this question, we performed an empirical study using Java implementations of such networks and a mutation tool to create mutated neural network models. As an outcome, we identify some mutations that are more likely to be detected than others.
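To make the detection criterion behind this question concrete, the following is a minimal Java sketch of the idea: a mutated network counts as detected ("killed") when its learning capability, measured here as test accuracy after training, deviates from the original network's accuracy by more than some threshold. The class name, method names, threshold value, and placeholder accuracies are hypothetical illustrations, not artifacts of the study's actual tooling.

```java
/**
 * Minimal sketch of the mutation-detection criterion: a mutant is
 * considered "killed" when its measured learning capability (here,
 * test accuracy after training) differs from the original network's
 * accuracy by more than a chosen threshold.
 *
 * All names and values below are hypothetical illustrations.
 */
public class MutantDetection {

    /** Hypothetical accuracy gap above which a mutant counts as detected. */
    static final double ACCURACY_GAP_THRESHOLD = 0.05;

    /**
     * Compares the accuracy of the original network with that of a
     * mutated variant trained under identical conditions.
     */
    static boolean isMutantKilled(double originalAccuracy, double mutantAccuracy) {
        return Math.abs(originalAccuracy - mutantAccuracy) > ACCURACY_GAP_THRESHOLD;
    }

    public static void main(String[] args) {
        // Placeholder accuracies standing in for "train both networks on
        // the same data and evaluate on the same test set"; in the study,
        // these would come from the Java neural network implementations.
        double originalAccuracy = 0.93;
        double mutantAccuracy   = 0.71; // e.g., a mutation in the update code

        System.out.println("Mutant killed: "
                + isMutantKilled(originalAccuracy, mutantAccuracy));
    }
}
```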