Active automata learning comprises techniques for learning automata models of black-box systems by testing these systems. While this form of learning enables model-based analysis and verification, it may also require a substantial number of interactions with the systems under consideration to learn adequate models that capture the systems' behaviour. The test cases executed during learning can be divided into two categories: (1) test cases that gain knowledge about a system and (2) test cases that attempt to falsify a learned hypothesis automaton. The former are selected by learning algorithms, whereas the latter are selected by conformance-testing algorithms. Various options exist for both types of algorithms, and there are dependencies between them. In this paper, we investigate the performance of combinations of four different learning algorithms and seven different testing algorithms. For this purpose, we perform learning experiments on 39 benchmark models. Based on the experimental results, we discuss insights into the performance of different configurations for various types of systems. These insights may serve as guidance for future users of active automata learning.
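To make the interplay between the two categories of test cases concrete, the sketch below illustrates the overall learn-and-test loop: a learner refines a hypothesis through queries to the system under learning, while a conformance tester searches for counterexamples that falsify the current hypothesis. This is a minimal, self-contained illustration only; the trace-memorizing "learner" and the random-word "tester" are deliberately naive, hypothetical stand-ins, not the L*-style learners or structured test suites evaluated in the paper.

```python
# Minimal sketch of the learn/test loop underlying active automata
# learning: a learner builds a hypothesis from queries to the system
# (category 1), and a conformance tester searches for counterexamples
# that falsify the hypothesis (category 2). All names are illustrative.
import random

# System under learning (SUL): a toy Mealy machine,
# state -> {input: (output, next_state)}. Assumed for illustration.
SUL = {
    0: {"a": ("ok", 1), "b": ("err", 0)},
    1: {"a": ("ok", 1), "b": ("ok", 0)},
}
INPUTS = ["a", "b"]

def sul_output(word):
    """Run an input word on the black box; return its output sequence."""
    state, outs = 0, []
    for sym in word:
        out, state = SUL[state][sym]
        outs.append(out)
    return outs

def hyp_output(hypothesis, word):
    """Predict outputs from the hypothesis (here: a cache of traces)."""
    return hypothesis.get(tuple(word))

def refine(hypothesis, word):
    """Learning step: query the SUL and record the answer. A real
    learner (e.g. L*) would generalize; this stand-in only memorizes."""
    hypothesis[tuple(word)] = sul_output(word)

def find_counterexample(hypothesis, max_len=4, trials=200):
    """Conformance-testing step: random words, compare SUL vs. hypothesis."""
    for _ in range(trials):
        word = random.choices(INPUTS, k=random.randint(1, max_len))
        if hyp_output(hypothesis, word) != sul_output(word):
            return word
    return None  # no disagreement found: accept the hypothesis

hypothesis = {}
while (cex := find_counterexample(hypothesis)) is not None:
    refine(hypothesis, cex)  # each counterexample drives a refinement
print(f"accepted hypothesis after {len(hypothesis)} recorded traces")
```

Even in this toy form, the loop exposes the dependency studied in the paper: the quality of the testing step determines which counterexamples reach the learner, and thus how many interactions the learner needs before an adequate model is accepted.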