Modern music source separation systems based on deep neural networks achieve unprecedented separation quality. However, harnessing these large-scale models in typical audio production environments, which often offer only limited computing resources while demanding real-time processing, remains challenging. We extend the multi-scaled DenseNet in several ways to facilitate real-time source separation. Specifically, we reduce computational requirements by inferring Mel-scaled masks, shrink the model via effective use of bottleneck layers, and improve performance with a deep clustering objective. In addition, we further increase model efficiency by applying parameterized structured pruning of convolutional weights without any significant impact on separation performance. Overall, we reduce the model size and increase computational efficiency by factors of 1.6 and 4.3, respectively, while maintaining separation quality.
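To illustrate the Mel-scaled masking idea, the following is a minimal NumPy sketch, not the paper's implementation: a (here randomly generated, stand-in) mask predicted at coarse Mel resolution is expanded back to linear STFT bins via a normalized triangular filterbank and applied to the magnitude spectrogram. All dimensions (`sr`, `n_fft`, `n_mels`) are hypothetical choices for the example.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular filters mapping linear FFT bins to Mel bands."""
    n_bins = n_fft // 2 + 1
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_bins))
    for i in range(1, n_mels + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fb[i - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[i - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

# Hypothetical dimensions for the sketch
sr, n_fft, n_mels, n_frames = 22050, 1024, 64, 100
fb = mel_filterbank(n_mels, n_fft, sr)

# Stand-in for a network-predicted Mel-scale mask in [0, 1];
# a real model would output n_mels values per frame instead of
# one mask value per linear frequency bin, cutting output size.
rng = np.random.default_rng(0)
mel_mask = rng.uniform(size=(n_mels, n_frames))

# Expand the coarse Mel mask to linear frequency bins: each bin
# takes a filter-weighted average of the overlapping Mel-band values.
weights = fb / np.maximum(fb.sum(axis=0, keepdims=True), 1e-8)
lin_mask = weights.T @ mel_mask  # shape: (n_fft // 2 + 1, n_frames)

# Apply the expanded mask to a (random stand-in) magnitude spectrogram
mag = np.abs(rng.standard_normal((n_fft // 2 + 1, n_frames)))
separated = lin_mask * mag
```

The efficiency gain comes from predicting `n_mels` mask values per frame rather than `n_fft // 2 + 1`, which reduces the network's output dimensionality while the fixed filterbank handles the upsampling.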