A Brain-Computer Interface (BCI) enables its user to control an external device by mental focus alone; it requires no physical movement or speech. One approach to designing a BCI is based on the classification of motor-imagery-related brain activities. However, it is difficult to find a user-specific strategy of movement imagination that can be classified reliably. Our effort to solve this problem combines BCI with virtual reality (VR). The idea behind this approach is the well-known fact that feedback has an essential influence on brain signals. Using VR, users can immerse themselves in an artificial environment and focus on the training task of the BCI system, so that external influences are minimized. The challenge is to determine how to prepare the brain signals for visualization. One obstacle, for example, is the high dimensionality of the brain signals recorded from the several electrodes fixed on the subject's scalp. Because a user can only focus on a few parameters, we need algorithms that reduce the data dimension without losing important information.
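The text above does not prescribe a specific dimension-reduction method; as one illustrative option, the following sketch applies principal component analysis (PCA) via NumPy's SVD to simulated multi-channel EEG data. The channel count, sample count, and component count are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical sketch: PCA as one possible way to reduce the
# dimensionality of multi-channel EEG. The method, channel count,
# and number of retained components are illustrative assumptions.
rng = np.random.default_rng(0)

n_channels, n_samples = 32, 1000            # e.g. 32 scalp electrodes
X = rng.standard_normal((n_samples, n_channels))

# Center the data, then compute principal components via SVD.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 3                                       # keep only a few components,
X_reduced = Xc @ Vt[:k].T                   # suitable for visual feedback

# Fraction of total variance retained by the k components.
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
```

Reducing 32 channels to a handful of components yields a signal representation compact enough for the user to attend to during feedback, at the cost of the variance discarded with the remaining components.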
Furthermore, we focus on classification algorithms that can accurately classify brain signals using a low number of channels. This makes the BCI much easier to use, since the number of electrodes needed for EEG recording can be reduced.
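The text does not name a particular classifier; as a minimal illustration of few-channel classification, the following sketch applies a nearest-centroid decision rule to simulated band-power-like features from only two channels. All feature values and class locations are synthetic assumptions, not recorded data.

```python
import numpy as np

# Hypothetical sketch: a minimal two-class classifier (nearest class
# centroid) on simulated band-power features from just 2 EEG channels.
# The classifier choice and the feature distributions are assumptions
# made purely for illustration.
rng = np.random.default_rng(1)

# Simulated features for two motor-imagery classes, 2 channels each.
class_a = rng.normal(loc=[1.0, 3.0], scale=0.3, size=(50, 2))
class_b = rng.normal(loc=[3.0, 1.0], scale=0.3, size=(50, 2))

# One centroid per class, estimated from the training samples.
centroids = np.stack([class_a.mean(axis=0), class_b.mean(axis=0)])

def classify(x):
    """Assign x to the class (0 or 1) with the nearest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

X = np.vstack([class_a, class_b])
y = np.array([0] * 50 + [1] * 50)
preds = np.array([classify(x) for x in X])
accuracy = (preds == y).mean()
```

With well-separated class distributions, even such a simple rule on two channels separates the classes, which is the sense in which fewer electrodes can keep a BCI usable.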