Sensory substitution has been a research subject for decades, yet its applicability outside of research remains very limited, creating scepticism among researchers about whether full sensory substitution is even possible. In this paper, we do not substitute the entire perceptual channel. Instead, we follow a different approach that drastically reduces the captured information. We present the concepts and implementation of two mobile applications that capture the user's environment, describe it in the form of text, and then convey this textual description to the user through a vibrotactile wearable display. The applications target users with hearing and vision impairments.
|Title||Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers|
|Publication status||Published - 2018|
Luzhnica, G., & Veas, E. (2018). Skin Reading Meets Speech Recognition and Object Recognition for Sensory Substitution. In Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers (pp. 146-149). https://doi.org/10.1145/3267305.3267604