Abstract

Sensory substitution has been a research subject for decades, and yet its applicability outside of research settings remains very limited, creating scepticism among researchers that full sensory substitution is even possible [8]. In this paper, we do not substitute the entire perceptual channel. Instead, we follow a different approach that drastically reduces the captured information. We present the concepts and implementation of two mobile applications which capture the user's environment, describe it in the form of text, and then convey this textual description to the user through a vibrotactile wearable display. The applications target users with hearing and vision impairments.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers |
| Pages | 146-149 |
| Number of pages | 4 |
| ISBN (Electronic) | 978-1-4503-5966-5 |
| DOIs | 10.1145/3267305.3267604 |
| Publication status | Published - 2018 |
Keywords
- haptic interfaces
- skin reading
- vibrotactile display
- wearable display
- wearables
- accessible technology
- accessibility
Cite this
Skin Reading Meets Speech Recognition and Object Recognition for Sensory Substitution. / Luzhnica, Granit; Veas, Eduardo.
Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers. 2018. p. 146-149.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review
TY - GEN
T1 - Skin Reading Meets Speech Recognition and Object Recognition for Sensory Substitution
AU - Luzhnica, Granit
AU - Veas, Eduardo
PY - 2018
Y1 - 2018
N2 - Sensory substitution has been a research subject for decades, and yet its applicability outside of research settings remains very limited, creating scepticism among researchers that full sensory substitution is even possible [8]. In this paper, we do not substitute the entire perceptual channel. Instead, we follow a different approach that drastically reduces the captured information. We present the concepts and implementation of two mobile applications which capture the user's environment, describe it in the form of text, and then convey this textual description to the user through a vibrotactile wearable display. The applications target users with hearing and vision impairments.
AB - Sensory substitution has been a research subject for decades, and yet its applicability outside of research settings remains very limited, creating scepticism among researchers that full sensory substitution is even possible [8]. In this paper, we do not substitute the entire perceptual channel. Instead, we follow a different approach that drastically reduces the captured information. We present the concepts and implementation of two mobile applications which capture the user's environment, describe it in the form of text, and then convey this textual description to the user through a vibrotactile wearable display. The applications target users with hearing and vision impairments.
KW - haptic interfaces
KW - skin reading
KW - vibrotactile display
KW - wearable display
KW - wearables
KW - accessible technology
KW - accessibility
U2 - 10.1145/3267305.3267604
DO - 10.1145/3267305.3267604
M3 - Conference contribution
SP - 146
EP - 149
BT - Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers
ER -