Current consumer-grade virtual reality devices provide positional tracking for the user's head and hands. From this information, virtual hands can be displayed to the user and used for interaction in the virtual environment. Other body parts, however, usually remain invisible to the user because no tracking information is available for them; moreover, the tracking data that does exist can be noisy or temporarily drop out. In this project, we aim to close the gap between such incomplete tracking data and the full-body tracking that may only become available in future consumer-grade virtual reality hardware. Our approach combines data-driven (neural-network-based) solvers with inverse kinematics to estimate joint data for the user's upper-body pose. Our previous work shows that, even when the resulting pose is not correct, arms animated by inverse kinematics are preferred over seeing hands alone, and that such arms can also be used for interaction. With neural networks and sufficient motion capture data, we expect to increase the accuracy further and thereby strengthen the user's feeling of embodiment.
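To illustrate the inverse-kinematics side of such an approach, the following is a minimal sketch of analytic two-bone IK, one common way to place an elbow given only a shoulder position and a tracked wrist target. It is an illustrative example, not the project's actual solver; the function name, the use of a pole vector to disambiguate the elbow plane, and the segment lengths are all assumptions for the sketch.

```python
import numpy as np

def two_bone_ik(shoulder, wrist, upper_len, fore_len, pole):
    """Analytic two-bone IK sketch: place the elbow for a given
    shoulder position and wrist target (3D points as numpy arrays).
    `pole` is a hint direction that disambiguates the elbow plane
    (e.g. pointing down-and-back for a natural arm pose)."""
    d_vec = wrist - shoulder
    d = np.linalg.norm(d_vec)
    axis = d_vec / d
    # Clamp the shoulder-wrist distance to the reachable range so
    # the law of cosines below stays valid for unreachable targets.
    d = np.clip(d, abs(upper_len - fore_len) + 1e-6,
                upper_len + fore_len - 1e-6)
    # Law of cosines: angle at the shoulder between the
    # shoulder-wrist axis and the upper arm.
    cos_a = (upper_len**2 + d**2 - fore_len**2) / (2 * upper_len * d)
    sin_a = np.sqrt(max(0.0, 1.0 - cos_a**2))
    # Direction perpendicular to the axis, bent toward the pole hint.
    perp = pole - np.dot(pole, axis) * axis
    perp /= np.linalg.norm(perp)
    # The elbow sits upper_len from the shoulder, rotated off-axis by a.
    return shoulder + upper_len * (cos_a * axis + sin_a * perp)
```

In a hybrid pipeline of the kind described above, a neural network could predict the pole direction (or an initial pose) from motion-capture data, while an IK step like this one guarantees that the hand still reaches the tracked position exactly.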
Effective start/end date: 1/05/19 → 30/04/20