Handseg: An automatically labeled dataset for hand segmentation from depth images

Abhishake Kumar Bojja, Franziska Mueller, Sri Raghu Malireddi, Markus Oberweger, Vincent Lepetit, Christian Theobalt, Kwang Moo Yi, Andrea Tagliasacchi

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

We propose an automatic method for generating high-quality annotations for depth-based hand segmentation, and introduce a large-scale hand segmentation dataset. Existing datasets are typically limited to a single hand. By exploiting the visual cues given by an RGBD sensor and a pair of colored gloves, we automatically generate dense annotations for two-hand segmentation. This lowers the cost and complexity of creating high-quality datasets, and makes it easy to expand the dataset in the future. We further show that existing datasets, even with data augmentation, are not sufficient to train a hand segmentation algorithm that can distinguish two hands. Source code and datasets are publicly available at the project page.
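To make the labeling idea concrete: the abstract describes generating per-pixel hand labels from the colors of two gloves in an RGB image registered to the depth frame. A minimal Python sketch of that idea is shown below. It is an illustration, not the authors' implementation; the HSV thresholds, glove color assignments, and the assumption that the RGB image is already registered to depth are all hypothetical.

import cv2
import numpy as np

# Hypothetical HSV ranges for the two glove colors; the paper's actual
# color calibration is not given here, so these values are assumptions.
LEFT_GLOVE_LO,  LEFT_GLOVE_HI  = (100, 120, 60), (130, 255, 255)  # blue-ish glove
RIGHT_GLOVE_LO, RIGHT_GLOVE_HI = (40, 120, 60),  (80, 255, 255)   # green-ish glove

def label_hands(bgr, depth):
    """Return a per-pixel label map (0 = background, 1 = left hand,
    2 = right hand) from a color image registered to the depth frame."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    left_mask = cv2.inRange(hsv, LEFT_GLOVE_LO, LEFT_GLOVE_HI)
    right_mask = cv2.inRange(hsv, RIGHT_GLOVE_LO, RIGHT_GLOVE_HI)
    labels = np.zeros(depth.shape[:2], dtype=np.uint8)
    labels[left_mask > 0] = 1
    labels[right_mask > 0] = 2
    # Drop labels where the sensor returned no depth (common with RGBD cameras).
    labels[depth == 0] = 0
    return labels

Because the gloves provide the labels rather than a human annotator, each captured frame yields a dense annotation essentially for free, which is what allows the dataset to scale.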
Original language: English
Title of host publication: 2019 16th Conference on Computer and Robot Vision (CRV)
Pages: 151-158
Publication status: Published - 2019
