Handseg: An automatically labeled dataset for hand segmentation from depth images

Abhishake Kumar Bojja, Franziska Mueller, Sri Raghu Malireddi, Markus Oberweger, Vincent Lepetit, Christian Theobalt, Kwang Moo Yi, Andrea Tagliasacchi

Publication: Contribution to book/report › Conference contribution › Research › Peer-reviewed

Abstract

We propose an automatic method for generating high-quality annotations for depth-based hand segmentation, and introduce a large-scale hand segmentation dataset. Existing datasets are typically limited to a single hand. By exploiting the visual cues given by an RGBD sensor and a pair of colored gloves, we automatically generate dense annotations for two-hand segmentation. This lowers the cost and complexity of creating high-quality datasets, and makes it easy to expand the dataset in the future. We further show that existing datasets, even with data augmentation, are not sufficient to train a hand segmentation algorithm that can distinguish two hands. Source code and datasets are publicly available on the project page.
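The core idea in the abstract, deriving dense depth labels from the glove colors visible in the aligned RGB image, can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' actual pipeline: the color thresholds, class ids, and the `auto_label` function are assumptions for demonstration.

```python
import numpy as np

# Hypothetical glove-based auto-labeling sketch: each depth pixel is
# assigned a class by thresholding the glove color observed in the
# RGB image registered to the depth frame. Thresholds are illustrative.
BACKGROUND, LEFT_HAND, RIGHT_HAND = 0, 1, 2

def auto_label(rgb, depth, max_depth=1500):
    """Return a per-pixel label map for one aligned RGB-D frame."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    labels = np.full(rgb.shape[:2], BACKGROUND, dtype=np.uint8)
    near = depth < max_depth  # ignore pixels beyond the working range
    # Assumed glove colors: green for the left hand, blue for the right.
    labels[near & (g > r + 50) & (g > b + 50)] = LEFT_HAND
    labels[near & (b > r + 50) & (b > g + 50)] = RIGHT_HAND
    return labels

# Tiny synthetic frame: one green pixel (left hand), one blue (right hand).
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[0, 0] = (10, 200, 10)
rgb[3, 3] = (10, 10, 200)
depth = np.full((4, 4), 800, dtype=np.uint16)

labels = auto_label(rgb, depth)
```

In practice such raw color masks would still need cleanup (morphology, depth-consistency checks) before serving as segmentation ground truth, which is presumably where the automation described in the paper does its work.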
Original language: English
Title: 2019 16th Conference on Computer and Robot Vision (CRV)
Pages: 151-158
Publication status: Published - 2019


Cite this

Bojja, A. K., Mueller, F., Malireddi, S. R., Oberweger, M., Lepetit, V., Theobalt, C., Yi, K. M., & Tagliasacchi, A. (2019). Handseg: An automatically labeled dataset for hand segmentation from depth images. In 2019 16th Conference on Computer and Robot Vision (CRV) (pp. 151-158).

@inproceedings{d132c775f1e64046922b18045e063acc,
title = "Handseg: An automatically labeled dataset for hand segmentation from depth images",
abstract = "We propose an automatic method for generating high-quality annotations for depth-based hand segmentation, and introduce a large-scale hand segmentation dataset. Existing datasets are typically limited to a single hand. By exploiting the visual cues given by an RGBD sensor and a pair of colored gloves, we automatically generate dense annotations for two-hand segmentation. This lowers the cost and complexity of creating high-quality datasets, and makes it easy to expand the dataset in the future. We further show that existing datasets, even with data augmentation, are not sufficient to train a hand segmentation algorithm that can distinguish two hands. Source code and datasets are publicly available on the project page.",
author = "Bojja, {Abhishake Kumar} and Franziska Mueller and Malireddi, {Sri Raghu} and Markus Oberweger and Vincent Lepetit and Christian Theobalt and Yi, {Kwang Moo} and Andrea Tagliasacchi",
year = "2019",
language = "English",
pages = "151--158",
booktitle = "2019 16th Conference on Computer and Robot Vision (CRV)",

}
