Abstract
We propose a method to automatically generate high-quality ground-truth annotations for grasping-point prediction and demonstrate their usefulness by training a deep neural network to predict grasping candidates for objects in a cluttered environment. First, we acquire sequences of RGBD images of a real-world picking scenario and leverage the sequential depth information to extract labels for grasping-point prediction. Afterwards, we train a deep neural network to predict grasping points, establishing a fully automatic pipeline from data acquisition to a trained network without the need for human annotators. Our experiments show that the network trained with automatically generated labels delivers high-quality predictions of grasping candidates, on par with a network trained on human-annotated data. This work lowers the cost and complexity of creating task-specific grasping datasets and makes it easy to expand an existing dataset without additional effort.
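The abstract's core idea — deriving grasp labels from sequential depth images of a picking scenario — can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it assumes a simple depth-difference scheme in which the region that changes between the frames before and after a pick marks the removed object, and its centroid serves as a grasping-point label. The function name and threshold are hypothetical.

```python
import numpy as np

def extract_grasp_label(depth_before, depth_after, threshold=0.01):
    """Hypothetical label extraction: compare depth maps recorded before and
    after an object is picked; where depth increased, the object was removed,
    and the centroid of that region becomes a grasping-point label."""
    diff = depth_after - depth_before      # removed object -> depth increases
    mask = diff > threshold                # pixels of the picked object
    if not mask.any():
        return None                        # nothing was removed between frames
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())  # (u, v) pixel label for training

# Minimal usage with synthetic depth maps (metres):
before = np.full((64, 64), 1.0)
before[20:30, 20:30] = 0.9                 # object 10 cm above the bin floor
after = np.full((64, 64), 1.0)             # object removed
print(extract_grasp_label(before, after))  # → (24, 24)
```

Running this over every consecutive pick in a recorded sequence would yield one label per removed object with no human annotation, which is the automation the abstract describes.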
Original language | English |
---|---|
Title | Proceedings of the Joint Austrian Computer Vision and Robotics Workshop 2020 |
Pages | 124-130 |
Publication status | Published - 2020 |
Event | Joint Austrian Computer Vision and Robotics Workshop 2020 - Technische Universität Graz, cancelled, Austria. Duration: 17 Sep 2020 → 18 Sep 2020 |
Conference
Conference | Joint Austrian Computer Vision and Robotics Workshop 2020 |
---|---|
Short title | ACVRW 20 |
Country/Territory | Austria |
Location | cancelled |
Period | 17/09/20 → 18/09/20 |