Grasping Point Prediction in Cluttered Environment using Automatically Labeled Data

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

We propose a method to automatically generate high-quality ground-truth annotations for grasping-point prediction and demonstrate their usefulness by training a deep neural network to predict grasping candidates for objects in a cluttered environment. First, we acquire sequences of RGBD images of a real-world picking scenario and leverage the sequential depth information to extract labels for grasping-point prediction. We then train a deep neural network to predict grasping points, establishing a fully automatic pipeline from data acquisition to a trained network without the need for human annotators. Our experiments show that the network trained with automatically generated labels delivers high-quality predictions of grasping candidates, on par with a network trained on human-annotated data. This work lowers the cost and complexity of creating task-specific grasping datasets and makes it easy to expand an existing dataset without additional effort.
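The core labeling idea described in the abstract, that consecutive depth frames of a picking sequence reveal where an object was removed, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, thresholds, and centroid-based labeling are assumptions introduced here for illustration.

```python
import numpy as np

def extract_grasp_label(depth_before, depth_after, min_diff=0.01, min_area=50):
    """Hypothetical sketch of label extraction from sequential depth images.

    After the robot picks an object, the measured depth at its former
    location increases (the sensor now sees the surface behind it). The
    centroid of that changed region is taken as a grasp-point label.
    """
    diff = depth_after - depth_before   # positive where material was removed
    mask = diff > min_diff              # threshold out sensor noise
    if mask.sum() < min_area:           # ignore spurious small regions
        return None
    ys, xs = np.nonzero(mask)
    # Return the pixel coordinates (u, v) of the region's centroid.
    return int(round(xs.mean())), int(round(ys.mean()))
```

A real pipeline would additionally register the frames, handle multiple changed regions, and project the pixel label into the robot's coordinate frame; the sketch only conveys the sequential-depth-difference principle.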
Original language: English
Title of host publication: Proceedings of the Joint Austrian Computer Vision and Robotics Workshop 2020
Pages: 124 - 130
Publication status: Published - 2020
Event: Joint Austrian Computer Vision and Robotics Workshop 2020 - Technische Universität Graz, cancelled, Austria
Duration: 17 Sep 2020 - 18 Sep 2020

Conference

Conference: Joint Austrian Computer Vision and Robotics Workshop 2020
Abbreviated title: ACVRW 20
Country/Territory: Austria
City: cancelled
Period: 17/09/20 - 18/09/20