Grasping Point Prediction in Cluttered Environment using Automatically Labeled Data

Publication: Contribution to book/report/conference proceedings › Contribution to a conference proceedings › Peer-reviewed

Abstract

We propose a method to automatically generate high-quality ground-truth annotations for grasping point prediction and demonstrate the usefulness of these annotations by training a deep neural network to predict grasping candidates for objects in a cluttered environment. First, we acquire sequences of RGBD images of a real-world picking scenario and leverage the sequential depth information to extract labels for grasping point prediction. Afterwards, we train a deep neural network to predict grasping points, establishing a fully automatic pipeline from data acquisition to a trained network without the need for human annotators. Our experiments show that the network trained with automatically generated labels delivers high-quality predictions of grasping candidates, on par with a network trained on human-annotated data. This work lowers the cost and complexity of creating task-specific grasping datasets and makes it easy to expand an existing dataset without additional effort.
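The abstract only outlines how sequential depth information yields labels. As a rough sketch of one way such automatic labeling could work (the function name, thresholds, and centroid heuristic below are assumptions for illustration, not the authors' method), depth frames captured before and after a successful pick can be differenced to locate the removed object and derive a grasping point annotation:

```python
import numpy as np

def extract_grasp_label(depth_before, depth_after, diff_threshold=0.01, min_pixels=200):
    """Derive a grasping point label from two consecutive depth frames (hypothetical sketch).

    Pixels whose depth increases after a successful pick outline the removed
    object; the centroid of that region serves as an automatically generated
    grasping point annotation.
    """
    # Depth increases (surface moves away from the camera) where the object was removed.
    diff = depth_after - depth_before
    removed_mask = diff > diff_threshold

    # Ignore spurious sensor noise: require a minimum number of changed pixels.
    if removed_mask.sum() < min_pixels:
        return None  # no reliable label for this pick

    # Centroid of the removed region as the grasping point (pixel coordinates).
    ys, xs = np.nonzero(removed_mask)
    return (int(xs.mean()), int(ys.mean())), removed_mask
```

Such labels could then be paired with the RGBD frame recorded before the pick to train a grasping point prediction network, closing the automatic data-to-network pipeline described in the abstract.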
Original language: English
Title: Proceedings of the Joint Austrian Computer Vision and Robotics Workshop 2020
Pages: 124 - 130
Publication status: Published - 2020
Event: Joint Austrian Computer Vision and Robotics Workshop 2020 - Technische Universität Graz, cancelled, Austria
Duration: 17 Sept. 2020 - 18 Sept. 2020

Conference

Conference: Joint Austrian Computer Vision and Robotics Workshop 2020
Short title: ACVRW 20
Country/Territory: Austria
Location: cancelled
Period: 17/09/20 - 18/09/20
