Depth-aware Object Segmentation and Grasp Detection for Robotic Picking Tasks

Stefan Ainetter, Christoph Böhm, Rohit Dhakate, Stephan Weiss, Friedrich Fraundorfer

Publication: Contribution to book/report › Conference paper › Peer-reviewed

Abstract

In this paper, we present a novel deep neural network architecture for joint class-agnostic object segmentation and grasp detection for robotic picking tasks using a parallel-plate gripper. We introduce depth-aware Coordinate Convolution (CoordConv), a method to increase accuracy for point-proposal-based object instance segmentation in complex scenes without adding any additional network parameters or computational complexity.
Depth-aware CoordConv uses depth data to extract prior information about the location of an object to achieve highly accurate object instance segmentation. These resulting segmentation masks, combined with predicted grasp candidates, lead to a complete scene description for grasping using a parallel-plate gripper. We evaluate the accuracy of grasp detection and instance segmentation on challenging robotic picking datasets, namely Siléane and OCID_grasp, and show the benefit of joint grasp detection and segmentation on a real-world robotic picking task.
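The core idea described above, appending spatial prior channels (normalized coordinates plus depth) to a feature map before a standard convolution, can be sketched as follows. This is a minimal illustrative implementation in PyTorch under stated assumptions: the class name `DepthAwareCoordConv`, the channel layout, and the depth normalization are hypothetical choices for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthAwareCoordConv(nn.Module):
    """Illustrative sketch of a depth-aware CoordConv layer.

    Appends two normalized coordinate channels and one normalized depth
    channel to the input feature map, then applies a regular convolution.
    The extra channels add no learnable parameters beyond the slightly
    wider convolution input.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        # +2 coordinate channels (x, y) and +1 depth channel
        self.conv = nn.Conv2d(in_channels + 3, out_channels,
                              kernel_size, padding=padding)

    def forward(self, features, depth):
        b, _, h, w = features.shape
        # Coordinate grids normalized to [-1, 1], as in standard CoordConv.
        ys = torch.linspace(-1.0, 1.0, h, device=features.device)
        xs = torch.linspace(-1.0, 1.0, w, device=features.device)
        yy, xx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([xx, yy]).unsqueeze(0).expand(b, -1, -1, -1)
        # Resize the depth map to the feature resolution and rescale to [0, 1]
        # (a simple normalization chosen here for illustration).
        depth = F.interpolate(depth, size=(h, w), mode="nearest")
        depth = (depth - depth.amin()) / (depth.amax() - depth.amin() + 1e-6)
        return self.conv(torch.cat([features, coords, depth], dim=1))
```

A usage example: for a feature map of shape `(2, 64, 32, 32)` and a full-resolution depth image of shape `(2, 1, 128, 128)`, `DepthAwareCoordConv(64, 128)` produces an output of shape `(2, 128, 32, 32)`.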
Original language: English
Title: British Machine Vision Conference (BMVC) 2021
Number of pages: 16
DOIs
Publication status: Published - 2021
Event: 32nd British Machine Vision Conference: BMVC 2021 - Virtual, United Kingdom
Duration: 22 Nov 2021 to 25 Nov 2021

Conference

Conference: 32nd British Machine Vision Conference
Short title: BMVC 2021
Country: United Kingdom
Location: Virtual
Period: 22/11/21 to 25/11/21
