CorNet: Generic 3D Corners for 6D Pose Estimation of New Objects without Retraining

Giorgia Pitteri, Slobodan Ilic, Vincent Lepetit

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review

Abstract

We present a novel approach to the detection and 3D pose estimation of objects in color images. Its main contribution is that it requires neither a training phase nor training data for new objects, while state-of-the-art methods typically require hours of training time and hundreds of registered training images. Instead, our method relies only on the objects' geometries. It focuses on objects with prominent corners, which covers a large class of industrial objects. We first learn to detect object corners of various shapes in images, and to predict their 3D poses, using training images of a small set of objects. To detect a new object in a given image, we identify its corners from its CAD model; we also detect the corners visible in the image and predict their 3D poses. We then introduce a RANSAC-like algorithm that robustly and efficiently detects the object and estimates its 3D pose by matching the corners of the CAD model with their detected counterparts in the image. Because we also estimate the 3D poses of the corners in the image, detecting only one or two corners is sufficient to estimate the pose of the object, which makes the approach robust to occlusions. A final check exploits the full 3D geometry of the objects, in case multiple objects share the same spatial arrangement of corners. These advantages make our approach particularly attractive for industrial contexts, and we demonstrate it on the challenging T-LESS dataset.
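The RANSAC-like matching described in the abstract can be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: corner poses are represented here as 4x4 rigid transforms, a single detection/model-corner correspondence hypothesizes the object pose (this is what makes one detected corner sufficient), and inliers are counted by 3D distance between predicted and detected corner positions. All function and variable names are hypothetical.

```python
import numpy as np

def pose_from_single_corner(T_corner_in_cam, T_corner_in_obj):
    # A corner's estimated 6D pose in the camera frame, combined with that
    # corner's known pose on the CAD model, determines the object pose:
    #   T_obj_in_cam = T_corner_in_cam @ inv(T_corner_in_obj)
    return T_corner_in_cam @ np.linalg.inv(T_corner_in_obj)

def ransac_object_pose(detected, model, n_iters=200, inlier_thresh=0.01, rng=None):
    """RANSAC-like loop: hypothesize the object pose from one random
    detection/model-corner pair, then score it by counting model corners
    whose predicted camera-frame position lands near a detected corner."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_pose, best_inliers = None, -1
    for _ in range(n_iters):
        d = detected[rng.integers(len(detected))]   # random detected corner pose
        m = model[rng.integers(len(model))]         # random CAD-model corner pose
        T_obj = pose_from_single_corner(d, m)
        # Count inliers: model corners that project near some detection.
        inliers = 0
        for Tm in model:
            pred = (T_obj @ Tm)[:3, 3]
            dists = [np.linalg.norm(pred - Td[:3, 3]) for Td in detected]
            if min(dists) < inlier_thresh:
                inliers += 1
        if inliers > best_inliers:
            best_pose, best_inliers = T_obj, inliers
    return best_pose, best_inliers
```

On synthetic data (detections generated by applying a ground-truth pose to the model corners), a correct pairing recovers the pose exactly, and wrong pairings score fewer inliers unless the corner arrangement is ambiguous, which is exactly the case the paper's final full-geometry check is meant to resolve.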
Original language: English
Title of host publication: Proceedings of the IEEE International Conference on Computer Vision Workshops
Publication status: Published - 2019


Cite this

Pitteri, G., Ilic, S., & Lepetit, V. (2019). CorNet: Generic 3D Corners for 6D Pose Estimation of New Objects without Retraining. In Proceedings of the IEEE International Conference on Computer Vision Workshops

@inproceedings{7abacb0ad2694bed893cb30e22345bd2,
title = "CorNet: Generic 3D Corners for 6D Pose Estimation of New Objects without Retraining",
abstract = "We present a novel approach to the detection and 3D pose estimation of objects in color images. Its main contribution is that it requires neither a training phase nor training data for new objects, while state-of-the-art methods typically require hours of training time and hundreds of registered training images. Instead, our method relies only on the objects' geometries. It focuses on objects with prominent corners, which covers a large class of industrial objects. We first learn to detect object corners of various shapes in images, and to predict their 3D poses, using training images of a small set of objects. To detect a new object in a given image, we identify its corners from its CAD model; we also detect the corners visible in the image and predict their 3D poses. We then introduce a RANSAC-like algorithm that robustly and efficiently detects the object and estimates its 3D pose by matching the corners of the CAD model with their detected counterparts in the image. Because we also estimate the 3D poses of the corners in the image, detecting only one or two corners is sufficient to estimate the pose of the object, which makes the approach robust to occlusions. A final check exploits the full 3D geometry of the objects, in case multiple objects share the same spatial arrangement of corners. These advantages make our approach particularly attractive for industrial contexts, and we demonstrate it on the challenging T-LESS dataset.",
author = "Giorgia Pitteri and Slobodan Ilic and Vincent Lepetit",
year = "2019",
language = "English",
booktitle = "Proceedings of the IEEE International Conference on Computer Vision Workshops",

}
