FuseSeg: LiDAR Point Cloud Segmentation Fusing Multi-Modal Data

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We introduce a simple yet effective method for fusing LiDAR and RGB data to segment LiDAR point clouds. Utilizing the dense native range representation of the LiDAR sensor and the calibration of the sensor setup, we establish point correspondences between the two input modalities. These correspondences allow us to warp and fuse features from one domain into the other, so a single network can jointly exploit information from both data sources. To demonstrate the merit of our method, we extend SqueezeSeg, a point cloud segmentation network, with an RGB feature branch and fuse it into the original structure. Our extension, called FuseSeg, improves IoU by up to 18% on the KITTI benchmark. Besides the improved accuracy, we also achieve real-time performance: at 50 fps, our network runs five times as fast as the recording speed of the KITTI LiDAR data.
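The correspondence step described in the abstract can be sketched as a standard projection of LiDAR points into the RGB image using the setup calibration. The matrix names below follow the KITTI calibration file convention (`Tr_velo_to_cam`, `R0_rect`, `P2`); this is an illustrative sketch under that assumption, not the authors' implementation.

```python
import numpy as np

def project_lidar_to_image(points, Tr_velo_to_cam, R0_rect, P2):
    """Project LiDAR points (N, 3) to pixel coordinates (N, 2).

    Matrix shapes follow the KITTI calibration convention:
    Tr_velo_to_cam (3x4), R0_rect (3x3), P2 (3x4).
    Illustrative sketch only, not the paper's code.
    """
    n = points.shape[0]
    pts_h = np.hstack([points, np.ones((n, 1))])   # homogeneous LiDAR coords (N, 4)
    cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)     # (3, N) rectified camera frame
    cam_h = np.vstack([cam, np.ones((1, n))])      # homogeneous camera coords (4, N)
    img = P2 @ cam_h                               # (3, N) image plane, unnormalized
    uv = (img[:2] / img[2]).T                      # perspective divide -> (N, 2) pixels
    in_front = cam[2] > 0                          # keep only points ahead of the camera
    return uv, in_front
```

Given such pixel correspondences, the RGB features at `(u, v)` can be warped into the range-image cell that holds the corresponding LiDAR point and fused there, which is the mechanism the paper exploits.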

Original language: English
Title of host publication: Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020
Pages: 1863-1872
Number of pages: 10
ISBN (Electronic): 9781728165530
DOIs
Publication status: Published - Mar 2020
Event: WACV 2020 - Snowmass Village, United States
Duration: 1 Mar 2020 - 5 Mar 2020

Conference

Conference: wacv2020
Abbreviated title: WACV 2020
Country: United States
City: Snowmass Village
Period: 1/03/20 - 5/03/20

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Computer Science Applications
