LU-Net: A Simple Approach to 3D LiDAR Point Cloud Semantic Segmentation

Pierre Biasutti, Vincent Lepetit, Mathieu Brédif, Jean-Francois Aujol, Aurélie Bugeau

Research output: Contribution to conference › Paper › peer-review

Abstract

We propose LU-Net (for LiDAR U-Net) for the semantic segmentation of 3D LiDAR point clouds. Instead of applying a global 3D segmentation method such as PointNet, we propose an end-to-end architecture for LiDAR point cloud semantic segmentation that efficiently solves the problem as an image processing problem. First, a high-level 3D feature extraction module computes 3D local features for each point given its neighbors. Then, these features are projected into a 2D multichannel range-image by considering the topology of the sensor. This range-image then serves as the input to a U-Net segmentation network, a simple yet sufficient architecture for our purpose. In this way, we can exploit both the 3D nature of the data and the specificity of the LiDAR sensor. This approach efficiently bridges 3D point cloud processing and image processing: as our experiments show, it outperforms the state-of-the-art by a large margin on the KITTI dataset. Moreover, it operates at 24 fps on a single GPU, above the acquisition rate of common LiDAR sensors, which makes it suitable for real-time applications.
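The projection step described above — mapping each 3D point onto a 2D range-image grid that follows the sensor's topology — can be sketched with a standard spherical projection. This is an illustrative reconstruction, not the paper's exact code; the field-of-view values and image size below are assumptions matching a Velodyne HDL-64E-like sensor, and only a single range channel is shown where LU-Net projects multiple learned feature channels.

```python
import numpy as np

def project_to_range_image(points, H=64, W=512, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an (H, W) range image
    via spherical projection: rows follow the vertical (pitch) angle,
    columns follow the horizontal (yaw) angle."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)            # range of each point

    yaw = np.arctan2(y, x)                        # horizontal angle in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))    # vertical angle

    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down                       # total vertical field of view

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * W             # column from yaw
    v = (fov_up - pitch) / fov * H                # row from pitch

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    image = np.zeros((H, W), dtype=np.float32)
    image[v, u] = r                               # last point wins per pixel
    return image
```

The resulting (H, W) image (with one channel per projected feature in the full method) is what the U-Net consumes, turning the 3D segmentation task into ordinary 2D image segmentation.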
Original language: English
Publication status: Published - 2019
Externally published: Yes
Event: 2019 International Conference on Computer Vision - Seoul, Korea, Republic of
Duration: 27 Oct 2019 - 2 Nov 2019

Conference

Conference: 2019 International Conference on Computer Vision
Abbreviated title: ICCV 2019
Country: Korea, Republic of
City: Seoul
Period: 27/10/19 - 2/11/19

Cite this

Biasutti, P., Lepetit, V., Brédif, M., Aujol, J-F., & Bugeau, A. (2019). LU-Net: A Simple Approach to 3D LiDAR Point Cloud Semantic Segmentation. Paper presented at 2019 International Conference on Computer Vision, Seoul, Korea, Republic of.

