Auto Splats: Dynamic Point Cloud Visualization on the GPU

Reinhold Preiner, Stefan Jeschke, Michael Wimmer

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

Capturing real-world objects with laser-scanning technology has become an everyday task. Recently, the acquisition of dynamic scenes at interactive frame rates has become feasible. A high-quality visualization of the resulting point cloud stream would require a per-frame reconstruction of object surfaces. Unfortunately, reconstruction computations are still too time-consuming to be applied interactively. In this paper we present a local surface reconstruction and visualization technique that provides interactive feedback for reasonably sized point clouds, while achieving high image quality. Our method is performed entirely on the GPU and in screen space, exploiting the efficiency of the common rasterization pipeline. The approach is very general, as no assumption is made about point connectivity or sampling density. This naturally allows combining the outputs of multiple scanners in a single visualization, which is useful for many virtual and augmented reality applications.
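As a rough illustration of the kind of per-point computation the abstract describes, the following hypothetical C++ sketch estimates a single splat on the CPU: it gathers the k nearest neighbours of a point by brute force, fits a tangent plane via PCA of the local covariance, and sizes the splat by the neighbourhood radius. All names (Vec3, Splat, knn, estimateSplat) and the brute-force search are illustrative assumptions, not the paper's implementation, which performs the neighbour search and fitting in screen space on the GPU.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float len(Vec3 a)         { return std::sqrt(dot(a, a)); }

struct Splat { Vec3 center; Vec3 normal; float radius; };

// Brute-force k-nearest-neighbour indices (a CPU stand-in for the paper's
// screen-space GPU search).
static std::vector<int> knn(const std::vector<Vec3>& pts, int i, int k) {
    std::vector<int> idx;
    for (int j = 0; j < (int)pts.size(); ++j)
        if (j != i) idx.push_back(j);
    int kk = std::min<int>(k, (int)idx.size());
    std::partial_sort(idx.begin(), idx.begin() + kk, idx.end(),
        [&](int a, int b) { return len(sub(pts[a], pts[i])) < len(sub(pts[b], pts[i])); });
    idx.resize(kk);
    return idx;
}

// Estimate one splat: center at the point, normal from a PCA plane fit to the
// neighbourhood, radius from the farthest neighbour.
static Splat estimateSplat(const std::vector<Vec3>& pts, int i, int k) {
    std::vector<int> nb = knn(pts, i, k);

    // Mean of the point and its neighbours.
    Vec3 mean = pts[i];
    for (int j : nb) { mean.x += pts[j].x; mean.y += pts[j].y; mean.z += pts[j].z; }
    float inv = 1.0f / float(nb.size() + 1);
    mean = {mean.x * inv, mean.y * inv, mean.z * inv};

    // 3x3 covariance of the neighbourhood and the splat radius.
    float C[3][3] = {{0}};
    float radius = 0.0f;
    auto accumulate = [&](Vec3 p) {
        float d[3] = {p.x - mean.x, p.y - mean.y, p.z - mean.z};
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c) C[r][c] += d[r] * d[c];
    };
    accumulate(pts[i]);
    for (int j : nb) { accumulate(pts[j]); radius = std::max(radius, len(sub(pts[j], pts[i]))); }

    // Normal = eigenvector of C with the smallest eigenvalue. Since C is
    // positive semi-definite, power iteration on M = trace(C)*I - C converges
    // to that eigenvector for non-degenerate neighbourhoods.
    float tr = C[0][0] + C[1][1] + C[2][2];
    float M[3][3];
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c) M[r][c] = (r == c ? tr : 0.0f) - C[r][c];

    Vec3 n = {1.0f, 1.0f, 1.0f};
    for (int it = 0; it < 32; ++it) {
        Vec3 m = {M[0][0] * n.x + M[0][1] * n.y + M[0][2] * n.z,
                  M[1][0] * n.x + M[1][1] * n.y + M[1][2] * n.z,
                  M[2][0] * n.x + M[2][1] * n.y + M[2][2] * n.z};
        float l = len(m);
        if (l < 1e-12f) break;
        n = {m.x / l, m.y / l, m.z / l};
    }
    return {pts[i], n, radius};
}

int main() {
    // Toy input: slightly perturbed samples of the z = 0 plane.
    std::vector<Vec3> pts;
    for (int x = 0; x < 10; ++x)
        for (int y = 0; y < 10; ++y)
            pts.push_back({float(x), float(y), 0.01f * float((x * 7 + y * 3) % 5)});
    Splat s = estimateSplat(pts, 55, 8);
    std::printf("normal = (%.3f, %.3f, %.3f), radius = %.3f\n",
                s.normal.x, s.normal.y, s.normal.z, s.radius);
    return 0;
}
```

The sketch only shows the geometric idea behind each splat; the interactivity claimed in the abstract comes from carrying out the neighbour search and plane fit in screen space through the rasterization pipeline rather than on the CPU.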
Original language: English
Title of host publication: Proceedings of Eurographics Symposium on Parallel Graphics and Visualization
Editors: H. Childs, T. Kuhlen
Pages: 139-148
Number of pages: 10
Publication status: Published - 1 May 2012
Externally published: Yes

Keywords

  • point clouds, surface reconstruction, point rendering, Auto Splats, KNN search, GPU rendering, point based rendering
