Abstract
Capturing real-world objects with laser-scanning technology has become an everyday task. Recently, the acquisition of dynamic scenes at interactive frame rates has become feasible. High-quality visualization of the resulting point-cloud stream would require a per-frame reconstruction of object surfaces. Unfortunately, such reconstruction computations are still too time-consuming to be applied interactively. In this paper we present a local surface reconstruction and visualization technique that provides interactive feedback for reasonably sized point clouds while achieving high image quality. Our method is performed entirely on the GPU and in screen space, exploiting the efficiency of the common rasterization pipeline. The approach is very general, as it makes no assumptions about point connectivity or sampling density. This naturally allows combining the outputs of multiple scanners in a single visualization, which is useful for many virtual and augmented reality applications.
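A core ingredient of such point-based reconstruction pipelines is estimating a surface normal at each point from its k nearest neighbors. The following is a minimal CPU sketch of this standard step (PCA over a KNN neighborhood), assuming unstructured 3D input; it is only an illustration of the principle, not the paper's GPU screen-space implementation, and the function name and brute-force neighbor search are our own simplifications.

```python
import numpy as np

def estimate_normal(points, idx, k=8):
    """Illustrative normal estimation at points[idx] via PCA over its
    k nearest neighbors (brute-force search; the paper performs the
    KNN search in screen space on the GPU instead)."""
    d = np.linalg.norm(points - points[idx], axis=1)
    nn = points[np.argsort(d)[:k]]            # k nearest neighbors (includes the point itself)
    centered = nn - nn.mean(axis=0)
    cov = centered.T @ centered               # 3x3 covariance of the neighborhood
    w, v = np.linalg.eigh(cov)                # eigenvalues in ascending order
    return v[:, 0]                            # eigenvector of the smallest eigenvalue = normal

# Points sampled from the z = 0 plane should yield a normal close to (0, 0, +/-1).
pts = np.array([[x, y, 0.0] for x in range(4) for y in range(4)])
n = estimate_normal(pts, 5)
```

No connectivity information is needed, which matches the abstract's claim that the method makes no assumptions about point connectivity or sampling density.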
| Original language | English |
|---|---|
| Title of host publication | Proceedings of Eurographics Symposium on Parallel Graphics and Visualization |
| Editors | H. Childs, T. Kuhlen |
| Pages | 139-148 |
| Number of pages | 10 |
| Publication status | Published - 1 May 2012 |
| Externally published | Yes |
Keywords
- point clouds, surface reconstruction, point rendering, Auto Splats, KNN search, GPU rendering, point-based rendering