Point clouds are a quintessential 3D geometry representation format, and often the first model obtained from reconstruction efforts such as LIDAR scans. IVILPC aims for fast, authentic, interactive, and high-quality processing of such point-based data sets. Our project explores high-performance software rendering routines for various point-based primitives, such as point sprites, Gaussian splats, surfels, and particle systems. Beyond conventional use cases, point cloud rendering also forms a key component of point-based machine learning methods and novel-view synthesis, where performance is paramount. We will exploit the flexibility and processing power of cutting-edge GPU architecture features to formulate novel, high-performance rendering approaches. The envisioned solutions will be applicable to unstructured point clouds and enable instant rendering of billions of points. Our research targets minimally invasive compression, culling methods, and level-of-detail techniques for point-based rendering to deliver high performance and quality on demand. We explore GPU-accelerated editing of point clouds, as well as common display issues on next-generation display devices. IVILPC lays the foundation for interaction with large point clouds in conventional and immersive environments. Its goal is efficient knowledge transfer from sensor to user, with a wide range of use cases spanning image-based rendering, virtual reality (VR) technology, architecture, the geospatial industry, and cultural heritage.
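To make the compute-driven point rendering mentioned above concrete, the following CUDA sketch shows one widely used pattern for software point rasterization: each thread projects one point and resolves visibility with a 64-bit atomicMin on a value that packs depth (high bits) and color (low bits). This is an illustrative sketch under simplifying assumptions (row-major view-projection matrix, depth in [0, 1], opaque single-pixel points), not the project's implementation; the type and function names are invented for this example.

```cuda
// Minimal sketch of a compute-based point rasterizer (illustrative only).
// One thread per point: project, then depth-test via 64-bit atomicMin on a
// packed (depth | color) value. Names such as Point and rasterizePoints are
// hypothetical and do not come from the project's code base.
#include <cstdint>
#include <cuda_runtime.h>

struct Point { float x, y, z; uint32_t rgba; };

__global__ void rasterizePoints(const Point* points, int numPoints,
                                const float* viewProj,   // 4x4, row-major
                                uint64_t* framebuffer,   // width * height entries
                                int width, int height)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= numPoints) return;

    Point p = points[idx];

    // Transform to clip space: c = viewProj * (x, y, z, 1)
    float cx = viewProj[0]  * p.x + viewProj[1]  * p.y + viewProj[2]  * p.z + viewProj[3];
    float cy = viewProj[4]  * p.x + viewProj[5]  * p.y + viewProj[6]  * p.z + viewProj[7];
    float cz = viewProj[8]  * p.x + viewProj[9]  * p.y + viewProj[10] * p.z + viewProj[11];
    float cw = viewProj[12] * p.x + viewProj[13] * p.y + viewProj[14] * p.z + viewProj[15];
    if (cw <= 0.0f) return;                       // behind the camera

    // Perspective divide and viewport transform
    float ndcX = cx / cw, ndcY = cy / cw;
    int px = (int)((ndcX * 0.5f + 0.5f) * width);
    int py = (int)((ndcY * 0.5f + 0.5f) * height);
    if (px < 0 || px >= width || py < 0 || py >= height) return;

    // Pack depth into the high 32 bits and color into the low 32 bits, so that
    // atomicMin keeps the closest point (and its color) per pixel. This relies
    // on the depth being non-negative (assumed NDC depth in [0, 1]), where the
    // IEEE float bit pattern preserves ordering under unsigned comparison.
    uint32_t depthBits = __float_as_uint(cz / cw);
    uint64_t packed = ((uint64_t)depthBits << 32) | p.rgba;
    atomicMin((unsigned long long*)&framebuffer[py * width + px],
              (unsigned long long)packed);
}
```

A host-side frame loop would clear the 64-bit buffer to 0xFFFFFFFFFFFFFFFF, launch the kernel with one thread per point, and then read the low 32 bits of each entry as the pixel color. The compression, culling, and level-of-detail machinery described above builds on top of such a basic resolve step.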
Funding
- WWTF Wiener Wissenschafts-, Forschungs- und Technologiefonds (Vienna Science and Technology Fund)
Research Areas
- In this area, we concentrate on algorithms that synthesize images to depict 3D models or scenes, often by simulating or approximating the physics of light.
- This area uses concepts from applied mathematics and computer science to design efficient algorithms for the reconstruction, analysis, manipulation, simulation, and transmission of complex 3D models. Example applications are collision detection, reconstruction, compression, occlusion-aware surface handling, and improved sampling conditions.
- In this area, we focus on user experiences and rendering algorithms for virtual reality environments, including methods to navigate and collaborate in VR, foveated rendering, techniques that exploit human perception, and the simulation of visual deficiencies.
Publications
Bib Reference | Publication Type |
---|---|
2024 | |
Panagiotis Papantonakis, Georgios Kopanas, Bernhard Kerbl, Alexandre Lanvin, George Drettakis. Reducing the Memory Footprint of 3D Gaussian Splatting. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 7(1):1-17, May 2024. | Journal Paper (without talk) |
Markus Schütz, Lukas Herzberger, Michael Wimmer. SimLOD: Simultaneous LOD Generation and Rendering. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 7:20-20, May 2024. | Journal Paper with Conference Talk |
Johannes Unterguggenberger, Lukas Lipp, Michael Wimmer, Bernhard Kerbl, Markus Schütz. Fast Rendering of Parametric Objects on Modern GPUs. In EGPGV24: Eurographics Symposium on Parallel Graphics and Visualization, May 2024. | Conference Paper |
Annalena Ulschmid, Bernhard Kerbl, Katharina Krösl, Michael Wimmer. Real-Time Editing of Path-Traced Scenes with Prioritized Re-Rendering. In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - GRAPP and VISIGRAPP, pages 46-57, 2024. | Conference Paper |
2023 | |
Philip Voglreiter, Bernhard Kerbl, Alexander Weinrauch, Joerg Hermann Mueller, Thomas Neff, Markus Steinberger, Dieter Schmalstieg. Trim Regions for Online Computation of From-Region Potentially Visible Sets. ACM Transactions on Graphics, 42(4):1-15, August 2023. | Journal Paper (without talk) |