Unterguggenberger, J. (2025). Fast rendering of highly detailed geometry in real time with modern GPUs [Dissertation, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2025.132181
E193 - Institute of Visual Computing and Human-Centered Technology
Date (published):
2025
Number of Pages:
135
Keywords:
real-time rendering; GPUs; levels of detail
Abstract:
Rasterization-based graphics pipelines remain essential for rendering today's real-time applications and games, and demand for efficient rasterization-based rendering techniques remains high. With alternative approaches, such as hardware-accelerated ray tracing, it is still challenging to render at more than 60 frames per second (FPS) for many real-time applications across different GPU models, or at more than 90 FPS in stereo, as is often required for a smooth experience in Virtual Reality (VR) applications.

In recent years, several trends have emerged that put pressure on rasterization-based graphics pipelines with high geometry loads. One of these trends is VR rendering, which not only requires rendering a given scene faster and twice in every frame, but in some applications or settings requires even more than two views to be rendered for the creation of a single frame. Another trend was initiated mainly by Epic Games' Nanite technology, which enables the rendering of static meshes with sub-pixel geometric detail in real time. As a consequence, skinned models and other scene objects may well be expected to be rendered in similar geometric detail, increasing the geometry load even further.

With this dissertation, we contribute fundamental methods and evaluations for high geometry-load scenarios in the context of real-time rendering with rasterization-based graphics pipelines, helping to reach the performance and quality requirements of modern real-time rendering applications and games: We contribute an in-depth analysis of the state of the art in multi-view rendering and introduce geometry shader-based pipeline variants that can improve compatibility and performance in challenging multi-view rendering scenarios. We describe a fundamental approach to artifact-free culling when rendering animated 3D models divided into clusters for ultra-detailed geometry scenarios. With our approach, parts of skinned models can also be culled in a fine-grained manner, matching Nanite's fine-grained culling of static clusters. In contrast to static meshes, finding conservative bounds for clusters of animated meshes is non-trivial, but our approach achieves it. Finally, in order to render other scene objects (such as items or, generally, shapes that can be described by a parametric function) in similar geometric detail, we describe a method to generate ultra-detailed geometry on the fly: after compute shader-based level-of-detail (LOD) determination, the resulting parametrically defined shapes are either rendered point-wise or geometry is generated on-chip using the hardware tessellator.

In our research, we consider new technological developments such as hardware-accelerated multi-view rendering, the new task and mesh shader stages, efficient usage of classical shader stages (such as tessellation shaders), and, in general, efficient usage of the vast set of features, stages, and peculiarities of modern GPUs, with the goal of accelerating real-time rendering of ultra-detailed geometry.
Additional information:
Thesis not yet received by the library; data not verified. Title differs following the author's translation.