Towards a High Quality Real-Time Graphics Pipeline
Doctoral Dissertation 2011,
Department of Computer Science
Lund University
Abstract
Modern graphics hardware pipelines create photorealistic images with high geometric complexity
in real time. The quality is constantly improving and advanced techniques from feature film visual
effects, such as high dynamic range images and support for higher-order surface primitives, have
recently been adopted. These visual effect techniques come with large computational costs and
significant memory bandwidth usage.
In this thesis, we identify three problem areas and propose new algorithms that increase the performance
of a set of computer graphics techniques. Our main focus is on efficient algorithms for the real-time
graphics pipeline, but parts of our research are equally applicable to offline rendering.
Our first focus is texture compression, a technique for reducing memory bandwidth usage.
The core idea is to store images in small compressed blocks which are sent over the memory bus
and are decompressed on-the-fly when accessed. We present compression algorithms for two types of
texture formats. High dynamic range images capture environment lighting with luminance differences
over a wide intensity range. Normal maps store perturbation vectors for local surface normals, and
give the illusion of high geometric surface detail. Our compression formats are tailored to these
texture types and have compression ratios of 6:1, high visual fidelity, and low-cost decompression
logic.
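
To make the access pattern concrete, the sketch below shows how a texel fetch from a block-compressed texture only needs to decode its own block. The block layout (a shared base luminance plus small per-texel offsets) and all names are hypothetical and chosen for brevity; this is not one of the HDR or normal map formats proposed in the thesis.

// Minimal sketch of on-the-fly block decompression (hypothetical layout,
// not the formats proposed in the thesis). Each 4x4 texel block stores a
// base luminance and 2-bit per-texel offsets, so a single texel fetch
// only needs to decode its own block.
#include <cstdint>

struct CompressedBlock {          // 8 bytes per 4x4 block (illustrative)
    uint16_t baseLuminance;       // shared base value for the block
    uint8_t  scale;               // step size for the per-texel offsets
    uint8_t  pad;
    uint32_t offsets;             // 16 texels x 2 bits each
};

// Decode one texel (x, y) from a tiled, block-compressed texture.
float fetchTexel(const CompressedBlock* blocks, int width, int x, int y)
{
    const int blocksPerRow = width / 4;
    const CompressedBlock& b = blocks[(y / 4) * blocksPerRow + (x / 4)];

    const int texelIndex = (y % 4) * 4 + (x % 4);
    const int offset = (b.offsets >> (2 * texelIndex)) & 0x3;   // 0..3

    // Reconstruct: base plus a signed step, mapping {0,1,2,3} -> {-2,-1,+1,+2}.
    static const int kSteps[4] = { -2, -1, 1, 2 };
    return b.baseLuminance + kSteps[offset] * static_cast<int>(b.scale);
}

Because every block decodes independently with a handful of integer operations, the decompression logic can sit directly in the texture fetch path, which is what the abstract refers to as low-cost decompression.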
Our second focus is tessellation culling. Culling is a commonly used technique in computer
graphics for removing work that does not contribute to the final image, such as completely hidden
geometry. By discarding rendering primitives from further processing, substantial arithmetic computations
and memory bandwidth can be saved. Modern graphics processing units include flexible tessellation
stages, where rendering primitives are subdivided for increased geometric detail. Images with highly
detailed models can be synthesized, but the incurred cost is significant. We have devised a simple
remapping technique that allows for a better tessellation distribution in screen space. Furthermore,
we present programmable tessellation culling, where bounding volumes for displaced geometry are computed
and used to conservatively test if a primitive can be discarded before tessellation. We introduce a
general tessellation culling framework, and an optimized algorithm for rendering of displaced Bézier
patches, which is expected to be a common use case for graphics hardware tessellation.
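
As an illustration of the conservative test, the sketch below bounds a displaced Bézier patch by the axis-aligned box of its control points (a valid bound, since a Bézier patch lies inside the convex hull of its control points), inflates the box by the maximum displacement, and discards the patch if the box falls entirely outside the view frustum. The data structures and the frustum-only test are simplifying assumptions; the thesis develops tighter bounds and a more general culling framework.

// Minimal sketch of conservative pre-tessellation culling (hypothetical
// structures; tighter bounds are possible for displaced Bézier patches).
#include <algorithm>
#include <array>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // plane: dot(n, p) + d >= 0 means inside

// Conservative test: bound the patch by the AABB of its 16 control points,
// inflated by the maximum displacement, and cull it if the box is entirely
// outside any frustum plane.
bool cullDisplacedPatch(const std::array<Vec3, 16>& controlPoints,
                        float maxDisplacement,
                        const std::array<Plane, 6>& frustum)
{
    Vec3 lo = controlPoints[0], hi = controlPoints[0];
    for (const Vec3& p : controlPoints) {
        lo = { std::min(lo.x, p.x), std::min(lo.y, p.y), std::min(lo.z, p.z) };
        hi = { std::max(hi.x, p.x), std::max(hi.y, p.y), std::max(hi.z, p.z) };
    }
    lo = { lo.x - maxDisplacement, lo.y - maxDisplacement, lo.z - maxDisplacement };
    hi = { hi.x + maxDisplacement, hi.y + maxDisplacement, hi.z + maxDisplacement };

    for (const Plane& pl : frustum) {
        // Pick the box corner farthest along the plane normal; if even that
        // corner is outside, the whole patch is outside and can be discarded.
        const Vec3 farCorner = { pl.n.x >= 0 ? hi.x : lo.x,
                                 pl.n.y >= 0 ? hi.y : lo.y,
                                 pl.n.z >= 0 ? hi.z : lo.z };
        if (pl.n.x * farCorner.x + pl.n.y * farCorner.y + pl.n.z * farCorner.z + pl.d < 0)
            return true;   // conservatively outside: skip tessellation
    }
    return false;          // potentially visible: tessellate as usual
}

In a hardware tessellation pipeline, a positive result here would typically be acted on by setting the patch's tessellation factors to zero, so that no further work is generated for the culled primitive.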
Our third and final focus is forward-looking, and relates to efficient algorithms for stochastic
rasterization, a rendering technique where camera effects such as depth of field and motion blur can
be faithfully simulated. We extend a graphics pipeline with stochastic rasterization in spatio-temporal
space and show that stochastic motion blur can be rendered with rather modest pipeline modifications.
Furthermore, backface culling algorithms for motion blur and depth of field rendering are presented,
which are directly applicable to stochastic rasterization. Hopefully, our work in this field brings
us closer to high quality real-time stochastic rendering.
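
As a small illustration of the underlying sampling idea (not the pipeline modifications proposed in the thesis), the sketch below gives each visibility sample a random shutter time, interpolates the triangle's vertex positions to that time, and applies an ordinary 2D edge test; averaging the hits over many samples yields motion-blurred coverage. All names, the fixed random seed, and the uniform (unstratified) time sampling are placeholder assumptions.

// Minimal sketch of stochastic motion blur sampling (illustrative only).
#include <random>

struct Vec2 { float x, y; };

// Signed area of (a, b, p); >= 0 when p is to the left of edge a->b,
// so all three edge tests pass for points inside a counter-clockwise triangle.
static float edge(const Vec2& a, const Vec2& b, const Vec2& p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

static Vec2 lerp(const Vec2& p0, const Vec2& p1, float t)
{
    return { p0.x + t * (p1.x - p0.x), p0.y + t * (p1.y - p0.y) };
}

// Fraction of samples covered by a moving triangle at pixel center (px, py).
// Each vertex is given as its position at shutter open ([0]) and close ([1]).
float coverage(const Vec2 v0[2], const Vec2 v1[2], const Vec2 v2[2],
               float px, float py, int samplesPerPixel)
{
    std::mt19937 rng(12345);                       // placeholder seed
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);

    int hits = 0;
    for (int s = 0; s < samplesPerPixel; ++s) {
        const float t = uniform(rng);              // random shutter time in [0,1)
        const Vec2 a = lerp(v0[0], v0[1], t);      // vertex positions at time t
        const Vec2 b = lerp(v1[0], v1[1], t);
        const Vec2 c = lerp(v2[0], v2[1], t);
        const Vec2 p = { px, py };
        if (edge(a, b, p) >= 0 && edge(b, c, p) >= 0 && edge(c, a, p) >= 0)
            ++hits;
    }
    return static_cast<float>(hits) / samplesPerPixel;
}

These per-sample inside tests are exactly the work that the culling algorithms mentioned above aim to avoid for primitives that cannot contribute to the image.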
Downloads
PhD Thesis [74 MB]