Two Hybrid Methods of Volumetric Lighting

MSc Thesis
Lund University

Anders Jakob Nilsson
Illuminate Labs


Advisor: Petrik Clarberg

Completed: 2008-03-04



This thesis presents two new viable approaches to accelerating the visualization of light shafts from a point light source. A coarse representation of the three-dimensional scene is used to partition space into regions that are fully lit, regions that are fully in shadow, and regions that cannot be classified from the coarse representation alone. The two methods then visualize each region as efficiently as possible. The first method relies entirely on sampling, but skips many of the evaluations required by previous sampling-based methods. The second method uses an analytical solution in the fully lit region, drastically reducing the fillrate required by the algorithm.



A light source in a computer scene emits photons that reach a camera from different directions. The received photons give rise to an image that is displayed to the user. Interactive applications often use approximations that roughly correspond to the assumption that photons travel uninterrupted in straight lines through the air.

In reality, however, the air is filled with tiny particles such as dust particles and water droplets, which interact with the photons, slightly altering their course. On a foggy day light can travel around an object, resulting in a halo around its silhouette. The cone of a light source will also be visible, and whenever an object blocks the path of the light, the absence of the light cone will be seen as a volumetric shadow.

Much like shadows cast on a floor give clues to where objects are placed in the world, volumetric shadows give subtle clues to how objects are placed relative to each other. They also convey the properties of the medium the objects reside in. In a computer-generated image depicting a clear day, volumetric lighting might not add much to the image; in a foggy environment, however, it greatly adds to the realism.

While volumetric lighting is clearly worth striving for, it is not obvious how to achieve it while keeping the application interactive. Crude approximations are still necessary. One such approximate light model is described by:

  1. Photons are absorbed based on how far they travel along a line.
  2. Each photon can only be diverted from its path once.

Using only assumption (1) gives rise to a very simple fog model. Adding assumption (2) gives us volumetric light and volumetric shadows.
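
Assumption (1) corresponds to the Beer-Lambert law: over a distance d through a homogeneous medium with absorption coefficient sigma, a fraction exp(-sigma * d) of the photons survives. A minimal sketch of this fog model, with an illustrative coefficient value not taken from the thesis:

```python
import math

def transmittance(distance, sigma):
    """Fraction of photons surviving `distance` through a homogeneous
    medium with absorption coefficient `sigma` (Beer-Lambert law)."""
    return math.exp(-sigma * distance)

# With sigma = 0.1 per metre, roughly 37% of the light survives 10 m.
print(transmittance(10.0, 0.1))
```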

Previous Methods

A popular method that uses the assumptions depicted above is the sample-based method introduced by Dobashi et al. Several virtual planes are rendered at different depths with each rendered pixel approximating a small volumetric element.

First, the amount of light reaching that element from a light source is calculated, taking absorption into account. Second, the amount of that light that is diverted towards the camera, within the volumetric element, is calculated in a way that is consistent with the medium.

When evaluating whether the photons from a light source reach the volumetric element along a straight line, shadow mapping is used. The same shadow map can later be reused when calculating shadows on surfaces.
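
The per-ray computation can be sketched as follows, assuming a homogeneous medium, an isotropic phase function, and unit light intensity; the `in_shadow` predicate stands in for the shadow-map lookup, and all names here are illustrative rather than taken from the thesis:

```python
import math

def accumulate_inscatter(ray_samples, light_pos, sigma, in_shadow):
    """Sum the in-scattered light along one camera ray, sampled at its
    intersections with the virtual planes (`ray_samples` is a list of
    (x, y, z) points).  `in_shadow(point)` plays the role of the
    shadow-map test."""
    total = 0.0
    for p in ray_samples:
        if in_shadow(p):
            continue  # the light source is blocked at this sample
        d_light = math.dist(p, light_pos)
        # Light surviving the trip from the source to the sample ...
        reaching = math.exp(-sigma * d_light)
        # ... of which a fraction is scattered towards the camera.
        total += reaching * sigma
    return total
```

With no occluders (`in_shadow` always false), every sample contributes; the shadow map simply zeroes out the contributions of blocked samples.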

Like most sampling-based approaches, the method is very easy to implement, but it is not very efficient. Many planes must be used to obtain good quality, which means the GPU must render many screen-sized polygons.

Our Approach

By clipping the virtual planes to the influence volume of the light source, the number of pixels that must be processed is somewhat reduced. The total number of pixels that must be rendered, however, remains very large.

Our approach takes this idea, as presented by Mitchell, one step further. Each object is enclosed in a simple convex polyhedral bounding volume. A Beam Tree is constructed from the position of the light source, telling us which parts of the objects' bounding volumes are visible from the light source. This yields a decomposition of the light cone's full bounding volume into many pyramids, each with the light source at its apex and a visible face of a bounding volume as its base.

Inside each such pyramid, every shadow-map lookup would report the sample as lit. The virtual planes can therefore be clipped so that the sampling method is only applied outside these fully lit pyramids.
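
The classification step can be illustrated by a point-in-pyramid test: a sample lies inside the infinite pyramid if it is on the inner side of every side plane spanned by the apex and two consecutive base vertices. The following is a hedged sketch, not the thesis's implementation; it assumes the base vertices are listed in a consistent winding order so that the computed side-plane normals point inward:

```python
def inside_pyramid(point, apex, base_verts):
    """Test whether `point` lies inside the infinite pyramid with its
    apex at `apex` and side planes through consecutive `base_verts`.
    Samples inside a fully lit pyramid need no shadow-map lookup."""
    def sub(a, b):   return tuple(x - y for x, y in zip(a, b))
    def dot(a, b):   return sum(x * y for x, y in zip(a, b))
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0])
    n = len(base_verts)
    for i in range(n):
        # Inward normal of the side plane through apex, v_i, v_{i+1}.
        e1 = sub(base_verts[i], apex)
        e2 = sub(base_verts[(i + 1) % n], apex)
        normal = cross(e1, e2)
        if dot(normal, sub(point, apex)) < 0.0:
            return False  # outside this side plane
    return True
```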

The contribution from the volume inside the pyramids can be obtained using an analytic method that is presented in the thesis.
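
The thesis's analytic solution is not reproduced here, but the idea of replacing sampling with a closed-form integral can be illustrated in a simplified setting where only extinction along the view ray matters. The in-scatter over a fully lit segment [a, b] then integrates to exp(-sigma*a) - exp(-sigma*b), which the plane-sampling estimate converges to as the number of planes grows (both functions below are illustrative):

```python
import math

def analytic_segment(a, b, sigma):
    """Closed-form in-scatter over the fully lit ray segment [a, b],
    under the simplifying assumption that only extinction along the
    view ray matters: the integral of sigma * exp(-sigma * t) dt."""
    return math.exp(-sigma * a) - math.exp(-sigma * b)

def sampled_segment(a, b, sigma, n):
    """The same integral estimated with n virtual-plane samples
    (midpoint rule), for comparison against the analytic form."""
    dt = (b - a) / n
    return sum(sigma * math.exp(-sigma * (a + (i + 0.5) * dt)) * dt
               for i in range(n))
```

The analytic form evaluates the whole segment with a constant amount of work, which is the source of the fillrate savings: no plane-by-plane rendering is needed inside the fully lit region.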


PDF Thesis


Interactive editor to visualize light shafts.