
4.9.14.1 About Volumetric Shadows

More information about the methods implemented and tested for the volumetric shadows render manager can be found on this blog: http://volumetricshadows.wordpress.com/. The volumetric render manager was initially going to be implemented using Opacity Shadow Maps (which is where the name osm_rm comes from), but this algorithm has severe drawbacks, so other methods, such as Deep Opacity Maps and Bounding Opacity Maps, were tested as well. The following sections briefly describe the algorithm, advantages, and disadvantages of each tested method.

Opacity Shadow Maps

Algorithm

This method is based on T.Y. Kim and U. Neumann's paper Opacity Shadow Maps from 2001. The algorithm consists of slicing the geometry with planes perpendicular to the light's direction and rendering the slices to texture, as shown in Figure 1. Instead of storing information about the geometry's depth, like a shadow map, these textures store density information by accumulating the alpha values of the geometry encountered so far, yielding the opacity function shown at the bottom of Figure 1.

[Image: usingcs/engine/volumetric/OSM]

Figure 1 Computing the opacity function.
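The accumulation described above can be sketched in a few lines of Python. This is a minimal CPU-side illustration of the idea, not the engine's actual shader code; the function names and the exponential attenuation constant `k` are illustrative assumptions.

```python
import bisect
import math

def build_opacity_maps(fragments, slice_depths):
    """Accumulate the alpha of all geometry encountered up to each
    slicing plane, as seen from the light.  `fragments` is a list of
    (depth, alpha) samples along one light ray; `slice_depths` holds the
    depths of the slicing planes, sorted front to back."""
    maps = []
    acc = 0.0
    frags = sorted(fragments)
    i = 0
    for d in slice_depths:
        # add every fragment that lies in front of this slicing plane
        while i < len(frags) and frags[i][0] <= d:
            acc += frags[i][1]
            i += 1
        maps.append(acc)
    return maps

def transmittance(depth, slice_depths, opacity_maps, k=1.0):
    """Look up the accumulated opacity at `depth` by interpolating
    between the two enclosing maps, then attenuate exponentially."""
    j = bisect.bisect_left(slice_depths, depth)
    if j == 0:
        tau = opacity_maps[0] * (depth / slice_depths[0])
    elif j >= len(slice_depths):
        tau = opacity_maps[-1]
    else:
        d0, d1 = slice_depths[j - 1], slice_depths[j]
        t = (depth - d0) / (d1 - d0)
        tau = opacity_maps[j - 1] + t * (opacity_maps[j] - opacity_maps[j - 1])
    return math.exp(-k * tau)
```

In the real render manager each map is a texture covering the whole scene as seen from the light; the sketch collapses that to a single ray to show how the opacity function of Figure 1 is built and sampled.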

Advantages

This method has two main advantages: it is easy to implement, especially starting from a shadow map render manager, and it is fast to compute, since it only involves rendering the scene to texture multiple times, without any further computations.

Disadvantages

The drawback of this algorithm is that it requires a substantial number of opacity maps in order to produce artifact-free renderings. For instance, in Figure 2.a the scene is rendered using only 16 maps and the artifacts are clearly visible, while in Figure 2.b, which uses 64 maps, the artifacts become smaller and less noticeable.

[Image: usingcs/engine/volumetric/osm_16_64]

Figure 2 Difference in rendering for Opacity Shadow Maps with 16 maps (a) and 64 maps (b).

Deep Opacity Maps

Algorithm

Deep Opacity Maps were introduced in 2008 by C. Yuksel and J. Keyser. They remove the artifacts of Opacity Shadow Maps by aligning the maps with the initial shape of the object as seen from the light's perspective. This is done by first computing a depth map and then distributing the opacity maps based on that information (Figure 3).

[Image: usingcs/engine/volumetric/dom]

Figure 3 Difference in distributing the opacity maps in Opacity Shadow Maps (a) and Deep Opacity Maps (b).
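The key difference from Opacity Shadow Maps is where the layers start. A minimal sketch of the per-texel layer placement, under the assumption of evenly spaced layers over a fixed range behind the first surface (the function name and parameters are illustrative, not the engine's API):

```python
def dom_layer_depths(z_start, light_range, num_layers):
    """Distribute opacity layers starting at the first surface the light
    hits (z_start, read from the depth map computed in the first pass),
    rather than at the light itself.  Each texel gets its own starting
    depth, so the layers follow the object's initial shape."""
    step = light_range / num_layers
    return [z_start + (i + 1) * step for i in range(num_layers)]
```

For example, a texel whose first surface lies at depth 2.0, with a layer range of 1.0 split into 4 layers, gets layer boundaries at 2.25, 2.5, 2.75 and 3.0; a neighbouring texel with a different depth-map value gets correspondingly shifted boundaries.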

Advantages

The advantage of this method is that it manages to use only a few layers to generate renderings without the visible layering artifacts seen with Opacity Shadow Maps. It thus represents a compromise between performance and visuals, achieving renderings without any significant artifacts while maintaining good frame rates.

Disadvantages

Although the layering artifacts are removed, artifacts may appear at the far end of the object when a very small number of layers is used (Figure 4.a uses only 4 maps). This can be solved either by increasing the number of maps (Figure 4.b uses 16 maps) or by distributing the 4 maps better, as described in the following section.

[Image: usingcs/engine/volumetric/dom_4_16]

Figure 4 Difference in rendering for Deep Opacity Maps with 4 maps (a) and 16 maps (b).

Bounding Opacity Maps

Algorithm

The volumetric shadow render manager uses a novel approach for distributing the maps, described in Bounding Opacity Maps. This method consists of computing two depth maps from the light's perspective, storing information about both the initial and the final shape of the object (Figure 5). Furthermore, the maps are distributed according to the objects' density, using a distribution that varies from logarithmic to linear.

[Image: usingcs/engine/volumetric/bom]

Figure 5 A translucent full sphere as seen in real-life (a), the distribution of layers when using Deep Opacity Maps (b) and Bounding Opacity Maps (c).
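A distribution that varies from logarithmic to linear can be sketched as a simple blend between the two spacings inside the bounding depth range. This is a hypothetical illustration of the idea only; the exact blending function, and how the density parameter is derived from the scene, are assumptions and not the render manager's actual formula.

```python
import math

def bom_layer_depths(z_front, z_back, num_layers, density):
    """Place layers between the front and back depth maps.  `density`
    in [0, 1] blends the spacing from logarithmic (0, layers crowded
    near the lit surface, where opacity changes fastest) to linear
    (1, evenly spaced layers)."""
    depths = []
    for i in range(1, num_layers + 1):
        t = i / num_layers
        log_t = math.log1p(t * (math.e - 1))  # normalized: equals 1 at t = 1
        f = (1.0 - density) * log_t + density * t
        depths.append(z_front + f * (z_back - z_front))
    return depths
```

Because both depth maps bound the placement, no layers are wasted in empty space in front of or behind the object, which is what allows realistic results with very few layers.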

Advantages

Because this method tends to follow the real-world distribution of light (Figure 5.a), by computing two depth maps and distributing the maps according to the objects' density, realistic renderings are obtained with just a few layers.

Disadvantages

The main drawback of this algorithm is that it requires more computations, mainly because it needs to compute two depth maps. Furthermore, the maps are distributed according to the overall density of the scene, not individually per object, and the maps' distribution can't be recomputed in real time because it is currently done on the CPU.

Performance

Figure 6 plots performance, measured in FPS (frames per second), against the number of maps / layers used, for the three algorithms presented in this section. When interpreting these results, it is important to keep in mind that even though Bounding Opacity Maps have the worst performance, they also require only a few layers to produce realistic renderings.

[Image: usingcs/engine/volumetric/FPS]

Figure 6 The variation of FPS with the number of layers for Opacity Shadow Maps (red), Deep Opacity Maps (green) and Bounding Opacity Maps (blue).



This document was generated using texi2html 1.76.