Permalink 09:49:19 pm, by Olliebrown Email , 683 words, 7092 views   English (US)
Categories: GSoC 2009, Misc.

Productive Fun

I've been busy for a few days with a rather boring user study, and the results were less than great ... so I needed a pick-me-up. For me, that means some creative work. Hence, I decided to spend a little time modeling a new world to test my lighter2 changes: something considerably more complex than the Cornell box and 'Construction in Wood', and something that would logically include every type of light (point, directional, and spot). It's fun because it's art (in a way ... right?), and along the way I learn Blender better and can refine my workflow for getting things into CrystalSpace's XML world format.

Anyways, here's what I've created:

Sponza World in CrystalSpace with runtime lighting only
The Sponza Atrium with Stanford models scattered around the main floor (Lucy, the angel statue, is front and center). No static lighting is present yet, so shadows are missing.

It's a combination of several canonical models from the graphics community (from Stanford specifically ... yes, the bunny is in there), all placed in the Sponza Atrium model that's been appearing in global illumination papers for the past several years. The atrium is texture mapped (and has bump maps somewhere that I have yet to track down). The Stanford meshes are incredibly dense: 100K faces each AFTER I decimated them; they had millions in their original form, some tens of millions. So this is an intense little scene to render, and particularly tough to run through lighter2. It's a more realistic example and hopefully will test just how robust the system is. Consequently, it takes a VERY long time to light. Most of the time is spent laying out the lightmaps (not raytracing or photon mapping), but be prepared to wait a while if you want to re-calc the maps; put it on a machine that you can let run for potentially a few days. I'll post some more images as lighter2 finishes running.

Blender model: I set up the world in Blender, and here is the file for that. It took a lot to get all of this into Blender. Here's a breakdown of what I did:

  • I started with the 3ds model available here
  • Loaded the 3ds model into Wings3D and exported it to obj (to preserve textures)
  • Loaded the obj into Blender using the standard obj importer
  • Grabbed the different Stanford ply models from here, except lucy.ply, which came from here; the one at the Stanford site is too big to fit in memory (even 4G of it plus a huge swap file).
  • Loaded each ply model into MeshLab and decimated it to 100K faces (except the bunny, which was already small enough)
  • Exported each model to obj in MeshLab and imported it into the Blender Sponza model
  • After positioning the meshes and adding a pedestal for each, I selected each mesh and set its faces to smooth, which auto-computes normals (the original meshes were point scans and as such do not have normals in the ply files).
  • Added spotlights by hand and tweaked the fill lighting and camera positions (there are two different places to start in this world).
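For reference, the 'set smooth' step works because vertex normals can be recovered by averaging the normals of the adjacent faces. A minimal sketch of that idea (my own code, not Blender's):

```python
import math

def vertex_normals(vertices, faces):
    """Per-vertex normals from summed adjacent face normals.

    The un-normalized cross product of two triangle edges has length
    proportional to the face area, so summing raw cross products gives
    an area-weighted average automatically.
    """
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for i0, i1, i2 in faces:
        fn = cross(sub(vertices[i1], vertices[i0]),
                   sub(vertices[i2], vertices[i0]))
        for idx in (i0, i1, i2):
            for k in range(3):
                normals[idx][k] += fn[k]
    out = []
    for n in normals:
        length = math.sqrt(sum(c * c for c in n)) or 1.0
        out.append(tuple(c / length for c in n))
    return out
```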

The model is still missing several textures, and when you export it there will be several errors that need to be fixed by hand. These are all present in the original 3ds file and were easier to fix by editing the world file directly. (15.2 MB, hosted externally)

Material fixes for the world file: Here are the extra textures and a snippet of XML that can be pasted into the world file over its 'textures' and 'materials' sections to fix all the errors created by Blender2Crystal. Note that the bump maps are still missing; I can't seem to find these on the net anywhere. (499 KB, hosted externally)

CrystalSpace world: Here's the exported world with the material and texture fixes already applied. (16.4 MB, hosted externally)

Static-lit CS world: Here's the same world with lightmaps included, so you don't have to wait for the raytracing to finish. This is direct raytracing only for now; a photon-mapped version is forthcoming.

[Still rendering, will post soon]


Permalink 10:23:35 am, by Olliebrown Email , 367 words, 13879 views   English (US)
Categories: GSoC 2009, Misc.

Irradiance Cache

Lots of posts today, sorry for that. This one will be short.

So, Final Gather is slow as molasses, and furthermore it doesn't seem to be working right in my code. But it is very important for smoothing out noise in the photon map. Enter the irradiance cache.

The irradiance cache is a concept introduced by Greg Ward and colleagues back in 1988, and I believe it is part of Ward's Radiance renderer. The paper that describes it in detail (and which is surprisingly easy to read) can be found here (hint to Scott: check this out):

The basic idea is that diffuse inter-reflections are very hard to compute using Monte Carlo methods (and Final Gather is a Monte Carlo method), but by and large they are very uniform and slowly changing across a surface (i.e. a good candidate for interpolation). So we would like to reduce the number of times the full computation is needed and interpolate everywhere else.

Ward describes a two-tiered method: the primary method is the standard Monte Carlo computation, and the secondary method interpolates cached values from previous computations. The secondary method needs to know when it can interpolate and how to weight the cached values. This is done by approximating the gradient of irradiance between the cached point and the point we need irradiance for. If the gradient estimate is too high, a new value is needed. Otherwise, we can weight the cached value by the inverse of the error inherent in the estimate and get a very good approximation that is considerably cheaper than the Monte Carlo method.
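A sketch of the secondary (interpolation) method as I understand Ward's paper; the names, threshold value, and brute-force linear scan are mine, not lighter2's:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def interpolate_irradiance(point, normal, cache, max_error=0.2):
    """Ward-style irradiance interpolation.

    cache: list of (pos, normal, harmonic_mean_dist, irradiance) samples.
    A cached sample contributes only when its error estimate is below
    max_error; returns None when no sample is usable, meaning the caller
    must do a full Monte Carlo computation and store the new result.
    """
    num = 0.0
    den = 0.0
    for s_pos, s_nrm, s_r, s_e in cache:
        # Error grows with distance (relative to the sample's mean
        # distance to surrounding geometry) and with normal divergence.
        err = (dist(point, s_pos) / s_r +
               math.sqrt(max(0.0, 1.0 - dot(normal, s_nrm))))
        if err < max_error:
            w = 1.0 / max(err, 1e-9)   # weight = inverse of the error
            num += w * s_e
            den += w
    return num / den if den > 0.0 else None
```

In the real thing the linear scan is replaced by an octree query, which is exactly the detail discussed below.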

The devil's in the details, and I won't bore you with them (other than to say they involve an octree built around the valid distance of each cached sample). But with Jensen's summary of Ward's paper (and the paper itself) I think I've got the irradiance cache implemented. It needs some testing and such (still to come), but hopefully this will help with both noise and speed ... provided I can then figure out what is wrong with the Final Gather!

I'd kill for some smaller unit tests right now, but we'll see if we need them first.

Permalink 03:55:07 am, by Olliebrown Email , 466 words, 1956 views   English (US)
Categories: GSoC 2009, Planning Progress, Misc.

Taking Inventory

To round out my previous post concerning GSoC, I thought I should list the changes I have made and how the milestones are lining up. First, the changes themselves:

  • Reimplemented the photon map data structure from the ground up.
  • Lots of changes to the photon emission code to make it physically accurate.
  • Code to emit from spotlights in place but untested.
  • Refactoring of the indirect-light and direct-light classes into independent raytracer and photon mapper engines.
  • A new structure (LightCalculator) that will loop through the sector elements and add up the components from each engine (raytracer and photon mapper).
  • Many new command line options for tweaking the photon mapper and calibrating it with the raytracer.
  • Filtering in the photon map to help eliminate artifacts and smooth out noise (still needs work).
  • A new final gather implementation (still needs work).
  • The beginnings of an irradiance cache to speed up final gather.
  • Updates to the texinfo documentation for lighter2 (still needs work).
  • A new version of the Cornell box model (with more lights and fixed colors).
  • A brand new model of 'Construction in Wood' (another example of color bleed).

And where this leaves things:
  • Photon mapper is working and, although noisy and not artifact-free, giving usable results.
  • Photon mapper is an order of magnitude faster.
  • New rendering engines can be implemented more easily in the future using the LightCalculator/LightComponent classes.
  • Work has already begun to further eliminate noise (final gather), speed things up even more (irradiance cache), and remove artifacts (Gaussian filter).

Despite the noise and artifacts still present, I think it's safe to say Milestone 1 is complete (or nearly so). Here's a list of things still to finish/implement.

Immediate priorities (considered unfinished Google Summer of Code obligations):

  • Debug the Gaussian filter.
  • Test spotlight emission and ensure it is correct.
  • Add support for directional light sources.
  • Retrieve and use surface albedo from object material (texture, color, etc). Consider accounting for normal/bump maps.
  • Complete irradiance cache and then debug final gather.

Future changes (for fun and profit!):

  • Add support for filtering materials (translucent w/o refraction).
  • Add support for reflective/refractive materials and caustics.
  • Add support for static linear/area/polygon light sources. This would require changes to:
    1. photon mapper - sampling over the area as well as the hemisphere.
    2. raytracer - ability to distribute rays from an area source as needed.
    3. raytracer - ability to detect collisions with the geometry of an area source (not infinitely small like a point source/spotlight).
    4. libcrystalspace - support to parse and represent area sources within a world (static ones only).
    5. libcrystalspace - support to render the geometry of an area source (no longer infinitely small).
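The 'sampling over the area' piece of item 1 is the easy part; a hypothetical sketch for a rectangular source (names are mine, nothing here exists in lighter2 yet):

```python
import random

def sample_rect_light(corner, edge_u, edge_v, rng=random.random):
    """Uniformly sample an emission point on a rectangular area light,
    defined by one corner and two edge vectors. Each photon would be
    emitted from a fresh sample point instead of a single light position."""
    s, t = rng(), rng()
    return tuple(corner[k] + s * edge_u[k] + t * edge_v[k] for k in range(3))
```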

It seems appropriate to send thanks out to Res and Scott at this point as they have both been quite helpful this summer and I'm sure will continue to be.

And now, we carry on ...


Permalink 08:07:52 pm, by Olliebrown Email , 749 words, 7338 views   English (US)
Categories: GSoC 2009, Bug Hunting, Misc.

Calibration Revisited

My handling of light attenuation turned out to be incorrect; Res and Martin set me straight. I had the conceptual model that distance attenuation accounts for light losing power as it travels through a medium (even just the atmosphere). That is called attenuation in physics and optics, but it is not what distance attenuation accounts for in graphics. Distance attenuation accounts for the fact that as light moves away from its source, its energy spreads out (I assume like a wavefront expanding). It spreads like the surface of a sphere, so the 'realistic' distance^2 factor accounts for this spreading perfectly.

The consequence of this type of attenuation (the correct type) is that photon mapping attenuates automatically. We distribute photons equally around the sphere of the light source, and when they land they are distributed according to how far they are from the light. The density of this distribution already has the spreading effect built in.
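This is easy to check numerically: photons emitted uniformly over the sphere of directions arrive with density proportional to 1/distance^2, with no explicit attenuation term anywhere. A toy demonstration (not lighter2 code; the fixed seed just makes it deterministic):

```python
import math, random

def photon_density_at(radius, n_photons=100_000, cap_half_angle=0.1):
    """Estimate photon density (photons per unit area) on a sphere of the
    given radius around an isotropic point emitter, by counting photons
    passing through a small spherical cap around the +z axis."""
    random.seed(1)
    hits = 0
    cos_cap = math.cos(cap_half_angle)
    for _ in range(n_photons):
        # For a uniform direction on the unit sphere, z is uniform in
        # [-1, 1]; azimuth is irrelevant for a cap centered on +z.
        z = random.uniform(-1.0, 1.0)
        if z >= cos_cap:
            hits += 1
    cap_area = 2.0 * math.pi * (1.0 - cos_cap) * radius ** 2
    return hits / cap_area
```

Doubling the radius quarters the density, which is exactly the inverse-square falloff with no attenuation applied to the photons themselves.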

So, to really calibrate the raytracer against the photon mapper I need to remove the attenuation from the photons (already done) and then switch all the lights to use 'realistic' attenuation (which is not the default). My apologies to Res for second-guessing his advice, as this was his original suggestion. As soon as I did this it became apparent that things were dramatically more comparable between raytracing and photon mapping:

Image gallery (raytraced vs. photon-mapped pairs):
  • Raytraced 4.0 power lights / Photonmapped 1.0 power lights (4M photons, 4K samples)
  • Raytraced 6.0 power lights / Photonmapped 6.0 power lights (4M photons, 4K samples)
  • Raytraced 8.0 power lights / Photonmapped 8.0 power lights (4M photons, 4K samples)
  • Raytraced 10.0 power lights / Photonmapped 10.0 power lights (4M photons, 4K samples)
  • Raytraced 12.0 power lights / Photonmapped 12.0 power lights (4M photons, 4K samples)

These images show the results of direct lighting computed with the raytracer and with the photon mapper. Each is given the exact same input; however, they do not produce the exact same output, and the difference is neither constant nor linear. Note that the images shown above match the range of the graph below from the lower knee up to the point where the curves cross (16 - 48 on the x-axis).

As you can see, despite the similarity resulting from the change to realistic attenuation, there is still a marked difference in the exposure of the two. After revisiting this from many different angles, over and over again, changing the code in different ways, and attempting both a mathematical and a visual calibration, I've decided that this issue is going to have to wait. Here's an example of the problem:

Comparison of Average Luminance w/ Proposed Calibration
This graph compares the average luminance of the images generated by raytracing and photon mapping for light sources ranging from 0.25 to 50.0 in power (in 1.0 increments). There are four lights in the scene, so this is a scene luminance from 1 to 200 (the x-axis of the graph). Note that the shape of the two curves is essentially the same (a standard exposure curve with a knee and shoulder) but that they differ significantly in where those features occur.

Note that the raytracing and photon mapping curves have similar but misaligned shapes. This misalignment is the problem. There is no easy way to simply fudge things and fix it, as the fudge would be entirely dependent on the scene being rendered. Furthermore, I'm starting to think (after talking with colleagues) that there is a mistake somewhere in either the RT code or the PM code that is causing this misalignment, and simply fudging things to hide it is not a permanent solution (or one that I should be spending so much time on).
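Each point on those curves is just a rendered image reduced to one number. A sketch of that measurement (I'm assuming Rec. 709 luma weights here; the exact weights used for the graph may differ):

```python
def average_luminance(pixels):
    """Mean luminance of an image given as an iterable of (r, g, b)
    floats, using the Rec. 709 luma coefficients."""
    pixels = list(pixels)
    total = sum(0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in pixels)
    return total / len(pixels)
```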

So, three days gone on this, but at least I have something to show for it: new configuration options! (Well, that and a lot of frustration.) Here are the new options:

'forcerealistic' - This option can be enabled or disabled and forces all the static and pseudo-dynamic lights in a world to use 'realistic' attenuation mode. This saves the trouble of having to redo your world in order to use it with photon mapping.

'lightpowerscale' - Scales all the lights in a scene by the given factor. The scale is applied to the light color, which is essentially the same as its power. When you use 'forcerealistic' things tend to get much darker, so the lights need to be scaled up to compensate. Again, this option avoids having to edit the world file.

'pmlightscale' - Like 'lightpowerscale', this scales all the static and pseudo-dynamic lights in the scene, but only for the photon mapping phase. It is applied in addition to any scaling from 'lightpowerscale'. This lets you fudge things from the command line and bring the exposure of the photon mapping simulation in line with the raytracer.
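Combining all three, a calibration run might look something like this (the `--option=value` syntax and the placeholder path are my assumptions; only the three option names above come from the actual changes):

```shell
# Hypothetical invocation: force realistic attenuation, brighten all
# lights 6x, and give the photon mapping phase an extra 1.5x boost.
lighter2 --forcerealistic=yes --lightpowerscale=6.0 --pmlightscale=1.5 <world-dir>
```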


Permalink 05:01:25 pm, by Olliebrown Email , 492 words, 8837 views   English (US)
Categories: GSoC 2009, Misc.

Final Gathering / Irradiance Caching

Final Gathering. No, it's not the name of yet another horror-movie sequel (although Google isn't much help for figuring out what it actually is).

It's a technique for effectively smoothing noise in global illumination from Lambertian surfaces. Basically, given a solution to global illumination (like radiosity or a photon map), instead of looking up the diffuse light value in the GI solution directly, you do one final bounce of light by shooting rays out across the hemisphere above the point you are rendering. These rays sample the secondary light hitting that point, much like a distribution raytracer sends out rays to sample the BRDF. In this case, a purely random sampling of the Lambertian distribution is not the best bet (according to Jensen, anyway). It is better to use a grid of points placed across the hemisphere according to Lambert's cosine law and then jitter those points slightly to ensure the full hemisphere gets sampled.

When these rays hit a surface, a distribution raytracer would send out more rays to sample the light hitting that surface. In FG, you use the precomputed GI solution instead. So, like shadow rays, FG rays do not bounce. However, FG is intentionally used for Lambertian (perfectly diffuse) surfaces. This means the hemisphere above the point must be FULLY sampled, and that takes a lot of rays. Doing this at every point in the scene is very inefficient.
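The jittered, cosine-weighted grid described above can be sketched as follows (a toy version; the grid resolution and jitter source are my choices, not Jensen's or lighter2's):

```python
import math, random

def cosine_hemisphere_grid(n_theta, n_phi, rng=random.random):
    """Stratified, cosine-weighted directions over the hemisphere:
    a jittered grid in which each cell carries equal Lambertian weight.

    Returns (x, y, z) unit direction tuples, z being the normal axis.
    """
    dirs = []
    for i in range(n_theta):
        for j in range(n_phi):
            # Jitter within each grid cell, then invert the
            # cosine-weighted CDF: sin^2(theta) is uniform in [0, 1).
            u = (i + rng()) / n_theta
            v = (j + rng()) / n_phi
            theta = math.asin(math.sqrt(u))
            phi = 2.0 * math.pi * v
            dirs.append((math.sin(theta) * math.cos(phi),
                         math.sin(theta) * math.sin(phi),
                         math.cos(theta)))
    return dirs
```

Because the grid is weighted by Lambert's cosine law, each returned ray contributes equally to the final gather estimate, so no per-ray cosine factor is needed afterwards.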

Enter the irradiance cache. Diffuse lighting changes very slowly across a surface; think of a big white wall in an office. (The one exception would be a caustic, which is actually a diffuse effect, but we'll ignore that for now.) Slowly changing functions don't need to be sampled as frequently as quickly changing ones, so re-computing the FG value at every point across a large surface is wasteful. Instead, we can sample it sparsely and interpolate nearby values to fill in the gaps. This technique is known as irradiance caching, and the math behind it is pretty intense.

We still have noise in our simulation, and the best way to combat it will be a final gathering step (something the previous GSoC project attempted to include, but which I believe was not implemented properly). Unfortunately, adding FG is going to severely tank our performance during the lighting calculation phase, so (time permitting) we are also going to need an irradiance cache to make it run in a reasonable amount of time. The cache itself is quite simple (very similar, in fact, to a photon map), but the metrics used to determine where a new sample is needed and where a pre-existing one can be reused are not so simple. Jensen discusses the irradiance cache in full detail in his book (although he never uses the term 'Final Gathering' that I can see), so implementation should be a matter of translating all the summations and integrals into effective code.


Info about progress on my Google Summer of Code 2009 project on Advanced Lighting & Shading in CrystalSpace.
