Archives for: 2009

2009-09-08

Permalink 05:53:08 pm, by Olliebrown Email , 424 words, 2200 views   English (US)
Categories: GSoC 2009

With a Capital "T" ...

We got trouble. Lighter2 is struggling with this new Sponza scene. This is not unexpected; I wanted something that would push it to its limits, but I didn't want it to fail outright. I wanted it to be able to complete the job eventually, just not necessarily efficiently.

We are, however, getting several errors:

  • Lighter2 fails to halt (it seems to be stuck in an infinite loop) if light mapping is enabled for the dense Stanford models. This can be worked around by enabling per-vertex colors for these models, and that is probably preferable anyways as the vertices are quite dense. Regardless, the failure to halt is a problem and one that may need to be addressed in the future.
  • Per-vertex lighting has the infamous raytracing black speckles. These happen when a ray bouncing off an object is allowed to intersect with that same object: due to floating-point error the intersection can occur immediately, making the surface seem to be in shadow when it isn't. This is a smaller error and one that can be addressed with a little tweaking of the ray intersection algorithm (the usual epsilon-offset fix is sketched just after this list). For now, there is an option to disable self-shadowing on a per-mesh basis that will hopefully address this.
  • Some lights are being completely ignored. They produce no light when raytracing and no photons when photon mapping, as if they weren't even there. There are a total of 10 lights in the room, which is unusually high, but still: when raytracing and photon mapping there is no reason to leave out lights (at least for the final light map calculation, when speed is not important). This must be a bug somewhere and one that I will need to address.
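
For reference, the standard fix for those black speckles is to nudge each secondary ray off the surface before tracing it (or, equivalently, to ignore hits closer than a small epsilon). A minimal sketch of the idea; the Vec3 type, names and epsilon value are illustrative, not lighter2's actual API:

    struct Vec3
    {
      float x, y, z;
      Vec3 (float x0, float y0, float z0) : x (x0), y (y0), z (z0) {}
    };

    static const float RAY_EPSILON = 1e-4f; // tune to the scene's scale

    // A safe origin for a secondary ray leaving a surface at 'hit' with
    // unit normal 'n': pushed out along the normal so floating-point
    // error cannot report an immediate hit with the surface just left.
    Vec3 OffsetRayOrigin (const Vec3& hit, const Vec3& n)
    {
      return Vec3 (hit.x + n.x * RAY_EPSILON,
                   hit.y + n.y * RAY_EPSILON,
                   hit.z + n.z * RAY_EPSILON);
    }

As a second guard, the intersection routine can also reject any hit with ray parameter t < RAY_EPSILON; together these keep a surface from shadowing itself without having to disable self-shadowing entirely.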

So, this is why there are no light mapped pictures yet. I'm going to work through these problems (the last two in particular) before I post anything, and I may change the configuration of the room lighting slightly so the shadows are more dramatic. I will keep things updated as I work.

Oh, I almost forgot to mention: the spotlights are working with photon mapping. So, I think we can check that one off. It might need some robustness testing (in particular, I haven't checked the full range of angles) but for now the basics are implemented and working in the Sponza scene.

It's worth noting that school has started back up again so things might slow down a bit but as of right now I am un-funded for this semester so there is not much distraction yet (just a lot of worry). :-(

2009-08-27

Permalink 09:49:19 pm, by Olliebrown Email , 683 words, 4382 views   English (US)
Categories: GSoC 2009, Misc.

Productive Fun

I've been busy for a few days with a rather boring user study, and its results were less than great ... so I needed a pick-me-up. For me, this means some creative work. Hence, I decided to spend a little time modeling a new world to test my lighter2 changes: something considerably more complex than the Cornell Box and Construction in Wood, and something that would logically include every type of light (point, directional and spot). It's fun because it's art (in a way ... right?), and I learn how to use Blender better and can refine my workflow for getting things into CrystalSpace's XML world format.

Anyways, here's what I've created:

Sponza World in CrystalSpace with runtime lighting only
The Sponza Atrium with Stanford models scattered around the main floor (Lucy the angel statue is front and center). No static-lighting is present yet so shadows are missing.

It's a combination of a bunch of different canonical models from the graphics community (from Stanford specifically ... yes, the bunny is in there), all placed in the Sponza Atrium model that's been appearing in global illumination papers for the past several years. The atrium is texture mapped (and has bump maps somewhere that I have yet to track down). The Stanford meshes are incredibly dense (100K faces each AFTER I decimated them; they had millions in their original form ... some 10s of millions). So, this is an intense little scene to render and particularly tough to run through lighter2. It's a more realistic example and hopefully will test just how robust the system is. Consequently it takes a VERY long time to light. Most of the time is spent laying out the lightmaps (not raytracing or photon mapping), but be prepared to wait a while if you want to re-calc the maps; put it on a machine that you can let run for potentially a few days. I'll post some more images as lighter2 finishes running.

Blender model: I set up the world in Blender and here is the file for that. It took a lot to get all of this into Blender. Here's a breakdown of what I did -

  • I started with the 3ds model available here
  • Loaded the 3ds model into Wings3D and exported it to obj (to preserve textures)
  • Loaded the obj into Blender using the standard obj importer
  • Grabbed the different Stanford ply models from here (except lucy.ply, which came from here ... the one at the Stanford site is too big to fit in memory (even 4G of it plus a huge swap file)).
  • Loaded each ply model into MeshLab and decimated it to 100K faces (except the bunny, which was already small enough)
  • Exported each model to obj in MeshLab and imported it into the Blender Sponza model
  • After positioning the meshes and adding a pedestal for each, I selected each mesh and set the faces to be smooth, which auto-computes normals (the original meshes were point scans and as such do not have normals in the ply files).
  • Added spot lights by hand and tweaked fill lighting and camera positions (there are two different places to start in this world).

The model is still missing several textures, and when you export it there will be several errors that need to be fixed by hand. These problems are all present in the original 3ds file and were just easier to fix by editing the world file directly.

SponzaBlender.zip (15.2 MB, hosted externally)

Material fixes for world file: Here are the extra textures and a snippet of XML that can be pasted into the world file over its 'textures' and 'materials' sections to fix all the errors created by Blender2Crystal. Note that bump maps are still missing; I can't seem to find these on the net anywhere.

SponzaMaterialFix.zip (499 KB, hosted externally)

CrystalSpace world: Here's the exported world with the material and texture fixes already applied.

SponzaCSWorld.zip (16.4 MB, hosted externally)

Static lit CS world: Here's the same world with lightmaps included so you don't have to wait for it to finish raytracing. This is just direct raytracing for now. A photon mapped version is forthcoming.

[Still rendering, will post soon]

2009-08-21

Permalink 02:07:27 am, by Olliebrown Email , 331 words, 4674 views   English (US)
Categories: GSoC 2009, Bug Hunting

A Bit of Unit Testing

I had a thought on how to do some basic unit testing to eliminate one possible source of error in my code. None of the distribution functions I've implemented has ever been tested; they were just copied out of the book (or worked up by myself) and assumed correct. This includes the code to distribute rays from a spotlight. So, I decided to write a quick program that would visualize each of these distribution functions as a way of verifying them (and as a first step towards testing the spotlight code).

Here are some images for each distribution:

Equal distribution around the hemisphere
This is a visualization of the EqualScatter function, which randomly distributes rays around the hemisphere. Initially I found an error here that distributed rays over the full sphere instead of the hemisphere but, as you can see, this has been fixed.

Diffusely distributing across the hemisphere.
This is the DiffuseScatter function, which scatters rays across the hemisphere with a density that falls off according to the cosine of the angle from the surface normal. Note the less-dense rays near the surface.

Stratified sampling around the hemisphere.
This is the StratifiedSample function, which discretizes the hemisphere into an MxN grid and sends out one random ray in each grid cell. The grid has small cells around the normal that increase in size towards the surface to simulate diffuse reflection. Note the lower density at the surface and the relatively more even distribution around the hemisphere.

By doing this, I found that there was an error in my equal distribution function: namely, it was distributing rays over the entire sphere, not just the hemisphere. Easy fix. This error resulted in lots of lost photons, and fixing it helped brighten the photon simulation, which is always good considering how much darker it is than the raytracing version.
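
Since this post is all about verifying the distribution functions, here are minimal reference versions of the first two (my own sketches around the local +Z/normal axis, not lighter2's code). xi1 and xi2 are independent uniform randoms in [0,1); note how easy the full-sphere bug is to make in the uniform case: z = 1 - 2*xi1 covers the whole sphere, while z = xi1 covers only the hemisphere.

    #include <cmath>

    static const float PI = 3.14159265358979f;

    // EqualScatter-style: uniform over the hemisphere above +Z.
    void UniformHemisphere (float xi1, float xi2,
                            float& x, float& y, float& z)
    {
      z = xi1;                             // cos(theta), uniform in [0,1)
      float r = std::sqrt (1.0f - z * z);  // sin(theta)
      x = r * std::cos (2.0f * PI * xi2);
      y = r * std::sin (2.0f * PI * xi2);
    }

    // DiffuseScatter-style: cosine-weighted, pdf = cos(theta)/pi, so the
    // ray density falls off toward the surface as described above.
    void CosineHemisphere (float xi1, float xi2,
                           float& x, float& y, float& z)
    {
      float r = std::sqrt (xi1);           // sin(theta)
      x = r * std::cos (2.0f * PI * xi2);
      y = r * std::sin (2.0f * PI * xi2);
      z = std::sqrt (1.0f - xi1);          // cos(theta)
    }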

Here's the spotlight with different 'outer' values:

Distribution from a spotlight
Wider distribution from a spotlight
Narrower distribution from a spotlight
These three images show the spotlight photon distribution function set with different 'outer' values. Note that spotlight falloff between inner and outer is not supported.

This all looks good so I think the spotlight code is correct assuming I've interpreted the parameters correctly.

2009-08-19

Permalink 10:23:35 am, by Olliebrown Email , 367 words, 12267 views   English (US)
Categories: GSoC 2009, Misc.

Irradiance Cache

Lots of posts today, sorry for that. This one will be short.

So, Final Gather is slow as molasses. Furthermore, it seems to be not working right in my code. But it is very important for smoothing out noise in the photon map. Enter the irradiance cache.

The irradiance cache is a concept introduced by Greg Ward and co. back in 1988, and I believe it is part of Ward's Radiance renderer. The paper that describes it in detail (and which is surprisingly easy to read) can be found here (hint to Scott: check this out):
http://radsite.lbl.gov/radiance/papers/sg88/paper.html

The basic idea is that diffuse inter-reflections are very hard to compute using Monte Carlo methods (and Final Gather is a Monte Carlo method), but by and large they are very uniform and slowly changing across a surface (i.e. a good candidate for interpolation). So we would like to reduce the number of times irradiance needs to be computed and interpolate everywhere else.

Ward describes a two-tiered method: the primary method is standard Monte Carlo computation, and the secondary method interpolates cached values from previous computations. The secondary method needs to know when it can interpolate and how to weight the cached values. This is done by approximating the gradient of irradiance between the cached point and the point we need irradiance for. If the gradient estimate is too high, a new value is needed. Otherwise we can weight the cached value by the inverse of the error inherent in the estimate and get a very good approximation that is considerably cheaper than the Monte Carlo method.
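
Concretely, Ward's weight for cached sample i (position p_i, normal n_i, harmonic mean distance R_i to the surfaces it saw) evaluated at shading point p with normal n is w_i = 1 / (|p - p_i|/R_i + sqrt(1 - n.n_i)); samples with w_i above 1/a (a = allowed error) are blended as E ~ sum(w_i E_i) / sum(w_i). A sketch, with names of my own invention rather than the actual lighter2 classes:

    #include <cmath>

    struct CacheSample
    {
      float px, py, pz;  // sample position
      float nx, ny, nz;  // unit normal at the sample
      float R;           // harmonic mean distance to surfaces seen
      float E;           // cached irradiance (single channel for brevity)
    };

    // Ward's weight: large when the sample is close (relative to R) and
    // the normals agree, small otherwise.
    float CacheWeight (const CacheSample& s,
                       float px, float py, float pz,
                       float nx, float ny, float nz)
    {
      float dx = px - s.px, dy = py - s.py, dz = pz - s.pz;
      float dist = std::sqrt (dx*dx + dy*dy + dz*dz);
      float ndot = nx*s.nx + ny*s.ny + nz*s.nz;
      if (ndot > 1.0f) ndot = 1.0f;      // guard sqrt against round-off
      float denom = dist / s.R + std::sqrt (1.0f - ndot);
      return denom > 0.0f ? 1.0f / denom : 1e30f; // exact reuse at s
    }

    // Accept sample s when CacheWeight(...) > 1/a; if no cached sample
    // qualifies, do a full Final Gather and add the result to the cache.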

The devil's in the details and I won't bore you with them (other than to say it involves an octree built around the valid distance of each cached sample). But with Jensen's summary of Ward's paper (and the paper itself) I think I've got the irradiance cache implemented. It needs some testing and such (still to come) but hopefully this will help with noise and speed ... provided I can then figure out what is wrong with the Final Gather!

I'd kill for smaller unit testing right now but we'll see if we need it first.

Permalink 03:55:07 am, by Olliebrown Email , 466 words, 1903 views   English (US)
Categories: GSoC 2009, Planning Progress, Misc.

Taking Inventory

To round out my previous post concerning GSoC I thought I should list out the changes I have made and how the milestones are lining up.

Changes:

  • Reimplemented the photon map data structure from the ground up.
  • Lots of changes to the photon emission code to make it physically accurate.
  • Code to emit from spotlights in place but untested.
  • Refactoring of the indirect light and direct light classes into independent raytracer and photon mapper engines.
  • A new structure (LightCalculator) that will loop through the sector elements and add up the components from each engine (raytracer and photon mapper).
  • Many new command line options for tweaking the photon mapper and calibrating it with the raytracer.
  • Filtering in the photon map to help eliminate artifacts and smooth out noise (still needs work).
  • A new final gather implementation (still needs work).
  • The beginnings of an irradiance cache to speed up final gather.
  • Updates to the texinfo documentation for lighter2 (still needs work).
  • A new version of the Cornell Box model (with more lights and fixed colors).
  • A brand new model of 'Construction in Wood' (another example of color bleed).

Results:

  • Photon mapper is working and, although noisy and not artifact-free, giving usable results.
  • Photon mapper is an order of magnitude faster.
  • New rendering engines can be implemented more easily in the future using the LightCalculator/LightComponent classes.
  • Work has already begun to further eliminate noise (final gather), speed up even more (irradiance cache) and remove artifacts (Gaussian filter).

Despite the noise and artifacts still present I think it's safe to say Milestone 1 is complete (or nearly so). Here's a list of things still to finish/implement.

Immediate priorities (considered unfinished Google Summer of Code obligations):

  • Debug Gaussian filter.
  • Test spotlight emission and ensure it is correct.
  • Add support for directional light sources.
  • Retrieve and use surface albedo from object material (texture, color, etc). Consider accounting for normal/bump maps.
  • Complete irradiance cache and then debug final gather.

Future changes (for fun and profit!):

  • Add support for filtering materials (translucent w/o refraction).
  • Add support for reflective/refractive materials and caustics.
  • Add support for static linear/area/polygon light sources. This would require changes to:
    1. photon mapper - sampling over area as well as hemisphere.

    2. raytracer - ability to distribute rays from area source as needed.

    3. raytracer - ability to detect collision with the geometry of an area source (not infinitely small like a point source/spotlight).

    4. libcrystalspace - support to parse and represent area sources within a world (but static ones only).

    5. libcrystalspace - support to render the geometry of an area source (no longer infinitely small).

It seems appropriate to send thanks out to Res and Scott at this point as they have both been quite helpful this summer and I'm sure will continue to be.

And now, we carry on ...

2009-08-14

Permalink 08:50:20 pm, by Olliebrown Email , 505 words, 7161 views   English (US)
Categories: GSoC 2009, Code Progress

State of the SoC

With the end of GSoC approaching, I want to take inventory of what has been achieved so far. To reiterate: I fully intend to stick with this and keep improving lighter2, assuming I have not overstayed my welcome.

Here are some results. I've spent today rendering images so that I can show what I can achieve with my photon mapper when I hold its hand and try my best to get good results.

First up, an image of the Cornell Box with just basic raytracing. This scene was rendered with global ambient turned off, all lights forced to realistic attenuation and all light power scaled by 8.0, like so:

lighter2 --directlight=raytracer --noglobalambient --forcerealistic --lmdensity=20.0 --lightpowerscale=8.0 data/NewCornell

lighter2 raytraced cornell box
This image uses light maps with direct light only computed with raytracing. Note the dark shadows and the relatively small amount of light on the ceiling.

Next, an image of the Cornell Box with just photon mapping (for both direct and indirect light). Here we shot 5M photons and sampled 5K times for each density estimation. The command line was:

lighter2 --directlight=photonmapper --indirectlight=photonmapper --numphotons=5000000 --maxdensitysamples=5000 --sampledistance=20.0 --nofinalgather --lmdensity=20.0 --pmlightscale=100.0 data/NewCornell

lighter2 photon mapped Cornell Box
This image shows the results of lighter2 photon mapping only. The image captures all of the major features of the raytraced image (shadows, shapes of light attenuation) but it also captures some indirect light (though not yet with color). The shadows and ceiling are brighter in this image than in the raytraced image. Unfortunately, the noise is still too high but better than it has been all summer.

Lastly, an image of the Cornell Box with direct light done with raytracing and indirect light with photon mapping. It was VERY difficult to get the two values to have a comparable exposure (i.e. photon mapping was consistently too dark). Recent changes to the way light is scattered have made this matter worse but are conceptually necessary to get the simulation correct. Needless to say, I had to fudge the light power manually until the image 'looked' okay. Very imprecise, but good enough for today. Here's what the final command line looked like. Note that I bumped the number of photons up to 25M to help fight noise, which can be particularly noticeable for indirect lighting:

lighter2 --directlight=raytracer --noglobalambient --forcerealistic --lightpowerscale=8.0 --indirectlight=photonmapper --numphotons=25000000 --maxdensitysamples=5000 --sampledistance=20.0 --nofinalgather --lmdensity=20.0 --pmlightscale=16.0 data/NewCornell

lighter2 raytraced & photon mapped Cornell Box
This image shows the results of lighter2 combining raytracing and photon mapping (for direct and indirect light respectively). Note the brighter shadows and ceiling as well as the indirect brightening happening on the rear box. All of this increases the realism of the final image at the expense of noise but also requires significant hand-holding and tweaking of variables to achieve.

The code I checked in today will be able to do all of this. Note that I used an old version of walktest.exe to render these images (from the '08 SoC branch for lighter2); the one in my branch is still not working with light maps for unknown reasons.

Permalink 08:07:52 pm, by Olliebrown Email , 749 words, 4404 views   English (US)
Categories: GSoC 2009, Bug Hunting, Misc.

Calibration Revisited

My handling of light attenuation turned out to be incorrect; Res and Martin set me straight. I had the conceptual model that distance attenuation accounts for the phenomenon of light losing power as it travels through a medium (even just the atmosphere). That is what is called attenuation in physics and optics, but it is not what distance attenuation accounts for in graphics. Distance attenuation accounts for the fact that as light moves away from its source the energy spreads out (like a wavefront). It does so over the surface of a sphere, so the 'realistic' 1/distance² falloff accounts for this spreading perfectly.
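
For the record, the math behind this: a point source of power Φ spreads its energy over a sphere of area 4πd², so the irradiance arriving at distance d is

    E(d) = Φ / (4π d²)

which is exactly the inverse-square ('realistic') falloff.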

The consequence of this type of attenuation (the correct type) is that photon mapping attenuates automatically. We are distributing photons equally around the sphere of the light source and when they land they will be distributed according to how far away they are from the light. The density of this distribution already has this spreading effect built in automatically.

So, to really do calibration between raytracing and photon mapping I need to remove the attenuation from the photons (already done) and then switch all the lights to use 'realistic' attenuation (which is not the default). My apologies to res for second guessing his advice as this was his original suggestion. As soon as I did this it became apparent that things were dramatically more comparable between raytracing and photon mapping:

Power   Raytracer                      Photon Mapper (4M photons, 4K samples)
4.0     Raytraced 4.0 power lights     Photonmapped 4.0 power lights
6.0     Raytraced 6.0 power lights     Photonmapped 6.0 power lights
8.0     Raytraced 8.0 power lights     Photonmapped 8.0 power lights
10.0    Raytraced 10.0 power lights    Photonmapped 10.0 power lights
12.0    Raytraced 12.0 power lights    Photonmapped 12.0 power lights

These images show the results of direct lighting computed with the raytracer and the photon mapper. Each one is given exactly the same input; however, they do not produce exactly the same output. Furthermore, the difference is not constant or linear. Note that the images shown above match the range of the graph below from the lower knee up to the point where the curves cross (16 - 48 on the x-axis).

As you can see, despite the similarity resulting from the change to realistic attenuation, there is still a marked difference in the exposure of the two. After revisiting this from many different angles, over and over again, and after changing the code in different ways and attempting both a mathematical and a visual calibration, I've decided that this issue is going to have to wait. Here's an example of the problem:

Comparison of Average Luminance w/ Proposed Calibration
This graph compares the average luminance of the images generated by raytracing and photon mapping for light sources ranging from 0.25 to 50.0 in power (in 1.0 increments). There are four lights in the scene, so this is a scene luminance from 1 to 200 (the x-axis of the graph). Note that the shape of the two curves is essentially the same (a standard exposure curve with a knee and shoulder) but that they differ significantly in where the curves' features occur.

Note that the raytracing and photon mapping graphs have similar but misaligned shapes. This misalignment is the problem. There is no easy way to simply fudge things and fix it, as it will be entirely dependent on the scene being rendered. Furthermore, I'm starting to think (after talking with colleagues) that there is a mistake somewhere in either the RT code or the PM code that is causing this misalignment, and simply fudging things to fix it is not a permanent solution (or one that I should be spending so much time on).

So, three days gone on this, but at least I have something to show for it. New configuration options! (Well, and a lot of frustration!) Here are the new options:

'forcerealistic' - This option can be enabled or disabled and will force all the static and pseudo-dynamic lights in a world to use 'realistic' attenuation mode. This saves the trouble of having to re-do your world in order to use it with photon mapping.

'lightpowerscale' - Scale all the lights in a scene by the given scaling factor. This scale is applied to the light color, which is essentially the same as its power. When you use 'forcerealistic' things tend to get much darker, so the lights need to be scaled up to compensate. Again, this option avoids having to edit the world file to achieve this.

'pmlightscale' - Like 'lightpowerscale', this will scale all the static and pseudo-dynamic lights in the scene, but only for the photon mapping phase. This is in addition to any scaling applied by 'lightpowerscale'. It allows you to fudge things from the command line and bring the exposure of the photon mapping simulation and the raytracer in line with one another.
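
Putting the three together, a combined run looks like this (an abridged version of the actual command behind the combined image in the 'State of the SoC' post above; the values are just what happened to work for the Cornell scene):

lighter2 --directlight=raytracer --forcerealistic --lightpowerscale=8.0 --indirectlight=photonmapper --pmlightscale=16.0 data/NewCornell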

2009-08-11

Permalink 09:45:48 pm, by Olliebrown Email , 272 words, 3079 views   English (US)
Categories: GSoC 2009, Code Progress, Bug Hunting

Attenuation & Calibration

After posting about my intention to calibrate the energy between photon mapping and raytracing, 'res' sent me a message concerning attenuation. This reminded me that I had previously noticed that light was being attenuated in the direct lighting raytracer but didn't seem to be in the photon mapper. I had set this aside in light of larger problems with the photon mapper but, as res pointed out, it is critical to address it prior to calibration.

So, I explored the Light class and noticed that it has an internal mechanism to call an attenuation function that attenuates for distance based on the attenuation coefficients and mode. I decided to move this function (ComputeAttenuation()) from the protected section of the class to the public section so I could access it from the photon mapping code. Now each photon gets attenuated after each bounce by the distance it has traveled, according to the attenuation parameters of the light it was emitted from. This small change already made a big difference in the quality of the simulation! It also made the calibration problem even worse, as everything got noticeably dimmer in the photon map.
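
In code, the change amounts to something like the following sketch. Only the ComputeAttenuation() name comes from the actual Light class (now public); its signature and the surrounding types are assumptions for illustration:

    struct Color { float r, g, b; };
    struct Photon { Color power; /* position, direction, etc. */ };

    // Stand-in for the CS Light class; the real ComputeAttenuation()
    // applies the light's attenuation mode and coefficients. Assumed
    // signature, shown here with 'realistic' 1/d^2 attenuation.
    class Light
    {
    public:
      float ComputeAttenuation (float distance) const
      {
        return 1.0f / (distance * distance);
      }
    };

    // After each bounce, scale the photon's power by the emitting
    // light's attenuation over the total distance traveled so far.
    void AttenuatePhoton (Photon& photon, const Light& source,
                          float distTraveled)
    {
      float scale = source.ComputeAttenuation (distTraveled);
      photon.power.r *= scale;
      photon.power.g *= scale;
      photon.power.b *= scale;
    }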

So, now we're ready to calibrate. To do this, I adjusted the lights from 1.0 down to 0.1 in the world file in 0.1 increments (changing each color channel equally) and generated lightmaps that contained direct light only (one with raytracing, one with photon mapping). For now, I'm just trying to do a visual comparison between the results and scale the photonmapping version until it approximately matches the raytraced version. You can see the progress below. I will add more as I am able.

Power   Raytracer                     PM Before                               PM After
0.1     Raytraced 0.1 power lights    1K photons, 100 samples per element     Calibrated PM 0.1 power lights
0.3     Raytraced 0.3 power lights    10K photons, 100 samples per element    Calibrated PM 0.3 power lights
0.5     Raytraced 0.5 power lights    100K photons, 100 samples per element   Calibrated PM 0.5 power lights
0.7     Raytraced 0.7 power lights    1M photons, 100 samples per element     Calibrated PM 0.7 power lights
0.9     Raytraced 0.9 power lights    Photonmapped 0.9 power lights           Calibrated PM 0.9 power lights

2009-08-05

Permalink 05:01:25 pm, by Olliebrown Email , 492 words, 8267 views   English (US)
Categories: GSoC 2009, Misc.

Final Gathering / Irradiance Caching

Final Gathering. No, it's not the name of yet another horror movie sequel (although Google isn't much help for figuring out what it in fact is).

It's a technique for effectively smoothing noise in global illumination on Lambertian surfaces. Basically, given a solution to global illumination (like radiosity or a photon map), instead of looking up the diffuse light value in the GI solution directly, you do one final bounce of light by shooting rays out across the hemisphere above the point you are rendering. These rays sample the secondary light hitting that point, much like a distribution raytracer sends out rays to sample the BRDF. In this case, a random sampling of the Lambertian distribution is not the best bet (according to Jensen anyways). It is better to use a grid of points placed across the hemisphere according to Lambert's cosine law and then jitter these points slightly to ensure the full hemisphere gets sampled.
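
That grid-plus-jitter scheme is compact enough to show. Jensen's hemisphere sampling formula puts one jittered, cosine-distributed direction in each cell of an MxN grid: theta = asin(sqrt((j + xi1)/M)), phi = 2π(k + xi2)/N. A sketch of my own (illustrative names, not the lighter2 code):

    #include <cmath>
    #include <cstdlib>

    inline float Rand01 () { return std::rand () / (RAND_MAX + 1.0f); }

    // Fills dirs (3 floats per direction, M*N directions) with jittered
    // gather directions around the local +Z (normal) axis. Cells are
    // small in solid angle near the normal and the overall density
    // follows Lambert's cosine law.
    void GatherDirections (int M, int N, float* dirs)
    {
      const float PI = 3.14159265358979f;
      int i = 0;
      for (int j = 0; j < M; ++j)
        for (int k = 0; k < N; ++k)
        {
          float theta = std::asin (std::sqrt ((j + Rand01 ()) / M));
          float phi = 2.0f * PI * (k + Rand01 ()) / N;
          dirs[i++] = std::sin (theta) * std::cos (phi);
          dirs[i++] = std::sin (theta) * std::sin (phi);
          dirs[i++] = std::cos (theta);
        }
    }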

When these rays hit a surface, a distribution raytracer would send out more rays to sample the light hitting that surface. In FG, you use the precomputed GI solution instead. So, like shadow rays, FG rays do not bounce. However, FG is intentionally used for Lambertian surfaces (perfectly diffuse surfaces). This means that the hemisphere above the point must be FULLY sampled, and that takes a lot of rays. Doing this at every point in the scene is very inefficient.

Enter the irradiance cache. Diffuse lighting changes very slowly across a surface; think of a big white wall in an office (the one exception would be a caustic, which is actually a diffuse effect, but we'll ignore that for now). Slowly changing functions don't need to be sampled as frequently as quickly changing functions, so re-computing the FG value at every point across a large surface is wasteful. Instead, we could sample it sparsely and use interpolation of nearby values to fill in the gaps. This technique is known as irradiance caching, and the math behind it is pretty intense.

We still have noise in our simulation, and the best way to combat it will be a final gathering step (something the previous GSoC project had attempted to include but which I believe was not implemented properly). Unfortunately, adding FG is going to severely tank our performance during the lighting calculation phase, so (time permitting) we are also going to need an irradiance cache to make it work in a reasonable amount of time. The cache itself is quite simple (very similar, in fact, to a photon map), but the metrics used to determine where a new sample is needed and where a pre-existing one can be used instead are not so simple. Jensen discusses the irradiance cache in full detail in his book (although he never uses the term 'Final Gathering' that I can see), so implementation should be a matter of translating all the summations and integrals into effective code.

Permalink 04:30:45 pm, by Olliebrown Email , 274 words, 2430 views   English (US)
Categories: GSoC 2009, Code Progress

All Together Now...

Photon mapping simulates both direct illumination and indirect illumination. However, the simulation of direct illumination is not as precise as a raytracing solution. Standard raytracing is very efficient and exact at simulating direct illumination and lighter2 already has a good implementation of this. The best solution would be to combine the results of raytracing and just the indirect lighting from the photon map.

To do this I've played around with ignoring the first bounce of the photons (this would be the direct illumination) and only storing photons that have scattered at least once. We then add the irradiance estimate to the direct lighting solution from raytracing. The results are quite promising but need to be calibrated. That is to say, the 'energy' in the photon mapped solution does not match the energy in the raytraced solution.
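
The storage rule itself is one line. A sketch with illustrative names (not the actual lighter2 photon code):

    #include <vector>

    struct Photon { float pos[3]; float dir[3]; float power[3]; };

    std::vector<Photon> photonMap;

    // depth = number of bounces the photon has made when it hits a
    // surface (0 = it arrived straight from the light).
    void RecordHit (const Photon& p, int depth, bool indirectOnly)
    {
      // When raytracing supplies the direct light, skip depth-0 hits:
      // they are exactly the direct illumination the raytracer already
      // computes more precisely.
      if (!indirectOnly || depth >= 1)
        photonMap.push_back (p);
    }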

To calibrate, I think the best plan is to do some simple direct lighting simulations with just the photon map (include only first emitted photons and exclude the scattered ones). We can compare the overall brightness at different light power levels to the raytraced solution and hopefully figure out how to scale the two so that they match.

In the meanwhile, I've restructured lighter2's options a bit. Instead of just enabling direct and indirect, you now specify which engine you want to use for each (raytracing or photon mapping for direct; photon mapping or none for indirect). This will make the calibration easy to perform and gives the option, to those who would prefer it, of using photon mapping for the entire lighting solution.

I'll add some images to support this post a little later.

2009-08-04

Permalink 04:21:39 pm, by Olliebrown Email , 126 words, 5347 views   English (US)
Categories: GSoC 2009, Code Progress

Gentlemen, to evil

I think we've got it. I did some playing around with the different sampling parameters, plus some scaling and a few bug fixes to make sure the energy stays consistent no matter the number of photons, and now we're getting some good results. Here's the latest light map for the Cornell Box:

Raw lightmaps for the Cornell Box (1M photons, 1000 samples)
This light map contains only photon mapped light. It was generated with 1M photons and 1000 samples per element. While the image is still too noisy we are seeing all the desirable lighting effects and the overall impression is much closer to the desired result.

Here's a table of many different photon counts (rows) and sampling amounts (columns):

Photons   10 samples                             100 samples                             1,000 samples
1K        1K photons, 10 samples per element     1K photons, 100 samples per element     N/A
10K       10K photons, 10 samples per element    10K photons, 100 samples per element    10K photons, 1K samples per element
100K      100K photons, 10 samples per element   100K photons, 100 samples per element   100K photons, 1K samples per element
1M        1M photons, 10 samples per element     1M photons, 100 samples per element     1M photons, 1K samples per element

2009-07-28

Permalink 09:10:35 pm, by Olliebrown Email , 343 words, 667 views   English (US)
Categories: GSoC 2009

Speedup & Progress

The current PhotonMap class was painfully slow when accessing the KD-tree for the purposes of irradiance estimation (the final gathering phase of photon mapping). Upon further inspection I found that the tree was implemented as a linked structure instead of the more efficient heap approach, and that it was not being balanced. I replaced the PhotonMap with the code from Jensen's book, which not only keeps the KD-tree in a heap but balances it before accessing it (in order to guarantee O(log(n)) performance), and now things are significantly faster. For example, 1M photons used to take over 43 minutes to simulate start to finish (without final gather). Now this works in about 30 seconds!
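
The layout trick is worth spelling out. Jensen stores the balanced kd-tree as a flat photon array with implicit links, exactly like a binary heap, so lookups chase no pointers and the memory stays cache-friendly. A sketch of the indexing (1-based, as in the book):

    // Node i's children live at indices 2i and 2i+1; a balanced
    // (nearly complete) tree keeps every descent O(log n), where an
    // unbalanced linked tree can degenerate toward O(n) per lookup.
    inline int LeftChild (int i) { return 2 * i; }
    inline int RightChild (int i) { return 2 * i + 1; }

    inline bool HasChildren (int i, int storedPhotons)
    {
      return 2 * i <= storedPhotons;
    }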

Of course, this could be a fluke; perhaps I have missed something in the new lightmap. However, the results are looking promising:

Lighter2 generated lightmap of cornell box - indirect light only
The latest results of photon mapping. This is the lightmap for the Cornell Box example with indirect light ONLY. Note light no longer leaks under the boxes and the shadows are there, just poorly sampled.

Now, I need to clean up the gathering phase. The simulation looks better than before but now it is too bright and too noisy. Furthermore, I need to enable Final Gather on Jensen's photon map. It does not implement this out-of-the-box.

I also want to re-work Jensen's allocation scheme for the heap. At present, it requires a 'maxPhotons' value when you initialize the map, and this is all the space that gets allocated. Since photon emission is stochastic you can't accurately predict how many will be emitted beforehand; you can only give a maximum upper bound. In practice this upper bound is about 3 times bigger than it needs to be. A data structure that can take an initial guess and resize as needed would be preferable, to avoid the initial over-allocation this causes. This may be difficult, as Jensen's functions not only expect pointers to the photons but also expect them to be stored sequentially, which I'm not sure an expanding array structure can guarantee (or allow me to access).
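
For what it's worth, std::vector may fit the bill here: the C++ standard guarantees (since the 2003 corrigendum) that its storage is contiguous, so &v[0] can be handed to code expecting a plain sequential photon array. The one catch is that push_back can reallocate, so raw pointers must only be taken after emission is finished. A sketch (the Photon layout is illustrative):

    #include <vector>
    #include <cstddef>

    struct Photon { float pos[3]; float power[3]; short plane; };

    // Emission phase: grow as needed instead of pre-allocating the
    // roughly 3x-too-big worst-case bound.
    void Store (std::vector<Photon>& photons, const Photon& p)
    {
      photons.push_back (p);
    }

    // After emission (e.g. before balancing): the storage is one
    // contiguous block, usable by code written against a raw
    // sequential array. Never hold this pointer across a push_back.
    Photon* RawArray (std::vector<Photon>& photons)
    {
      return photons.empty () ? 0 : &photons[0];
    }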

2009-07-23

Permalink 08:37:02 pm, by Olliebrown Email , 228 words, 788 views   English (US)
Categories: GSoC 2009

On Second Thought ...

The more I look at the photon map visualizations, the more I think that the shadows are there. If I view them from across the room and squint my eyes, the density of photons seems to be lower in the areas where the shadows from the boxes should be. Furthermore, the whole point of indirect illumination in this scene is to add light to the shadowed parts that direct illumination cannot account for, so the shadows shouldn't be as sharp and pronounced as in the direct lighting version.

So, I think it's safe to say that the photon emission phase is okay (or at the very least, it's not the source of the current problems). I went ahead and added some attenuation based on the total photon count (the power is now divided by the total number of photons being emitted), but as a principle of Russian roulette you shouldn't decrease the power of bouncing photons, so I think I'm going to leave it there.

Given the observation that light is getting under the boxes in the gathering phase, I know that this phase needs more work. Furthermore, the kd-tree implementation used in this phase is slower than it should be, and the other problems I'm seeing (the really dim results) don't seem to be coming from the emission phase either. I think it's time to turn my attention there.

Permalink 07:37:55 pm, by Olliebrown Email , 425 words, 3020 views   English (US)
Categories: GSoC 2009, Code Progress, Bug Hunting

When in Doubt, Visualize

That should be the mantra of every graphics programmer ... at least, that's what some professor told me one time.

I worked up a direct visualization of the photon emission stage by simply drawing points in space for each photon. I set the color of each point to the power of the photon and now I'm seeing something very important. The power is not attenuating ... AT ALL. That's why we aren't getting any shadows and that's probably why everything is a constant power and too dim.

I thought that I could safely ignore the photon power until milestone 2 but I think I need to deal with it now so that's going to be the current task.

Here's the visualizations:

Directly visualizing the Cornell Box Photon Map
Directly visualizing the Cornell Box Photon Map with 1000 photons emitted total (250 from each light).

Directly visualizing the Cornell Box Photon Map
Directly visualizing the Cornell Box Photon Map with 10,000 photons emitted total (2,500 from each light).

Directly visualizing the Cornell Box Photon Map
Directly visualizing the Cornell Box Photon Map with 100,000 photons emitted total (25,000 from each light).

Directly visualizing the Cornell Box Photon Map
Directly visualizing the Cornell Box Photon Map with 1,000,000 photons emitted total (250,000 from each light).

Note: the number of photons listed is the number of emitted photons. Since photons are recorded at each bounce, there are actually MANY more being added to the map and drawn. With Russian roulette in play, the photons are bouncing about 5 times on average, so multiply the number of emitted photons by 6 to get the number being drawn and the number of rays being traced. This is all still happening quite efficiently; the last case has about 6M rays to trace and it does so in only a few minutes. Not bad! Unfortunately, the splatting/final gather phase is still painfully slow. I think it's because the kd-tree for the photon map is not being properly balanced.

Some Observations about these images:

  • Almost no photons are landing under the boxes (which is as it should be). The ones that appear to be there are actually on the front face of the boxes. I thought this was a problem, as there is light appearing in the light map in these areas, but it must be getting there in the splatting/gathering phase.
  • The photons are VERY uniformly distributed. Again, this is as it should be since we are doing purely diffuse bounces.
  • The shadows are missing. My new theory is that this is because we are not attenuating the power after each bounce.
  • Each scene just gets brighter and brighter. This shouldn't happen, as the power should be evenly divided between the photons emitted. This is also a problem with the power attenuation.

Permalink 04:35:36 pm, by Olliebrown Email , 378 words, 1895 views   English (US)
Categories: GSoC 2009, Code Progress, Bug Hunting

Status Update

So far, it's been a lot of house cleaning. There are still several key problems with the photon map algorithm that did not resolve themselves as I expected.

The key change was to the photon emitting phase. I added a progress structure to this phase so that we can see when it is happening and how many rays it is creating. More importantly, I changed the photon scattering code to scatter photons diffusely instead of specularly. In the end we are going to need both, but for now the diffuse scattering is more important, and I don't think the specular scattering was being done right anyways. My hope was that by changing to diffuse and by ramping up the number of photons being emitted we would get better results right away. This has not been the case.

There are two key problems in the final light maps that have yet to be resolved:

  1. They are WAY too dim. About a tenth of the brightness we get with the direct lighting version.

  2. There are no shadows. PM should get the shadows if you shoot enough photons and we're shooting millions so they should be there.

While these are not the only problems, these problems are the most troubling ones and ones that I theorized were caused by improper photon scattering.

To proceed, I'm going to finish up the scattering with both diffuse and specular components chosen with statistical Russian roulette (exactly as suggested in Jensen's book) and then start working on the splatting/gathering phase of the simulation. The code for this phase comes straight from Jensen's book, so mostly I'm just going to confirm that it's correct before I start to play with it and debug the implementation.
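
The roulette decision itself is tiny. Here's a sketch of the scheme as Jensen describes it, with diffuse reflectance pd and specular reflectance ps (pd + ps <= 1); names are illustrative. As noted in the post above, survivors keep full-strength (rescaled) power rather than being attenuated, and absorption accounts for the losses on average.

    #include <cstdlib>

    enum ScatterEvent { DIFFUSE_REFLECT, SPECULAR_REFLECT, ABSORB };

    // One uniform random number decides the photon's fate.
    ScatterEvent RouletteScatter (float pd, float ps)
    {
      float xi = std::rand () / (RAND_MAX + 1.0f);
      if (xi < pd) return DIFFUSE_REFLECT;       // probability pd
      if (xi < pd + ps) return SPECULAR_REFLECT; // probability ps
      return ABSORB;                             // probability 1-pd-ps
    }

    // On DIFFUSE_REFLECT, scale the outgoing power per channel by
    // (diffuse color / pd) so the expected reflected flux stays correct.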

Here are some visuals of what's going on. These images are the actual lightmaps generated by lighter2. In both cases lmdensity was set to 10.0 so that the generated images would be high enough resolution to examine directly:

Cornell Box lightmap - Direct lighting only
This is the current production state of lighter2. Only direct illumination is simulated which captures shadows well but no global effects.

Cornell Box lightmap - Indirect lighting only
This is the state of the photon mapping implementation after my changes (and with an artificial brightening by 4 throughout). Shadows are missing, flat surfaces are noisy and light is getting into places it shouldn't.

2009-07-20

Permalink 09:17:11 pm, by Olliebrown Email , 433 words, 4357 views   English (US)
Categories: GSoC 2009

Baseline / Test Cases

Here are some test cases I'm working with to debug and develop the global illumination changes. They are small tests that display important global lighting effects very clearly and have appeared in the literature describing various algorithms for computing them.

Cornell Box:

The scene for the Cornell Box already exists in the 'data' directory of the main development branch, but the materials describing the different colors are broken. I instead used a scene of the Cornell Box made for Blender. First I generated a ground truth image using the radiosity system in Blender, then I exported the geometry and fixed up the color materials inside the world file to make sure we can achieve the same result in lighter2. Here are some images to show the differences. I will use images of this scene to show progress through each milestone.

Cornell box rendered using the radiosity system in Blender
Cornell box rendered using direct lighting only in lighter2
The Classical Cornell Box: (Top) Goal/Ground Truth image - radiosity simulation for the classical Cornell Box scene generated by the built-in radiosity solver in Blender. (Bottom) lighter2 results for direct illumination (no global illumination at all)

Notes:

  • Area light has been approximated by point lights
  • Milestone 1: we will look for brightening in the shadow areas
  • Milestone 2: we will look for the color bleeding from the walls to the boxes

Construction in Wood:

One very interesting test case for radiosity is a sculpture in the Hirshhorn Museum in Washington D.C. by John Ferren, entitled "Construction in Wood, A Daylight Experiment". It was discovered by some of the early radiosity researchers (particular credit goes to Cindy Goral, who first modeled the sculpture and presented it in her thesis), who used it to show how important diffuse-to-diffuse light interaction can be. All of the color visible on the viewing side of this sculpture comes from light bouncing diffusely (not specularly) off the surfaces on the back of the sculpture. The result is a structure that looks completely white and boring when naïvely ray-traced or directly lit, but vibrantly colorful when a global lighting solution is computed. I will also use this scene to evaluate and demonstrate progress on this project.

The Ferren Sculpture rendered with the radiosity simulator in Blender
Ferren Sculpture - direct light only
John Ferren Sculpture: (Top) Goal/Ground Truth image - radiosity simulation for the Ferren sculpture generated by the built-in radiosity solver in Blender. (Bottom) lighter2 results with only direct illumination showing almost nothing (except some very nice shadows) as expected.

Notes:

  • I'm still working on getting the materials in CS for this model. I've let it go for now and will try again later.
  • We will look for the same effects in this example for each milestone but they should be easier to discern

2009-07-03

Permalink 08:02:01 pm, by Olliebrown Email , 474 words, 1528 views   English (US)
Categories: GSoC 2009, Code Progress, Planning Progress

Change of Plans

A New Plan
Thanks to all who offered feedback on my previous post. With the discovery of the GSoC '08 branch of lighter2 with photon mapping, plans need to change. I've been examining Greg Hoffman's changes to lighter2 to determine what work could be done, and I think there's a good chunk here to constitute a project. Here's my assessment of what the branch contains:

  • There's a basic Photon Map data structure and code to emit and gather photons in a single sector.
  • This code does seem to do something, but I don't think it's correct in all cases (or at least robust) yet.
  • There are things missing (proper handling of materials, diffuse-to-diffuse light paths).
  • It's missing some options to fine-tune the convergence (no max error, no max recursion depth, no control over the number of photons emitted).
  • Right now it's slow and has room for optimization.

So, given the original content of my proposal and this discovery from last summer, it seems the new course of action should be to work on the photon mapping implementation. Here's a basic outline of what I could do, again welcoming comments:

Milestone 1: Repair

  1. Ensure the PM calculation is correct and fix where needed (such as handling LD+SE paths)

  2. Add the missing settings for controlling convergence

  3. Ensure it scales well from small test cases (like the Cornell Box) to large game levels

Milestone 2: Improve Quality

  1. Handle light traveling across portals

  2. Handle all materials properly (materials are ignored right now)

Milestone 3: Improve Speed/Features

  1. Add importance sampling to avoid redundant photon emission

  2. Move calculations to GPU (BIG speedup)

  3. Optional: Add support for 'dielectrics' (refracting materials) and caustics (via reflection and refraction)

Concerning the optional task under Milestone 3: Photon Mapping just handles caustics well (it's famous for it), and as such it would be easy to render them if the information about refraction is available in the material structure (namely the index of refraction). It could make for some interesting but very specialized effects.

Time-line
I'm planning about two weeks for each milestone with an extra week for the first one just for getting out of the starting gate. Here's a rough time-line to completion of these milestones:

  • Milestone 1 (already in progress): Now - July 21st
  • Milestone 2: July 22nd - Aug 4th
  • Milestone 3: Aug 5th - Aug 18th

I want to make sure that the amount of work I'm doing is worthy of a full SoC project regardless of the time frame. I'm definitely slow getting started here, and I want to assure all involved that I will make that up as we go, either by putting in extra time now or beyond the scheduled GSoC end. Therefore, I think it is best to make sure I get a project defined that is of a scope appropriate for SoC so that no one feels short-changed.

2009-06-16

Permalink 08:43:01 pm, by Olliebrown Email , 1219 words, 1512 views   English (US)
Categories: GSoC 2009, Planning Progress

Long overdue ...

We are underway and I am long overdue in posting an entry here so there is much to discuss.

What have I been doing:
Planning! I have been getting very familiar with lighter2 and determining where change would be most appreciated. This has been a slow task, as much of lighter2 is un-commented (or at least, the comments are not very detailed). Also, I have had to learn much about the CS app framework, the instance tracking classes used in CS and the i* classes used throughout lighter2. Conceptually, I reached a good place to actually propose some changes to Scott, my mentor, last week, and we met to discuss just that.

What is the current status:
At present, we have identified the following concerns or features that need attention in lighter2 and would pertain to my proposal and my areas of expertise -

  1. lighter2 is an external dependency for any CS app. For redistribution purposes, it would be better if it was internal (either built into the library or as a plugin).

  2. lighter2 uses the 'direct lighting' approach for its central calculation. This works well and is efficient, but it is below the standard of other game engines, which use radiosity. For example, see this page under the heading 'Dynamic Lighting and Shadows' (http://developer.valvesoftware.com/wiki/Source_Engine_Features).

  3. While lighter2 is a single-run, non-interactive component, efficiency is still quite important. This is because during development of a game, static lighting needs to be recalculated every time the world is changed. This can be tedious if it takes more than a minute or two to give a result (and seconds would be even better).

  4. Conceptually, any light-mapping application can be thought of as a bootstrap. The rendering system will use the light maps to render the world but the light-mapping application needs a rendering system (or at least part of one) to construct the light maps. Therefore, lighter2 naturally depends on some components of the CS library (mostly viewing and projection calculations and geometry loading components).

    Some of the conceptual components of lighter2 (like the 'scene' and 'segment' classes) should be part of the CS library. I have not examined the library itself too deeply to see if it provides these components but Scott suggested that they are in fact copies of classes from the library. This is conceptually undesirable but there may be reasons for it. More discussion of this is in order.


  5. At the heart of the 'direct lighting' approach is a BRDF approximation used to calculate how much ambient/diffuse illumination each surface gives off for each light source. I have not yet identified where this calculation takes place but I'm operating on the assumption that this calculation is being done somewhere and is using a very simple BRDF model like the Blinn-Phong model.

What conclusions can be drawn:
From all of this I've identified some requirements for this project -

  1. New code should be easy to integrate (or already be integrated) into the CS library.

  2. The current 'lighter2' work flow (which will probably be more efficient than any new work flow) should be retained as a fallback for when quick calculations are important. This includes both the 'direct lighting' algorithm and the current BRDF model.

  3. We should consider re-writing lighter as a plugin. This is secondary to the main proposal objectives but if we are making enough changes to lighter this may happen anyways as a side-effect or as an add on at the end if there is extra time.

So what are the plans:
I'm going to start moving forward with changes now. Here's the initial proposal of work in the order it will be undertaken (this may change in a typical design-build fashion) -

  1. The direct lighting BRDF model will be upgraded to the Oren-Nayar model. This should be very easy and will satisfy part of the original proposal. If nothing else gets completed on this project, at least this will be in place. It will also serve as a way to finish examining the details of lighter2 that will be more productive than just reading the source. Note that we may need to convert the Blinn-Phong parameters to Oren-Nayar parameters so that material properties do not need to be redesigned for every existing world to take advantage of this improved model.

  2. The Radiosity algorithm will be implemented as a CS plugin. I have implemented Radiosity before and believe this can be done relatively quickly given the tools that CS already has built-in. Implementing it as a plugin will make it available for use in lighter2 or even as its own internal step in the standard CS rendering pipeline. Here's a breakdown of how this will be undertaken -

    1. The most efficient approach to Radiosity that I know of is the hemi-cube method. This fits well in any scanline rendering system that can render-to-texture. We will start by developing a CS app that renders the scene from the point of view of the light sources to a texture map. This will then be integrated into a bigger radiosity plugin.

    2. In the radiosity algorithm all surface patches can be light sources. To avoid the need to supply separate lights just for the radiosity system, existing light sources will have to be approximated with proxy geometry, and special work may be required to support spotlights and other non-area light sources. Point light sources will need to be approximated with very small area light sources.

    3. The hemi-cube textures are used to compute form factors and fill in a large matrix that describes which surfaces can see what parts of which other surfaces. This is a straight-forward calculation.

    4. With the form factors calculated, we need a large linear system solver that computes the equilibrium achieved by this world. This will initially be a standard Gauss-Seidel solver (a minimal sketch appears just after this list) and can be upgraded to something fine-tuned to the radiosity problem at a later time.

    5. The solved system must be written back to the geometry by placing the computed colors into the mesh vertices which normally requires some type of interpolation. From here, we could further propagate this data out into textures which would then become the light maps for the scene.
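
As a reference for step 4, here is a minimal single-channel Gauss-Seidel sweep for the classical radiosity system B_i = E_i + rho_i * sum_j F_ij B_j (B = radiosity, E = emission, rho = reflectance, F = form factors). Gauss-Seidel reuses already-updated values within a sweep, which converges faster than Jacobi for this diagonally dominant system. Plain arrays for clarity; this is a sketch, not code integrated with CS:

    #include <vector>
    #include <cmath>
    #include <cstddef>
    #include <algorithm>

    void GaussSeidelRadiosity (const std::vector<float>& E,
                               const std::vector<float>& rho,
                               const std::vector< std::vector<float> >& F,
                               std::vector<float>& B,
                               int maxSweeps, float tolerance)
    {
      const std::size_t n = E.size ();
      for (int sweep = 0; sweep < maxSweeps; ++sweep)
      {
        float maxDelta = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
        {
          float gathered = 0.0f;
          for (std::size_t j = 0; j < n; ++j)
            gathered += F[i][j] * B[j]; // B[j] already updated for j < i
          float newB = E[i] + rho[i] * gathered;
          maxDelta = std::max (maxDelta, std::fabs (newB - B[i]));
          B[i] = newB;
        }
        if (maxDelta < tolerance) break; // equilibrium reached
      }
    }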


At this point we will reevaluate and decide what is to be done next. Additional project tasks may include:

  • Improving radiosity with things like: a better linear system solver, automatic geometry or light map subdivision at high-frequency artifacts and support for more types of light sources.
  • Re-working of lighter2 as an internal dependency (see above)
  • Other changes to lighter2 to bring it up to speed with the CS library and make it more maintainable into the future.

Conclusions:
What's described here will constitute the bulk of this project, and work will begin immediately. I intend to reevaluate progress as I go and learn more about CS and lighter2. All comments are welcome and encouraged, as there are a plethora of assumptions underlying these ideas and any number of them could prove to be wrong. The collective knowledge of the CS community can do far better at identifying these problems than I can by digging through the mountains of code and documentation. My time is better spent now making changes rather than fact-checking!

Thanks to all for reading this! I will post more as I go.

Seth

2009-04-25

Permalink 09:42:45 pm, by Olliebrown Email , 157 words, 2129 views   English (US)
Categories: GSoC 2009, Personal

Community Bondage

Just a quick greeting to all those out there in CrystalSpace. I'm thrilled to be a part of the GSoC and CS and look forward to contributing something worthwhile. Thanks all around to the mentors and admins on CS for selecting my proposal. Scott and I have already chatted about the future of this project and will take up full planning once the school semester closes in a couple of weeks.

For those that don't know me yet, I'm a doctoral candidate in my sixth year of graduate study at the University of Minnesota. Computer graphics is my primary area of interest, and most of my programming experience involves real-time lighting and shading to some degree. These days I'm working on image-based rendering, recreating work on light fields with the hope of applying it to a new area.

I will do my best to keep things up to date on this blog as the summer progresses.

OllieBrown

Info about progress on my Google Summer of Code 2009 project on Advanced Lighting & Shading in CrystalSpace.
