The current PhotonMap class was painfully slow when accessing the KD-tree for irradiance estimation (the final gathering phase of photon mapping). Upon further inspection I found that the tree was implemented as a linked structure instead of the more efficient heap layout, and that it was never being balanced. I replaced the PhotonMap with the code from Jensen's book, which not only keeps the KD-tree in a heap but balances it before any lookups (in order to guarantee O(log n) performance), and now things are significantly faster. For example, 1M photons used to take over 43 minutes to simulate start to finish (without final gather). Now this finishes in about 30 seconds!
Of course, this could be a fluke; perhaps I have missed something in the new photon map. However, the results are looking promising:
Now I need to clean up the gathering phase. The simulation looks better than before, but it is now too bright and too noisy. Furthermore, I need to enable Final Gather on Jensen's photon map, which does not support it out of the box.
I want to rework Jensen's allocation scheme for the heap. At present it requires a 'maxPhotons' value when you initialize the map, and that is all the space that ever gets allocated. Since photon emission is stochastic you can't accurately predict beforehand just how many photons will be emitted; you can only give an upper bound, and in practice this upper bound is about three times bigger than it needs to be. A data structure that can take an initial guess and resize as needed would be preferable, to avoid the over-allocation this causes. This may be difficult: not only do Jensen's functions expect pointers to the photons, they expect the photons to be stored sequentially, which I'm not sure an expanding array structure can guarantee (or allow me to access).
The more I look at the photon map visualizations, the more I think the shadows are there. If I view them from across the room and squint, the photon density does seem lower in the areas where the shadows from the boxes should be. Furthermore, the whole point of indirect illumination in this scene is to add light to the shadowed parts that direct illumination cannot account for, so they shouldn't be as sharp and pronounced as in the direct-lighting version.
So, I think it's safe to say that the photon emission phase is okay (or at the very least, it's not the source of the current problems). I went ahead and added attenuation by the total photon count (the power is now divided by the total number of photons being emitted), but as a principle of Russian roulette you shouldn't decrease the power of bouncing photons, so I'm going to leave it at that.
Given the observation that light is getting under the boxes during the gathering phase, I know that phase needs more work. The kd-tree implementation used there is also slower than it should be, and the other problems I'm seeing (the really dim results) don't seem to be coming from the emission phase either, so I think it's time to turn my attention there.
That should be the mantra of every graphics programmer ... at least, that's what some professor told me one time.
I worked up a direct visualization of the photon emission stage by simply drawing a point in space for each photon, with each point colored by the photon's power. Now I'm seeing something very important: the power is not attenuating ... AT ALL. That's why we aren't getting any shadows, and it's probably why everything is at a constant power and too dim.
I thought that I could safely ignore the photon power until milestone 2 but I think I need to deal with it now so that's going to be the current task.
Here are the visualizations:
Note: the number of photons listed is the number of emitted photons. Since photons are recorded at each bounce, there are actually MANY more being added to the map and drawn. With Russian roulette in play, the photons bounce about 5 times on average, so multiply the number of emitted photons by 6 to get the number being stored and the number of rays being traced. This is all still happening quite efficiently: the last case has about 6M rays to trace, and it does so in only a few minutes. Not bad! Unfortunately, the splatting/final-gather phase is still painfully slow. I think it's because the kd-tree for the photon map is not being properly balanced.
Some Observations about these images:
So far, it's been a lot of house cleaning. There are still several key problems with the photon mapping algorithm that did not resolve themselves as I expected.
The key change was to the photon emission phase. I added a progress structure to this phase so that we can see when it is happening and how many rays it is creating. More importantly, I changed the photon scattering code to scatter photons diffusely instead of specularly. In the end we will need both, but for now the diffuse scattering is more important, and I don't think the specular scattering was being done right anyway. My hope was that by switching to diffuse scattering and ramping up the number of photons emitted we would get better results right away. This has not been the case.
There are two key problems in the final light maps that have yet to be resolved:
While these are not the only problems, they are the most troubling ones, and the ones I theorized were caused by improper photon scattering.
To proceed, I'm going to finish up the scattering with both diffuse and specular components chosen by statistical Russian roulette (exactly as suggested in Jensen's book) and then start working on the splatting/gathering phase of the simulation. The code for this phase comes straight from Jensen's book, so mostly I'm just going to confirm it's correct before I start to play with it and debug the implementation.
Here are some visuals of what's going on. These images are the actual lightmaps generated by lighter2. In both cases lmdensity was set to 10.0 so that the generated images would be high enough resolution to examine directly:
Here are some test cases I'm working with to debug and develop the global illumination changes. They are small tests that display important global illumination effects very clearly and have appeared in the literature describing various algorithms for it.
The scene for the Cornell Box already exists in the 'data' directory of the main development branch, but the materials describing the different colors are broken. I instead used a Cornell Box scene made for Blender. First I generated a ground-truth image using the radiosity system in Blender, then I exported the geometry and fixed up the color materials inside the world file to make sure we can achieve the same result in lighter2. Here are some images to show the differences. I will use images of this scene to show progress through each milestone.
One very interesting test case for radiosity is a sculpture in the Hirshhorn Museum in Washington D.C. by John Ferren, entitled "Construction in Wood, A Daylight Experiment". It was discovered by some of the early radiosity researchers (particular credit goes to Cindy Gorn, who first modeled the sculpture and presented it in her thesis), who used it to show how important diffuse-to-diffuse light interaction can be. All of the color visible on the viewing side of this sculpture comes from light bouncing diffusely (not specularly) off the surfaces on the back of the sculpture. The result is a structure that looks completely white and boring when naïvely ray-traced or directly lit, but vibrantly colorful when a global lighting solution is computed. I will also use this scene to evaluate and demonstrate progress on this project.
A New Plan
Thanks to all who offered feedback on my previous post. With the discovery of the GSoC '08 branch for lighter2 with photon mapping, plans need to change. I've been examining Greg Hoffman's changes to lighter2 to determine what work could be done, and I think there's a good chunk here to constitute a project. Here's my assessment of what the branch contains:
So, given the original content of my proposal and this discovery from last summer, it seems the new course of action should be to work on the photon mapping implementation. Here's a basic outline of what I could do, again welcoming comments:
Milestone 1: Repair
Milestone 2: Improve Quality
Milestone 3: Improve Speed/Features
Concerning the optional task under Milestone 2: photon mapping handles caustics particularly well (it's famous for it), so it would be easy to render them if information about refraction (namely, the index of refraction) is available in the material structure. It could make for some interesting, if very specialized, effects.
I'm planning about two weeks for each milestone, with an extra week for the first one just for getting out of the starting gate. Here's a rough timeline to completion of these milestones:
I want to make sure that the amount of work I'm doing is worthy of a full SoC project regardless of the time frame. I'm definitely slow getting started here, and I want to assure everyone involved that I will make that up as we go, either by putting in extra time now or beyond the scheduled GSoC end. Therefore, I think it is best to define a project of a scope appropriate for SoC so that no one feels shortchanged.
Info about progress on my Google Summer of Code 2009 project on Advanced Lighting & Shading in CrystalSpace.