I've been busy for a few days with a rather boring user study, and its results were less than great ... so I needed a pick-me-up. For me, this means some creative work. Hence, I decided to spend a little time modeling a new world to test my lighter2 changes: something considerably more complex than the Cornell box and the wooden construction scene, and one that would logically include every type of light (point, directional and spot). It's fun because it's art (in a way ... right?), and along the way I learn how to use Blender better and can refine my workflow for getting things into CrystalSpace's XML world format.
Anyway, here's what I've created:
It's a combination of a bunch of canonical models from the graphics community (from Stanford specifically ... yes, the bunny is in there), all placed in the Sponza Atrium model that's been appearing in global illumination papers for the past several years. The atrium is texture mapped (and has bump maps somewhere that I have yet to track down). The Stanford meshes are incredibly dense: 100K faces each AFTER I decimated them; in their original form they had millions ... some tens of millions. So this is an intense little scene to render, and particularly tough to run through lighter2. It's a more realistic example and hopefully will test just how robust the system is. Consequently it takes a VERY long time to light. Most of the time is spent laying out the lightmaps (not raytracing or photon mapping), so if you want to re-calc the maps, be prepared to wait a while and put it on a machine that you can let run for potentially a few days. I'll post some more images as lighter2 finishes running.
Blender model: I set up the world in Blender, and here is the file for that. It took a lot of work to get all of this into Blender. Here's a breakdown of what I did -
The model is still missing several textures, and when you export it there will be several errors that need to be fixed by hand. These problems are all present in the original 3ds file and were simply easier to fix by editing the world file directly.
SponzaBlender.zip (15.2 MB, hosted externally)
Material fixes for world file: Here are the extra textures and a snippet of XML that can be pasted into the world file over its 'textures' and 'materials' sections to fix all the errors created by Blender2Crystal. Note that bump maps are still missing; I can't seem to find these on the net anywhere.
SponzaMaterialFix.zip (499 KB, hosted externally)
CrystalSpace world: Here's the exported world with the material and texture fixes already applied.
SponzaCSWorld.zip (16.4 MB, hosted externally)
Static lit CS world: Here's the same world with lightmaps included so you don't have to wait for it to finish raytracing. This is just direct raytracing for now. A photon mapped version is forthcoming.
[Still rendering, will post soon]
Lots of posts today, sorry for that. This one will be short.
So, Final Gather is slow as molasses. Furthermore, it doesn't seem to be working right in my code. But it is very important for smoothing out noise in the photon map. Enter the irradiance cache.
The irradiance cache is a concept introduced by Greg Ward and colleagues back in 1988, and I believe it is part of Ward's Radiance renderer. The paper that describes it in detail (and which is surprisingly easy to read) can be found here (hint to Scott, check this out):
The basic idea is that diffuse inter-reflections are very hard to compute using Monte Carlo methods (and Final Gather is a Monte Carlo method) but by-and-large they are very uniform and slowly changing across a surface (i.e. a good candidate for interpolation). So we would like to reduce the number of times it needs to be computed and interpolate everywhere else.
Ward describes a two-tiered method: the primary method is the standard Monte Carlo computation, and the secondary method interpolates cached values from previous computations. The secondary method needs to know when it can interpolate and how to weight the cached values. This is done by approximating the gradient of irradiance between the cached point and the point we need irradiance for. If the gradient estimate is too high, a new value is needed. Otherwise we can weight the cached value by the inverse of the error inherent in the estimate and get a very good approximation that is considerably cheaper than the Monte Carlo method.
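In symbols (this is my reading of Ward's paper, so take the exact form with a grain of salt): for a cached sample i at position p_i with normal n_i, cached irradiance E_i and harmonic mean distance R_i to the surfaces it saw, its weight at a new point p with normal n is

    w_i(p) = 1 / ( ||p - p_i|| / R_i + sqrt(1 - n . n_i) )

Every sample whose weight exceeds a user-chosen threshold 1/a contributes, and the interpolated value is E(p) ~= sum(w_i * E_i) / sum(w_i). If no sample passes the threshold, we fall back to the full Monte Carlo computation and cache the result.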
The devil's in the details and I won't bore you with them (other than to say it involves an octree built around the valid distance of each cached sample). But with Jensen's summary of Ward's paper (and the paper itself) I think I've got the irradiance cache implemented. It needs some testing and such (still to come), but hopefully this will help with noise and speed ... provided I can then figure out what is wrong with the Final Gather!
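To make the idea concrete, here's a minimal sketch of the lookup side in C++. The names are hypothetical (this is not the actual lighter2 code), and a real implementation would gather the candidate samples from that octree rather than from a flat list:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static float Dot (const Vec3& a, const Vec3& b)
    { return a.x*b.x + a.y*b.y + a.z*b.z; }

    static float Dist (const Vec3& a, const Vec3& b)
    {
      Vec3 d = { a.x-b.x, a.y-b.y, a.z-b.z };
      return std::sqrt (Dot (d, d));
    }

    struct IrradianceSample
    {
      Vec3  pos;      // where the full Monte Carlo computation was done
      Vec3  normal;   // surface normal at that point
      float irrad[3]; // cached irradiance E_i (RGB)
      float meanDist; // harmonic mean distance R_i to surfaces seen from pos
    };

    // Returns true and fills 'result' if enough nearby samples were usable;
    // false means a new full computation (and a cache insert) is required.
    bool InterpolateIrradiance (const Vec3& p, const Vec3& n,
                                const std::vector<IrradianceSample>& nearby,
                                float alpha, float result[3])
    {
      float sum[3] = { 0, 0, 0 };
      float weightSum = 0;
      for (const IrradianceSample& s : nearby)
      {
        // Ward's error estimate: a translation term plus a rotation term.
        float err = Dist (p, s.pos) / s.meanDist
                  + std::sqrt (std::max (0.0f, 1.0f - Dot (n, s.normal)));
        float w = 1.0f / (err + 1e-6f);
        if (w > 1.0f / alpha)  // only trust samples with low estimated error
        {
          for (int c = 0; c < 3; ++c) sum[c] += w * s.irrad[c];
          weightSum += w;
        }
      }
      if (weightSum <= 0) return false;
      for (int c = 0; c < 3; ++c) result[c] = sum[c] / weightSum;
      return true;
    }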
I'd kill for some small unit tests right now, but we'll see if we need them first.
To round out my previous post concerning GSoC I thought I should list out the changes I have made and how the milestones are lining up.
Despite the noise and artifacts still present I think it's safe to say Milestone 1 is complete (or nearly so). Here's a list of things still to finish/implement.
Immediate priorities (considered unfinished Google Summer of Code obligations):
Future changes (for fun and profit!):
It seems appropriate to send thanks out to Res and Scott at this point as they have both been quite helpful this summer and I'm sure will continue to be.
And now, we carry on ...
My handling of light attenuation turned out to be incorrect. Res and Martin set me straight. I had the conceptual model that distance attenuation accounts for the phenomenon of light losing power as it travels through a medium (even just the atmosphere). That is what 'attenuation' means in physics and optics, but it is not what distance attenuation accounts for in graphics. Distance attenuation accounts for the fact that as light moves away from its source the energy spreads out (like an expanding wavefront). It spreads over the surface of a sphere, so the 'realistic' distance^2 factor accounts for this perfectly.
The consequence of this type of attenuation (the correct type) is that photon mapping attenuates automatically. We distribute photons equally around the sphere of the light source, so when they land their density already reflects how far they have traveled from the light. In other words, the spreading effect is built into the photon distribution automatically.
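A quick back-of-the-envelope check makes this concrete. A point light emitting total power Phi spreads that power over a sphere of area 4*pi*d^2, so the irradiance at distance d is

    E(d) = Phi / (4 * pi * d^2)

If we instead emit N photons, each carries Phi/N, and the density of photon hits on that same sphere is N / (4*pi*d^2). Multiplying density by per-photon power gives back exactly Phi / (4*pi*d^2). The 1/d^2 falloff is already baked into the photon distribution, so applying it a second time (as I was doing) incorrectly darkens the photon-mapped result.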
So, to really do calibration between raytracing and photon mapping I need to remove the attenuation from the photons (already done) and then switch all the lights to use 'realistic' attenuation (which is not the default). My apologies to res for second-guessing his advice, as this was his original suggestion. As soon as I did this it became apparent that things were dramatically more comparable between raytracing and photon mapping:
As you can see, despite the similarity resulting from the change to realistic attenuation, there is still a marked difference in the exposure of the two. After revisiting this from many different angles, over and over again, changing the code in different ways and attempting both a mathematical and a visual calibration, I've decided that this issue is going to have to wait. Here's an example of the problem:
Note that the raytracing and photon mapping graphs have similar but misaligned shapes. This misalignment is the problem. There is no easy way to simply fudge things and fix it, as the fudge would be entirely dependent on the scene being rendered. Furthermore, I'm starting to think (after talking with colleagues) that there is a mistake somewhere in either the RT code or the PM code that is causing this misalignment, and simply fudging things to hide it is not a permanent solution (or one that I should be spending so much time on).
So, three days gone on this, but at least I have something to show for it. New configuration options! (Well, and a lot of frustration!) Here are the new options:
'forcerealistic' - This option can be enabled or disabled and will force all the static and pseudo-dynamic lights in a world to use 'realistic' attenuation mode. This saves you the trouble of having to redo your world in order to use it with photon mapping.
'lightpowerscale' - Scale all the lights in a scene by the given scaling factor. The scale is applied to the light color, which is essentially the same as its power. When you use 'forcerealistic' things tend to get much darker, so the lights need to be scaled up to compensate. Again, this option avoids having to edit the world file to achieve this.
'pmlightscale' - Like 'lightpowerscale', this will scale all the static and pseudo-dynamic lights in the scene, but only for the photon mapping phase. This is in addition to any scaling applied by 'lightpowerscale'. It allows you to fudge things from the command line and bring the exposure of the photon mapping simulation and the raytracer in line with one another.
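For example, a run using all three options might look like the line below (I'm assuming lighter2's usual --key=value command-line syntax here, and the scale factors are purely illustrative; they have to be tuned per scene):

    lighter2 --forcerealistic=yes --lightpowerscale=4.0 --pmlightscale=1.5 data/sponza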
Final Gathering. No, it's not the name of yet another horror movie sequel (although Google isn't much help for figuring out what it in fact is).
It's a technique for effectively smoothing noise in global illumination from Lambertian surfaces. Basically, given a solution to global illumination (like radiosity or a photon map), instead of looking up the diffuse light value in the GI solution directly, you do one final bounce of light by shooting rays out across the hemisphere above the point you are rendering. These rays sample the secondary light arriving at this point, much like a distribution raytracer would send out rays to sample the BRDF. In this case, a random sampling of the Lambertian distribution is not the best bet (according to Jensen anyway). It is better to use a grid of points placed across the hemisphere according to Lambert's cosine law and then jitter these points slightly to ensure the full hemisphere gets sampled.
When these rays hit a surface a distribution raytracer would send out more rays to sample the light hitting that surface. In FG, you use the precomputed GI solution instead. So, like shadow rays, FG rays do not bounce. However, FG is intentionally used for Lambertian surfaces (perfectly diffuse surfaces). This means that the hemisphere above the point must be FULLY sampled and that takes a lot of rays. Doing this at every point in the scene is very inefficient.
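Here's a small sketch of that sampling scheme in C++ (hypothetical names, not the actual lighter2 code; I'm following Jensen's stratified, cosine-weighted approach as I understand it). The hemisphere is split into an M x N grid in (theta, phi) and each cell's sample is jittered:

    #include <cmath>
    #include <cstdlib>

    struct Vec3 { float x, y, z; };

    static float Rand01 () { return rand () / (RAND_MAX + 1.0f); }

    // Stand-in for the ray cast + GI-solution lookup the renderer performs;
    // returns the radiance seen from 'origin' along 'dir'.
    float TraceGatherRay (const Vec3& origin, const Vec3& dir);

    // Irradiance at 'p' via final gathering. For brevity the surface normal
    // is assumed to be +z; a real version would rotate each direction into
    // the surface's local frame.
    float FinalGather (const Vec3& p, int M, int N)
    {
      const float PI = 3.14159265f;
      float sum = 0.0f;
      for (int j = 0; j < M; ++j)
        for (int k = 0; k < N; ++k)
        {
          // Stratified, cosine-weighted direction with jitter.
          float xi1 = (j + Rand01 ()) / M;
          float xi2 = (k + Rand01 ()) / N;
          float theta = std::asin (std::sqrt (xi1)); // Lambert's cosine law
          float phi   = 2.0f * PI * xi2;
          Vec3 dir = { std::sin (theta) * std::cos (phi),
                       std::sin (theta) * std::sin (phi),
                       std::cos (theta) };
          sum += TraceGatherRay (p, dir);
        }
      // With cosine-weighted samples the cos(theta)/pdf terms cancel,
      // leaving a constant pi / (M * N).
      return sum * (PI / (M * N));
    }

Since the gather rays never bounce (they terminate in the GI solution, much as shadow rays terminate at the light), the cost is M * N rays per shading point, which is exactly why the irradiance cache discussed next matters.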
Enter the irradiance cache. Diffuse lighting changes very slowly across a surface; think of a big white wall in an office. (The one exception would be a caustic, which is actually a diffuse effect, but we'll ignore that for now.) Slowly changing functions don't need to be sampled as frequently as quickly changing ones, so re-computing the FG value at every point across a large surface is wasteful. Instead, we can sample it sparsely and use interpolation of nearby values to fill in the gaps. This technique is known as irradiance caching, and the math behind it is pretty intense.
We still have noise in our simulation and the best way to combat this will be with a final gathering step (something that the previous GSoC project had attempted to include but which I believe was not implemented properly). Unfortunately adding FG is going to severely tank our performance during the lighting calculation phase so (time permitting) we are going to also need an irradiance cache to make it work in a reasonable amount of time. The cache itself is quite simple (very similar in fact to a photon map) but the metrics used to determine where a new sample is needed and where a pre-existing one can be used instead are not so simple. Jensen discusses the irradiance cache in full detail in his book (although he never uses the term 'Final Gathering' that I can see) so implementation should be a matter of translating all the summations and integrals into effective code.