crystalspace3d.org/planet

August 17, 2014

Naman22's blog

State of shadows

While both the deferred and PSSM render managers work great, their shadows don't seem to be working as they should. If we load a scene with the deferred renderer enabled, everything appears to work fine

deferredShadows

Until you load a point or spot light. Then things begin to look weird.
When the light is close to the object whose shadow it is casting, things look fine.

Shadows

However, if the light moves away from the object, there are no shadows

noShadows

The same thing happens for point lights.

A possible reason for this issue could be that the shadow maps aren't rendered correctly, but a texture dump of the shadow map says otherwise.

halfShadows
shadowMap

Maybe it's a hardware issue? I tried it on different hardware, but the issue persists.

Now the only remaining place where this issue could originate is the shader level.
shadow_depth.xml is responsible for mapping the shadows, but nothing seems to be wrong there. It could be that the shader is taking slices into account when it shouldn't, or that the coordinates from use_buffer.xml aren't sent correctly.

In addition to that, the PSSM render manager is broken. It used to work earlier, but now it doesn't exhibit shadows at all. Weird!

pssm

by naman22 at August 17, 2014 09:40 PM

Working with lights

There are several ways to enable lights in Crystal Space. One is from the world file (XML). The following is the syntax to light your environment with each type.

1). Point light :-
<light name='Lamp'>
<center y='10' x='0' z='0'/>
<color blue='1.0' green='1.0' red='1.0'/>
<radius>100</radius>
<type>point</type>
<move>
<matrix>
<roty>0</roty>
<rotz>0</rotz>
<rotx>0</rotx>
</matrix>
</move>
</light>

PointLight

2). Directional light :- Same as a point light; rotate the matrix to change its direction
<light name='Lamp'>
<center y='10' x='0' z='0'/>
<color blue='1.0' green='1.0' red='1.0'/>
<radius>100</radius>
<type>directional</type>
<move>
<matrix>
<roty>0</roty>
<rotz>0</rotz>
<rotx>-1.57</rotx>
</matrix>
</move>
</light>

DirectionalLight

3). Spot light :- Requires some additional values:
<light name='Lamp'>
<center y='10.00' x='0' z='0'/>
<color blue='1.0' green='1.0' red='1.0'/>
<radius>200</radius>
<type>spot</type>
<move>
<matrix>
<roty>0</roty>
<rotz>0</rotz>
<rotx>-1.57</rotx>
</matrix>
</move>
<spotlightfalloff outer="50" />
<attenuation>none</attenuation>
<halo>
<type>nova</type>
<seed>0</seed>
<numspokes>100</numspokes>
<roundness>0.5</roundness>
</halo>
</light>

SpotLight
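The same lights can also be created programmatically through the engine API instead of the world file. Below is a minimal sketch of that approach, based on the iEngine::CreateLight() call from the Crystal Space tutorials; the sector name "room" and the surrounding setup are assumptions for illustration, not part of this post.

// Minimal sketch: adding a point light from code rather than the world file.
// Assumes an initialized iEngine and an existing sector named "room".
#include "cssysdef.h"
#include "crystalspace.h"

void AddPointLight (iEngine* engine)
{
  iSector* room = engine->FindSector ("room");   // sector that will own the light
  csRef<iLight> light = engine->CreateLight (
    "Lamp",                // light name, as in the XML above
    csVector3 (0, 10, 0),  // position, matches <center>
    100.0f,                // radius
    csColor (1, 1, 1));    // white color
  room->GetLights ()->Add (light);
}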

by naman22 at August 17, 2014 07:07 PM

May 25, 2014

Naman22's blog

Shadows!

Yet another GSoC, only this time it's shadows.
Game on!

by naman22 at May 25, 2014 08:30 AM

Shader Weaver

Hello,

Shader weavers are intimidating at first, mainly because they are difficult to debug and the connections are quite confusing.
The basic idea of the weaver can be learned from here. However, the code written there is outdated... it's better to go with the actual code.

Following are some weaver debugging techniques that I learned.
1. Debugging by altering the output color. For example:

float bug = 0.0;
if (something_happens)
  bug = 1.0;
op_col.r += bug;  // affected pixels show up with a red tint

2. CS flag to disable the shader cache: -cfgset=Video.ShaderManager.EnableShaderCache=false

3. CS flag to dump shader programs to %temp%\shader : -cfgfile=/config/shader-debug.cfg

4. Dump textures to %temp%\textures

   a. CS flag : -plugin=bugplug
   b. ctrl+d
   c. ctrl+shift+t

5. Switching between CG/GLSL

   a. The simplest way is to delete the shader plugin that isn't needed.
   b. Disable GLSL only:
      -cfgset=Video.OpenGL.UseExtension.GL_ARB_shader_objects=false
   c. Remove/edit the GLSL/CG technique in the main entry weaver.

6. CS flag to check for syntax errors: -verbose

-Naman

by naman22 at May 25, 2014 08:16 AM

February 26, 2014

CrystalSpace News

GoogleSummerOfCode2014 Welcome


An introduction for potential future Google Summer of Code students has been posted: https://sourceforge.net/p/crystal/discussion/gsoc/

by Jorrit at February 26, 2014 08:53 AM

February 25, 2014

CrystalSpace News

GoogleSummerOfCode2014



Crystal Space was again accepted for Google Summer of Code 2014! (http://www.google-melange.com/gsoc/homepage/google/gsoc2014). Google Summer of Code is a yearly event where Google gives students a chance to do a paid summer job for Open Source projects. So if you are a student and are interested in earning a bit of money during the summer, you can apply for any of the 177 accepted projects, and more specifically you can apply for Crystal Space. You can apply (both as a mentor or as a student) at the Crystal Space Google Summer of Code site (http://www.google-melange.com/gsoc/org2/google/gsoc2014/crystal). From there you can also visit the ideas page. But of course you are free to suggest your own ideas too. I would especially like to see ideas related to the AresEd project, for example.

Discussion about Google Summer of Code can be done on our mailing lists or forums: Community

We're looking forward to working with you!


by Jorrit at February 25, 2014 08:04 AM

September 26, 2013

Naman22's blog

Concluding GSoC 2013

Hi,
The entire water shader code has been ported to the weaver, which will make it easy to add post effects. In addition, heightmaps are used to generate natural water waves and a larger ocean size.
Now it looks like this...

pertub
LargeWater

-Naman

by naman22 at September 26, 2013 02:58 AM

Update GSoC 2013

Howdy folks!

So there were a couple of things that had to be done before actually starting on interactivity.

1). Porting XmlShaders to ShaderWeaver.
2). Mismatching of texturemaps.
3). Large Size of Oceans
4). FFT in oceans
5). miscellaneous

As for 1), I tried to port the code, but I'm still kind of struggling to get it right. Thanks to res, Rlydontknow and Sueastside I'm able to understand the weavers quite well, but weird errors still persist.

The mismatching actually didn't exist in the temporary weaver code, so I guess the complete weavers should fix this issue.

The values of CELL_WID, CELL_LEN and MAX_OCEAN_DISTANCE were increased to spread the ocean even wider, but that increased the pre-computation of the LODs, so the "gran" (vertices per tile) had to be altered accordingly: gran = pow(2.0,LOD_type)/64.0f;

FFT needed vertex-shader (VS) texture mapping, but the weavers weren't complete at the moment, so I decided to give it a shot in the old shaders. If you look at the FFT implementation you'll find that FFT essentially creates a complex "texture". FFT calculations are very heavy, so it is easier to just take a heightmap and use it for the ocean waves. It actually gives a pretty decent result.

Heightmap

Along the way I found that the ocean water moves along with the camera, which shouldn't happen because in a game water usually stays at one particular position. It does that due to the LOD calculations. So along with taking the wave parameters from the world file, the plugin will also take the position and radius to which the user wants the ocean to extend.

-Naman

by naman22 at September 26, 2013 02:35 AM

August 24, 2013

Coder's nook

Implementing the Diffusion DoF

As explained previously, the diffusion DoF algorithm can be separated in the following steps:


  • Calculate the matrix coefficients (a, b, c) based on scene CoC for each row of the image.
  • Solve the tridiagonal system for each row.
  • Calculate the matrix coefficients (a, b, c) based on scene CoC for each column of the image.
  • Solve the  tridiagonal system for each column.

Calculating the coefficients

From the previous post one knows that:

a = -s_{i-1}
b = 1 + s_{i-1} + s_i
c = -s_i

The index is used because the CoC changes between pixels.
And with some math explained here one knows that:

s_i = CoC_i^2
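To make the per-row setup concrete, here is a small CPU-side sketch of filling one row's coefficients from its CoC values. The function and variable names are made up for illustration; only the formulas above are taken from the post.

// Sketch: build the tridiagonal coefficients of one image row from its CoC values.
// Equation i reads: a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = y[i], with s_i = CoC_i^2.
#include <vector>

void BuildRowCoefficients (const std::vector<float>& coc,
                           std::vector<float>& a,
                           std::vector<float>& b,
                           std::vector<float>& c)
{
  const size_t n = coc.size ();
  a.resize (n); b.resize (n); c.resize (n);
  for (size_t i = 0; i < n; ++i)
  {
    // s_{i-1} and s_i, clamped to zero at the image border so the
    // first and last equations have no neighbour terms.
    const float sPrev = (i > 0)     ? coc[i - 1] * coc[i - 1] : 0.0f;
    const float sCur  = (i + 1 < n) ? coc[i] * coc[i]         : 0.0f;
    a[i] = -sPrev;
    b[i] = 1.0f + sPrev + sCur;
    c[i] = -sCur;
  }
}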

Solving the system

Solving the system is the most complicated step. As you probably noticed, solving it on the CPU with the TDMA algorithm is easy, but that algorithm is serial, so we have to find a more appropriate one.
The algorithm I chose was Cyclic Reduction. Given a tridiagonal system, this algorithm reduces the original system to a new one with half the variables. Therefore one can recursively reduce it until one reaches a system with few enough variables that it can be solved fast. After that one can perform a back substitution in the system of n variables using the result from the system of n/2 variables, until one reaches the final solution.

Suppose one has the following system:

a_1x_1 + b_1x_2 + c_1x_3 = y_1
a_2x_2 + b_2x_3 + c_2x_4 = y_2
a_3x_3 + b_3x_4 + c_3x_5 = y_3


let:
k_1 = - \frac{a_2}{b_1}
k_2 = - \frac{c_2}{b_3}

Then multiply eq. 1 by k_1, eq. 3 by k_2, and sum the three equations:


(a_1k_1)x_1 + (c_1k_1 + b_2 + a_3k_2)x_3 + (c_3k_2)x_5 = y_1k_1 + y_2 + y_3 k_2

So now one has a reduced system that is also tridiagonal and has only odd indices. One can recursively apply this technique until reaching a system with only one or two unknowns.
Thus the new coefficients are:

a_1^{'} = a_1 k_1
b_{1}^{'} = c_{1} k_{1} + b_{2} + a_{3} k_{2}
c_{1}^{'} = c_{3} k_{2}
y_{1}^{'} = y_{1} k_{1} + y_{2} + y_{3} k_{2}


Then after reducing and solving the odd indices one can perform the back substitution solving the even indices:



a_1x_1 + b_1x_2 + c_1x_3 = y_1
x_2 = \frac{(y_1 - a_1x_1 - c_1x_3)}{b_1}

With this algorithm one can reduce all odd indices in parallel and solve all even indices in parallel.
Notice that one can also reduce both odd and even indices at the same time, getting two systems with half the unknowns each. With this method each step doubles the number of systems and halves the number of unknowns; this is called PCR (Parallel Cyclic Reduction).
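As a reference for what one reduction step does, here is a small CPU sketch using the k1/k2 formulas above. It is illustrative only (the real thing runs in fragment shaders, as described below), and the data layout is an assumption.

// Sketch: one cyclic-reduction step on a tridiagonal system stored per equation.
// Equation i reads: a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = y[i].
// Every second equation is kept and absorbs its two neighbours.
#include <vector>

struct Tridiag { std::vector<float> a, b, c, y; };

Tridiag ReduceOnce (const Tridiag& in)
{
  const size_t n = in.a.size ();
  Tridiag out;
  for (size_t i = 1; i < n; i += 2)   // keep equations 1, 3, 5, ...
  {
    const bool hasNext = (i + 1 < n);
    const float k1 = -in.a[i] / in.b[i - 1];                    // eliminates x[i-1]
    const float k2 = hasNext ? -in.c[i] / in.b[i + 1] : 0.0f;   // eliminates x[i+1]
    out.a.push_back (k1 * in.a[i - 1]);
    out.b.push_back (in.b[i] + k1 * in.c[i - 1] + (hasNext ? k2 * in.a[i + 1] : 0.0f));
    out.c.push_back (hasNext ? k2 * in.c[i + 1] : 0.0f);
    out.y.push_back (in.y[i] + k1 * in.y[i - 1] + (hasNext ? k2 * in.y[i + 1] : 0.0f));
  }
  return out;   // a tridiagonal system in the remaining unknowns, half the size
}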

So this algorithm is pretty simple, but its implementation using fragment shaders was painful.
As we are solving a system with CR, any deviation in the values can cause a huge difference in the result because the error propagates at each step. Thus the use of texture coordinates to access the data leads to some problems:
  • A small offset in the texcoord causes interpolation between neighboring pixels, leading to wrong values.
  • We must treat access to out-of-bounds texture values.
  • In the back-substitution stage we must 'process' only even pixels and copy the odd ones.
On top of that, when I'm writing code a mistake or two always slips in, and you can't easily debug shaders or even notice a small deviation in pixel colors (which in this case are the problem variables). Summing all that up, it took a whole week and some days to get everything working properly. But in the end it was rewarding to see the result.

 
Notice that the video quality isn't very good; it looks much better live.

by Pedro Arthur (noreply@blogger.com) at August 24, 2013 11:39 AM

DoF techniques

This week I started to research DoF (Depth of Field) techniques. Besides the classical Gaussian blur I found 2 interesting techniques: the hexagonal DoF with bokeh from Frostbite 2 and the Diffusion DoF used in Metro 2033.
For the first, I was able to implement the hexagonal blur, but I haven't yet figured out how to compose the final image based on the blurred image and the per-pixel CoC (Circle of Confusion). Simply interpolating between the original image and the blurred one based on the CoC doesn't look good, and there is also bleeding of in-focus/out-of-focus pixel colors. Here are some images (notice that I'm faking HDR to make brighter spots more perceptible)


Diffusion DOF

On the other hand, the Diffusion DoF elegantly uses the heat diffusion equation to implement the DoF effect. This method treats the image as a medium with each pixel's color being its heat. Considering the pixel's CoC as the heat conductivity of that pixel, we can simulate the heat diffusion "spreading" the color among the pixels. Below we have the heat equation:

\frac{\partial u}{\partial t} = \beta \left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right)

Using the Alternating Direction Implicit (ADI) method we alternate the directions (x, y), solving two one-dimensional problems at each time step. This 2D ADI method is unconditionally stable, which means that we can use any time step we want.
Simplifying the notation we have:

u_j^k = u_j^{k+1} - \frac{\beta \Delta t}{\Delta x^2}\left( u_{j+1}^{k+1} - 2u_j^{k+1} + u_{j-1}^{k+1}\right)
u_j^k = \left(1+2s\right)u_j^{k+1} - s\left(u_{j+1}^{k+1} + u_{j-1}^{k+1}\right)
where: s = \frac{\beta \Delta t}{\Delta x^2}, and u_{j}^{k} is the color at position j at time k.
As the new value of u at time k+1 depends on its adjacent values, we need to solve a system where:
\left( \begin{array}{cccccccc} b & c & & & & & & 0 \\ a & b & c & & & & & \\ & & & & & & & \\ & & & (...) & & & & \\ & & & & & & & \\ & & & & a & b & c & \\ & & & & & a & b & c\\ 0 & & & & & & a & b\\ \end{array} \right) \times \left( \begin{array}{c} u_0^{k+1} \\ u_1^{k+1} \\ \\ (...) \\ \\ \\ u_{n-1}^{k+1} \\ u_n^{k+1} \\ \end{array} \right) = \left( \begin{array}{c} u_0^k \\ u_1^{k} \\ \\ (...) \\ \\ \\ u_{n-1}^{k} \\ u_n^{k} \\ \end{array} \right)
with:
a = -s
b = 1 + 2s
c = -s

For this kind of matrix (tridiagonal) we have fast methods to solve the system; in my CPU implementation I used the TDMA (also known as the Thomas algorithm).
Therefore we first have to calculate the diffusion in the x direction and then in the y direction. Notice that for each row/column we need to solve one system.
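For reference, the TDMA itself is short; here is a compact sketch of the CPU version. The names are mine, and it assumes the a/b/c layout defined above with a[0] = c[n-1] = 0.

// Sketch: Thomas algorithm (TDMA) for one tridiagonal system
// a[i]*u[i-1] + b[i]*u[i] + c[i]*u[i+1] = y[i].
// Takes copies of b and y so the caller's data is left untouched.
#include <vector>

std::vector<float> SolveTridiagonal (const std::vector<float>& a, std::vector<float> b,
                                     const std::vector<float>& c, std::vector<float> y)
{
  const size_t n = b.size ();
  // Forward elimination: remove the sub-diagonal.
  for (size_t i = 1; i < n; ++i)
  {
    const float m = a[i] / b[i - 1];
    b[i] -= m * c[i - 1];
    y[i] -= m * y[i - 1];
  }
  // Back substitution.
  std::vector<float> u (n);
  u[n - 1] = y[n - 1] / b[n - 1];
  for (size_t i = n - 1; i-- > 0; )
    u[i] = (y[i] - c[i] * u[i + 1]) / b[i];
  return u;
}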

Before implementing it in Crystal Space, I implemented it first on the CPU to have a reference, and here are the results:

Original image

CoC image

First step(diffusion in x direction)

Second step(diffusion in y direction)

You can download the code here.

by Pedro Arthur (noreply@blogger.com) at August 24, 2013 11:25 AM

August 15, 2013

Project Ares

Being Friendly

Perhaps a weird subject line but it reflects what I have been doing to AresEd in the past few weeks. I have been working very hard on trying to make AresEd more userfriendly. More specifically the entity editor where templates and quests can be made. I found out myself that even I had trouble creating new templates and quests. I forgot what kind of messages were supported, what parameters they needed and so on. Added to that the user interface was really not that nice and it was also hard to add support for new property classes, rewards or triggers.

So various things were done to fix this problem. First I changed the template and quest editor to a newly discovered WX component: the property grid. This is basically a tree of typed properties. This tree offers a much nicer way to get an overview of the selected template or quest (with collapsible parts) and also makes it easier to edit. And one of the biggest benefits: it is a lot easier to add support for new types of property classes and quest components. Here is a screenshot of how the property grid looks for one of the templates included with Ares:


In addition to giving a better overview and editing facility the property grid also supports custom buttons which you can add to various editors. In the screenshot above you can see such a custom button for the 'Monitor' property. Since that property expects an entity as a parameter the button labeled '...' will present the user with a list of entities to select from. This custom button is added to various types of editors (like templates, quests, messages and so on). There is even a custom button for a position (vector3). When you press that one you will get a wizard that will give you a few options to quickly fill in a position from various sources:


This context sensitive wizard button is used throughout various places in the template and quest editors.

In addition to the property grid I also added a lot of useful messages from CEL to the list of known messages. This makes it a lot easier to find what messages you can send to various entities and avoids the need to have to remember it all.

The most recent addition is a new wizard system for templates and quests. When you create a new template or quest you are now given the option to create this new object with a wizard. Wizards are defined in a config file and are parametrized. This is a very easy and quick way to create new templates or quests based on common patterns. Once created you just modify the template or quest as usual.

A lot more work is needed to make AresEd really userfriendly. For example, I would also like to add more comprehensive tooltips to various parts of the editor and of course there need to be many more wizards.

So still a lot to do :-)



by Jorrit Tyberghein (noreply@blogger.com) at August 15, 2013 12:40 AM

August 06, 2013

CrystalSpace spherical terrain generator

Okay my idea does look like it will work

Essentially I think that my idea to handle the intersection seam will work, so I am going to keep working on that and hopefully I will have a beautiful prototype at the end

August 06, 2013 03:18 AM

Crystal Space Integration and the areas of intersection

I just want to make a note that the integration into the Crystal Space library is a rather interesting process, as the entire SDK is very intricate, so every day that I spend learning it I continue to be surprised with ideas that I have never had before.

Today I took a break from the feeders (for the terrain randomization) and the integration as a plugin to look at the overlap. I think that my current solution is to map each adjacent edge to the other adjacent edge, but it has occurred to me that I am not sure if this is possible, so I am going to go back a few versions to test whether it is possible before spending too much time on it.

August 06, 2013 02:04 AM

July 30, 2013

CrystalSpace spherical terrain generator

here is another rendering. hopefully I will have a few of the...



here is another rendering. hopefully I will have a few of the bugs fixed by the end of the night

July 30, 2013 12:33 AM

Okay so I figured out how to use libnoise and this is the start...



Okay so I figured out how to use libnoise and this is the start of the functioning generator. As you can see I removed the planar intersection points; however, I realize that I may not actually need to find the points of intersection on the blank sphere. I am thinking that I may just have the edges of the planes averaged, like a form of Gaussian blur.

July 30, 2013 12:21 AM

July 29, 2013

CrystalSpace spherical terrain generator

libnoise tutorials

Currently I am just going over the libnoise tutorials so hopefully I should have some spheres with height maps shortly

July 29, 2013 05:33 PM

SAFESEH issue solved

I solved the SAFESEH issue by just turning it off in my compiler; however, if libnoise is to be integrated into Crystal Space it should be modified so that it can handle SAFESEH (which is currently beyond my understanding)

July 29, 2013 04:38 PM

Issues with libnoise

Due to libnoise's age, there is a slight issue with the SAFESEH protocol: there are no tables for the DLL. So I am currently trying to find a workaround; however, as it stands it appears that libnoise will cause problems for people using newer compilers

July 29, 2013 04:19 PM

Starting on the noise generation

Okay so I have started going over all the boilerplate for the Crystal Space src and working on mine. Today, however, I am working with the libnoise library, and hopefully I will have an interesting looking sphere that is very spiky. I am not going to write the seam code yet though, so the seams will be very visible.

July 29, 2013 03:48 PM

July 27, 2013

Coder's nook

Finalizing the HBAO

This week I finished fully implementing the HBAO effect; below is the effect xml file:
<posteffect>
<layer name="hbao" shader="/shader/postproc/HBAO/HBAO.xml" downsample="0">
<parameter name="angle bias" type="float">0.52359877</parameter>
<parameter name="sqr radius" type="float">0.01</parameter>
<parameter name="inv radius" type="float">10</parameter>
<parameter name="radius" type="float">0.1</parameter>
<parameter name="num steps" type="float">25</parameter>
<parameter name="num directions" type="float">32</parameter>
<parameter name="contrast" type="float">2.3</parameter>
<parameter name="attenuation" type="float">1.0</parameter>

<input source="/data/posteffects/CosSinJitter.bmp" texname="tex csj"/>
<output name="out" format="argb8" />
</layer>

<layer name="blurX" shader="/shader/postproc/HBAO/BilateralBlurX.xml">
<parameter name="blur radius" type="float">7</parameter>
<parameter name="blur falloff" type="float">0.010204081632</parameter> <!-- 1 / (2*radius*radius)-->
<parameter name="sharpness" type="float">100000</parameter>

<input layer="hbao.out" texname="tex diffuse"/>
<output name="out" format="argb8" />
</layer>

<layer name="blurY" shader="/shader/postproc/HBAO/BilateralBlurY.xml">
<parameter name="blur radius" type="float">7</parameter>
<parameter name="blur falloff" type="float">0.010204081632</parameter> <!-- 1 / (2*radius*radius)-->
<parameter name="sharpness" type="float">100000</parameter>

<input layer="blurX.out" texname="tex diffuse"/>
<output name="out" format="argb8" />
</layer>

<layer name="combine" shader="/shader/postproc/HBAO/combine.xml">
<input layer="*screen" texname="tex diffuse"/>
<input layer="blurY.out" texname="tex ao"/>
</layer>
</posteffect>
So, it contains 4 layers:

  • The HBAO layer - this is the main layer. As the original effect uses a buffer (dx10) with precomputed random values (cosine, sine, jitter), I baked them into a texture. In the parameters we have the radius (in which the texture will be sampled), the number of directions, and how many steps in each direction will be calculated.
  • Blur X and Y - the bilateral blur: a gaussian blur weighted by the depth difference between the samples. The sharpness parameter controls how much the difference affects the weight (a sketch of the weighting follows this list).
  • Combine - this layer just combines the AO with the color. It could be done directly in the BlurY pass, but I keep it separate for now, just to make debugging/testing easy.
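For reference, the weighting used in this kind of bilateral blur is commonly of the form below; this is the usual shape of a depth-aware blur weight and is an assumption for illustration, not copied from BilateralBlurX.xml:

w_i = \exp\left(-r_i^2 \cdot \mathrm{falloff} - \Delta d_i^2 \cdot \mathrm{sharpness}\right)

where r_i is the sample offset in pixels, \Delta d_i is the depth difference between sample i and the center pixel, and falloff = 1/(2 \cdot radius^2), matching the comment next to the "blur falloff" parameter above.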
Now I will work on properly hooking the depth buffer so that the effects can use it. I think there is some bug related to FBOs on Intel GPUs, because when I try to run the deferred RM on one, the RM binds a D24S8 buffer and then the ShadowPSSM tries to bind a D32 buffer, causing an invalid attachment error; but if I change the first to D32 it works fine. Maybe this bug was causing the invalid attachment when I tried to hook the depth buffer with a D24S8 format.
Another strange thing is that this same Intel GPU supports dx10, but in the HBAO shader, if I use "for" loops it says that I can't use "for", and if I remove the "for" (unroll) it says that I exceeded the max temporary registers! Yet the NVIDIA demo with almost the same shader runs fine.

by Pedro Arthur (noreply@blogger.com) at July 27, 2013 03:41 PM

July 24, 2013

CrystalSpace spherical terrain generator

These are the images of the sphere while zoomed in, the lowest...

















These are the images of the sphere while zoomed in; the lowest resolution is 4x4 and the largest is 512x512. The contrast demonstrates the LOD that is possible, as each line represents the size of a single row or column.

July 24, 2013 10:22 PM

These are the the zoomed out images of the current models So...

These are the zoomed out images of the current models.

So this shows the result of the final OpenGL stage (I am now moving on to the Crystal Space libraries). I left the bug in for resolution 2 just to show that the planes no longer wrap around behind the forward facing plane; these are resolutions 2 to 9.

July 24, 2013 10:18 PM

V0.7.8 Is now finished

So I have worked out most of the bugs for the vertex arrays and the color arrays. I have decided to use a striped color pattern so that the resolution divisions can be seen.

I have decided to leave the small amount of overlap in the sphere for now and fix it when I get the Crystal Space 3D architecture integrated, so the main bug that was fixed was the wrap-around strip on the back of the surfaces. Also, I have left the vertices in the 6-plane format for now, as this may be helpful when dealing with selective drawing based on the position and direction of the camera.

July 24, 2013 10:07 PM

July 21, 2013

Coder's nook

Implementing HBAO

After improving the error and warning notifications I started to implement HBAO, based on the NVIDIA HBAO demo. I think that this ambient occlusion achieves better visual quality than the standard SSAO, and it can also be implemented without having to store the normals, thus being usable not only by the deferred RM.
Here is a small video showing the effect:



As there are various effects that use the depth buffer, I now have to find an elegant way to hook the depth buffer and provide it to the effects without affecting the RM drawing. I have also thought about providing some kind of depth buffer usage capability to post-effects.

by Pedro Arthur (noreply@blogger.com) at July 21, 2013 06:19 AM

July 19, 2013

Project Ares

Busy busy busy

Hi all,

I just had a nice vacation and I'm now full of energy and highly motivated to work on Ares and the indie game project (code name APRIL btw).

First I scanned my hard disks and found the blend file for one of my older levels. It is the level which is used by celtest: an old house. Only the first floor was finished but otherwise it is ok. I started to modernize the level. The idea is to use this level as a test-bed for various CS and Ares features with three goals in mind:

  • See what's lacking in CS, CEL, and Ares and then fix (or find people to fix) this.
  • Make a nice small game using this level which can serve as a good and complete example for people who would also like to try to get into Ares.
  • Make a first test at seeing which technologies are going to be used and useful for the indie game project.
Especially for that last goal I wanted to try to use the deferred render manager. I really think that these days we should aim for hardware-based shadows with good quality and performance. The deferred render manager comes close, and I was pleasantly surprised by it. But even so it is not good enough for a commercial game. So a few people have started to investigate the issues and are now trying to find ways to optimize both the performance and the quality.

For making a nice little game I also started thinking about adding particle editing to AresEd. The CSEditing framework already has something for that, so I started looking into how I can reuse as much as possible from that framework in AresEd.

Ah yes, I also need to finish the work on the physical actor :-)

In the mean time the APRIL team is working towards the first demo. That's about all I'm going to tell you about that!

Greetings,

by Jorrit Tyberghein (noreply@blogger.com) at July 19, 2013 07:51 AM

July 18, 2013

CrystalSpace spherical terrain generator

Next Stage

Currently I am in version 0.77 on my computer, the next two stages 0.78 and 0.79 will be the following stages

First, 0.78 will take all of the current points generated in the rough version and generate a matrix that stores every vertex. It will also have a changed intersection algorithm based on radius rather than resolution, so that generating the spheres at higher resolutions is faster. So essentially I am moving away from calculating all the vertices on the fly and towards a structure that pre-calculates all of the points on the sphere. Currently I can only manage to generate a sphere of resolution 2^9, which is approximately 512*512*6 vertices, or about 1.5 million vertices; this takes 3.5 minutes to generate the list of points on an i3 1.33GHz processor. The next version should be able to generate a list of 2048*2048*6 points, about 16 times as many, in the same time frame (that was the maximum resolution I was getting in that time before the intersection solver was implemented, which is approximately 25.2 million vertices).
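For reference, those counts follow directly from the six-face layout (a quick sanity check of the numbers above, nothing new):

512 * 512 * 6 = 1,572,864, about 1.5 million vertices
2048 * 2048 * 6 = 25,165,824, about 25.2 million vertices, i.e. 16 times as many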

Next, version 0.79 will be the first one to use the Crystal Space architecture; what will happen here is the complete conversion from OpenGL to iGraphics3D.

I have a pretty big list of things that need to be done for version 8, including but not limited to the inclusion of the Crystal Space terrain generation functions, the use of libnoise, and the generation of surface normals for lighting, although those issues are pretty easy to solve and there is very little math in those steps that isn't already well documented.

 

July 18, 2013 03:37 PM

These are all the oblique images to show how the surfaces seem...

These are all the oblique images to show how the surfaces seem to work into each other seamlessly (Although currently they are not connected but that will be done after the crystal space conversion).

July 18, 2013 03:25 PM

This is the head on view of the new sphere structure, currently...

This is the head on view of the new sphere structure, currently there are no surface normals calculated (that will be done after I have translated this into the crystal space architecture).

July 18, 2013 03:23 PM

Completion of the intersection

Okay so I have now officially finished the rough version of the planetary net. There are three bugs, however, which will be fixed in the next iterations of the planetary generator.

The first is that resolution 2 still has a precision problem and therefore the intersection is not solvable (although I know how to fix that problem, which will also allow for a more efficient program overall).

The second issue is that the prediction made during the mapping process finds only limited points of intersection past the center of the plane; this will be remedied with a better prediction algorithm in the next stage.

The third issue is that there are vestigial bands that loop behind the surfaces; this will be fixed by making oscillating loops, again in the next stage.

I am going to upload screen caps of the current generation of the sphere net (I didn’t bother making the wire frame version as that is not necessary at the current time). 

July 18, 2013 03:21 PM

July 16, 2013

CrystalSpace spherical terrain generator

So These images show exactly what the spheres look like with the...





So these images show exactly what the spheres look like with the overlap removed and the seams not built. At resolutions 256 and 512 these seams don't really show up; however, I want to remove this gap. The issue is figuring out where the information is within the sphere struct, as it already has all the points of intersection discovered and they just need to be plotted on the edges properly (which is the job for today)

July 16, 2013 08:54 PM

Update on Overlap

Okay so I have finally built a sphere without overlap; however, I am working out two issues.

The first issue is that even though the points where the overlap occurs are easy to remove, I am unable to replace those points with the points of intersection seen in previous posts.

The second, much simpler issue is that the second half of the planes still has a very small amount of overlap; however, I think I can solve this by adjusting the loops that plot these points.

So hopefully by the end of the day I will have the rough version of the sphere without the annoying seam gap (I am posting the 64*64 resolution images next)

July 16, 2013 08:51 PM

July 12, 2013

Coder's nook

Post-Effect XML Parser

This week I did some enhancements to the PostEffectLayersParser. An effect can be described by the following structure:

<posteffect>
    <layer name="name" shader="/shader.xml" downsample="0" mipmap="false" maxmipmap="-1">
        <input layer="" texname="tex diffuse" texcoord="texture coordinate 0" pixelsize="pixel size" />
         (...)
        <input source="/tex.png" texname="a texture" />


        <output name="out1" format="argb16_f" reusable="true">
        (...)
        <output ... />
        <parameter name="param_name" type="vector2">1,1<parameter />
        (...)
        <parameter (...)>(...)<parameter />
    </layer>
    <layer>
        (...)
    </layer>
</posteffect>

Based on that structure we have:
An effect can have multiple layers, and each layer must specify at least its name and the shader used. Optionally its downsample, mipmap and maxmipmap values can be specified.
Each layer can have one or more inputs and one or more outputs.
In the input attributes we have:

  • layer - specifies which layer output will be used as input. If omitted, the first output of the previous layer is used (or, for the first layer, the input texture). The reference format is <layername>.<outputname>; if <outputname> is omitted then the first output of <layername> is used.
  • texname - shader variable name used to access this texture; if omitted the default value is "tex diffuse".
  • texcoord - shader variable name used to access the texture coordinate; if omitted the default value is "texture coordinate 0".
  • pixelsize - shader variable name used to access the pixel size for this texture; if omitted no shader variable is provided to the shader.
  • source - specifies the path to the texture used as input.

Notice that layer and source are mutually exclusive; if both attributes are provided, layer takes precedence over source. If no input tag is provided then one with all default values is used.

In the output tag can be specified:

  • name - the output name, used to access this output as an input for other layers; if omitted this output will not be accessible unless it is the first output.
  • format - the output texture format; if omitted the default value is "argb8".
  • reusable - if false, prevents the reuse of this output; if omitted the default value is "true".
  • mipmap - enables mipmap generation, overrides the value specified in the layer tag, optional.
  • maxmipmap - max mipmap level to generate, overrides the value specified in the layer tag, optional.
If the output tag is omitted, the output will use all default values.

The parameter tag is used to provide default shader variable values for the post effect.


by Pedro Arthur (noreply@blogger.com) at July 12, 2013 11:19 AM

July 10, 2013

CrystalSpace spherical terrain generator

A quick note on the previous posts

The red points on all the intersection lines are the points that caused the most issues, they are the points where two intersections occur on one line segment on the arch from the adjacent plane.

July 10, 2013 03:27 AM

Here you can see what the map looks like after all the points...









Here you can see what the map looks like after all the points have been extrapolated, on the current version the center points on the planes are included (they are not in this version as I took all of these screen caps yesterday, and currently I am working on the storage system so that the draw process can use the intersection information when drawing the planes).

July 10, 2013 03:24 AM

This view shows the method that I used to solve for the...

This view shows the method that I used to solve for the intersection. Essentially the solve-intersection function looks at one half of one edge, so the program only looks at 1/24 of the total intersection (on higher resolutions this makes a world of difference).

Notably there is a bug. It occurs at resolution 2: the step is not fine enough, so the intersection is never discovered (the function compares points on each line segment, and the intersection does not occur at the current step size).

July 10, 2013 03:20 AM

Intersections

Okay so I have finished the skeleton of the intersection function. At the moment the function can find every point of intersection; however, it doesn't record the locations. I am working on the recording part of the function.

July 10, 2013 03:12 AM

July 07, 2013

Coder's nook

Refactoring the PostEffect code

After porting the old postprocessing code I've started to refactor the code.
Starting with the PostEffectSupport class, I first removed some functions that no longer make sense. As PostEffectSupport contains an array of post effects and not layers, the functions ClearLayers, AddLayers* and ClearIntermediates were removed. I added the function SetPostEffectEnable so that we can enable/disable postprocessing at any time.

In the iPostEffect interface I removed the function ClearIntermediates because all texture management was moved to PostEffectManager. I also added the functions LoadFromFile and Construct, which respectively load the effect from an external xml file and set up the effect (which includes allocating textures, linking layers and setting up shader variables).

In this process I created some helper classes: DependencySolver and PriorityShaderVariableContext.
The first, given a graph of layers with edges linking a layer output to another layer input, calculates the minimum number of textures needed to draw the effect and assigns each texture to its respective layer.
The second stacks up shader variable contexts, each with a priority, so that when pushing the variables to the svStack the highest priority dominates over the lower ones. This way we can have one svContext for default variables and one (or several) for user-defined variables in a clean way.

For the PostEffectManager I added 3 new functions:

  • RequestTexture
  • SetupView
  • RemoveUnusedTextures

Which respectively:

  • Asks the manager to create, or reuse, a previously allocated texture based on the allocation info and the given number (identifier).
  • Sets up the view; if the resolution has changed it releases all textures so that PostEffectSupport forces the effects to re-acquire them.
  • Keeps an array of allocated textures (csRef); when the reference count of a texture reaches 1 it is no longer used by any post effect and we can safely remove it (a rough sketch of the idea follows below). This function isn't implemented yet because I'm having some trouble with the CS iterators; I think using std would be much easier.
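The idea behind the last function can be sketched roughly as follows. This is illustrative only: the container type and the GetRefCount() accessor are assumptions, not the actual implementation.

// Rough sketch of the RemoveUnusedTextures idea: a texture whose only remaining
// reference is the manager's own entry is no longer used by any post effect.
#include <csutil/refarr.h>
#include <ivideo/texture.h>

void RemoveUnusedTextures (csRefArray<iTextureHandle>& textures)
{
  for (size_t i = textures.GetSize (); i-- > 0; )
  {
    if (textures[i]->GetRefCount () == 1)   // only the manager still holds it
      textures.DeleteIndex (i);
  }
}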


And finally, the HDR code is BROKEN!
I had noticed that the HDR code already no longer worked, but now, given all these changes, it is really broken and I will have to devote some time to fixing it. It's good and bad: it consumes time, but I think in the end the HDR code will be simpler, more versatile and, of course, will work properly.

by Pedro Arthur (noreply@blogger.com) at July 07, 2013 09:32 AM

June 28, 2013

CrystalSpace spherical terrain generator

I have noticed one more interesting thing, as I increase the...





I have noticed one more interesting thing, as I increase the resolution all the sections that are multiples of 2 happen to intersect symmetrically. So for instance I marked the beginning position and the mid-points. 

There is a strange effect though: for even resolutions, for instance resolution 6, the symmetrical intersection lands on position 2^4, while for resolution 5 the symmetrical intersection lands on position (2^3) - 1, so I am looking into this to make the intersection solver much more efficient.

June 28, 2013 04:52 AM

Okay so I am now refining the technique for collecting the...









Okay so I am now refining the technique for collecting the information about the slopes of each line segment and therefore the information about where each arch intersects the bottom of the plane. Currently I am not sure how to have the program make the initial guess for the location of the intersection. 

The above images show each line segment as defined by its slope, as opposed to the polar coordinates for the respective grid. This means that if I compare any two line segments I can now find the intersection; however, I am not sure how to choose the line segments for comparison.

On a side note I can now divide any segment to infinitesimal size so there may be an interesting LOD attribute that can be played with…

June 28, 2013 04:43 AM

June 27, 2013

CrystalSpace spherical terrain generator

Okay so this post is to illustrate that I have figured out how...

Okay so this post is to illustrate that I have figured out how to solve for the intersection of the planes on the spheres. What you are looking at is a series of lines drawn with GL_POINTS after the equations of the lines are solved dynamically. I had a bit of trouble getting this to work correctly; however, now that it works it is trivial to find the intersection of the planes, since all I have to do is compare the line segments.

Okay so I have discovered two very interesting things here. The first is that because of the symmetrical nature of spheres, the intersection of planes always happens in the same sections on both planes. The implication of this is that I can predict on which line segment the intersection will occur and therefore not have to calculate every point on every line segment. The second thing I learned is that if you look at the resolution 0 image, you will see that both the line segments that make up the net are yellow. This means that the trivial case is actually not a trivial case and therefore needs no special treatment (this will shorten the amount of code in the final product).

June 27, 2013 07:09 PM

Coder's nook

Crystal Space Postprocessing Requirements & Design

The postprocessing requirements are:
  • a posteffect can have multiple passes (layers)
  • each pass can use multiple inputs
  • the pass inputs can be other layers' outputs, or custom textures
  • each pass can have multiple outputs (render targets)
  • output textures can have custom format, downsample and mipmap values
  • the number of output textures should be minimal
  • output textures can be reused unless strictly specified otherwise
  • a posteffect can have default shader parameters
  • it should be data-driven, that is, fully set up using the posteffect xml files

Implementation

As some of these requirements have already been inherited from the previous code, I'm doing small changes in the codebase to fit the rest of the requirements.

Texture usage optimization
I started to work on optimizing the texture usage, sharing common textures between layers. The old method (using ping-pong textures) would not work properly; imagine the case where we have 3 layers:

Layer 2 uses the output from layer 1 and layer 3 uses both outputs (1 and 2).

In such a case we clearly see that we can't reuse the output of layer 1 until layer 3 finishes its work, and the same goes for layer 2.

From this example we conclude that:
- All inputs of layer i can't be reused until layer i is drawn
- All outputs of layer i can't be shared between themselves (obvious)

For that problem I wrote a simple algorithm that resolves the needed texture usage (pseudo code):

ResolveDependency( layers )
  avail_tex : list;
  used_tex : list;

  for each output of last layer
    used_tex.put(output);

  for i = last layer to first layer
    // remove textures whose assigned layer number > i
    // and put them on avail_tex
    updateLists(i);
    for each input of layer i
      // find if there is any texture that matches the
      // properties of the layer output referenced by input
      tex = avail_tex.find(format, mipmap, downsample);

      if tex = not_found then
        tex = CreateNewEntry(format, mipmap, downsample);

      assign tex to layer i
      used_tex.put(tex);
end

Of course it's a shallow overview of the implementation details.

I also did some minor changes to some structs related to the layer options. I will list the structs first, then explain them.


struct TextureAllocationInfo
{
  bool mipmap;
  int maxMipmap;
  int downsample;
  bool reusable;
  csString format;
  TextureAllocationInfo();
  bool operator==(const TextureAllocationInfo& other);
};
As the name suggests, this struct defines the properties of the layer's outputs, and will also be used as an input parameter by a texture cache that will be implemented to share textures (RTs) between posteffects.


struct PostEffectLayerOptions
{
  TextureAllocationInfo info;
  csRef<iTextureHandle> renderTarget;
  csRect targetRect;
  csString name;
  bool operator==(const PostEffectLayerOptions& other);
};
This struct defines the full output options, its name and a manual render target if desired.

enum LayerInputType { AUTO, STATIC, MANUAL };
struct PostEffectLayerInputMap
{
  LayerInputType type;
  csRef<iTextureHandle> inputTexture;
  csString sourceName;
  csString svTextureName;
  csString svTexcoordName;
  csString svPixelSizeName;
  csRect sourceRect;
  PostEffectLayerInputMap () : type (AUTO),
    svTextureName ("tex diffuse"),
    svTexcoordName ("texture coordinate 0") {}
};
The input map had small but important changes. First, it now has a type that defines whether it references a layer output, a texture resource, or is manually set up. Depending on the type, the sourceName variable can be "layername.outputname" (so that we can link layer inputs/outputs) for the AUTO type, or a path like "/data/lookuptex.png" for the STATIC type. For manual textures the sourceName is unused.

Last but not least:
struct LayerDesc
{
  csArray<PostEffectLayerInputMap> inputs;
  csArray<PostEffectLayerOptions> outputs;
  csRef<iShader> layerShader;
  csString name;

  LayerDesc () {}
  LayerDesc (iShader* shader);
  LayerDesc (iShader* shader, const char* layerName);
  LayerDesc (iShader* shader, PostEffectLayerInputMap& inp);
  LayerDesc (iShader* shader, PostEffectLayerInputMap& inp,
             PostEffectLayerOptions& opt);
  LayerDesc (iShader* shader,
             csArray<PostEffectLayerInputMap>& inp,
             PostEffectLayerOptions& opt);
  LayerDesc (iShader* shader, PostEffectLayerOptions& opt);
};
The layer descriptor contains all the parameters needed to create and set up a layer, and of course, a lot of constructors to easily create a layer.

The next step will be to create a texture cache where all posteffects can request textures, to link layer inputs to outputs, and to create a loader for STATIC type inputs.

by Pedro Arthur (noreply@blogger.com) at June 27, 2013 12:36 PM

June 26, 2013

CrystalSpace spherical terrain generator

That black point is the intersection of the top, front and side...



That black point is the intersection of the top, front and side planes. I have however realized that there are different intersection points on each resolution so I am currently working on how to solve for each resolution. Once that is finished I will then do the cross check. The cross check is where the plane is checked across each row (this will define the curve).

June 26, 2013 02:59 AM

Intersection resolution 1 is finished

Okay so I have figured out how to solve for the intersection of the planes by comparing line segments. When I am finished, upon the generation of each sphere a list of points will be created; this list will contain every point where any two planes intersect (and all points that are inside the intersection will be ignored). I will upload an image of the solution to the first level of resolution intersection.

June 26, 2013 02:53 AM

June 23, 2013

CrystalSpace spherical terrain generator

Intersection

Okay so the intersection of the planes has a few parameters; the first three are the tip of the plane, the triple point of intersection, and the center of each plane. The last parameter is the curve of the adjacent plane. Essentially I now have enough information to define the area of intersection. Since the points that define the area do not change with the resolution, they can be solved as the triple intersection of three line segments at resolution 1. The curve that intersects the plane is dependent on the resolution and is already solved, since it is generated in the production of each grid.

June 23, 2013 02:24 PM

The intersection of the planes is fixed as seen in these 4...









The intersection of the planes is fixed as seen in these 4 images.

June 23, 2013 02:16 PM

June 22, 2013

CrystalSpace spherical terrain generator

These are the last models that my laptop is capable of producing...









These are the last models that my laptop is capable of producing in this current generation of models (before VBO implementation, pre-calculating vertices and removal of all redundant points). The most dense sphere is made from 6 line strips with dimensions 8192*8192 and has a 5 second calculation and render time (this will be reduced to about a 0.025 to 0.05 second calculation, and a 0.05 second render time, in the final version)

June 22, 2013 02:18 AM

I am just uploading the current nets that I have (that work, the...

I am just uploading the current nets that I have (the ones that work; the corner algorithm is not ready to be shown yet)

June 22, 2013 02:10 AM

June 21, 2013

CrystalSpace spherical terrain generator

Resolution control of the "Volley Ball" spheres

I have figured out the resolution control features for the current generation of spheres. I am going to keep each level as an exponential increase in the number of squares on each face, i.e. resolution 0 is 2^0 squares per face, resolution 1 is 2^1 squares per face, resolution 2 is 2^2 squares per face, and so on. This is actually super easy since each face is exactly 1/6 of the entire sphere and 1/4 of a circle in any direction. I am still trying to figure out how to handle the corners, and I think that I know how to do it now.

(I am now releasing the images of resolutions 1 to 5; note that the corner pieces are not perfect yet)

June 21, 2013 03:18 AM

This is the solid surface so far as you can see there is very...







This is the solid surface so far. As you can see there is very little warping in the bands, and again I am working on getting those corners to sit smoothly in place

June 21, 2013 12:49 AM

As you can see the nets do work rather nicely as if they were...









As you can see the nets do work rather nicely as if they were the sides of a cube (meaning 6 “faces" to the sphere) I haven’t figured out how to handle the corners yet… 

June 21, 2013 12:48 AM

June 20, 2013

Coder's nook

Porting the postprocessing code

Porting the posteffect code from the Crystal Space soc2011 branch to the newest (soc2013/postprocessing) wasn't as hard as I expected.
In the porting process I learned a bit more about how the current postprocessing code works, I found the critical parts to start the work and, of course, I found bugs too.


  • hdr is buggy, something related to frame buffer deallocation
  • Basic posteffect chaining is working
  • Removing/adding effects is buggy; I think it's due to PostEffectsSupport not chaining the render targets correctly
  • The posteffect code doesn't yet reflect a design focused on the use of multiple effects


There are some minor bugs too, like wrongly offset texture coordinates and the pixelSize shader variable not being set up correctly; for these I will commit fixes soon.

I also did a small demo to test posteffect chaining. Here is an image showing three simple posteffects chained


The next step in the work is to start improving the design of the posteffect code to reflect the needs of the proposed effects.

by Pedro Arthur (noreply@blogger.com) at June 20, 2013 08:34 PM

Generating msvc projects for CrystalSpace

Generating the msvc projects was tiring; everything that could go wrong went wrong. So I decided to write an entire tutorial explaining how to get things working.


First of all, download cygwin (if you are using windows) .

Installing ftjam
Then install ftjam, download it here, and copy the binary to  cygwin/bin or cygwin/usr/sbin folder.

Installing perl and Template Toolkit
Before installing it, you need to get gcc-4; if you don't already have it, run the cygwin setup and in the devel packages select the gcc-4 C/C++ package.
If you are using cygwin, also install the mingw gcc C/C++ compiler or you will get an error when running the ./configure script, because gcc-4 doesn't support a flag used by the configure script.

To install Template Toolkit go to the  cygwin/linux terminal:

# cpan install Template

Or the harder way: download ActivePerl here, go to the activeperl/bin folder and run cpan.bat, and then in the prompt run the command:

# install Template

Run the configure script

# ./configure

Notice that if your  CS svn repository is configured to use CRLF windows line endings these scripts will fail to run, therefore change the line endings to LF.

Before running jam you will need to run (only once, to create a configuration file):

# ttree

It will ask you to create a configuration file; confirm, and now you are ready to run:

# jam msvcgen

If you don't run ttree before using jam, jam will fail, because jam uses ttree and ttree will ask to create the configuration file without executing the command passed by jam.

by Pedro Arthur (noreply@blogger.com) at June 20, 2013 06:58 AM

June 17, 2013

CrystalSpace spherical terrain generator

Final notes on this design (which is the one that I like the most now), the surface will be...

Final notes on this design (which is the one that I like the most now): the surface will be subdivided into 6 pieces, appearing as the 6 sides of a sphere. Each face can now be viewed as an x, y and z plane, where z is once again the radius and x and y are a step either up/down or left/right by the interval index of the sphere (this will be the resolution, i.e. the higher the resolution the finer the details).

June 17, 2013 04:04 PM

The top sphere is a full sphere built in all three dimensions...







The top sphere is a full sphere built in all three dimensions; however, it is not truncated, so there is overlap. The final version will remove all of this overlap. The bottom two spheres simply illustrate how even the surface can be along the bands; even though this looks regular at the poles, as seen in the triangle mesh there is still a bit of room for improvement.

June 17, 2013 04:01 PM

This is what each band will look like, however this band is...





This is what each band will look like, however this band is incorrect as it is connected all the way through. The next iteration will only have the band approach and connect to the other 4 bands, 1 on each side and 1 from top and bottom.

June 17, 2013 03:58 PM