Archives for: June 2010

2010-06-30

Permalink 07:04:37 pm, Categories: GSoC 2010  

Change of plans

Ok...so for the past few days I've been trying to make some headway on the occlusion culling. After resorting to drawing the meshes more quickly with a variant of the DrawMesh method (available in the OpenGL plugin), I have tried several schemes involving bounding boxes in order to reduce the query overhead. Unfortunately, all of the methods I have tried have corner cases where the culling scheme either culls geometry incorrectly or is extremely conservative, culling far too little. The methods that sometimes cull geometry incorrectly aren't suited for a general culler, and the overly conservative ones aren't worthwhile because they don't save enough rendering work.

So...what to do? Well, as far as I can see, with the midterm deadline approaching, there's only one sure way to get some results, which is to focus on implementing a mesh simplification algorithm. This has several advantages:

1) There already exists a basic implementation, written by my mentor with the CHC++ algorithm in mind. The main problem with it is the overhead created by rendering the same geometry twice. By rendering simplified, but still accurate, meshes for the occlusion queries, that overhead should be reduced enough to make the CHC++ algorithm practical.

2) The mesh simplification scheme will enable the new culler to work for generalized models and scenes.

So which algorithm to use? Well, there has been a great deal of research on mesh simplification. I've been reading up on the topic today, and the algorithm that seems most promising in terms of speed, accuracy and generality is Garland and Heckbert's quadric-based polygonal surface simplification [1]. There's very little time left until the midterm evaluation, and I hope to have this algorithm functioning by the 11th of July. Once that's done I can integrate it into the CHC framework.
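
To make this concrete, here is a rough sketch of the quadric error metric at the heart of Garland and Heckbert's approach [1]. This is just an illustration of the math, not the implementation I'll actually be writing for CS; csVector3 is the CS vector class, everything else is a placeholder:

// Each vertex accumulates a quadric built from the planes of its incident
// faces; the error of placing the vertex at position v is then
// error(v) = v^T A v + 2 b^T v + c, the sum of squared distances to those planes.
#include <csgeom/vector3.h>

struct Quadric
{
  float a[6];  // upper triangle of the symmetric 3x3 matrix A
  float b[3];  // linear term
  float c;     // constant term

  Quadric() { for (int i = 0; i < 6; i++) a[i] = 0.0f; b[0] = b[1] = b[2] = 0.0f; c = 0.0f; }

  // Add the plane n.v + d = 0 of one incident face (n must be unit length).
  void AddPlane(const csVector3& n, float d)
  {
    a[0] += n.x*n.x; a[1] += n.x*n.y; a[2] += n.x*n.z;
    a[3] += n.y*n.y; a[4] += n.y*n.z; a[5] += n.z*n.z;
    b[0] += d*n.x;   b[1] += d*n.y;   b[2] += d*n.z;
    c += d*d;
  }

  // Squared-distance error of placing a vertex at position v.
  float Error(const csVector3& v) const
  {
    float Av0 = a[0]*v.x + a[1]*v.y + a[2]*v.z;
    float Av1 = a[1]*v.x + a[3]*v.y + a[4]*v.z;
    float Av2 = a[2]*v.x + a[4]*v.y + a[5]*v.z;
    return v.x*Av0 + v.y*Av1 + v.z*Av2
         + 2.0f*(b[0]*v.x + b[1]*v.y + b[2]*v.z) + c;
  }

  // Quadrics are simply summed when an edge is collapsed.
  void Add(const Quadric& o)
  {
    for (int i = 0; i < 6; i++) a[i] += o.a[i];
    for (int i = 0; i < 3; i++) b[i] += o.b[i];
    c += o.c;
  }
};

The simplifier then repeatedly collapses the edge with the smallest combined error, using the sum of the two endpoint quadrics to evaluate candidate positions for the merged vertex (the endpoints and the midpoint are a cheap fallback when solving for the optimal position is too expensive).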

I'm cutting it close as far as the midterm evaluations are concerned, but I hope to have some concrete results come the mid-term deadline. I'll post updates as I progress with the implementation of the mesh simplification algorithm.

[1] http://reports-archive.adm.cs.cmu.edu/anon/1999/CMU-CS-99-105.pdf

2010-06-24

Permalink 05:51:14 pm, Categories: GSoC 2010  

Current Status

Hello there. I'm rather late writing this post, for which I apologize. What I've been doing until now is trying to integrate the CHC algorithm into CS using a modified version of the unshadowed render manager. This has meant integrating support for occlusion queries into the OpenGL renderer and modifying frustvis to interact with the modified unshadowed manager I mentioned.

My mentor, Mike Gist, has done some work on hardware-supported occlusion culling in CS. Unfortunately, the approach he took centers on rendering the polygons in a scene twice in order to determine which ones are occluded, which adds the overhead of rendering essentially the same geometry twice. I've been trying to improve on this by using bounding boxes, that is, rendering a bounding box mesh for each object when testing for occlusion. Unfortunately, this has some quality drawbacks:

The first picture shows the two houses in front and a third one in the back. We also see the bounding box outlines to get a better idea of what's going on.

The second picture shows pretty clearly what goes wrong. As the two front bounding box meshes approach each other and begin to overlap, the third house disappears. Why, you ask? Even though the house should be visible, the overly conservative bounding box meshes completely obscure the third house's bounding box mesh, leading to incorrect visual results.

So what to do?

Well, I can think of several schemes to solve this.

1) Decide, before rendering, which objects are to be considered occluders and render those normally, with all the other meshes (occludees) rendered as bounding box meshes when testing visibility. This would avoid the problem seen in the pictures above.

2) Render the objects as simplified meshes when testing for visibility. This is technically the best solution, as it strikes a compromise between the bounding box approach and the initial two-pass rendering: we render lower-detail meshes that preserve the original's shape characteristics. This would save most of the drawing overhead when testing for visibility and also fix the accuracy problem of bounding boxes.

As my mentor suggested, one way to cut down on the overhead would be to introduce a "direct" way to render geometry. This would bypass most of CS' abstractions and simply draw basic geometry data (no textures, shaders or other additional state). This will be my next target, and after that I'll decide which of the approaches described above is most feasible.
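
For reference, this is roughly what such a direct bounding box test boils down to at the OpenGL level, using plain GL 1.5 occlusion queries. The real code will of course go through the CS OpenGL renderer plugin, and DrawBoundingBox is a made-up placeholder for submitting the box's triangles:

GLuint query;
glGenQueries(1, &query);

// The proxy geometry shouldn't touch the color or depth buffers.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);

glBeginQuery(GL_SAMPLES_PASSED, query);
DrawBoundingBox();                 // placeholder: draw the 12 triangles of the AABB
glEndQuery(GL_SAMPLES_PASSED);

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

// Ideally read the result a frame later to avoid stalling the pipeline:
GLuint samples = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
bool visible = (samples > 0);

Fetching the result right away stalls the GPU, which is exactly the latency the CHC++ algorithm is designed to hide by reusing query results from previous frames.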

Stay tuned for further details in the coming days. Over and out.

2010-06-21

Permalink 12:33:12 am, Categories: Navigation Meshes  

Gap between two navmeshes

One of the most annoying problems that comes from building one navigation mesh for each sector is that a gap is formed between two navmeshes, near a portal. In a nutshell, this problem occurs because Recast subtracts a border from each navmesh (for more details, see the post before this one).

In order to solve this problem, I created a "fake corridor" of triangles for each portal, going out of the map in the direction of the portal's normal. This corridor is like an extrusion of the portal's polygon along its normal. This way, the border is subtracted from the fake corridor instead of from the walkable map.

The portal's polygon is a planar convex polygon. For each of its vertices v, another vertex was created at the position v + n * b, where n is the portal's normal and b is the navmesh's border size. Triangles were then created to connect these new vertices to the old ones (two triangles for each of the polygon's edges). The triangles were wound clockwise for an observer inside the resulting corridor.
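
In code, the construction looks roughly like this (csVector3 is the CS vector class; how the portal vertices are obtained and how the output triangles are fed to Recast is simplified here):

#include <csgeom/vector3.h>
#include <vector>

struct FakeTriangle { csVector3 a, b, c; };

void BuildPortalCorridor(const std::vector<csVector3>& portalVerts, // convex, ordered
                         const csVector3& normal,                   // portal normal n
                         float borderSize,                          // navmesh border b
                         std::vector<FakeTriangle>& out)
{
  const size_t n = portalVerts.size();

  // Extrude every portal vertex along the normal: v' = v + n * b.
  std::vector<csVector3> extruded(n);
  for (size_t i = 0; i < n; i++)
    extruded[i] = portalVerts[i] + normal * borderSize;

  // Two triangles per polygon edge, connecting the old vertices to the new
  // ones. Which ordering ends up clockwise for an observer inside the
  // corridor depends on the order of the portal's vertices.
  for (size_t i = 0; i < n; i++)
  {
    size_t j = (i + 1) % n;
    FakeTriangle t1 = { portalVerts[i], extruded[i], portalVerts[j] };
    FakeTriangle t2 = { portalVerts[j], extruded[i], extruded[j] };
    out.push_back(t1);
    out.push_back(t2);
  }
}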

This approach successfully closed the gap between navmeshes from adjacent sectors, as can be seen below:

Fixed gap between navmeshes
Door between two sectors, before and after applying the fix.

One important thing to notice is that some parts of the navigation mesh are not visible. In the current state of the test application, only the navmesh of the sector the camera is in is visible. This happens because of the way portals are handled in some maps (including the castle map used for this application): when a portal is visible, the sector it connects to is first rendered to a texture, which is then applied to a portal mesh. Since there is no connection between the navmeshes and the sector, the navmeshes are not rendered to this texture. So, in order to take this screenshot, I positioned the camera right between the two sectors, causing both to be rendered directly.

At first sight, there is no simple way to make all the navmeshes visible without altering the rendering engine, so for now I'll probably leave this problem as it is. In my project proposal I allocated the last week to polishing the demo application, so maybe that would be a good time to try and fix this.

2010-06-15

Permalink 19:39:48, Categories: GSoC 2010  

Some more colors in the room

Ok, I know I am a bit (quite a bit, maybe) late writing this post. But still, the good thing is that I have got something working. Before that, though, there are other important things to write about.

Plans:

After some discussions and meetings with my mentor, Scott Johnson, we have decided to aim for getting at least the first milestone completed by the mid-term evaluation in mid-July. The first milestone includes three tasks:

1.) Getting the photon coloring stuff working to obtain color bleeding.
2.) Getting photon mapping to work with caustics.
3.) Planning how to get photon mapping to take advantage of multiple cores.

Status:

Well, after spending a lot of time trying to get walktest working on Ubuntu, I shifted to Windows with Visual Studio, only to realize that it wasn't Ubuntu but my graphics card that wasn't working, after which my brother was generous enough to buy me a new laptop. So, as of now, everything is working perfectly fine (in terms of the development environment), though still on Windows with VS.

Anyway, coming to the point: the thing I have got working so far is the photon coloring, so the effect of color bleeding can now be seen in photon mapping.

The steps I followed were simple. First, get lighter2 to save another copy of the textures for the scene: it already saves one copy for direct lighting, but that one is a filtered image and isn't useful for photon mapping. The next step was to use this image to obtain the color of a primitive where it is hit, and multiply that with the photon ray's color to get the new color carried by the photon ray.
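
In rough pseudo-code, the coloring step looks something like this (csColor is the CS color class; the hit structure and the texture sampling helper are made-up names, not the actual lighter2 interfaces):

#include <csutil/cscolor.h>

struct PhotonHit;  // hypothetical: which primitive was hit and at what (u, v)
csColor SampleHitColor(const PhotonHit& hit);  // hypothetical: reads the saved unfiltered texture copy

csColor ColorPhotonRay(const csColor& photonColor, const PhotonHit& hit)
{
  csColor surface = SampleHitColor(hit);

  // Component-wise multiply: a red wall reflects mostly red light, which is
  // what produces the color bleeding onto nearby surfaces.
  return csColor(photonColor.red   * surface.red,
                 photonColor.green * surface.green,
                 photonColor.blue  * surface.blue);
}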

Results: here are a few samples of the Cornell map with photon mapping.

This was taken with 25M photons and the photon ray colors multiplied by 16. The image has a lot of noise, most probably because of the photon mapping light scaling.

And another one:

This one was also taken with 25M photons, but with pm light scaling = 10; the color bleeding is weaker, but there is less noise.

One major issue I found with photon mapping is that too few photons get reflected: when emitting 25M photons with maxrecursiondepth set to 20, the total number of photons traced was only around 50-51M in most cases. I looked into the code that decides whether or not to reflect a photon, and I think there are some issues with the criterion used to decide the reflectivity of a surface. I will probably read more about it and tweak it to get better results.
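
For comparison, the standard Russian-roulette criterion from the photon mapping literature looks like this. This is a textbook sketch, not the code currently in lighter2, but it's the behavior I'd expect to check the existing criteria against:

#include <csutil/cscolor.h>
#include <cstdlib>

// Decide whether a photon is reflected off a surface with the given diffuse
// reflectance, scaling the surviving photon's power to keep the result unbiased.
bool RussianRoulette(const csColor& reflectance, csColor& photonPower)
{
  float avg = (reflectance.red + reflectance.green + reflectance.blue) / 3.0f;

  float xi = rand() / (float)RAND_MAX;  // uniform random number in [0, 1]
  if (avg <= 0.0f || xi > avg)
    return false;                       // photon is absorbed

  // Survivor: scale the power so the expected reflected energy is unchanged.
  photonPower.red   *= reflectance.red   / avg;
  photonPower.green *= reflectance.green / avg;
  photonPower.blue  *= reflectance.blue  / avg;
  return true;
}

Comparing the existing code against something like this should make it clearer whether the low bounce count comes from the survival probability itself or from how the surface reflectance is computed.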

Permalink 18:46:48, Categories: GSoC 2010  

Introduction

Hi to everyone in the Crystal Space community. I am Mohit Taneja, and I am one of the students selected for GSoC this year to work on the Crystal Space project.

This summer I will be working on getting photon mapping to work out of the box in lighter2, and also on making it more efficient by using better algorithms for packing lightmaps and enabling lighter2 to take advantage of available parallelism.

I am an undergraduate student in my final year at NSIT, Delhi, India, majoring in computer science. I plan to enjoy my work and my interaction with the Crystal Space community.

Permalink 03:39:46 am, Categories: Navigation Meshes  

Current state

Since I am now able to produce some nice looking screenshots, I think this is the right time to talk about the current state of the project.

I'm using the castle map for the example application, since each of its rooms is a different sector. I have tweaked the colors from the default debug rendering code used in the Recast & Detour demo application, so the navmesh stands out a little better in this map. Here are a few screenshots (light yellow edges for tile borders, dark yellow for other polygon edges):

Navigation Mesh for the reception

Navigation Mesh for the throne room

Navigation Mesh for the basement dining area

Although it can't be seen in the examples above, even though I'm rendering all the navigation meshes every frame, I cannot see navigation meshes through a portal. I don't know why that is happening, especially since the navmeshes are rendered directly with OpenGL, but I'm not worrying too much about it for now.

There is still one problem related to building the navmesh. In order to make sure that a unit can walk along any straight path inside the navmesh, Recast removes a border proportional to the unit's radius from the navmesh (so the unit won't end up inside a wall, for example). Since the navmeshes are built independently for each sector, this generates a gap between two sectors that the unit can't move across.
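
As far as I can tell, the size of that border is controlled by Recast's rcConfig::walkableRadius field, which is given in voxel cells; a typical setup (as in the Recast demo) derives it from the agent radius and the cell size, roughly like this (the numbers are just placeholders):

// rcConfig comes from Recast.h; only the fields relevant to the border are shown.
rcConfig cfg;
memset(&cfg, 0, sizeof(cfg));
cfg.cs = 0.2f;                                         // XZ cell size, in world units
float agentRadius = 0.4f;                              // radius of the walking unit
cfg.walkableRadius = (int)ceilf(agentRadius / cfg.cs); // border eroded, in cells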

The picture below shows the bottom part of a door, which is a portal between two sectors. Notice that there is an area that should be walkable but has no navmesh over it:

Gap between two sectors

Recast uses the triangles from the collision detection data to build the navigation meshes. In order to fix the gap, I plan to feed Recast some "fake" triangles, normal to the portal. This way, the border will be subtracted from these "fake" triangles instead of from the walkable area.

Path-finding

In order to see a path in the example application, you first have to set a source point (shift + left click) and a destination point (left click). The points are represented graphically by cylinders with the same height and radius as the unit used to build the navmesh. Right now, the path is only calculated correctly when both points are in the same sector. Here are a few screenshots (blue cylinder is the source, green is the destination):

Path example 1

Path example 2

2010-06-12

Permalink 12:15:46 am, Categories: Navigation Meshes, Hierarchical Pathfinding  

The API

In this post I'll talk a little about the division between the navigation mesh API and the hierarchical navigation structure API (fancy names, what do those mean?).

The main objective of this project is to use Recast to build navigation meshes for Crystal Space maps. In Crystal Space, we have Sectors, which define an area in space where we can put geometry. Generally, a map is composed of a collection of sectors. These sectors are then connected by Portals.

In order to build a navigation mesh for an area using Recast, we need all the triangles from the collision detection meshes inside that area. However, in Crystal Space two or more sectors can share world coordinates, so we cannot use Recast directly to build a navmesh for an area formed by more than one sector.

That is where the Hierarchical Navigation Structure comes into play. The idea is that we build independent navmeshes for each sector, and have a higher level structure connecting these navmeshes. In our case, this structure will be a graph, where each node represents a portal. In this graph, an edge exists between two nodes if the portals can be connected by a path, and the edge weight represents the smallest possible length for such a path.

Finding a path in this structure takes two steps:

  • Find out which portals we will have to cross to reach our destination. This is done using the A* algorithm in our high level graph. With this information, we also know which sectors the path will go through.
  • For each sector our path goes through, we find the part of our path that is inside that sector.

Here are a couple of examples of how to use the navigation mesh API. It's important to keep in mind that one probably won't use these interfaces directly, but will rather use the hierarchical version (unless the map only has a single sector).

Creating a navmesh:

csRef<iCelNavMeshBuilder> navMeshBuilder = csQueryRegistry<iCelNavMeshBuilder>(object_reg);
csRef<iCelNavMeshParams> params;
params.AttachNew(navMeshBuilder->GetNavMeshParams()->Clone());
params->SetSuggestedValues(1.0f, 0.2f, 45.0f);
navMeshBuilder->SetNavMeshParams(params);
// iSector* sector;
navMeshBuilder->SetSector(sector);
csRef<iCelNavMesh> navMesh;
navMesh.AttachNew(navMeshBuilder->BuildNavMesh());

Finding a path:

csRef<iCelNavMeshPath> path;
// csVector3 origin, destination;
path.AttachNew(navMesh->ShortestPath(origin, destination));

The hierarchical navigation API is very similar to the navigation mesh one. Here are the equivalent examples from the two above:

Creating a navmesh:

csRef<iCelHNavStructBuilder> navStructBuilder = csQueryRegistry<iCelHNavStructBuilder>(object_reg);
csRef<iCelNavMeshParams> params;
params.AttachNew(navStructBuilder->GetNavMeshParams()->Clone());
params->SetSuggestedValues(1.0f, 0.2f, 45.0f);
navStructBuilder->SetNavMeshParams(params);
// csList<iSector*> sectorList;
navStructBuilder->SetSectors(sectorList);
csRef<iCelHNavStruct> navStruct;
navStruct.AttachNew(navStructBuilder->BuildHNavStruct());

Finding a path:

csRef<iCelPath> path;
// csVector3 origin, destination;
// iSector* originSector; iSector* destinationSector;
path.AttachNew(navStruct->ShortestPath(origin, originSector, destination, destinationSector));

The complete APIs can be seen here: Navigation Mesh API and Hierarchical Navigation Structure API.

I'll talk more about the iCelNavMeshParams structure and how to find the values for its parameters in a later post.

2010-06-08

Permalink 06:44:51 am, Categories: General  

Introduction

Hello everyone!

My name is Leonardo, and I am one of this year's GSoC students.
I am currently an M.S. student in Computer Science at UNICAMP (University of Campinas), Brazil. I don't have any prior experience with Crystal Space or CEL, but I believe I won't have much trouble, since everything looks organized and the community has been very friendly and helpful so far.

This summer (although it's technically winter for me), I will be working on integrating the Recast and Detour toolsets into CEL.

The main topics of my project will be the following:

  • Build a navigation mesh from a sector, using Recast.
  • Build a hierarchical navigation structure from a map.
  • Build a navigation mesh from a height map.
  • Allow changes to the navigation meshes during runtime.

I plan on posting updates about my progress and any problems I come across on this blog, so people can use it as a reference for this work.
