Archives for: 2010

2010-11-20

Permalink 10:10:09 am, Categories: general  

VirtualBox GnomeApplet

Sometimes the need arises to monitor something; in this case, your VirtualBox virtual machines, right from the GNOME panel.

How to install:

  • Get the files from here and place them in
    /home/youruser/vb/
  • Make the file executable: chmod +x /home/youruser/vb/virtualbox-applet.py
  • Open virtualbox-applet.server and edit

    <oaf_server iid="OAFIID:GNOME_VB_factory"
    type="exe" location="/home/youruser/vb/virtualbox-applet.py">
    ...
    to point to the right path.
  • Create a symlink in /usr/lib/bonobo/servers/:

    sudo ln -s /home/youruser/vb/virtualbox-applet.server /usr/lib/bonobo/servers/virtualbox-applet.server
  • Right click a panel, click 'Add to panel', do a search on 'virtualbox', select 'Python VirtualBox Applet', click 'Add', and voilà, it should show up.


2010-10-18

Permalink 11:04:23 pm, Categories: External tools  

The quest for CEGUI

It seems to be a task that programmers by and large despise, but which is necessary nonetheless: creating GUIs in the form of menus, buttons, the whole shebang. CS' GUI of choice is CEGUI. However, getting it to run right now (Oct. 18, 2010) is... troublesome.
First of all, you need a sufficiently current version, which means >= 0.7.0, which in turn means that the packages in the Ubuntu repositories are far too old. So figuring out the dependencies, installing them, checking out the SVN and building it yourself is the way to go:

apt-get install libtool libpcre3-dev
svn co https://crayzedsgui.svn.sourceforge.net/svnroot/crayzedsgui/cegui_mk2/branches/v0-7 cegui_mk2-0-7
cd cegui_mk2-0-7/
./configure --disable-xerces-c
make
sudo make install

So far, so good. But now we're getting to the Python bindings. If I were a little less curious than I am now, I'd say "It's not worth the hassle, just wait until they're officially supported" (which, for now, is scheduled for Nov. 19, 2010). Well, but I *am* that curious, and I have hassled Kulik, so... You know what? Never mind. Wait until they're officially released. The other way involves downloading bindings that are probably gonna change, writing build scripts yourself, all the good stuff. Just build CEGUI itself so you can enjoy CS' ceguitest and hairtest.

2010-10-15

Permalink 10:56:47 pm, Categories: CS at a glance  

Buy Crystal Space today for free! An overview of everything.

So, what is Crystal Space and what can it do for you?

Crystal Space is, centrally, a realtime 3D engine. In addition to that, it includes everything one needs to make a game or application: sound, keyboard and mouse handling, and tons of other stuff. CS is event-driven, which means that once you've set up the map for playing, your code gets called each time something relevant happens.

To make juggling all that data easier, the Crystal Entity Layer (CEL) has been introduced. It adds the concepts of entities, property classes and messages. An entity is a "thing" in your game, the properties of which are determined by the property classes given to it. Property classes are, for example, pcmesh (which keeps the entity's reference to a mesh), pctimer (which you can use to get a message either after a set delay or every frame), pccommandinput (keyboard and mouse input), pcmeshselect (which sends you a message when the user clicks on the entity's pcmesh) and so on.
Messages, lastly, are CEL's means of inter-entity communication. Property classes send messages to their entities when something relevant happens, and you can make your entities send messages to each other, too. Messages have a name and parameters, which are key-value pairs.
A somewhat deprecated concept is that entities also have behaviours. Those are the pieces of code that "receive" the messages: when a message comes in, the method whose name matches the message's name gets called and is given the message's parameters (I'm specifically talking about writing behaviours in Python, but you can also do it in XML). This concept is outdated only insofar as these message handlers are now encapsulated into "regular" property classes. So instead of having one and only one behaviour script per entity, you can now load and unload behaviour script property classes as you like.
Last, but not least, I should also mention entity templates. If you have multiple entities which are rather similar to each other, you can create a template from which those entities are then created.
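
To make this concrete, here is a minimal sketch of what that looks like in C++ (assuming a CEL physical layer plugin is already loaded and object_reg is the usual object registry; the entity name and the exact set of property classes are purely illustrative):

csRef<iCelPlLayer> pl = csQueryRegistry<iCelPlLayer> (object_reg);

// An entity is just a named "thing"...
csRef<iCelEntity> player = pl->CreateEntity ();
player->SetName ("player");

// ...whose abilities come from the property classes attached to it.
pl->CreatePropertyClass (player, "pcmesh");          // keeps the entity's reference to a mesh
pl->CreatePropertyClass (player, "pccommandinput");  // keyboard and mouse input
pl->CreatePropertyClass (player, "pctimer");         // delayed or per-frame messages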

To make things even easier (yes, that paragraph just now really was about how things get easier, not more complicated ;) ), CELstart was written. While all the defining of templates, entities, meshes etc. can be done in map files, there still has to be a program that actually loads and runs those files. CELstart is that program. All it needs is a .zip file that includes your map files, artwork, whatever code you may have written, and configuration files. It'll take those and start up the engine, load or create whatever you mentioned in celstart.cfg in your .zip, and off you go: your game is running.

Well, that's the theory. All you have to do now is to think of a game or application that you want to write, then you find out what your entities are, then you assign property classes, model, texture and animate the models you want to use, write behaviour property classes and... Voila. Presto game.

Permalink 10:28:30 pm, Categories: CS at a glance  

My first and second impressions of Crystal Space

Hi. My name is Sebastian, and if you visit the IRC channel #crystalspace on irc.freenode.net, you may know me as Baribal.
Sometime around 2002 I heard of Crystal Space (CS) and took a look. On my first try, the build process was frustrating enough for me to become discouraged. Every now and then I checked back and found new, shiny features. Also, each time I tried, I got a bit farther than before.
This time around I managed to set up everything that I need to get developing for real. Less than a month ago, I decided to create a chessboard with it; three years ago, I nearly managed to create a Go game using nothing but hand-edited XML and Python scripts. Now, three weeks after I looked at the first cube I made, I have both gathered a fundamental understanding of how to work with CS and written most of my chessboard. Considering that, despite the hurdles I encountered on the way, I got quite far in a short time, I wonder why more people aren't developing games using CS, especially as nearly all the tools one could need are provided.
Then again, building CS and then making a game are not trivial tasks either, and most of the documentation meant to help newcomers along consists of the explained code of small applications. Personally, I can think of easier ways by which CS could be explained.
And that's why I asked for this blog.

2010-08-17

Permalink 02:54:10 am, Categories: General  

Last day of the Summer of Code

So, today is the last day of this year's Google Summer of Code (although the date on the post will say it's August 17, it's still August 16 where I am =]). Time surely has flown by! I was really worried at the beginning of the program: there was a lot of code I had to get used to, and I still didn't know exactly how some things had to be done. But in the end, everything worked out, and I believe this project was a great success.

I was able to finish all the tasks that were on the original proposal, with minimal adaptations. I also documented everything as well as I could, both with comments in the code and posting the details in this blog. There are two demo applications as well, to show how the API should be used.

I would like to thank everyone from CrystalSpace for the opportunity, especially my mentor Andrew. People have been very helpful and friendly since our first contacts, during the GSoC application period. I hope the code produced during this program can be put to good use.

Please feel free to contact me at any time by mail. I will also stick around the IRC channel.

Best regards,
Leonardo

Permalink 02:44:16 am, Categories: General  

Last week's changes

This last week I have made a few changes to the code, and I thought it would be good to make one post to talk a little about them.

First and most important, I found a problem with the demo application (not exactly a bug, more of a map misbehavior). If you plot a path going from the stairs in the basement01 sector to the basement03 sector, near the basement-dungeon sector, the path is likely to go below the stairs and through a wall. After careful examination, I realized this was actually caused by some parts of navigation meshes that were created below the stairs, due to the level geometry. Since Detour searches for the polygon nearest to the path's starting point, one of the polygons in this disconnected piece of mesh was being chosen, which caused the problem. In order to fix this, I altered the portal's polygon position for the portals that connect basement01 and basement03 in the world xml file, moving them up a little (the portal polygons were going below the stair level).
Here is the modified world file: link.

Second, I changed the DebugRender methods to return a list of csSimpleRenderMesh instead of drawing the structures directly using OpenGL. Notice that now you don't have to (read: you shouldn't) call those methods every frame, only when the structure being drawn changes (you do have to ask the iGraphics3D plugin to draw the meshes every frame, however).
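
In code, the intended usage looks roughly like this (a sketch only; I'm assuming the returned list can be held as a csArray<csSimpleRenderMesh*>, and the actual container type and the interface exposing DebugRender() may differ):

csArray<csSimpleRenderMesh*> debugMeshes; // cached between structure changes
bool navStructChanged = true;             // set whenever the navmesh is rebuilt or updated

void DrawNavMeshDebug (iGraphics3D* g3d, iCelHNavStruct* navStruct)
{
  if (navStructChanged)
  {
    // Only query the debug meshes when the structure actually changed.
    debugMeshes = navStruct->DebugRender ();
    navStructChanged = false;
  }
  // The cached meshes still have to be handed to iGraphics3D every frame.
  for (size_t i = 0; i < debugMeshes.GetSize (); i++)
    g3d->DrawSimpleMesh (*debugMeshes[i]);
}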

Finally, I added another hotkey to the pathfindingtest application. Upon pressing 't', a path will be calculated between two predetermined points. This path will then be compared to a path previously calculated, and an error message will be displayed if they differ. If the paths are equal, nothing will happen.

2010-08-11

Permalink 01:26:11 am, Categories: General  

PolygonSearchBox parameter

I have added a new parameter to the iCelNavMeshParams class, polygonSearchBox. Here is a detailed description (copied from the parameter values post):

"Before creating a path, Detour finds the closest polygons in the navigation mesh to the origin and destination points. This parameter determines the dimensions of the bounding box where Detour will look for the closest polygons. The bounding box is represented by two vertices: (center + polygonSearchBox) and (center - polygonSearchBox), where center is the point one wants to find the closest polygon to.

This parameter is also used to determine if a low level path that was calculated by Detour reaches its intended destination (Detour always returns a path, either to the destination or the closest possible point). If the distance between the two points, in each of the coordinates, is less than this parameter, then they are considered to be close enough for this purpose."

One simple way to understand this parameter using the demo application is clicking on a point that is not exactly on the navigation mesh. For example, consider clicking on the walls: the greater the y coordinate of the polygonSearchBox, the higher you can click on the wall and still have the path traced.
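
For reference, setting the parameter would look something like this (a sketch; the setter name SetPolygonSearchBox() and the csVector3 parameter type are my assumptions, and navStructBuilder is the builder object from the navigation structure API):

csRef<iCelNavMeshParams> params;
params.AttachNew (navStructBuilder->GetNavMeshParams ()->Clone ());
params->SetSuggestedValues (2.0f, 0.4f, 45.0f);
// A taller search box in y lets clicks higher up a wall still snap to a polygon below.
params->SetPolygonSearchBox (csVector3 (2.0f, 8.0f, 2.0f));
navStructBuilder->SetNavMeshParams (params);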

Permalink 12:37:23 am, Categories: General  

Updating navigation structures

If some object changes position in a map, instead of rebuilding the whole navigation structure, one can update it using the iCelHNavStruct::Update() method. This method takes a bounding box and a pointer to a sector as parameters (although the sector is optional): the bounding box describes the area of the navigation mesh that one wishes to update, and the sector determines which sector should be updated. If no sector is specified, then the bounding box is collided with each sector's navigation mesh bounding box to determine which sectors have to be updated.
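
A minimal sketch of such an update, assuming the moved object is an iMeshWrapper so its world-space bounding box is readily available:

void OnObjectMoved (iCelHNavStruct* navStruct, iMeshWrapper* obj, iSector* sector)
{
  // World-space bounding box of the object that just moved.
  const csBox3& bbox = obj->GetWorldBoundingBox ();

  // Rebuild only the part of this sector's navmesh that intersects the box.
  navStruct->Update (bbox, sector);

  // Without a sector, every sector whose navmesh bounding box collides with
  // bbox would be updated instead:
  // navStruct->Update (bbox);
}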

Note that this should probably not be used for objects that are in constant movement (NPCs, for example), since it's not very fast. This can be seen in the apppathfindingtest application by pressing 'y' (check the latest modifications to the demo application in this post).

Although updating a navigation structure takes some time, it is still way faster than rebuilding it.

Permalink 12:29:56 am, Categories: Navigation Meshes  

Changes to the demo application

In order to showcase the updating capabilities of the navigation structure, I have added a large stone block to the initial sector of the path finding demo application. This block moves linearly back and forth between two points near the stairs, and it is taken into account when building the navigation structure.

Stone block in the interior sector of the demo application
Stone block and the hole left behind when the navigation structure was built.

I also added some new hotkeys to this application; here is a description:

  • '4' - Toggle stone block rendering.
  • 'u' - Update the area of the navigation structure where the stone block is.
  • 'y' - Toggle auto-updating for the area where the stone block is. When this is turned on, the navigation structure will be updated every frame.

2010-08-07

Permalink 14:19:43, Categories: GSoC 2010  

The Crystal lighting

Finally, after a lot of effort and a lot of hard work, lighter2 is able to produce caustics. A lot of bugfixes have also been made in the original code, the major one concerning the lighting produced by the photons. Previously, the power of the photons had to be scaled to match the illumination of the raytracer; there is no need for that now.

Ok, enough of talking; here is something pleasant for your eyes.

(This image was generated without any scaling of photons, and with only photon mapping enabled for direct as well as indirect lighting)

Now, regarding the intricacies of getting photon mapping working.

1.) Added a tag to the world file for each material, stating whether it can produce caustics or not and, if it can, what the refractive index of the material is. (For air the refractive index is considered to be 1; if it is not mentioned for a caustic-producing material, it is also assumed to be 1, so the caustics might not be that accurate if one doesn't mention the refractive index.)

2.) While parsing the scene, the positions (bounding spheres) of all the meshes which contain a caustic-producing material are stored in a list. This list is used in the photon emission stage: while emitting caustic photons, the point light is treated like a spotlight which emits photons only in the direction of these meshes.

3.) The photon tracing refracts these caustic photons: they are traced until they hit a diffuse surface, and instead of being reflected in a random direction they are refracted using the simple laws of refraction (this is the point where the refractive index is required; a small refraction sketch follows this list).

4.) Finally, in the final gathering stage, not just the normal photon map is used but also the caustic photon map.
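
As promised in step 3, here is a small, self-contained sketch of the refraction computation: plain csVector3 math applying Snell's law, independent of lighter2's internal photon structures. n1 and n2 are the refractive indices on the incoming and outgoing sides (n1 = 1 for air).

// Returns false on total internal reflection, in which case no refracted
// direction exists. 'incident' points towards the surface, 'normal' away from it.
bool RefractDirection (const csVector3& incident, const csVector3& normal,
                       float n1, float n2, csVector3& refracted)
{
  float eta = n1 / n2;
  float cosI = - (normal * incident);          // '*' is the dot product for csVector3
  float sinT2 = eta * eta * (1.0f - cosI * cosI);
  if (sinT2 > 1.0f) return false;              // total internal reflection
  float cosT = sqrtf (1.0f - sinT2);
  refracted = eta * incident + (eta * cosI - cosT) * normal;
  refracted.Normalize ();
  return true;
}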

(A lot of debugging and reworking also went into this process, which took all this time)

As for the future, I am trying to get support for directional and spot lights working for photon mapping and caustics, and also area lighting. I plan to get these working before the submission date for GSoC, and then it will be time to optimize lighter2; in my opinion, memory optimization should have a higher priority than time optimization.

2010-08-01

Permalink 07:21:22 pm, Categories: Navigation Meshes, Hierarchical Pathfinding  

Saving and loading navigation structures

Although building navigation structures only takes a couple of seconds (at least for the maps in CrystalSpace), it's important to be able to save and load them to/from files.

Since the navigation structures are composed of one navigation mesh for each sector and some extra information (like the names of the sectors contained in the navstruct and the high level graph), I decided the best way to save them was to create a zip file containing one file for each navmesh and one file with the general information. These files are all serialized to XML.

To save each navmesh, the iCelNavMesh::SaveToFile() method is used. It receives an iFile as a parameter, which is where it will save the data. To save navigation structures, iCelHNavStruct::SaveToFile() is used. This is the method that will probably be called by the user. It receives a pointer to the virtual file system and a char array as parameters, and will save the zip file in the current directory of the VFS, with the name described by the char array.

To load navigation structures, iCelHNavStructBuilder::LoadHNavStruct() is used. It receives a pointer to the VFS and a file name, just like the SaveToFile method, and expects the file to be in the current directory of the iVFS received. It will open the zip file, read the main file and then call iCelNavMeshBuilder::LoadNavMesh() for each navigation mesh. It's important to notice that the load methods are in the builder classes, so the only way to obtain navigation meshes and structures (whether newly built or loaded) is through the builder classes.
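
The round trip looks roughly like this (a sketch based on the description above; "castle_navstruct.zip" is just an illustrative file name, and navStruct/navStructBuilder are the objects from the earlier API examples):

csRef<iVFS> vfs = csQueryRegistry<iVFS> (object_reg);

// Saving: called on the structure itself, writes the zip to the VFS current directory.
navStruct->SaveToFile (vfs, "castle_navstruct.zip");

// Loading: only the builder can create navigation structures, whether new or loaded.
csRef<iCelHNavStruct> loaded;
loaded = navStructBuilder->LoadHNavStruct (vfs, "castle_navstruct.zip");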

In the demo application, loading the navigation mesh for the castle map is significantly faster than building it.

2010-07-26

Permalink 02:03:59 am, Categories: General  

Hierarchical pathfinding demo

I created a new demo application to test the HPF algorithm (it's the pathfindingtest app). In this application, there is a player-controlled actor (Cally), which can be moved around with the arrow keys. The map used in this demo is the Castle map, from CrystalSpace.

Here is a list of hotkeys for this application:

  • b: Build the navigation structure.
  • l: Load a previously built navigation structure.
  • s: Save the current navigation structure.
  • c: Clear the navigation structure, as well as the current path, if there is one.
  • 1: Toggle navigation structures debug rendering.
  • 2: Toggle destination point debug rendering.
  • 3: Toggle path debug rendering.

Once a navigation structure is either built or loaded, the mouse can be used to create paths. A left click will create a path between the actor's current position and the clicked position.

For the path following code, I used the pcmove.mover property class, modified so the curves are sharp and precise instead of smooth. If the old smooth behaviour were used, there would be no guarantee that the actor would not walk off the navigation meshes. The new behaviour can be activated in any application that uses pcmove.mover; all that has to be done is call iPcMover::SetSmoothMovement() with false (the old behaviour is the default one).
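
In code that boils down to something like this (a sketch; actorEntity stands for the iCelEntity of the actor, and the property class query macro may differ between CEL versions):

csRef<iPcMover> pcmover = CEL_QUERY_PROPCLASS_ENT (actorEntity, iPcMover);
if (pcmover)
  pcmover->SetSmoothMovement (false); // sharp, precise turns that stay on the navmesh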

I made a video of the actor moving around, being controlled by the mouse; you can check it by clicking here or on the image below (unfortunately, I was unable to embed the video).

Path following

Permalink 01:55:38 am, Categories: Hierarchical Pathfinding  

Hierarchical pathfinding

The basics of hierarchical pathfinding were already discussed in the post about the API (link). In this post, I'm going to talk a little about the implementation of HPF.

First of all, in order to build the high level graph that connects all portals, we needed a way to find a point that represents a portal. In the current implementation, the central point of the portal polygon was chosen. While this heuristic will give good results most of the time, it won't result in optimal paths. The difference in length to the shortest possible path will be more noticeable in a map that has very large portals. One alternative to reduce the effects of this problem is to create a number of points per portal proportional to its size relative to the agent size. However, while using more points per portal will result in more accurate paths, it will also cause a decrease in performance, so the number of points has to be chosen with care.

To find a path in the high level graph, we first need to add both the source and destination points to it, and then connect them to other nodes that share their sectors. We then calculate the path using A*. After the path is calculated, we remove the source and destination nodes from the graph, as well as any edges connected to them.

With the high level path calculated, it's time to refine it. For each two consecutive nodes in the high level path, we calculate a low level path segment using the Detour pathfinding. This step is done on demand: each time a user asks for the next node, either a new low level path segment is calculated or the next position of the current low level path is returned (if there is one).

In case the destination point is not reachable, the path to its closest point in the navigation structure will be calculated.

2010-07-25

Permalink 10:49:32 pm, Categories: Navigation Meshes  

Terrain navmeshes

In order to build a navigation mesh, all Recast needs is the triangles that form the meshes that compose the map. So, to build the navmesh for terrain meshes, I had to triangulate the terrain. This was done using the iTerrainSystem mesh interface. Meshes of this type have a collection of cells (iTerrainCell objects), which are further divided into a finer grid. These cells have a method for querying the height of the grid at a specific point (iTerrainCell::GetHeight()).

Once I had the grid size and a way to query the height, triangulating the terrain was trivial. The only thing that may cause some confusion is the triangle indices, since the terrain cells don't use the default coordinate system.
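
The triangulation itself is the easy part; a rough, self-contained sketch of the idea is below (I'm assuming a grid of height samples has already been read from the terrain cell, together with the cell origin and sample spacing; the exact iTerrainCell accessors are left out on purpose):

// Builds two triangles per grid quad from a width x height grid of heights.
void TriangulateHeightGrid (const csArray<float>& heights, int width, int height,
                            const csVector3& origin, float stepX, float stepZ,
                            csArray<csVector3>& vertices, csArray<int>& indices)
{
  // One vertex per grid sample.
  for (int z = 0; z < height; z++)
    for (int x = 0; x < width; x++)
      vertices.Push (origin + csVector3 (x * stepX, heights[z * width + x], z * stepZ));

  // Two triangles per quad; mind the winding if the cell's coordinate system differs.
  for (int z = 0; z < height - 1; z++)
    for (int x = 0; x < width - 1; x++)
    {
      int i = z * width + x;
      indices.Push (i);     indices.Push (i + width); indices.Push (i + 1);
      indices.Push (i + 1); indices.Push (i + width); indices.Push (i + width + 1);
    }
}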

Here are a few screenshots of the navigation meshes built for the terrainf map, from CrystalSpace:

Terrain navigation mesh 01
Navigation mesh for the terrainf map

Terrain navigation mesh 02
Navigation mesh for the terrainf map

2010-07-20

Permalink 05:10:06 pm, Categories: Navigation Meshes  

Values for parameters in iCelNavMeshParams

The Recast and Detour toolset has a lot of parameters to set. In CEL's implementation, those parameters can be found on the iCelNavMeshParams class. But what values should be used for them? This post contains a very brief description of the parameters and a compilation of tips found either on the documentation for Recast and Detour or at Mikko's blog (http://digestingduck.blogspot.com, for some reason I can't link it here). Also, for more information, be sure to check the Recast and Detour discussion group.

  • AgentHeight and AgentRadius: These values represent the bounding cylinder of an agent. For example, if we wanted to represent a human (using meters as unit), we could use AgentHeight=2.0 and AgentRadius=0.4.
  • AgentMaxClimb: The height of the tallest step an agent can walk over. Following the previous example, we could use an approximate value of AgentMaxClimb=0.6 for a human.
  • AgentMaxSlopeAngle: The maximum surface inclination where the agent can walk, in degrees. In our example, a value of 45.0 would be appropriate.
  • CellSize: The size of the voxelization cell*. Smaller values give higher navigation mesh accuracy, at the expense of higher computation time. Good values generally range from AgentRadius/3 to AgentRadius/2.
  • CellHeight: The height of the voxelization cell*. Smaller values give higher navigation mesh accuracy, at the expense of higher computation time. Generally, a good value is CellSize/2.
  • MaxSimplificationError: When the rasterized areas are converted back to a vectorized representation, the MaxSimplificationError parameter describes how loosely the simplification is done (the simplification is Douglas-Peucker, so this value describes the max deviation in voxels). Good values are between 1.1-1.5 (1.3 usually yields good results). If the value is lower, some stair-casing starts to appear at the edges, and if it is higher, the simplification starts to cut some corners.
  • DetailSampleDist and DetailSampleMaxError: These parameters define how the navigation mesh is tessellated**. Sample points are generated inside the polygon. If the difference between the height of one of those points and the height of the navmesh at that location is greater than DetailSampleDist, that point gets added to the mesh. DetailSampleMaxError controls the maximum allowed error when the mesh or poly outline is simplified. If both parameters are zero, the mesh won't be tessellated. For more details, look for the posts about "Navmesh Height Accuracy" on Mikko's blog.
  • MaxEdgeLength: Maximum edge length for the navmesh polygons. A good value is something around eight times the agent radius in voxels (8 * (AgentRadius / CellSize)).
  • MinRegionSize: Minimum isolated region size that is still kept after watershed partitioning***. A region is removed if the regionVoxelCount < minRegionSize * minRegionSize. The default value used in the SetSuggestedValues method is 20.
  • MergeRegionSize: Maximum region size. The regions derived from watershed partitioning*** are merged. If regionVoxelCount > mergeRegionSize * mergeRegionSize, the region is not allowed to be merged with another region anymore. The default value used in the SetSuggestedValues method is 50.
  • MaxVertsPerPoly: Maximum number of vertices per polygon of the navigation mesh.
  • TileSize: Tile size****.
  • BorderSize: Border size***** for the navigation mesh, in voxels. A good value is the agent radius in voxels, plus some padding ((agentRadius / cellSize) + 3).
  • PolygonSearchBox: Before creating a path, Detour finds the closest polygons in the navigation mesh to the origin and destination points. This parameter determines the dimensions of the bounding box where Detour will look for the closest polygons. The bounding box is represented by two vertices: (center + polygonSearchBox) and (center - polygonSearchBox), where center is the point one wants to find the closest polygon to.
    This parameter is also used to determine if a low level path that was calculated by Detour reaches its intended destination (Detour always returns a path, either to the destination or the closest possible point). If the distance between the two points, in each of the coordinates, is less than this parameter, then they are considered to be close enough for this purpose.

* - Recast calculates the walkable area of a map by voxelizing the triangles of the meshes that compose it.
** - At some point, the navigation mesh is tessellated in order to get more accurate values for the height.
*** - Watershed partitioning is used to group the walkable voxels and form the navmesh polygons.
**** - The navigation mesh is divided into tiles that can be rebuilt, loaded or saved independently. We can, for example, update a navmesh to account for a dynamic object by changing only the tiles this object intersects.
***** - Recast leaves a border about the size of the agent radius out of the navigation mesh, so it can guarantee any straight path cast inside the navmesh is valid, and the agent won't go through a wall when walking on it.

In order to help getting some initial results, the iCelNavMeshParams class has a method called SetSuggestedValues. This method receives as arguments the AgentHeight, AgentRadius and AgentMaxSlopeAngle, and either derives from that some initial values for the other parameters or gives them a default value. Using the values from this method may not generate a perfect navigation mesh, but will at least give a base to work on.
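
Putting the tips together, a typical setup could look like this (only SetSuggestedValues() is taken from the API above; the individual setters such as SetCellSize() are assumptions on my part, and navMeshBuilder is the builder from the navigation mesh API):

csRef<iCelNavMeshParams> params;
params.AttachNew (navMeshBuilder->GetNavMeshParams ()->Clone ());

// Human-sized agent: height 2.0, radius 0.4, max slope 45 degrees.
params->SetSuggestedValues (2.0f, 0.4f, 45.0f);

// Optional fine-tuning on top of the suggested values, following the tips above.
float cellSize = 0.4f / 2.0f;               // AgentRadius / 2
params->SetCellSize (cellSize);
params->SetCellHeight (cellSize / 2.0f);    // CellSize / 2
params->SetMaxSimplificationError (1.3f);

navMeshBuilder->SetNavMeshParams (params);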

2010-07-16

Permalink 01:29:58 am, Categories: Hierarchical Pathfinding, Graph  

Changes to the graph plugin

In order to implement the hierarchical pathfinding algorithm (hpf), I needed to create a graph representing the portals in a map, and then find shortest paths between pairs of points in this graph. Since CEL already has a graph implementation that meets these requirements, I decided to use it. However, I did find a few points where some changes had to be made to the code, as well as some bugs. Here is a list of all the changes (besides bugfixes):

  • Added a weight element to iCelEdge - Since the graph implementation was used for navigation, it was safe to assume the weight (or cost) between two nodes would always be the euclidean distance between them. This is not the case for hpf, however.
    The weight can be set using the edge's setter method, or using the new overloaded version of iCelNode::AddSuccessor(), which takes the weight as a parameter. The old version still considers the weight to be the euclidean distance between the nodes.
  • Added a way to remove edges and nodes from the graph - Nodes can be removed using iCelGraph::RemoveNode(), while edges can be removed using either iCelGraph::RemoveEdge() or iCelNode::RemoveEdge(). The methods used to create nodes and edges were also changed to return the index of the added element.
  • Added a way to create a new node for the graph - Nodes in a graph can now be created using the method iCelGraph::CreateEmptyNode(). (A small usage sketch follows this list.)
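
A small usage sketch of the new operations (the method names come from the list above, but the exact signatures, in particular the parameter order of the weighted AddSuccessor() overload and the returned indices, are my assumptions):

void LinkAndUnlinkPortals (iCelGraph* graph, iCelNode* nodeA, iCelNode* nodeB,
                           size_t nodeAIndex)
{
  // New overload: an explicit weight instead of the implicit euclidean distance.
  size_t edgeIndex = nodeA->AddSuccessor (nodeB, 3.5f);

  // Edges and nodes can be removed again, e.g. when the temporary source and
  // destination nodes are taken out of the high level graph after a query.
  nodeA->RemoveEdge (edgeIndex);
  graph->RemoveNode (nodeAIndex);
}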

Besides those changes, I also made an alternative implementation of the A* algorithm, since the one found in iCelGraph::ShortestPath() looked unfamiliar to me (meaning I don't believe it's correct, but I'm not sure). It's possible the differences are due to some optimization I don't know about, but hpf already had a lot of issues to check, so I decided to be on the safe side. This new version of A* was based on this article and Wikipedia. I believe it is the most general version of the algorithm, and it should present no problems. For now, I left this new implementation of the A* algorithm in iCelGraph::ShortestPath2(), as an alternative to the old one.

One important difference between the old A* implementation and the new one is that the path returned by the former doesn't contain the starting (current) position, while the latter contains both the starting and ending positions. I decided this was important, since the path could be inverted, in which case the destination could be missing.

Do note that none of these changes will break existing code that uses the graph plugin.

2010-07-03

Permalink 06:08:30 pm, Categories: GSoC 2010  

Keep it cull

I'll try and keep this post short. After a few discussions on IRC, the mesh simplification idea explained in the previous post might be better left until after the midterm evaluation. Until then I should focus on the CHC++ implementation in order to determine exactly what the bottlenecks, in terms of speed reduction, are. I also should provide some hard numbers to back up any results I find. So, that's what I'm gonna try and do. Having Mike Gist's implementation as a reference, I hope I'll make some good headway. Having merged frustvis into the render manager, I'll take a few different approaches at a few key points (such as mesh rendering for occlusion queries) than my mentor did, in the hope it'll provide more speed boosts. That's all for now.

2010-06-30

Permalink 07:04:37 pm, Categories: GSoC 2010  

Change of plans

Ok...so I've been trying the past few days to make some headway on the occlusion culling. After resorting to drawing the meshes quicker with a variant of the DrawMesh method (available in the OpenGL plugin), I have tried several schemes involving bounding boxes in order to reduce the query overhead. Unfortunately, all of the methods I have tried have all sorts of corner cases where the culling scheme either culls geometry incorrectly or is extremely conservative, culling far too little. The methods that have cases where geometry is culled incorrectly aren't well suited for a general culler. The ones which are too conservative aren't very good either, because they don't really do very efficient work.

So...what to do? Well, as far as I can see, with the midterm deadline approaching, there's only one sure way to get some results, which is to focus on implementing a mesh simplification algorithm. This has several advantages:

1) There already exists a basic implementation, written by my mentor with the CHC++ algorithm in mind. The main problem with it is the overhead created by rendering the same geometry twice. By rendering simplified, but accurate, meshes for occlusion queries the overhead will be reduced enough to make using the CHC++ algorithm practical.

2) The mesh simplification scheme will enable the new culler to work for generalized models and scenes.

So which algorithm to use? Well, there has been a lot of research related to the topic of mesh simplification. I've been doing some research today, and the algorithm that seems most promising in terms of speed, accuracy and generality is Garland and Heckbert's [1], which uses a quadric-based polygonal surface simplification scheme. There's very little time left until the midterm evaluation and I hope to have this algorithm functioning by the 11th of July. Once that's done I can integrate it into the CHC framework.

I'm cutting it close on time as far as midterm evaluations are concerned, but I hope I'll have some concrete results come the mid-deadline. I'll provide some updates as I progress into the implementation of the mesh simplification algorithm.

[1] http://reports-archive.adm.cs.cmu.edu/anon/1999/CMU-CS-99-105.pdf

2010-06-24

Permalink 05:51:14 pm, Categories: GSoC 2010  

Current Status

Hello there. I'm rather late writing this post, for which I apologize. What I've been doing until now is trying to integrate the CHC algorithm into CS by using a modified version of the unshadowed render manager. This has meant integrating support for occlusion queries into the OpenGL renderer and modifying frustvis to interact with the modified unshadowed version I mentioned.

My mentor, Mike Gist, has done some work related to hardware-supported occlusion culling in CS. Unfortunately, the work he did was centered on rendering the polygons in a scene twice in order to be able to see which ones would be occluded, which has the issue of the added overhead of rendering the same geometry basically twice. I've been trying to improve this through the use of bounding boxes, that is, rendering a bounding box mesh in order to test for occlusion. This unfortunately has some quality drawbacks:

The first picture shows the two houses in front and a third one in the back. We also see the bounding box outlines to get a better idea of what's going on.

From the second picture we get a pretty good idea of what's going wrong. As the two bounding box meshes approach each other and begin to overlap, the third house disappears. Why, you ask? Well, even though the house should be visible, the overly conservative bounding box meshes obscure the third house's bounding box mesh completely, thus leading to wrong visual results.

So what to do?

Well, I can think of several schemes to solve this.

1) Decide before the rendering which objects are to be considered occluders and render them normally, all the other meshes (occludees) being rendered as bounding box meshes when testing visibility. This would avoid the problem seen in the pictures above.

2) Render the objects as simplified meshes when testing for visibility. This is technically the best solution, as it is a compromise between the bounding box mesh solution and the initial 2-pass rendering. We render lower detail meshes that strive to preserve the original's shape characteristics. This would save most of the overhead of drawing when testing for visibility and also alleviate the accuracy problem of bounding boxes.

As my mentor suggested, one method to cut down on the overhead would be to introduce a "direct" way to render geometry. This would bypass most of CS' abstractions and simply draw basic geometry data (no textures, shaders or any other additional stuff). This will be my next target, and after that I'll discuss which of the ways described above would be most feasible.

Stay tuned for further details in the coming days. Over and out.

2010-06-21

Permalink 12:33:12 am, Categories: Navigation Meshes  

Gap between two navmeshes

One of the most annoying problems that comes from building one navigation mesh for each sector is that a gap is formed between two navmeshes, near a portal. In a nutshell, this problem occurs because Recast subtracts a border from each navmesh (for more details, see the post before this one).

In order to solve this problem, I created a "fake corridor" of triangles for each portal, going out of the map in the direction of the portal's normal. This corridor is like an extrusion of the portal's polygon, going in the direction of its normal. This way, the border is subtracted from this fake corridor instead of the walkable map.

The portal's polygon is a planar convex polygon. For each of its vertices v, another vertex was created at the position v + n * b, where n is the portal's normal and b is the navmesh's border size. Triangles were then created to connect these new vertices to the old ones (two triangles for each of the polygon's edges). The triangles were defined clockwise for an observer inside the corridor formed.
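
A self-contained sketch of that construction (the winding below follows the description above, but may need to be flipped depending on the order of the portal's vertices):

// Extrudes the portal polygon along its normal by the border size and stitches
// the original and extruded vertex rings together with two triangles per edge.
void BuildFakeCorridor (const csArray<csVector3>& portalVerts, const csVector3& normal,
                        float borderSize,
                        csArray<csVector3>& outVerts, csArray<int>& outTris)
{
  size_t n = portalVerts.GetSize ();

  // Original ring (indices 0..n-1) followed by the extruded ring (indices n..2n-1).
  for (size_t i = 0; i < n; i++) outVerts.Push (portalVerts[i]);
  for (size_t i = 0; i < n; i++) outVerts.Push (portalVerts[i] + normal * borderSize);

  // Two triangles per polygon edge.
  for (size_t i = 0; i < n; i++)
  {
    size_t j = (i + 1) % n;
    outTris.Push ((int)i); outTris.Push ((int)(n + i)); outTris.Push ((int)j);
    outTris.Push ((int)j); outTris.Push ((int)(n + i)); outTris.Push ((int)(n + j));
  }
}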

This approach successfully closed the gap between navmeshes from adjacent sectors, as can be seen below:

Fixed gap between navmeshes
Door between two sectors, before and after applying the fix.

One important thing to notice is that some parts of the navigation mesh are not visible. In the current state of the test application, only the navmesh of the sector the camera is in is visible. This happens because of the way portals are handled in some maps (including the castle map used for this application): when a portal is visible, the sector it connects to is first rendered to a texture, which is then applied to a portal mesh. Since there is no connection between the navmeshes and the sector, the navmeshes are not rendered to this texture. So, in order to take this screenshot, I positioned the camera right between the two sectors, causing both sectors to be rendered directly.

At first sight, there is no simple way to make all meshes visible without altering the rendering engine. So, for now, I'll probably leave this problem as it is. In my project proposal, I allocated the last week to polishing the demo application, so maybe that would be a good time to try and fix this.

2010-06-15

Permalink 19:39:48, Categories: GSoC 2010  

Some more colors in the room

Ok, I know I am a bit (quite a bit, maybe) late in writing this post. But still, the good thing is that I have got something working. But before that, there are other important things to write.

Plans:

After some discussions and meetings with my mentor Scott Johnson, we have decided to aim for getting at least the first milestone completed by the mid-term evaluation in mid-July. The first milestone includes three tasks:

1.) Getting the photon coloring stuff working to obtain color bleeding.
2.) Getting photon mapping to work with caustics.
3.) Planning for how to get photon mapping to take advantage of multiple cores.

Status:

Well, after spending a lot of time trying to get walktest working on Ubuntu, I shifted to Windows with Visual Studio, just to realize that it wasn't Ubuntu but my graphics card that wasn't working, after which my brother was generous enough to buy me a new laptop. So, as of now, everything is working perfectly fine (in terms of the development environment), but still on Windows with VS.

Anyways, coming to the point: the thing which I have got working till now is the photon coloring effect, so the effect of color bleeding can be seen in photon mapping.

The steps I followed were simple. The first was to have lighter2 save another copy of the textures for the scene; it already saves one copy for direct lighting, but that is a filtered image and isn't useful for photon mapping. The next step was to use this image to obtain the color of a particular primitive when it is hit, and multiply it with the color of the photon ray to get the new color for the photon ray.
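
The colour multiplication itself is a one-liner, sketched here under the assumption that the photon power and the sampled surface colour are both held as csColor:

// Component-wise multiply: a red wall only re-emits the red part of a photon's power.
csColor BouncePhotonPower (const csColor& incomingPower, const csColor& surfaceColor)
{
  return csColor (incomingPower.red   * surfaceColor.red,
                  incomingPower.green * surfaceColor.green,
                  incomingPower.blue  * surfaceColor.blue);
}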

Results: Here are a few samples of the Cornell map with photon mapping.

This was taken with 25M photons and the photon ray colors multiplied by 16. The image has a lot of noise, most probably because of the photon mapping light scaling.

And another one:

This too has been taken with 25M photons, but with pm light scaling = 10; the color bleeding is much weaker, but there is less noise.

One major issue that I found with photon mapping is that the number of photons that are reflected is too low: while using 25M photons, with maxrecursiondepth set to 20, the total number of photons traced was just around 50-51M in most cases. I looked into the code which decides whether to reflect a photon or not, and I think that there are some issues with the criteria used to decide the reflectivity of a surface. I will probably try to read more about it and tweak it to get some better results.

Permalink 18:46:48, Categories: GSoC 2010  

Introduction

Hi to everyone in the Crystal Space community. I am Mohit Taneja, and I am one of the students selected for GSoC this year to work on the CrystalSpace project.

This summer I will be working on getting photon mapping to work out of the box for lighter2, and also on making it more efficient by using better algorithms for packing lightmaps and enabling lighter2 to take advantage of available parallelism.

I am an undergraduate student in my final year at NSIT, Delhi, India, majoring in computer science. I plan to enjoy my work and the interaction with the CrystalSpace community.

Permalink 03:39:46 am, Categories: Navigation Meshes  

Current state

Since I am now able to produce some nice looking screenshots, I think this is the right time to talk about the current state of the project.

I'm using the castle map for the example application, since each of its rooms is a different sector. I have tweaked the colors from the default debug rendering code used in the Recast & Detour demo application, so it looks different and a little more visible in the map used. Here are a few screenshots (light yellow edges for tile borders, dark yellow for other polygon edges):

Navigation Mesh for the reception

Navigation Mesh for the throne room

Navigation Mesh for the basement dining area

Although it can't be seen from the examples above, even though I'm rendering all the navigation meshes every frame, I cannot see navigation meshes through a portal. I don't know why that is happening, especially since the navmeshes are rendered directly with OpenGL, but I'm not worrying too much about it now.

There is still one problem related to building the navmesh. In order to make sure that a unit can walk along any straight path cast inside the navmesh, Recast removes a border proportional to the unit's radius from the navmesh (so the unit won't go inside a wall, for example). Since the navmeshes are being built independently for each sector, this generates a gap between two sectors that the unit can't move across.

The picture below is the bottom part of a door, which is a portal between two sectors. Notice that there is an area that should be walkable, but has no navmesh over it:

Gap between two sectors

Recast uses the triangles from the collision detection data to build the navigation meshes. In order to fix the gap, I plan to feed Recast with some "fake" triangles, normal to the portal. This way, the border will be subtracted from these "fake" triangles instead of the walkable area.

Path-finding

In order to be able to see a path in the example application, you have to first set a source point (shift + left click) and a destination point (left click). The points are graphically represented by a cylinder, which has the same height and radius as the unit used to build the navmesh. Right now, the path is only calculated correctly when both points are in the same sector. Here are a few screenshots (the blue cylinder is the source, the green one the destination):

Path example 1

Path example 2

2010-06-12

Permalink 12:15:46 am, Categories: Navigation Meshes, Hierarchical Pathfinding  

The API

In this post I'll talk a little about the division between the navigation mesh API and the hierarchical navigation structure API (fancy names, what do those mean?).

The main objective of this project is to use Recast to build navigation meshes for Crystal Space maps. In Crystal Space, we have Sectors, which define an area in space where we can put geometry. Generally, a map is composed of a collection of sectors. These sectors are then connected by Portals.

In order to build a navigation mesh for an area using Recast, we need all the triangle locations from the collision detection meshes that are inside that area. However, in Crystal Space two or more sectors can share world coordinates, so we cannot use Recast directly to build the navmesh for an area formed by more than one sector.

That is where the Hierarchical Navigation Structure comes into play. The idea is that we build independent navmeshes for each sector, and have a higher level structure connecting these navmeshes. In our case, this structure will be a graph, where each node represents a portal. In this graph, an edge exists between two nodes if the portals can be connected by a path, and the edge weight represents the smallest possible length for such a path.

Finding a path in this structure takes two steps:

  • Find out which portals we will have to cross to reach our destination. This is done using the A* algorithm in our high level graph. With this information, we also know which sectors the path will go through.
  • For each sector our path goes through, we find the part of our path that is inside that sector.

Here are a couple of examples of how to use the navigation mesh API. It's important to keep in mind that one probably won't use these interfaces directly, but will rather use the hierarchical version (unless one's map only has a single sector).

Creating a navmesh:

csRef<iCelNavMeshBuilder> navMeshBuilder = csQueryRegistry<iCelNavMeshBuilder>(object_reg);
csRef<iCelNavMeshParams> params;
params.AttachNew(navMeshBuilder->GetNavMeshParams()->Clone());
params->SetSuggestedValues(1.0f, 0.2f, 45.0f);
navMeshBuilder->SetNavMeshParams(params);
// iSector* sector;
navMeshBuilder->SetSector(sector);
csRef<iCelNavMesh> navMesh;
navMesh.AttachNew(navMeshBuilder->BuildNavMesh());

Finding a path:

csRef<iCelNavMeshPath> path;
// csVector3 origin, destination;
path.AttachNew(navMesh->ShortestPath(origin, destination));

The hierarchical navigation API is very similar to the navigation mesh one. Here are the equivalent examples from the two above:

Creating a navmesh:

csRef<iCelHNavStructBuilder> navStructBuilder = csQueryRegistry<iCelHNavStructBuilder>(object_reg);
csRef<iCelNavMeshParams> params;
params.AttachNew(navStructBuilder->GetNavMeshParams()->Clone());
params->SetSuggestedValues(1.0f, 0.2f, 45.0f);
navStructBuilder->SetNavMeshParams(params);
// csList<iSector*> sectorList;
navStructBuilder->SetSectors(sectorList);
csRef<iCelHNavStruct> navStruct;
navStruct.AttachNew(navStructBuilder->BuildHNavStruct());

Finding a path:

csRef<iCelPath> path;
// csVector3 origin, destination;
// iSector* originSector; iSector* destinationSector;
path.AttachNew(navStruct->ShortestPath(origin, originSector, destination, destinationSector));

The complete APIs can be seen here: Navigation Mesh API and Hierarchical Navigation Structure API.

I'll talk more about the iCelNavMeshParams structure and how to find the values for its parameters in a later post.

2010-06-08

Permalink 06:44:51 am, Categories: General  

Introduction

Hello everyone!

My name is Leonardo, and I am one of this year's GSoC students.
I am currently an M.S. student in Computer Science at UNICAMP (University of Campinas), Brazil. I don't have any prior experience with CrystalSpace and CEL, but I believe I won't have much of a problem, since everything looks organized and the community has been very friendly and helpful so far.

This summer (although it's technically winter for me), I will be working on integrating the Recast and Detour toolsets into CEL.

The main topics of my project will be the following:

  • Build a navigation mesh from a sector, using Recast.
  • Build a hierarchical navigation structure from a map.
  • Build a navigation mesh from a height map.
  • Allow changes to the navigation meshes during runtime.

I plan on posting updates on this blog about my progress and any problems I come across, so people can use it as a reference for this work.

2010-05-16

Permalink 10:11:18 am, Categories: GSoC 2010  

Hello World

...and Crystal Space. My name is Claudiu Mihail and I will be trying to improve Crystal Space's visual culler system during this year's Google Summer of Code.

As this project aims to take advantage of next generation chip-set capabilities, the new culler will be built (at least in part) from the ground up. After some talking with my mentor and other Crystal Space developers, the direction to go is to start a new plugin. This is going to be a rendermanager plugin that, in time, should replace the current one. The steps to follow for this are:

1) Base the new rendermanager plugin on the unshadowed code already in place.
2) Cannibalize code from the frustvis plugin (which implements frustum culling) and integrate it into the new render manager.
3) Once 1) and 2) are completed, I can begin adding occlusion culling and implementing the CHC culling algorithm as described in my application plan.

The main challenge in this endeavor will be to understand and use the code already in place. Since CS is a pretty complex project, I hope this won't pose too much of a problem.
As I progress further into my work I will be keeping everyone updated through this blog. But enough chatter, time to get to work.
