I have run into a problem which I didn't anticipate prior to two minutes ago. Unfortunately, support for <p> elements in Crystal Space is going away (as I understand it, it wasn't especially useful anyway). This, along with the fact that the converter didn't previously handle general polygons, presents the following problem:
* Before converting to Crystal Space format, all polygons in COLLADA format which are of type <polygons> or <polylist> will need to be triangulated.
So, it appears as though I will be dusting off my Computational Geometry book from last semester and figuring out the best way to apply a Delaunay Triangulation. I am going to look into libraries for this purpose, as I have heard that implementing DT from scratch is a nightmare. So, sorry folks, but it doesn't look like there's going to be a commit in the next couple of days.
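Until a proper library is chosen, the gist of the triangulation step can be sketched with the trivial convex case. This is only an illustration: the function and type names are made up, and a real converter would need ear clipping or a Delaunay library for concave polygons.

```cpp
#include <cstddef>
#include <vector>

// A triangle expressed as three indices into the polygon's vertex list.
struct Triangle { size_t a, b, c; };

// Fan-triangulate an n-vertex polygon. Valid only for convex polygons;
// concave <polygons>/<polylist> input needs ear clipping or a Delaunay
// library, which is the part still being researched.
std::vector<Triangle> TriangulateFan (size_t vertexCount)
{
  std::vector<Triangle> result;
  if (vertexCount < 3) return result;
  // Pivot on vertex 0 and walk the remaining vertices pairwise.
  for (size_t i = 1; i + 1 < vertexCount; ++i)
    result.push_back (Triangle { 0, i, i + 1 });
  return result;
}
```

An n-gon always decomposes into n - 2 triangles, so this at least gives the output size any triangulator must produce.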
This includes two of the new objects, csColladaMesh and csColladaAccessor. Both of these objects have, at their core, a Process() function. The Process() function essentially parses the XML document and retrieves the necessary information. The activity diagram of csColladaMesh::Process() follows:
And the activity diagram for csColladaAccessor::Process():
The next stage will be to integrate a listing of polygons into the mesh object. This should be fairly straightforward, as before, and will involve yet another class, csColladaPolygon, which will have as sub-classes: csColladaSimplePolygon, csColladaTriangle, csColladaTriangleFan, csColladaTriangleStrip, csColladaLine, and csColladaLineStrip. I hope to have all of these converted and ready by this afternoon.
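As a hedged sketch of how such a hierarchy might hang together (the class names come from the plan above, but the GetTriangleCount() interface is purely my illustration, not the actual design):

```cpp
#include <cstddef>

// Illustrative base class: each polygon type knows how many triangles
// it decomposes into. This method is an assumption for the sketch.
struct csColladaPolygon
{
  virtual ~csColladaPolygon () {}
  virtual size_t GetTriangleCount () const = 0;
};

struct csColladaTriangle : public csColladaPolygon
{
  size_t GetTriangleCount () const override { return 1; }
};

// A fan of n vertices yields n - 2 triangles.
struct csColladaTriangleFan : public csColladaPolygon
{
  size_t vertexCount;
  csColladaTriangleFan (size_t n) : vertexCount (n) {}
  size_t GetTriangleCount () const override
  { return vertexCount < 3 ? 0 : vertexCount - 2; }
};
```

The appeal of a common base here is that the converter can emit Crystal Space triangles from any sub-class without caring which COLLADA primitive it came from.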
At last, we have a proper hierarchy shown in the scene browser. What you see above is the scene node hierarchy of flarge.
I'm ignoring the whole multiple names/hierarchies thing that I mentioned in my last post. That stuff can come later.
TODO for Scene Browser:
Show icons for nodes, e.g. light bulb for lights, cube for mesh.
Implement renaming (currently lets you edit the name, but doesn't actually change the CS object)
Implement a context menu which has Select and Remove commands and shows the available one-off tools.
TODO for other stuff:
Implement VFS open dialog
I have decided to go back and re-think the design of the library itself. In hindsight, I think that to make the code cleaner and more robust, it will be beneficial to add another couple of classes. Ideally, I want to be able to process the COLLADA file in a single pass, without having to jump around looking for different elements in the document structure multiple times. To this end, I have decided to add a csColladaMesh class, which will effectively represent the data stored in a mesh object. As of right now, this consists of just vertices; I will extend it in the near future. A new class diagram, with two new classes, is given below. I am currently working on redesigning the activity diagrams. They should be done tomorrow morning.
I ran into some trouble this weekend with the conversion of triangles and triangle fans. I decided that the method I used previously to convert polygons was somewhat convoluted and too difficult to interpret to be effectively re-used. Thus, I have decided to redesign this conversion method and separate it into several conversion functions:
For the first iteration, only ConvertGeometry() will be implemented, which in turn will call ConvertVertices() and ConvertPolygons(), so both of those will need to be implemented as well. The first iteration will also implement materials and scene conversion functions, at a basic level. I intend to have iteration one finished before July 9.
Unfortunately, I am currently in the middle of battling a migraine headache, and was unable to get much done today. As such, my original plan of committing this evening will have to be postponed until tomorrow or Tuesday. Once I clean up the codebase and make sure that my implementations are cleaner than they are right now, I will commit the finished ConvertGeometry() code. As of right now, the only unimplemented functionality is the trifans and tristrips elements; however, as I previously said, I am redesigning some of these functions so they follow a more logical pattern.
I will paste here the interface of CelGraph and its components. At the moment I am finishing some details on A*, and I will soon be finishing an application for testing CelGraph and pathfinding.
struct iCelEdge : public virtual iBase
{
  SCF_INTERFACE (iCelEdge, 0, 0, 1);

  virtual void SetState (bool open) = 0;
  virtual void SetSuccessor (iCelNode* node) = 0;
  virtual bool GetState () = 0;
  virtual iCelNode* GetSuccessor () = 0;
};

struct iCelNode : public virtual iBase
{
  SCF_INTERFACE (iCelNode, 0, 0, 1);

  virtual void AddSuccessor (iCelNode* node, bool state) = 0;
  virtual void SetMapNode (iMapNode* node) = 0;
  virtual void Heuristic (int cost, iCelNode* goal) = 0;
  virtual csVector3 GetPosition () = 0;
  virtual csArray<iCelNode*> GetSuccessors () = 0;
  virtual csArray<iCelNode*> GetAllSuccessors () = 0;
  virtual int GetHeuristic () = 0;
  virtual int GetCost () = 0;
};

struct iCelPath : public virtual iBase
{
  SCF_INTERFACE (iCelPath, 0, 0, 1);

  virtual void AddNode (iMapNode* node) = 0;
  virtual void InsertNode (size_t pos, iMapNode* node) = 0;
  virtual iMapNode* Next () = 0;
  virtual iMapNode* Previous () = 0;
  virtual bool HasNext () = 0;
  virtual bool HasPrevious () = 0;
};

struct iCelGraph : public virtual iBase
{
  SCF_INTERFACE (iCelGraph, 0, 0, 1);

  virtual void AddNode (iCelNode* node) = 0;
  virtual void AddEdge (iCelNode* from, iCelNode* to, bool state) = 0;
  virtual iCelNode* GetClosest (csVector3 position) = 0;
  virtual iCelPath* ShortestPath (iCelNode* from, iCelNode* goal) = 0;
};
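To illustrate the semantics these interfaces imply, here is a self-contained stand-in in plain C++. It is not the real SCF code: it uses an unweighted BFS where the actual plugin uses A*, edges reduce to an open/closed flag, and every name is invented for the sketch.

```cpp
#include <cstddef>
#include <queue>
#include <vector>

// Stand-in for iCelGraph: nodes are indices, AddEdge honours the edge's
// open/closed state, and ShortestPath returns the node sequence.
class MiniGraph
{
  std::vector<std::vector<size_t>> adj;
public:
  size_t AddNode () { adj.push_back ({}); return adj.size () - 1; }

  // Closed edges (state == false) are simply not traversable.
  void AddEdge (size_t from, size_t to, bool open)
  { if (open) adj[from].push_back (to); }

  // BFS shortest path; empty result means 'goal' is unreachable.
  std::vector<size_t> ShortestPath (size_t from, size_t goal) const
  {
    std::vector<long> prev (adj.size (), -1);
    std::queue<size_t> q;
    q.push (from); prev[from] = (long) from;
    while (!q.empty ())
    {
      size_t n = q.front (); q.pop ();
      if (n == goal) break;
      for (size_t s : adj[n])
        if (prev[s] < 0) { prev[s] = (long) n; q.push (s); }
    }
    std::vector<size_t> path;
    if (prev[goal] < 0) return path;
    for (size_t n = goal; n != from; n = (size_t) prev[n])
      path.insert (path.begin (), n);
    path.insert (path.begin (), from);
    return path;
  }
};
```

A caller builds the graph with AddNode()/AddEdge() and asks for a path, which mirrors how the iCelGraph interface above is meant to be driven.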
The last few days have been spent doing the grunt work for the ConvertGeometry() function. Most of this is pretty straightforward, as most of the COLLADA geometry converts directly to Crystal Space format. I have been adding functionality, debugging, adding functionality, debugging, etc... for the past 3 or 4 days. Since the function isn't complete until all of the functionality is added, I haven't performed a commit in a little while.
Right now, I am working to finish the <triangles> tag conversion. Once this is completed, I will test it and then begin implementation of triangle fans. This will then complete the polygon and triangle conversions, and I will perform normal coordinate conversion before proceeding to the ConvertTexturesShading() function.
For the near future, I intend to list functionality which needs to be implemented in each iteration. Once completed, I will then begin coding small bits of each conversion function before adding new functionality. I think this stricter iterative process will be better, since all of the conversion functionality depends on each of the other conversion functions.
I also intend to re-design some of the function names, as they're not entirely intuitive. I believe I will remove the ConvertTexturesShading() function and add two new functions in its place: ConvertMaterials() and ConvertShading(). Similarly, I will also be adding a ConvertScene() function, in order to more intuitively support conversion of items like the camera. I have also toyed with the idea of splitting the Convert() function into two separate functions, ConvertLibrary() and ConvertMap(), but I don't know if I am going to do this just yet.
For tomorrow, I plan to do the following:
Testing and debugging will continue through this weekend. I intend to make a commit Sunday night which has a completed ConvertGeometry() function, although if the schedule slips a little, this might not happen right on Sunday. ;)
It's been a busy two weeks since my last entry here. With moving out of uni accom back home, my VM holding half my SoC stuff failing for whatever reason and other random things keeping me busy I've had little coding time to get anything worth committing done. However, I've had plenty of thinking time and progress has been made. Right now, I'm updating my branch from trunk and installing MinGW here so I can get some more testing done. I'm going to quickly lay out my plan and what progress I've made in each area:
It seems that -msse etc. is required to use SIMD instructions with gcc. This is a bit of an inconvenience, but not a serious issue. The way I've decided to handle the problem is to force users (that's you) to put their SIMD code in separate .cpp files from the rest of the C++ code, as suggested by the gcc docs. All SIMD code will then have to be compiled with those compiler flags. What I need to do is make this as painless as possible, so I'm going to run configure checks (AX_CHECK_COMPILER_FLAGS()) to see if the flags are supported by the compiler, then save the results in COMPILER.CFLAGS.SIMD or something of the kind and/or COMPILER.HAS.SSE = "yes/no" etc. Next, I'll add something which allows me to specify in a Jam file the compiler flags (the ones I saved) which will be applied to a specific .cpp file. It might be an idea to put SIMD code in a subdirectory of the main folder, I think. That way we know that *.cpp will all be SIMD, which means we don't really need to worry about specific files; we can apply the flags to everything (so to the whole Jamfile). Of course, we also need to be able to #define out any code which isn't supported by the compiler (or do this in the Jamfile). Any suggestions on how to refine this idea are welcome, of course!
The next area I'm working on is that fairly important code path selector. I've decided on using a function which takes in the functions, arguments and types, and selects the correct route to take. It looks like this:
CallSIMDVersion<ReturnType, ArgumentTypes> (SIMDInstructionSetEnum, SIMDFunction, C++Function, Arguments);
This works quite nicely, but it does have some limitations which I'm working towards removing. Right now, it only supports one SIMD path and a c++ fallback. It needs to be able to take several possible SIMD paths and a fallback (MMX, SSE3, AltiVec, C++ for example).
I chose this method mainly for its lack of overhead and simplicity. Only a single call to check for capabilities is made (the results are cached), most functions internal to the check are inlined so I have few function calls, and only one check per SIMD/C++ function call is made. A small benchmark I ran showed an overhead which was too small to be measured (<1us). The SIMD code itself ran 8x faster than the C++, which is a good sign. :) I'll probably commit that test as part of the simdtest app.
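The dispatch idea above can be sketched roughly like this. It is a hedged mock-up: DetectSIMD() stands in for the real cached CPUID check, and the names and signatures are my invention, not the actual CS API.

```cpp
// Mock-up of the selector: one cached capability query, then a single
// branch per call between the SIMD path and the C++ fallback.
enum SIMDInstructionSet { SIMD_NONE = 0, SIMD_SSE = 1 };

inline SIMDInstructionSet DetectSIMD ()
{
  // Real code queries the CPU once and caches the answer; here we
  // simulate a CPU without SIMD so the fallback path is exercised.
  static const SIMDInstructionSet cached = SIMD_NONE;
  return cached;
}

// Call the SIMD routine when the CPU supports the required instruction
// set, otherwise fall back to the plain C++ implementation.
template <typename Ret, typename... Args>
Ret CallSIMDVersion (SIMDInstructionSet required,
                     Ret (*simdFunc) (Args...),
                     Ret (*cppFunc) (Args...),
                     Args... args)
{
  if (required != SIMD_NONE && DetectSIMD () >= required)
    return simdFunc (args...);
  return cppFunc (args...);
}
```

Extending this to several SIMD paths plus one fallback (MMX, SSE3, AltiVec, C++) would mean passing a small table of (instruction set, function) pairs instead of a single pair.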
Also, I've started to define some CS types to be used for SIMD work. AltiVec and SSE use different methods of declaring the __m128 (SSE :)) type, but the CS version needs to be as platform independent as possible so the user can just 'use' it and not worry about maintaining compatibility. I'll add more details on this when I've written them. :)
Finally (as far as I can think), more testing is needed. As I said earlier, I'm installing MinGW to see if my code compiles fine there. Hopefully I won't run into many problems.
If anyone could point out how to have arrow brackets without this thing spitting errors at me, that'd be cool :P
No screenshots today, sorry.
There are a few problems with the current iEditorObject abstraction which I've come across so far. I'll deal with them each and try to come up with a solution.
I was able to get vertex conversion working correctly today. As of right now, the ConvertGeometry() function does the following for each <mesh> object:
This doesn't seem like a lot, but the code for it is actually quite extensive. Currently, this only converts to a <library>. In order to convert to a <world> type file, I will need to determine how to represent portals. I believe this information will either need to be added to the COLLADA file in <extra> elements (as this is really what these are for), or perhaps there will be more information coming in future sections. Right now, my main design concern is to get the geometry (i.e. vertices, lines, polygons) working correctly before proceeding. Since the library element is a prerequisite for the <world> (e.g. the <world> element often includes library elements outside of its own file), conversion of libraries seemed the logical place to start.
Additionally, I am not quite sure how to convert splines. As far as I can tell right now, splines are not part of the Crystal Space library definition. I could be incorrect, however. It might be possible to convert the splines to a polygonal mesh, similar to what Blender allows a user to do, but this would have to wait until the basics of the conversion system are in place.
I didn't realize that I was supposed to update history.txt in the docs folder after each change I made to the code. Unfortunately, over the past 2 weeks or so, I have neglected to do this. I updated the history.txt file to give a brief overview of what I have done to date, but it is a very general overview. If anyone would like a more comprehensive annotation, please see either the SVN log or this log.
For future reference, I will be updating history.txt before each commit. I apologize for any problems this may cause, and will update the history file to add more depth if folks feel the addition I have given (June 2, I believe) isn't sufficient to describe my work over the past 2 weeks. Note that this currently only affects my SVN branch, not the CS main branch.
I spent all of today working on the ConvertGeometry() function. It's a fairly extensive task, and I anticipate it will take me through next week. Since there are a lot of details I need to keep in mind when writing the code for this function, I felt it would be best to design the algorithm for the function first, using UML activity diagrams.
The UML activity diagram for this function, as it stands right now, can be found here. Currently, I have only added functionality that will allow the vertices to be processed and converted to Crystal Space XML format. I am going to develop this function in iterations, and iteration one will be the addition of the <vertices> element of the COLLADA geometry schema. I have added code and begun implementing this function; however, since I am not finished debugging (actually, not even finished testing the first iteration), I have not yet committed to SVN.
Also, I updated the class diagram to account for changed functionality. It can be found here.
The scene browser shows all of the sectors in flarge.
Here's how it works:
A plugin implementing wrappers around all of the (useful) CS interfaces is loaded. It registers interface wrapper factories with an iInterfaceWrapperManager. This manager keeps a hash of scfInterfaceID's mapped to iInterfaceWrapperFactory's.
Another plugin listens for when the map is loaded (which the editor tells it), then creates an EditorObject for each of the sectors in the engine and adds them to the editor's object list.
EditorObject, using SCF metadata, goes through each interface implemented by the object it's wrapping and requests the iInterfaceWrapperFactory for the given interface. If there's no wrapper for that interface, it ignores it. It then instantiates that wrapper and pushes it onto an array. While it's going through, it finds an interface wrapper that has a name attribute, and stores that for future reference, when something wants to call Get/SetName. It does the same thing for a parent attribute. It also figures out the type (instance, factory, or unknown) and stores that.
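The factory lookup described above might look roughly like this. It is a simplified stand-in: the real manager keys on scfInterfaceID rather than strings, and these class shapes are invented for illustration.

```cpp
#include <string>
#include <unordered_map>

// Stand-in for iInterfaceWrapperFactory; the real one creates
// iInterfaceWrapper instances for a given interface.
struct iInterfaceWrapperFactory { };

// Stand-in for iInterfaceWrapperManager: interfaces map to factories,
// and interfaces without a registered wrapper are simply skipped.
class WrapperManager
{
  std::unordered_map<std::string, iInterfaceWrapperFactory*> table;
public:
  void Register (const std::string& iface, iInterfaceWrapperFactory* f)
  { table[iface] = f; }

  // Returns 0 when no wrapper exists, which EditorObject ignores.
  iInterfaceWrapperFactory* Find (const std::string& iface) const
  {
    auto it = table.find (iface);
    return it == table.end () ? 0 : it->second;
  }
};
```

EditorObject would walk the SCF metadata for its wrapped object, call Find() per interface, and instantiate a wrapper for every non-null result.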
After adding an object to the object list, the scene browser panel will get an Object Added event, and it will put the given object into the tree, using EditorObject to get the name and parent (for the hierarchy).
I haven't implemented putting the objects into the correct hierarchy in the Scene Browser panel yet, nor have I implemented object name editing or re-parenting via dragging. Also, scene browser should have a way of showing an icon and grouping (or sorting) the different objects.
Right now, I've only implemented an interface wrapper around iObject, since that provides the name for every engine object. But eventually, I'll need wrappers for every useful object, since I will need to know their properties for the property editor.
This was partly just a proof-of-concept right now. There are a few functions that could have a better place at the moment.
Add the rest of objects from engine on map load (other than sectors)
Clean up some of the nasty parts
Respect object hierarchy in scene browser
Implement name editing in scene browser
Before beginning work on the ConvertGeometry() design, I have begun reading the COLLADA: Sailing the gulf of 3D digital content creation book, in order to better acquaint myself with the esoteric aspects of COLLADA. I have found that the book is quite political, with a large amount of unnecessary historical background and self-justification on the part of the authors. While this is a harsh criticism, I do find that the information in the book is extensive and extraordinarily relevant to the project at hand.
In reading the book, and gaining insight of the COLLADA format from the authors' perspectives, I have come across an interesting design question: Is it necessary to include the <asset> tag in the conversion process from COLLADA to CS? The asset tag seems designed to give credit to the author of the 3D media and to represent what type of software/hardware configuration the media was designed on, as well as for. The dilemma occurs when we consider that there is no symmetric place for this information in the Crystal Space map/library files.
I submit that this information is not necessary to include in the Crystal Space map/library file for two reasons:
Given these two reasons, I will proceed with the intention that the asset tags can be completely removed, and not converted to Crystal Space format. If this is not an acceptable conclusion, I believe the Crystal Space file format would need to be changed. However, I think changing the file format to include this information would be unnecessary and, in fact, wasteful of time and space complexity. After all, the more tags that the engine needs to parse during rendering, the longer it will take per frame, and information about the author isn't going to affect how the frame looks in the end, right?
I will be working tomorrow on design schematics for the ConvertGeometry() operation, and hope to have final versions ready for the design log tomorrow.
I decided to implement a Write() function for the output. Since the CS file being created is more than likely going to be written out to a file rather than used on the fly, it seems appropriate to short-circuit the inevitable retrieval of the iDocument followed by using iDocument's Write() function to write it to file. This doesn't seem to me like a thing that should be left to the application. It's not necessary. Ideally, once the conversion process is finished, it should be written to a file, if desired.
The problem I see with this is that it places the conversion pipeline under the control of the user. It would be possible, for instance, for the user to do the following:
csRef<iColladaConvertor> colCon = csQueryRegistry<iColladaConvertor> (GetObjectRegistry());
// the idea is that colCon->Convert() would go here
Thus, it is possible that the user could corrupt the pipeline, due to either a simple error or possibly a deliberate attempt to mess it up (why, I have no idea... but the chance is there). At the moment, I have two boolean values indicating whether or not the COLLADA and CS files, respectively, are ready, but they are set to true upon creation of the iDocument which represents these files (i.e. during the Load() functions). I could create additional boolean values, but this seems like somewhat of a hack. It's possible that the user would expect Convert() to be called upon Load() of a COLLADA file, but what happens if a user only wanted to convert, say, geometry? Then the Load() function would go through unnecessary conversion routines, only to be converting a smaller subset. Thus, work is wasted (and this process could take a non-trivial amount of time, so it's worth considering).
Another problem becomes apparent in the code above. The Load() function doesn't really do anything with the path if the type of file being loaded is CS_MAP_FILE or CS_LIBRARY_FILE. This is because Load() assumes that these files are new files (since they are being converted from the COLLADA format), and so chooses to create a new iDocument object to represent them, placing either a <world> or <library> tag, respectively, inside the document. Thus, a path isn't needed. It's somewhat counter-intuitive (from my perspective) to provide a path upon load, but then have to re-provide a path (more than likely the same path) when you choose to Write(). I could simply save the path from the Load() function, but then what if the user wanted to convert from a collada file on-the-fly, and simply have it in memory? Then, the loaded path is wasted. This isn't a major concern, but I am trying to take all possibilities into account.
I am still wrestling with segfaults from the Write() function, but I hope to have them debugged by later today. I will keep the log apprised of my advancements.
Today was quite productive. I was able to set up all of the Load() functionality for loading a COLLADA file. I also added some functions for debugging, specifically for reporting debug messages. It doesn't seem like a lot, but slowly, the functions are beginning to come together. Once I am able to get past this housekeeping stuff, I will be able to begin programming the actual COLLADA conversion functionality.
I want to be sure that all of the necessary checks are in place to determine if the file actually is a COLLADA file, and that it obeys certain constraints, before beginning the conversion procedures. It would be a huge frustration if I were to get knee-deep into the Convert() or ConvertGeometry() functionality and then find out that bugs are apparent in the Load() functions. Thus, the remainder of the day is going to be spent testing and updating the Load() functions, with possibly some time devoted to adding additional housekeeping functions.
I will keep the blog updated as I modify the code.
Cohesion, Separation and Direction Matching (this name will probably change to Velocity Matching) options are available now by pressing the '3', '4', and '5' keys in the steer application. Cohesion pushes the entity towards the center of mass of a group of entities, Separation pushes the entity away from it, and Direction Matching pushes it in the same direction the group is heading.
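A minimal sketch of the three forces follows. This is stand-in math only: csVector3 is replaced by a tiny struct, no weighting or normalization is applied, and the group is assumed non-empty.

```cpp
#include <vector>

// Tiny stand-in for csVector3.
struct V3 { float x, y, z; };
static V3 operator+ (V3 a, V3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static V3 operator- (V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Cohesion: steer toward the group's center of mass.
V3 Cohesion (V3 self, const std::vector<V3>& group)
{
  V3 c { 0, 0, 0 };
  for (V3 p : group) c = c + p;
  float n = (float) group.size ();
  c = { c.x / n, c.y / n, c.z / n };
  return c - self;
}

// Separation: the opposite push, away from the center of mass.
V3 Separation (V3 self, const std::vector<V3>& group)
{
  V3 toward = Cohesion (self, group);
  return { -toward.x, -toward.y, -toward.z };
}

// Direction matching: adopt the group's average velocity.
V3 DirectionMatching (const std::vector<V3>& velocities)
{
  V3 avg { 0, 0, 0 };
  for (V3 v : velocities) avg = avg + v;
  float n = (float) velocities.size ();
  return { avg.x / n, avg.y / n, avg.z / n };
}
```

In a real property class these three vectors would be weighted and blended into a single steering force each frame.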
Well that took somewhat longer than expected, but I ticked off everything from my previous list except for starting on the SceneBrowserPanel. The File->Open only opens flarge right now, since I've yet to make an open dialog.
I also implemented a custom statusbar which has a progress bar. iEngine::Prepare takes an iProgressMeter* as a parameter, so I gave it an implementation which updates the statusbar with the description and the progress of the Prepare. Prepare reports on lighting. Unfortunately, engine lighting will soon be removed, but I still believe that the iProgressMeter implementation will be useful. After all, it is generic enough to be used in other parts of CS, for example the loader/saver, or even internally in the editor, such as for terrain generation (if someone implements a plugin to do this). I have yet to expose the statusbar in the iEditor interface, but I think this would be a good idea in some form, so that plugins can keep the user abreast of what is going on. Whether they need to show the progress gauge or not, they can still benefit from showing descriptive status text.
Other than that, I'm getting a segmentation fault at program exit, so I'll have to investigate this a bit.
I was able to accomplish a lot today, although it doesn't feel like it. I spent most of the morning (and afternoon) working out how to get the dll generated by the plgcolladaconvertor project to actually load. Thanks to res2k, Rolenun_, and iceeey, I was successful. I didn't realize that the MSVC project files are automatically generated, and that if I create them manually, the needed resource file for embedding the .csplugin file into the dll on compilation is not present. After figuring this out, and fighting to get perl installed so I could do a 'jam msvcgen', I finally was able to get the project files in a reasonable state so I could build everything.
Once this was completed, I ran into another snag: the plugin was having problems initializing. Initially, I learned (again, thanks to res2k) that I had been trying to load the plugin crystalspace.utilities.colladacoverter, as opposed to crystalspace.utilities.colladaconvertor. This simple spelling mistake cost me roughly an hour to find, and I probably would still be at it if it hadn't been for res. After a number of trial runs, and looking at bugplug.cpp, I was able to determine that the problem was in the Initialize() function returning false.
So, finally, after a lot of work, I have gotten the plugin to load successfully. I also implemented a Report() function, similar to the one found in bugplug.cpp, which takes a variable number of arguments. It's a private function, so it can't be accessed outside of the csColladaConvertor class, but it will be useful to report what's going on while I debug things.
On a more personal note, I ordered the COLLADA book today (Sailing the Seas of Digital Concept Creation, or something equally exotically titled). I have heard it's a good resource, and while the specification is detailed enough for me to work off of, any additional documentation always comes in handy ;). The weird thing is, Amazon tells me that they won't be able to ship it for another few days, and they expect the arrival date won't be until June 18.
That's all for today. It was quite a day. :)
It's been a while since my last entry, so I'll quickly update on what I've done.
Right now, basic runtime detection for Windows, x86 Linux and PPC is done. I've changed the original plan for that quite significantly: now I have a base class with inline bool HasMMX() type functions and bool hasMMX; type vars. I've used a template on that, so I can pass the correct platform-specific class to it when creating an object instance, and then I use another class as an access point for the outside world which has its own Has*() functions (which call the specific equivalent in the base class).
When a check for one instruction set is done, checks for all of them are done and a bitmask is returned. Then the correct instruction is fetched from this result.
So a check for MMX on windows would do something like this:
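In rough outline, the scheme might look like this. This is a simulated sketch: the capability mask is hard-coded where the real code would run CPUID once, and all names are invented.

```cpp
// One query gathers every capability into a bitmask; each Has*()
// function then just tests its bit against the cached result.
enum
{
  SIMD_CAP_MMX  = 1 << 0,
  SIMD_CAP_SSE  = 1 << 1,
  SIMD_CAP_SSE2 = 1 << 2
};

inline unsigned CheckSupportedInstructionSets ()
{
  // Real code executes CPUID and translates the feature flags into
  // this mask; we assume an MMX+SSE capable CPU for illustration.
  static const unsigned cached = SIMD_CAP_MMX | SIMD_CAP_SSE;
  return cached;
}

inline bool HasMMX ()
{ return (CheckSupportedInstructionSets () & SIMD_CAP_MMX) != 0; }

inline bool HasSSE2 ()
{ return (CheckSupportedInstructionSets () & SIMD_CAP_SSE2) != 0; }
```

Adding a new check later really is just a new enum bit plus a one-line Has*() function, which is the extensibility the design is after.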
I think this is quite a nice solution. It allows us to easily add new checks in the future.
While writing some configure checks for xmmintrin.h and __m128, I ran into a thorny problem. GCC requires -msse to be enabled for me to access the builtin intrinsic functions. However, -msse also tells the compiler to optimize non-floating-point code with SSE instructions :) To quote from the GCC manual:
"These options will enable GCC to use these extended instructions in generated code, even without -mfpmath=sse. Applications which perform runtime CPU detection must compile separate files for each supported architecture, using the appropriate flags. In particular, the file containing the CPU detection code should be compiled without these options."
To me, this is not a great option. I'm not sure why the GCC devs decided to force compiler optimizations upon us if we want to use intrinsics at all, but that's the way it is... maybe. I'm going to experiment on defining what the xmmintrin.h header requires to be defined.. maybe that will work. If not then we'll have to try what the manual suggests, making each file which uses intrinsics compile with the required flags. The third option is to say "screw this" and write my own versions of the intrinsics using asm. I'll still need to use the builtin stuff for x86_64, but that's okay because -msse and crew are defined by default on that platform. My hope is that I can trick the headers that all is good without giving the compiler an 'okay' to optimize.
Once a solution for this is done, I need to work out a code path for using these optimizations. Right now I'm favouring either using templates along with my own functions, or having a function like blah(SIMDcode, C++Code, arg1, arg2, argn); I haven't decided. Obviously I need to keep the overhead and code duplication down to a minimum. More on this later.
I've made some advances in pcsteer. Collision avoidance is still not working; I've tried many things to solve it, but I am having problems with csSectorHitBeamResult. I am using it to detect the nearest collision, but most of the time it fails to detect collisions even when the entity is right in front of a wall =S. Well, if any of you knows how to fix this, please tell me =P.
However, I added the Pursue behaviour, it can be used in the steering application by pressing the 'p' key.
The npc will then pursue the player.
Pursue is different from Seek in:
1. It continues updating the player's position until Interrupt() or StopMovement() is called.
2. It predicts a target future position: target_position += velocity*prediction
So Pursue can be used whenever you want an npc to follow a moving target and seek could be used if you want the npc to look for an object or any other non-moving targets.
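That prediction step can be sketched in a few lines (stand-in types only; the real property class works with csVector3 and CEL's movement API):

```cpp
// Pursue seeks the target's extrapolated position rather than its
// current one: target_position += velocity * prediction.
struct Vec { float x, y, z; };

Vec PredictTarget (Vec position, Vec velocity, float prediction)
{
  return { position.x + velocity.x * prediction,
           position.y + velocity.y * prediction,
           position.z + velocity.z * prediction };
}
```

With prediction set to zero, Pursue degenerates into Seek against a moving target.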
Well, thats all for now =-)
Today I got the plugin loading and CS initialization code working. So I implemented the CS 3D view panel in a new plugin which is loaded by searching for any plugins under the 'crystalspace.editor.plugin.' hierarchy.
I didn't have time to make the map loading menu item yet, so I just had it load flarge to test the view functionality.
Move main frame stuff to separate class
Add File->Open menu item
Add world load/save listener
Make CS view resize in response to frame resize
Start work on scene browser panel
I performed the first basic compilations of both the plugin and the application, and they both went without any problems. I am in the process of writing some of the preliminary documentation in the header files (Doxygen style documents).
I have begun writing the Load() functions in the plugin library. I realized this morning that there could be a slight problem: the plugin requires TinyXML, as it needs to be able to write to the iDocument objects. I am wondering if this will be problematic for a user who wishes to use XMLRead, since XMLRead is faster. In order to overcome this, I have the document system used by my plugin specifically initialize a csTinyDocumentSystem as follows:
bool csColladaConvertor::Initialize (iObjectRegistry* reg)
{
  obj_reg = reg;

  // create our own document system, since we will be reading and
  // writing to the XML files
  docSys = new csTinyDocumentSystem ();

  return true;
}
I am hoping that this will allow me to utilize the TinyXML document system without interfering with a user's document system of choice, when they initialize the iColladaConvertor plugin.
So I actually got started implementing the editor the past few days. The current state is as follows.
But you wouldn't know from the screenshot that the panel manager and the panels themselves are potentially pluggable. I say potentially, because I haven't written the CS initialization code yet, including plugin loading of iPanels. Although, I don't expect to ever make the panel manager into a plugin.
In fact, most things in the editor will be pluggable, meaning you could implement them in a plugin. But to avoid making a plugin implementation for each single SCF interface in the editor, I'm going to include all of the core managers in the editor executable itself. But the core panels, tools, actions, and CS objects will be implemented in a single plugin to serve as an example and also to use consistent code for instantiating these components.
thebolt made a nice addition to SCF recently to provide access to interface metadata. This should make discovering all of the interfaces an object implements much faster, since I no longer have to iterate over every possible interface. Instead, I can keep a hash table of interface names mapped to iInterfaceWrapperFactory objects. So to instantiate the correct iInterfaceWrapper objects for a particular iBase* object, I can simply perform lookups in the table. Thanks, Marten.
Panels now also specify a default dock position so that they are laid out in an appropriate way. You can always move the panes around, though (except for the center pane). It would be nice to have a menu for saving perspectives, that is, view configurations, but I won't worry about that yet.
Implement plugin loading code
Implement CS initialization code
Implement the main CS 3d panel
Move wx event pump to CS 3d panel
Implement a map loading menu item
Start work on the scene manager and editor object stuff
Today I spent most of the day setting up the skeletal structure of the conversion system, and polishing things up from yesterday. I also added some Doxygen documentation to both collada.h and csColladaConvertor.h. None of the implementation (save for the basic initialization stuff) has been added yet, but I will begin implementing ConvertGeometry() tomorrow.
If you would like to have a look at the work I have done thus far, please feel free to download it using subversion:
I've been working on Steering Behaviours these last weeks, and I will continue working on it for about two more weeks, I think.
The steering property class is located at plugins/propclass/steer/
Right now it supports seeking (with and without arrival checking) and fleeing.
I have already started collision avoidance, but it is not fully working yet (I hope I'll find out why tomorrow =P).
I've also created a Steering application which is located at apps/tutorial/steering/
All of the above is located at: https://cel.svn.sourceforge.net/svnroot/cel/cel/branches/soc2007/ai
To execute the steering application you only need to run ./steering (./steering -relight the first time).
The steering application is based on walktut; it is actually an upgrade of walktut. It has the same entities, plus one NPC which is able to perform any action included in pcsteer.
Here is a little tutorial on how to test it:
1 - Activates arrival checking
2 - Activates collision avoidance (this is not working right now)
s - Seek (the NPC will run in the player's direction)
f - Flee (the NPC will run in the opposite direction from the player)
This is all for now. I hope some of you have time to download and test this application; the idea is that I can get feedback in time to make any changes =)
Does it happen to you too that at life threatening moments you forget about those small crucial details, like a chain link fence makes horrible cover from a spray of lethal bullets?
That was what I was wondering when I jumped that fence while someone was firing at me with a 9 mill automatic weapon. It was at about the same time I realised a leather jacket and jeans are no good as body armor.
I screamed like a 15-year old girl and rolled over the ground; blood was gushing out of my leg and shoulder.
My armed opponent walked past the fence, reached into my jacket and pulled out my gun. He mumbled something between his teeth, though I couldn't quite make out the language. Still, I'm sure it was an insult directed at me. Actually, I'm certain it was an insult, as he kicked me in the ribs when he said it. I rolled over in pain, pondering which part of my body hurt the most so I could apply pressure to it next. This turned out to be my head, throbbing with pain that made the gaping holes in the rest of my body feel like papercuts in comparison.
Having lived on only strong liquor for the past few months had had a devastating effect on my health. What was I supposed to do, though? You can't go around shooting people while eating Subway sandwiches.
At the request of Eric Sunshine, I moved the COLLADA SCF interface file to reside in the ivaria directory. The new information is as follows:
COLLADA Conversion Library: SCF Interface Header File: /include/ivaria/collada.h
On an administrative note, I realize that placing the date in the title of my log entries is redundant, since the blog system does this automatically. Thus, I am not going to do this anymore. You may think that my titles lack creativity, but I prefer to organize my thoughts this way, rather than continually searching through titles which may or may not have relevance to the actual text of the entry.
Anyway, I have updated the class diagram for the COLLADA conversion library. The changes reflect a design decision to include the different conversion steps as functions in a single class, rather than spreading them across multiple classes. I have no idea why I originally designed it with multiple conversion classes; it really doesn't make any sense. This new approach is cleaner. The diagram is shown below.
I am still debating whether or not to make these functions private. In reality, they will probably only be needed internally by the class, but I hesitate to make them private in the interest of supporting applications which may only need to convert COLLADA geometry, or animation, etc... Thus, for now, they will be left as publicly accessible functions, although this may change in the future, depending on input from the community.
I have spent this morning creating Visual Studio projects and determining where in the CS codebase the COLLADA stuff will be located. I have entered most of my skeleton files (placeholders, really) into the CS codebase in the following locations (note that these are in my specific branch of SVN, not the trunk):
COLLADA Conversion Library:
* SCF Interface: /include/iutil/collada.h
* SCF Implementation: /plugins/colladaconvert/csColladaConvert.h and /plugins/colladaconvert/csColladaConvert.cpp
* Project File: /mk/msvc71/plgcollada.vcprj
COLLADA Conversion Console Application:
* Header File: /apps/colladaconvertor/appcolladaconvert.h
* Source File: /apps/colladaconvertor/appcolladaconvert.cpp
* Project File: /mk/msvc71/appcolladaconvert.vcprj
I wasn't absolutely sure whether the SCF interface file should go into iutil or not, although it seemed like a good choice. It can easily be moved, however, if other people feel it should be placed elsewhere in the codebase. I think that the other choices I made for locations were good, but feel free to comment on this if you feel it should be located elsewhere.
Also, it should be noted that these are simply skeleton files at the moment. They have some basic definitions and such, but are, for the most part, empty of functionality. Since I am still somewhat in the design phase, these will be populated with implementation as things progress. I guarantee that they compile on my machine (a Windows machine, hence the reason I am only populating the MSVC project files at the moment) each time I commit, but it is possible that they may not compile, or may have some difficulties, on other platforms. I will fix compilation on as many platforms as I can by the time the project is finished. In the meantime, if you have a specific concern about compilation on a platform other than Windows while I am developing, please feel free to comment on this blog, and I will see what I can do to accommodate any requests.
I finished the basic SCF interface for the Collada Conversion Library. It looks like the following:
struct iColladaConvertor : public virtual iBase
{
  SCF_INTERFACE(iColladaConvertor, 1, 0, 0);

  virtual const char* Load(const char* str, csColladaFileType typeEnum) = 0;
  virtual const char* Load(iString* str, csColladaFileType typeEnum) = 0;
  virtual const char* Load(iFile* file, csColladaFileType typeEnum) = 0;
  virtual const char* Load(iDataBuffer* db, csColladaFileType typeEnum) = 0;

  virtual const char* Convert() = 0;
  virtual bool ConvertGeometry(iDocumentNode* geometrySection) = 0;
  virtual bool ConvertLighting(iDocumentNode* lightingSection) = 0;
  virtual bool ConvertTextureShading(iDocumentNode* textureSection) = 0;
  virtual bool ConvertRiggingAnimation(iDocumentNode* riggingSection) = 0;
  virtual bool ConvertPhysics(iDocumentNode* physicsSection) = 0;
};
An implementation has been started, but there are a few bugs yet to work out. In particular, I haven't yet decided whether I want a specific Write() function. If I include one, it will force a write to disk (i.e. allow a user of the library to immediately write converted files to the hard drive). I'm not sure I want this, because the library should essentially do the conversion in memory and then give the results back to the client; it is the client's job to decide whether to write the results to disk or use them on the fly. On the other hand, adding a Write() function might encapsulate things better, making the library a self-contained COLLADA conversion system.
This is something I will have to discuss and ponder over the next few days.
I have been able to get the iDocument stuff working, which I was having trouble with previously. It turns out it was merely a situation where I was confusing an iDocument* pointer with a csRef<iDocument> variable. Once found, this error was easy to correct.
I have been working on developing use cases for the COLLADA conversion, as I feel this will help me better understand exactly what needs to be done during each conversion step. Additionally, I modified the class diagram to contain a single class, iColladaConvertor (along with an implementation), with each individual conversion step as a separate function rather than a separate class. I am not sure why I originally designed it with multiple classes, but that definitely seems like overkill, and would cause the system to become bloated and possibly slow (not to mention a pain for memory management). The revised class diagram is shown below.
The documentation for the use cases can be downloaded from here. I created it using a program called Use Case Maker, which is actually pretty elegant. I don't know what some of the items are, though, so there is a fair amount of useless information; sorry to those of you who get frustrated with that kind of thing, but I don't know how to turn it off. Anyway, the information which is valid right now is mainly the convert-geometry use case, as that is the one I will be working on first. I've started the implementation of this particular function, and I will be updating this log on its progress as the day continues.