Monthly Archives: May 2007


So no time really spent lately on coding for Black Engine, what with moving, unpacking, cleaning, and starting a new job. But I have been doing a lot of research.

I printed out the entire ODE manual and read through it cover to cover, then went back and read some more. Then I got on some forums and talked and read some more. I have a pretty good grip on how I will implement ODE now.

But something that seemed like a small caveat that could be easily solved has blossomed into a scarily large issue.

The problem is simulation time vs. real time. This is a problem I faced down years ago, and it's simple: my computer runs faster than yours (probably not really, but you get the idea). So if you hold down W to move forward, what do I do? Well, in the simplest form, I just add 10 to your forward velocity.

So we read input every frame. I run at 30 frames per second (FPS), which means if I hold the W key for 1 second I have a velocity of 300 ( 30 * 10 ) (assuming my velocity accumulates and there is no friction). Now you run it on your Commodore 64 (Jesus man, upgrade already) and only get 10 FPS. This means you only have an accumulated velocity of 100 ( 10 * 10 ) by the end of 1 second. As you can see, this is a rather large problem.
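The arithmetic above can be sketched as a tiny loop (a minimal sketch: the per-frame boost of 10 is the example's made-up constant, and frames are assumed to be perfectly regular):

```cpp
// Sketch of the frame-rate-dependent velocity bug: each frame adds a
// fixed boost, so the total depends on how many frames you render.
double accumulate(int fps, double perFrameBoost, double seconds) {
    double velocity = 0.0;
    int frames = static_cast<int>(fps * seconds);
    for (int i = 0; i < frames; ++i)
        velocity += perFrameBoost; // same boost no matter how long the frame took
    return velocity;
}
```

At 30 FPS this gives 300 after one second; at 10 FPS, only 100 — exactly the mismatch described above.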

So when I first encountered this problem like 2 years ago, I wrote a timer class which times how long it takes to render each frame and stores it as deltaTime. Everything that is time sensitive then uses the value of deltaTime as a scaling factor. So the amount of velocity to add each frame is computed as follows:
currentVelocity = velocityConstant * deltaTime;
totalVelocity += currentVelocity;

This yielded fairly stable results except my custom timing class was *too* accurate, and it produced different results on different computers. Finally I moved to a cross-platform timer with a more coarse resolution and things were perfectly stable. Thus the simulation time is determined by the deltaTime and the real time is determined by the frame rate.
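Completing those two lines into a runnable sketch (assuming, for illustration only, that deltaTime is simply the fixed frame duration 1/FPS; in the engine it comes from the timer class):

```cpp
// deltaTime-scaled accumulation: the boost is scaled by how long the
// frame took, so the total depends on elapsed time, not frame count.
double accumulateScaled(int fps, double velocityConstant, double seconds) {
    double velocity = 0.0;
    double deltaTime = 1.0 / fps;                 // seconds per frame
    int frames = static_cast<int>(fps * seconds);
    for (int i = 0; i < frames; ++i)
        velocity += velocityConstant * deltaTime; // currentVelocity each frame
    return velocity;
}
```

Now 30 FPS and 10 FPS both accumulate the same velocity of 10 after one second (up to floating point error).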

So that's the history of it. Since things were all handled internally, synchronously, and most importantly, deterministically with the coarse-grain timer, there was no problem. The problem comes up when non-deterministic elements are introduced. The first of these elements is the physics engine. It is a non-deterministic physics engine which uses calculus (numerical integration) to advance between “simulation steps”. Now I *do* have simulation steps currently: each frame is a simulation step, and since we use deltaTime, our simulation steps are of variable width or size. This does not work well with the calculus the physics engine uses. The physics engine wants fixed-size steps.
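A common way to reconcile a variable frame rate with the fixed-size steps the physics engine wants is an accumulator loop (a sketch under assumed names, not Black Engine's actual code; the physics step call is left as a comment):

```cpp
// Fixed-timestep loop sketch: real frame times are banked in an
// accumulator, and the physics engine is stepped in constant-width
// slices, carrying any remainder over to the next frame.
int runFixedSteps(const double* frameTimes, int numFrames, double step) {
    double accumulator = 0.0;
    int stepsTaken = 0;
    for (int i = 0; i < numFrames; ++i) {
        accumulator += frameTimes[i];   // real time elapsed this frame
        while (accumulator >= step) {
            // physics.step(step);      // hypothetical fixed-size step
            accumulator -= step;
            ++stepsTaken;
        }
    }
    return stepsTaken;
}
```

The leftover fraction sitting in the accumulator between frames is the gap that interpolation between physics states can later smooth over.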

After looking into that, I started wandering in my research merely out of boredom and began looking into networking again (which I hadn't planned on touching for a very long time), and realized a lot of my questions about how the server would remain authoritative in a real-time environment were solved by fixed-size simulation steps.

Now this doesn’t sound like a huge deal, converting from variable-size to fixed-size steps, but the implications are huge, absolutely massive. First, it only makes sense to implement them in a client/server model if a move to that model is ever planned; otherwise you’d essentially be rewriting everything later when you wanted to switch.

Second, you now introduce a mind-boggling number of issues intrinsically related to both the move to fixed-size steps and the change in how the client/server model should communicate, and how the client should work as a whole.

See, one issue is this: like I said, ODE (and many others) is non-deterministic, so it suffers from floating point rounding errors which vary from one computer to another (much like my original fine-grain timer). The server must be authoritative, but to reduce bandwidth you might want to send only updates, like “player X wants to move with force Y”. That's a small update. Then the client-side physics engine takes that data and computes the new position based on it as well as all the other world data that affects player X (gravity, friction, collision detection, the list goes on). Problem is, my uber awesome computer will compute that to a high accuracy, but your Commodore 64 will make several rounding errors due to, among other things, hardware limitations. Now we both get the next update: player X wants to move with force Z. My player X is at a slightly different location than your player X due to the rounding errors, and now they both move again in another direction, and again yours loses some precision and moves on a slightly different vector than mine. As you can see, these errors compound, and over time the 2 clients become totally out of sync with each other and even with the server world (which is the “true” world).
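To make the compounding concrete, here's a toy illustration (not engine code): the same nominal motion integrated at float precision on one “machine” and double precision on another drifts measurably apart over many steps.

```cpp
// Toy illustration of client drift: identical updates integrated in
// float vs. double accumulate different rounding errors each step.
double driftAfter(int steps) {
    double posDouble = 0.0;
    float  posFloat  = 0.0f;
    const double dt = 1.0 / 60.0;  // assumed step width
    for (int i = 0; i < steps; ++i) {
        posDouble += 0.1 * dt;                       // "accurate" client
        posFloat  += 0.1f * static_cast<float>(dt);  // "lossy" client
    }
    return posDouble - static_cast<double>(posFloat); // divergence
}
```

After one step the difference is negligible; after a hundred thousand steps the two positions visibly disagree, which is why the server has to stay authoritative.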

This could be fixed with sync (synchronization) packets which contain the full data on an object: location, orientation, velocity, friction, and every other world force. But that's a huge amount of data to send to every player, every step!

Another problem is this: as you all know, the internet is unreliable (damn you Al Gore!). The speed of the connection will vary, packets will be lost, etc. So what does a client do between update and sync packets? Just sit there? You would experience very stuttered, non-physical behavior if that were the case, even if the connection were decent.

The solution is to do client-side extrapolation or interpolation between step updates. Interpolation is the more favored method, BUT! You run into the same problem as with the physics engine and update packets: client-side floating point inaccuracies! And either resolution method still requires a very different structure for even the game loop!
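The interpolation half of that can be sketched in one line (illustrative names; a real client would blend full position and orientation state between the last two received snapshots, rendering slightly in the past):

```cpp
// Linear interpolation between two known states: t is the fraction of
// the snapshot interval that has elapsed, in [0, 1].
double interpolate(double previous, double next, double t) {
    return previous + (next - previous) * t;
}
```

Rendering a little behind the newest snapshot guarantees there is always a `next` state to blend toward, at the cost of a little added latency.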

In short (wwwaaaayyyyyy too late for that), no matter what I choose, I want to eventually make this a networked engine. The advantages provided by an authoritative server for physics (as well as many other aspects) are undeniable, so I have to go back and refactor to a client/server model now rather than later 🙁

This is going to be a huge task… Not as big as the change from v.5 to v.6, but still huge. And it’s going to take a lot more research and planning to get it to work.


WOW. So I conquered two HUGE and extremely long-standing problems in the past 2 days.

The first is, I FINALLY got the ATI drivers working for my Radeon 9800 Pro under Linux ( Ubuntu ), which all in all is pretty damn important considering I primarily use Linux for developing a 3D graphics engine. So now I'm no longer stuck at about 30 FPS in Linux 😉

Let me tell you, it was a pretty damn good feeling seeing:
“Direct rendering: Yes”
after typing:
glxinfo | grep rendering
so, SO, SSSOOOO many times and seeing “No”.

Anyway, with that finally set, I went back to debugging my memory corruption bug and, lo and behold, I KILLED IT! You can now load a level, then load another one, and it will clean up the memory and get it going again.

There are still quite a few memory leaks to track down, but with Valgrind, my new best friend, that won't take very long!

So looking forward. The big news is, after very long deliberation I have decided to integrate 3 new 3rd party libraries.

The first is the Configurable Math Library (CML). I’m going to retire my custom Math Lib to get a more standardized and unit tested base math library. It should be faster as well.

The second is the Open Dynamics Engine. It is an open source physics engine that looks really sweet. It will handle movement, collision, all that fun stuff, and should allow for some REALLY cool stuff in the future. Doing all that physics math isn't the point of this project: I'm making a game engine, not a physics engine; they are two totally separate projects these days.

The third is the irrKlang 3D sound engine. This is absolutely the sweetest library I've worked with in a long time. It is just freaking perfect and has influenced certain design ideologies I would like to implement as I go along.

At some point I will be integrating a networking library as well; I have my eye on the networking layer that GarageGames puts out. I forget the name at the moment, but it looks sweet.

So probably the next thing will be to clean up some of the more egregious memory leaks ( my poor, poor texture manager not deleting properly 🙁 ). Then research ODE and see how I will integrate it into Black Engine. Once I have a good handle on that, I will integrate the CML, then ODE.

Hopefully during this time Naxos will be working on the model loading and rendering!

Well, I've said it before and I'll say it again, but I mean it more than ever this time: stay tuned, because there's going to be some very neat things to come!!

– Adam


It may be that I’m running on only 2 hours of sleep but debugging is especially frustrating to me today.

Anyway, a few days ago I solved several more big bugs, which allowed me to load, unload, and load another level several times before it crashed.

Then I ran it through Valgrind again to identify more trouble spots. More spots found, and fixed today. Pretty much the last memory corruption bug now has to do with the Camera class deleting its Scene Graph Nodes. It is a bit of a special case, so it makes sense it is the one causing problems. But I have been over it and over it, fixing little logic errors associated with very obscure code paths along the way, and I can't find a single thing wrong with it now. It's very frustrating.

Once that is solved ( some day ) there is quite a bit of memory leakage happening. Mostly obvious stuff. Part of my texture memory, a temp buffer during creation, isn’t being deleted. It should be pretty quick to get most of that under control.

A few steps forward. But so much has been revealed to me during this debugging that is necessary to make the code base clean. I’m going to be spending a lot of time cleaning and refactoring…

– Adam