So no time really spent lately on coding for Black Engine, what with moving, unpacking, cleaning, and starting a new job. But I have been doing a lot of research.
I printed out the entire ODE manual and read through it cover to cover, then went back and read some more. Then I got on some forums and talked and read some more. I have a pretty good grip on how I will implement ODE now.
But something that seemed like a small caveat that could be easily solved has blossomed into a scarily large issue.
The problem is simulation time vs. real time, something I faced down years ago. The problem is simple: my computer runs faster than yours (probably not really, but you get the idea). So if you hold down W to move forward, what do I do? Well, in the simplest form, I just add 10 to your forward velocity.
So we read input every frame. I run at 30 frames per second (FPS), which means if I hold the W key for 1 second I have a velocity of 300 (30 * 10), assuming my velocity accumulates and there is no friction. Now you run it on your Commodore 64 (Jesus man, upgrade already) and only get 10 FPS, which means you only have an accumulated velocity of 100 (10 * 10) by the end of 1 second. As you can see, this is a rather large problem.
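That 30 FPS vs. 10 FPS arithmetic can be sketched directly (a toy C++ sketch of the naive approach, not actual engine code):

```cpp
#include <cassert>

// Naive per-frame velocity accumulation: add a fixed amount every frame.
// The total after one second depends entirely on how many frames were rendered,
// so a faster machine moves faster -- the exact bug described above.
double naiveVelocityAfterOneSecond(int framesPerSecond, double addPerFrame) {
    double velocity = 0.0;
    for (int frame = 0; frame < framesPerSecond; ++frame)
        velocity += addPerFrame;  // no friction, velocity just accumulates
    return velocity;
}
```

At 30 FPS this yields 300, at 10 FPS only 100, for the exact same one second of held input.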
So when I first encountered this problem like 2 years ago, I wrote a timer class which measures how long each frame takes to render and stores it as deltaTime. Everything that is time sensitive then uses deltaTime as a scaling factor, so the amount of velocity to add each frame is computed as follows:
currentVelocity = velocityConstant * deltaTime;
totalVelocity += currentVelocity;
This yielded fairly stable results, except my custom timing class was *too* accurate: its high-resolution deltaTime values varied slightly from machine to machine, producing different results on different computers. Finally I moved to a cross-platform timer with a coarser resolution and things were perfectly stable. Thus the simulation time is determined by deltaTime, and the real time is determined by the frame rate.
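That deltaTime scaling can be sketched end-to-end (a toy C++ version, not my actual timer class): with velocityConstant expressed in units-per-second, one second of held input produces the same total velocity at any frame rate.

```cpp
#include <cassert>
#include <cmath>

// deltaTime-scaled accumulation: velocityConstant is per-second, and each
// frame contributes only its slice of that second. The frame rate cancels out.
double scaledVelocityAfterOneSecond(int framesPerSecond, double velocityConstant) {
    double deltaTime = 1.0 / framesPerSecond;  // seconds consumed by one frame
    double totalVelocity = 0.0;
    for (int frame = 0; frame < framesPerSecond; ++frame) {
        double currentVelocity = velocityConstant * deltaTime;
        totalVelocity += currentVelocity;
    }
    return totalVelocity;
}
```

Now 30 FPS and 10 FPS both land on (roughly) the same total, which is exactly what the scaling factor is for.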
So that's the history of it. Since things were handled all internally, synchronously, and, most importantly, deterministically with the coarse-grained timer, there was no problem. The problem comes up when non-deterministic elements are introduced. The first of these elements is the physics engine. ODE is non-deterministic and uses calculus (numerical integration) to advance the world between "simulation steps". Now I *do* have simulation steps currently: each frame is a simulation step, and since we use deltaTime, we can say that our simulation steps are of variable width or size. This does not work well with the integration the physics engine performs. The integrator wants fixed-size steps.
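The usual way to reconcile variable frame times with the fixed-size steps the integrator wants is an accumulator loop. This is a sketch of that pattern (the names are my own, and stepPhysics stands in for a real call like ODE's dWorldStep): real frame time piles up in an accumulator, and the physics engine is only ever stepped in whole fixed-size slices.

```cpp
#include <cassert>

// Fixed-step accumulator: render frames may take any amount of real time,
// but physics advances only in FIXED_DT-sized steps. Leftover time carries
// over to the next frame instead of being fed to the integrator.
struct Simulation {
    static constexpr double FIXED_DT = 1.0 / 60.0;  // 60 physics steps per second
    double accumulator = 0.0;
    int stepsTaken = 0;

    void stepPhysics(double /*dt*/) { ++stepsTaken; }  // placeholder for dWorldStep

    // Called once per rendered frame with that frame's measured real time.
    void update(double frameTime) {
        accumulator += frameTime;
        while (accumulator >= FIXED_DT) {  // consume whole fixed steps only
            stepPhysics(FIXED_DT);
            accumulator -= FIXED_DT;
        }
        // whatever is left in the accumulator waits for the next frame
    }
};
```

So a 30 FPS machine runs two physics steps per frame and a 120 FPS machine runs one every other frame, but both step the world with the exact same dt.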
After looking into that I started wandering in my research merely out of boredom and started looking into networking again (which I hadn’t planned on looking at for a very long time) and realized a lot of my questions of how the server would remain authoritative in a real time environment were solved by fixed size simulation steps.
Now this doesn't sound like a huge deal, converting from variable-size to fixed-size steps, but the implications are huge, absolutely massive. First, it only makes sense to implement them in a client-server model if a move to that model is ever planned; otherwise you'd essentially be rewriting everything later when you wanted to switch.
Second, you now introduce a mind-boggling number of issues intrinsically related both to the move to fixed-size steps and to the change in how the client/server model should communicate, and in how the client should work as a whole.
See, one issue is this: like I said, ODE (and many others) is non-deterministic, so it suffers from floating-point rounding errors which vary from one computer to another (much like my original fine-grained timer). The server must be authoritative, but to reduce bandwidth you might want to send only updates, like "player X wants to move with force Y." That's a small update. The client-side physics engine then takes that data and computes the new position based on it as well as all the other world data that affects player X (gravity, friction, collision detection, the list goes on). Problem is, my uber-awesome computer will compute that to high accuracy, but your Commodore 64 will make several rounding errors due to, among other things, hardware limitations. Now we both get the next update: player X wants to move with force Z. My player X is at a slightly different location than your player X due to the rounding errors, and now they both move again in another direction, and again yours loses some precision and moves along a slightly different vector than mine. As you can see, these errors compound, and over time the two clients become totally out of sync with each other and even with the server world (which is the "true" world).
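To make the compounding concrete, here is a contrived sketch where one "client" integrates the same inputs at float precision and another at double precision. Real machines don't differ quite this way, so this only simulates the drift, but the shape of the problem is the same: tiny per-step rounding differences grow with step count.

```cpp
#include <cassert>
#include <cmath>

// Integrate the same constant force on two "clients" at different precisions
// and measure how far apart their positions end up. The gap is tiny at first
// and grows as rounding errors feed back into every subsequent step.
double driftAfterSteps(int steps) {
    double posDouble = 0.0, velDouble = 0.0;
    float  posFloat  = 0.0f, velFloat  = 0.0f;
    const double dt = 1.0 / 60.0;
    for (int i = 0; i < steps; ++i) {
        velDouble += 9.81 * dt;                         // same "force" on both
        velFloat  += 9.81f * static_cast<float>(dt);
        posDouble += velDouble * dt;
        posFloat  += velFloat * static_cast<float>(dt);
    }
    return std::fabs(posDouble - static_cast<double>(posFloat));
}
```

Both worlds receive identical updates, yet the longer they run, the further apart they are, which is exactly why the clients and server eventually disagree.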
This could be fixed with sync (synchronization) packets which contain the full state of an object: position, orientation, velocity, friction, and every other world force. But that's a huge amount of data to send to every player, every step!
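As a rough back-of-the-envelope (the packet layout below is my own hypothetical sketch, not any real protocol): even a minimal full-state record is around 56 bytes per object, so syncing a modest world every step to every player adds up fast.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical full-state sync record for one object: id, position,
// orientation (quaternion), linear and angular velocity, all 32-bit.
struct SyncPacket {
    std::uint32_t objectId;
    float position[3];
    float orientation[4];      // quaternion
    float linearVelocity[3];
    float angularVelocity[3];
};

// Outbound bandwidth if the server sends full state for every object,
// every step, to every player.
std::size_t bytesPerSecond(std::size_t objects, std::size_t stepsPerSecond,
                           std::size_t players) {
    return sizeof(SyncPacket) * objects * stepsPerSecond * players;
}
```

With these assumed numbers, 100 objects synced 30 times a second to 16 players is 56 * 100 * 30 * 16 = 2,688,000 bytes per second of outbound state, and that's before friction or any other world forces are included.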
Another problem is this: as you all know, the internet is unreliable (damn you, Al Gore!). The speed of the connection will vary, packets will be lost, etc. So what does a client do between update and sync packets? Just sit there? You would experience very stuttered, non-physical behavior if that were the case, even if the connection were decent.
The solution to this is to do client-side extrapolation or interpolation between step updates. Interpolation is the more favored method, BUT! You run into the same problem as with the physics engine and update packets: client-side floating-point inaccuracies! And either resolution method still requires a very different structure for even the game loop!
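The interpolation idea can be sketched like this (names are illustrative, and position is one-dimensional for brevity): the client renders slightly in the past and blends between the two most recent server snapshots, so motion stays smooth between packets.

```cpp
#include <cassert>
#include <cmath>

// One server snapshot: the time it describes and the object's state then.
struct Snapshot { double time; double position; };

// Linear interpolation between two snapshots at a chosen render time.
// renderTime normally trails the newest snapshot so both endpoints exist.
double interpolatedPosition(const Snapshot& older, const Snapshot& newer,
                            double renderTime) {
    // t = how far renderTime sits between the snapshots, clamped to [0, 1]
    double t = (renderTime - older.time) / (newer.time - older.time);
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return older.position + t * (newer.position - older.position);
}
```

Halfway between a snapshot at position 10 and one at position 20, the client renders the object at 15, with no physics run locally at all; the clamp keeps it from extrapolating past the data it actually has.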
In short (wwwaaaayyyyyy too late for that), no matter what I choose, I want to eventually make this a networked engine. The advantages provided by an authoritative server for physics (as well as many other aspects) are undeniable, thus I have to go back and refactor to a client-server model now rather than later 🙁
This is going to be a huge task… Not as big as the change from v.5 to v.6, but still huge. And it’s going to take a lot more research and planning to get it to work.