Well this project didn’t go as I had planned. Every part of it took about ten times longer than I expected. And in the end, it still didn’t function exactly the way I wanted.
First some fun media to show what did work 🙂
Also a short video of it:
YouTube: Distributed Planar Key Encryption
Now let’s start where I left off in part 1.
My first task was to come up with the method I would use to intersect my 3D planes. This was actually a fun exercise for me, and this is what I came up with:
Given 3 planes (Pa, Pb, Pc), first intersect Pa and Pb, producing a parametric equation for their line of intersection (it can generate any point on the Pa/Pb line). Then intersect that line with the third plane Pc to generate the unique point of intersection.
I coded this up using floating point math, and the whole thing only took two or three sessions to get implemented and debugged, but it was actually pretty fun.
Next up, I needed to port my planar intersection algorithm to fixed point math. To do this I had to implement a minimum set of fixed point math operations… this took a while.
I read up on Fixed Point math and began implementing the most basic operations. But then I realized I needed to implement others, such as sin, cos, and square root, and that’s where things got messy.
For those of you who don’t know, Fixed Point math (also referred to as Scaled Math) is simple enough in theory. Take, for instance, the number 3.14: if we want to represent it using integers and not floating point data types, we have to get rid of the decimals. The easiest way to do this is to just multiply it by (in this case) 100, turning 3.14 into 314.
Now let’s say we want to add 1.0 to 3.14, but in fixed math. We need to move 1.0 into our new fixed point space, again by multiplying by 100, so 1 obviously becomes 100. Now we add them: 314 + 100 = 414.
All pretty logical and easy. And if we want to go back to decimals, we just divide by our base (which is 100 in this case): 414 / 100 = 4.14. So as you can see, things worked out, and we did all our calculations (only addition in this case) in integer math instead of floating point math, meaning it’s 100% deterministic! (aka: the whole reason I’m doing it this way)
So why is it called “Fixed Point Math”? Well, when we use 100 here, we are essentially saying that our number will have two decimal places. A Fixed Point “1” in our system is represented by 100. A Fixed Point 0.5 is represented by 50; likewise, 0.01 is represented as 1. So how do you represent 0.009? You don’t. That’s the drawback of Fixed Point math: you have a fixed number of decimal places, and anything below that is simply lost… and it gets worse.
Let’s take a look at Fixed Point multiplication. 3 * 3 in our system looks like 300 * 300. Now we all know our answer should be 9 (or 900 in Fixed Point), but what happens when we multiply our Fixed Point representations? 300 * 300 = 90,000! It’s off by a factor of 100! Well, that number is familiar, isn’t it? See, our Fixed Point base number is in the multiplication twice; if you break it down: (3*100)*(3*100).
The fix is simple enough: just divide it back out when you are done, so our fixed point multiplication now becomes two operations: (300*300)/100 = 900.
Bbbuuuttt… you don’t get off that easy. Let’s take a more real-world example. I used 64bit ints, and I needed much more than 2 bits of decimal precision. When you are making a Fixed Point math system on a computer, you need to take your actual data type into account, and realize that each bit of decimal precision reduces the whole number range of your variables. Let’s take a look:
Let’s use 8bit ints here. This is what a 1 looks like normally:
00000001
Now let’s say I want 4 bits of decimals; my one is going to sit here:
00010000
That’s the binary representation, but it’s still the same thing: our scaling factor, instead of being 100, is now 16 (in base 10).
Now back to the problem of multiplying numbers which contain the scaling factor. Here we only have 4 bits of whole number precision, so the largest whole number we can represent is 15. If during the course of a multiplication we ever exceed 15, we will overflow our data type and lose very significant parts of our number. We must prevent this if at all possible.
It doesn’t take much number crunching to realize that, because our multiplication result is off by a factor of our scaling factor, relatively small numbers, when multiplied together, will overflow our data type before we ever get a chance to divide the scaling factor out.
Luckily there is a generally accepted way to at least mitigate this. Since we define our scaling factor in binary instead of base 10, we can use simple bit shifting (this could be done with division, but shifting is simpler since we are in binary anyway). If we shift each operand in our multiplication to the right a few places, we reduce the amount of the scaling factor that goes into the multiplication, and thus reduce the likelihood of overflowing. So in this example, let’s shift each operand 2 places:
00010000 becomes 00000100 (aka 4 instead of 16); each operand now carries only half of the scaling factor’s bits. Now in an equation:
2 * 2
in Fixed point:
(00100000 >> 2 ) * (00100000 >> 2)
00001000 * 00001000 = 01000000
And there you go! We pre-shift half of the scale value out of each operand, so that when they are multiplied together, the full scale value reassembles itself, and we avoid an overflow in many cases.
SO! Why have I dragged you through this? Because this is the part that ruined my encryption!! Obviously, the pre-shifting knocks off 2 bits of decimals here, so whatever value they held is lost! This is an accepted drawback of Fixed Point math, but not for me!
In my actual implementation I have 64bit ints, with 16bits of decimal precision and 48bits of whole number range (clearly range was more important to me than precision; my scaling factor is 65,536), which means I pre-shift each operand by 8bits during multiplication.
Here’s the problem: if I multiply 3.141592 by 12.345678 I will get a different number than if I multiply 3.14 by 12.34 (38.78508323 vs. 38.7476); that’s a difference in the hundredths place! That’s fairly significant! Now if I take that result and feed it into another part of an equation (as happens many, many times in my equations), that difference will propagate through along with other inaccuracies, and the final number will be greatly affected by each one.
You might say, “Why is this a problem, Adam! You knew Fixed Point math was inaccurate when you started this!” The answer: I didn’t have the foresight to realize this wrecked the commutativity (and associativity) of my equations!
Using the example above, let’s say the result is then multiplied by 8.765432, in theory:
3.141592 * 12.345678 * 8.765432 == 8.765432 * 3.141592 * 12.345678
But because the inaccuracies propagate differently depending on the order of operations, the two sides don’t equal each other!!!
So what exactly does this destroy for me? Well if you pass my Planar intersection algorithm 3 keys, it will generate a 3D point (the encryption key).
If you give the algorithm Pa, Pb, Pc, it will create a linear equation for the intersection of Pa/Pb, then intersect that with Pc. The point generated from that will be used to encrypt the file. NNnnnoooowwww if I want to decrypt the file, and I provide the keys in ANY OTHER ORDER (such as Pb, Pa, Pc) the inaccuracies propagating through the equations will result in a significantly different point! Thus when it is used to decrypt the file, the decryption will fail!
It gets worse! Another central tenet of the algorithm is to be able to generate more than 3 keys, and use any 3 of the keys to generate the parent encryption key, but because of this problem, using ANY KEYS OTHER THAN THE ORIGINAL, in ANY ORDER OTHER THAN THE ORIGINAL will never generate the proper 3D point!
I didn’t realize all of this right off the bat. I wrote my Fixed Point math, even implemented the more complex functions (sqrt(), sin(), etc.), and moved on.
The encryption itself is actually just a simple XOR. I take the 3D point, convert each component to a string, concatenate them together, and then use CryptoPP to create a secure hash (high randomness, low redundancy, low collision rate), which I use as the bit stream for my XOR encryption. Not terribly secure, but the whole point was the key generation, not the encryption itself, so I was OK with it.
Then I went on to make the 3D view of it all, so it would have “Hollywood style” graphics. I added a “lookat()” function to my camera and learned quite a bit more 3D math in the process of creating and orienting the 3D Planes. All in all, I learned a ton on this project. But when I realized the problem with the math, I became very disheartened, so the 3D View is a bit unfinished… I had intended to do more. But I don’t want to now.
Now I’m sure that, given enough time, I could eventually solve these problems. But this is a project with absolutely no practical application. It’s demo-able, and I’ve learned a ton. So it’s time I move on.
Time to get back to some actual game development 🙂