Morbuzka
Rage-Inducing Forum Troll
So the gist of it is that a CPU can only store a finite amount of precision, so floating-point numbers approximate real values with a limited number of digits. For example, say I had a precision of 5 decimal places. That would mean:
1.234500001 = 1.234500002 is actually true, since we're only looking at 5 decimal places.
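A minimal C sketch of that idea (the literals here are just illustrative, not from the post): two numbers that differ beyond the type's precision get rounded to the same stored value, so comparing them reports equal.

```c
#include <stdio.h>

/* These two literals differ only around the 11th significant digit.
 * A 32-bit float carries roughly 7 significant decimal digits, so
 * both round to the exact same stored value and compare equal. */
int main(void)
{
    float a = 1.2345000001f;
    float b = 1.2345000002f;

    printf("a == b ? %s\n", (a == b) ? "true" : "false");  /* prints "true" */
    return 0;
}
```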
I believe the big difference between those two ARM architectures is that one has a hardware floating-point unit (with its own registers) right on the CPU, while the cheaper one emulates floating point in software. So when you compile for the hard-float one, the binary uses those FPU instructions and registers, and it won't run on the chip that doesn't actually have them.
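A rough sketch of what that looks like in practice, assuming a GCC ARM toolchain (the specific compiler names and flags below are my assumption for illustration, not something from the thread): the same C source compiles to either real FPU instructions or to calls into a software floating-point library, depending on the target.

```c
#include <stdio.h>

/* Hypothetical build commands (GCC's ARM float-ABI options):
 *
 *   arm-linux-gnueabihf-gcc -mfloat-abi=hard -mfpu=vfpv3 fmul.c
 *       -> emits VFP instructions (e.g. vmul.f32) that use the FPU's own
 *          registers; the binary faults on a core with no FPU.
 *
 *   arm-linux-gnueabi-gcc -mfloat-abi=soft fmul.c
 *       -> no FPU instructions at all; the multiply becomes a call to a
 *          runtime routine such as __aeabi_fmul, so it runs on either
 *          chip, just more slowly.
 */
float scale(float x, float factor)
{
    return x * factor;  /* one FP multiply: hardware instruction or library call */
}

int main(void)
{
    printf("%f\n", scale(1.5f, 2.0f));
    return 0;
}
```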
I see, so is floating-point precision higher these days, like double digits? Or is it around the example you gave?