Morbuzka

Rage-Inducing Forum Troll
So the gist of it is that a CPU only has a finite amount of precision, so floating-point numbers are used to approximate real numbers. For example, say I had a precision of 5 decimal places. That would mean:

1.234500001 = 1.234500002 is actually true, since we're only looking at 5 decimal places.

I believe the big difference between those two ARM architectures is that one has a hardware floating-point unit (FPU) right on the CPU, while the cheaper one emulates floating point in software. So when you compile for the hardware one, the binary uses the FPU instructions and won't run on the chip that doesn't actually have the unit.

I see, so are floating points higher in numbers today, like double digits? Or are they around the example you gave?
 

takethepants

Australian Skial God
Contributor
I see, so are floating points higher in numbers today, like double digits? Or are they around the example you gave?

Short answer: It depends on the hardware (for C and C++). For .NET languages like C#, it's architecture independent.

Medium Answer: http://en.wikipedia.org/wiki/Floating_point#IEEE_754:_floating_point_in_modern_computers

Single precision, called "float" in the C language family, and "real" or "real*4" in Fortran. This is a binary format that occupies 32 bits (4 bytes) and its significand has a precision of 24 bits (about 7 decimal digits).

Double precision, called "double" in the C language family, and "double precision" or "real*8" in Fortran. This is a binary format that occupies 64 bits (8 bytes) and its significand has a precision of 53 bits (about 16 decimal digits).

Double extended, also called "extended precision" format. This is a binary format that occupies at least 79 bits (80 if the hidden/implicit bit rule is not used) and its significand has a precision of at least 64 bits (about 19 decimal digits). A format satisfying the minimal requirements (64-bit precision, 15-bit exponent, thus fitting on 80 bits) is provided by the x86 architecture. In general on such processors, this format can be used with "long double" in the C language family (the C99 and C11 standards "IEC 60559 floating-point arithmetic extension- Annex F" recommend the 80-bit extended format to be provided as "long double" when available). On other processors, "long double" may be a synonym for "double" if any form of extended precision is not available, or may stand for a larger format, such as quadruple precision. Extended precision can help minimise accumulation of round-off error in intermediate calculations.[8]
Long answer: There's a standard: http://en.wikipedia.org/wiki/IEEE_floating_point
 

Morbuzka

Rage-Inducing Forum Troll
Oh I see. So they can get up into a double-digit number of decimal places, pretty neato. Thanks for teaching me that.