Why aren't numbers stored this way?
Problem
I read that the further you get from 0, the less precision you have, due to the way floating-point numbers are stored.
So why aren't numbers stored like this:
$32$ bits for representing a value between $-2^{31}$ and $2^{31}-1$ before the decimal point, and another $32$ bits for the fractional part. $\frac{1}{2^{32}} \approx 0.00000000023$, which would be a fairly precise interval.
With this type of storage there would be no precision loss at high values.
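The scheme described above can be sketched in a few lines of Python. This is an illustration of the question's proposal, not an existing library format; the names `encode`/`decode` and the 32.32 split are assumptions for the sketch:

```python
from fractions import Fraction

# Hypothetical 32.32 fixed-point format from the question:
# 32 bits before the binary point, 32 bits after it,
# stored together as one signed 64-bit integer.
FRAC_BITS = 32
SCALE = 1 << FRAC_BITS  # 2**32

def encode(x):
    """Round x to the nearest representable 32.32 fixed-point value."""
    raw = round(x * SCALE)
    assert -(1 << 63) <= raw < (1 << 63), "value out of fixed-point range"
    return raw

def decode(raw):
    """Recover the exact rational value a raw encoding represents."""
    return Fraction(raw, SCALE)

# The absolute step between adjacent values is constant everywhere: 1/2**32.
step = Fraction(1, SCALE)
print(float(step))  # ~2.3e-10, the question's 0.00000000023

# A value near 2**31 keeps its full fractional precision...
print(float(decode(encode(2_000_000_000.5))))  # 2000000000.5

# ...but anything with magnitude >= 2**31 simply cannot be stored.
```

Note the trade-off this makes visible: the step size never grows, but the representable range is hard-capped at $\pm 2^{31}$.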
Solution
In floating point, by definition, the point floats. What you're proposing is fixed point (which is occasionally used in practice).
A normal IEEE 754 64-bit double precision can store values in the range from roughly $-10^{308}$ to $10^{308}$. This is a much larger range than your scheme can cover in 64 bits (since $10^{308} \approx 2^{1023}$, you would need more than $1000$ bits for the integer part alone to achieve the same range).
The precision loss is often acceptable. When you are dealing with a very large value (like $10^{308}$), what comes at the $10^{\textrm{th}}$ place after the decimal point has no significant impact on the outcome of your calculation. When you are dealing with a very small value (like $10^{-20}$), what comes at the $10^{\textrm{th}}$ place after the decimal point has a huge impact on your calculation.
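This scaling of precision with magnitude can be observed directly: Python's standard `math.ulp` returns the gap between a double and the next representable double. The sample values below are arbitrary choices for illustration:

```python
import math

# math.ulp(x) is the spacing between x and the next representable double.
# The absolute gap grows with magnitude, but the relative gap stays
# roughly constant at about 2**-52.
for x in (1e-20, 1.0, 1e10, 1e308):
    print(f"x = {x:>8.0e}   gap to next double = {math.ulp(x):.3e}")
```

At $10^{308}$ the gap between adjacent doubles is astronomically large in absolute terms, yet still tiny relative to the value itself, which is exactly the behavior the answer describes.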
Using floating point thus makes a lot of sense: you get a level of precision that is appropriate for the range of values you're dealing with, and it can represent a very wide range of values in limited space.
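As a quick sanity check on the range comparison, this short snippet (an illustration, not part of the original answer) estimates how many integer bits a fixed-point format would need just to reach $10^{308}$:

```python
import math

# Number of bits needed for the integer part alone of a fixed-point
# format whose range reaches 10**308, the top of the double range.
int_bits = math.ceil(308 * math.log2(10))
print(int_bits)  # about 1024 bits, before adding any fraction bits
```

A 64-bit double achieves that range by spending only 11 bits on the exponent, which is the economy the answer is pointing at.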
Context
StackExchange Computer Science Q#63456, answer score: 6