HiveBrain v1.2.0

Why does multiplying a float by a multiple of 10 seem to preserve precision better?

Submitted by: @import:stackexchange-cs
Tags: why, number, float, multiplying, preserve, precision, seems, better, multiple

Problem

It is well known that for floating-point numbers:

.1 + .2 != .3


but

1 + 2 == 3
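A quick check in Python makes both claims concrete (repr output shown in the comments):

```python
# 0.1 and 0.2 have no exact binary representation, so their
# rounded sum is not the double nearest to 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Small integers are exactly representable, so this sum is exact.
print(1.0 + 2.0 == 3.0)  # True
```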


It seems that multiplying floats by a power of 10 allows you to preserve more precision. To illustrate the point further, we can do this in Python:

sum([3000000000.001]*300)
#900000000000.2957

sum([3000000000.001 * 1000]*300) / 1000
#900000000000.3


By multiplying each element in the list by 1000 and dividing the sum of the list by 1000, I can get the "correct" answer.
I am wondering:
1) Why is this the case?
2) Will this always work?
3) At what magnitude will this method backfire, if it does?
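To see what is actually stored, `fractions.Fraction` from Python's standard library exposes the exact rational value of a double (this snippet is my illustration, not part of the original question):

```python
from fractions import Fraction

x = 3000000000.001
# Fraction(x) is the exact rational value of the stored double.
# It cannot equal 3000000000001/1000: in lowest terms that fraction
# has denominator 1000, which is not a power of two.
print(Fraction(x) == Fraction(3000000000001, 1000))        # False
print(float(Fraction(x) - Fraction(3000000000001, 1000)))  # a tiny nonzero error
```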

Solution

The only numbers that can be stored exactly are rational numbers whose denominator is a power of 2 (read any source on floating-point numbers to understand why). For example, floating-point numbers do satisfy
$$ 1/4 + 2/4 = 3/4 $$
but do not satisfy (or rather, don't necessarily satisfy)
$$ 1/3 + 2/3 = 3/3. $$
There's nothing magical about the number 10; it's just an artifact of the common decimal representation, so in general it won't help to multiply by a power of 10. Your example only works out because 3000000000.001 × 1000 happens to round to the integer 3000000000001, and sums of integers remain exact in a 64-bit double as long as they stay below 2^53; once the scaled values or their running sum exceed that, the trick backfires.
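A short Python sketch of the claims above, plus `math.fsum` as a more robust alternative to ad-hoc scaling (the fsum suggestion is mine, not part of the original answer):

```python
import math

# Denominators that are powers of two are exact in binary floating point.
print(0.25 + 0.5 == 0.75)  # True: 1/4 + 2/4 == 3/4 holds exactly
# A denominator of 10 is not, so scaling by powers of 10 is no general fix.
print(0.1 + 0.2 == 0.3)    # False

# The question's trick only works because the scaled value happens to
# round to an integer, and doubles represent integers exactly up to 2**53.
print(3000000000.001 * 1000 == 3000000000001.0)  # True

# math.fsum tracks the lost low-order bits and returns the correctly
# rounded sum, with no scaling needed.
print(sum([0.1] * 10))        # 0.9999999999999999
print(math.fsum([0.1] * 10))  # 1.0
```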

Context

StackExchange Computer Science Q#114639, answer score: 3
