HiveBrain v1.2.0

Controlling overflow and loss of precision during floating point multiplication

Submitted by: @import:stackexchange-cs

Problem

I have a large collection of floating point numbers (~10,000 of them), each with 6 digits after the decimal point. The product of all of them would have about 60,000 significant digits, but a double only holds about 15. How can I compute the product with minimal loss of precision?

I thought of storing each number as a 6-digit long long integer and keeping the exponents elsewhere, but this seems cumbersome and may not yield the correct result. Is there an easier alternative?

Solution

Multiplication of floating point numbers is benign with respect to accuracy: each multiplication introduces at most one rounding error of relative size $\epsilon$. If your input is only accurate to 6 digits, there is no point in computing the output to 60,000 digits. Treating the rounding errors as independent, the expected relative error after 10,000 multiplications grows like a random walk: $\sqrt{10{,}000}\,\epsilon = 100\epsilon$, and with $\epsilon \approx 2.2 \times 10^{-16}$ for double precision this is on the order of $10^{-14}$. That is far more precision than your 6-digit inputs warrant.
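As a quick sanity check (a sketch, not part of the original answer), one can compare a naive double-precision product against the mathematically exact product. Every double is a rational number, so `fractions.Fraction` gives the exact result; 1,000 factors are used here to keep the exact arithmetic fast:

```python
import math
import random
from fractions import Fraction

random.seed(0)
xs = [random.uniform(0.5, 2.0) for _ in range(1000)]

# Naive double-precision product: one rounding error per multiplication.
naive = math.prod(xs)

# Exact product: each double converts to a Fraction without loss,
# and Fraction arithmetic is exact.
exact = math.prod([Fraction(x) for x in xs])

rel_err = abs(Fraction(naive) / exact - 1)
print(float(rel_err))  # tiny: far below the 6-digit input accuracy
```

The observed relative error is on the order of $\sqrt{1000}\,\epsilon \approx 10^{-15}$, consistent with the random-walk estimate above.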

Overflow and underflow, on the other hand, can indeed happen during multiplication. In practice I would prefer to work with numbers in a range where I know this cannot happen, but that is not always possible. One can use frexp to split each floating-point number into significand and exponent, multiply the significands separately while adding the exponents together in an integer accumulator, and use ldexp to put the result back together at the end. Doing this naively, calling frexp on every number and then multiplying all 10,000 significands together at once, would be a bad idea: it is slow, and since every significand has magnitude below 1, the running product is nearly guaranteed to underflow. But done intelligently (with appropriate grouping, renormalizing the running significand so it stays in a safe range), it yields the correct result, and the speed difference is barely noticeable.
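The frexp/ldexp scheme described above can be sketched as follows (a minimal illustration; `stable_product` is a hypothetical helper name, and renormalizing after every factor is one simple way to do the "appropriate grouping"):

```python
import math

def stable_product(xs):
    """Product of floats, immune to intermediate overflow/underflow.

    The running product is kept as a (significand, exponent) pair: the
    significand is renormalized into [0.5, 1) with frexp after every
    multiplication, while the exponent accumulates in a Python integer,
    which cannot overflow.
    """
    mant, exp = 1.0, 0
    for x in xs:
        m, e = math.frexp(x)           # x == m * 2**e with 0.5 <= |m| < 1
        mant, e2 = math.frexp(mant * m)  # renormalize the running significand
        exp += e + e2
    return mant, exp                   # product == mant * 2**exp

# Naive left-to-right multiplication overflows halfway through this list ...
xs = [1e300, 1e300, 1e-300, 1e-300]
# ... but the (significand, exponent) pair is computed without trouble:
mant, exp = stable_product(xs)
print(math.ldexp(mant, exp))  # ≈ 1.0, since the true product is about 1
```

If the final exponent fits in double range, `math.ldexp(mant, exp)` reconstructs the product; if it does not, the pair itself is the answer, free of overflow.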

Context

StackExchange Computer Science Q#44347, answer score: 3

Revisions (0)

No revisions yet.