HiveBrain v1.2.0

Do formulas involving fewer repetitions of variables give higher numerical precision?

Submitted by: @import:stackexchange-cs

Problem

I'm having some trouble doing SICP exercise 2.15. Please note that this question is not closely related to Lisp; rather, it's closely related to numerical analysis.


Exercise 2.15. Eva Lu Ator, another user, has also noticed the
different intervals computed by different but algebraically equivalent
expressions. She says that a formula to compute with intervals using
Alyssa's system will produce tighter error bounds if it can be written
in such a form that no variable that represents an uncertain number is
repeated. Thus, she says, par2 is a "better" program for parallel
resistances than par1. Is she right? Why?

This question is a little confusing when pulled out of context, so please let me explain. The formula for parallel resistors can be written in two algebraically equivalent ways: $\frac{R_1R_2}{R_1+R_2}$ and $\frac{1}{\frac{1}{R_1}+\frac{1}{R_2}}$. However, it seems that computing parallel resistors with the second formula would always produce higher precision than using the first one.

My question is:

  • Which formula is better, $\frac{R_1R_2}{R_1+R_2}$ or $\frac{1}{\frac{1}{R_1}+\frac{1}{R_2}}$? Here "better" means it provides higher precision.



  • Why is it better than the other one? Please prove your answer to the first question.



Here is my effort


After many experiments, $\frac{1}{\frac{1}{R_1}+\frac{1}{R_2}}$ seems to
be the better formula. I guess the reason can be summarized as: the
fewer times uncertain numbers are repeated, the less uncertainty is
introduced, and the higher the precision we can get.

But that's not enough. I expect a more scientific and more rigorous answer.

Solution

First, I want to say that it is not the case in general that an algorithm that minimizes the number of uses of its inputs is more accurate, at least for IEEE 754 floating point. Compensated (Kahan) summation, for example, performs extra arithmetic on the same values yet is more accurate than naive summation, not less.

On the other hand, it's certainly the case that interval arithmetic can greatly benefit from knowing when two inputs are identical. As a trivial example, if $X$ is an interval variable, then logically $X - X = 0$, but of course the subtraction algorithm (usually) doesn't know whether its inputs are logically identical, and so it must assume the worst case: $$X - X = [X^{lo} - X^{hi},X^{hi} - X^{lo}]$$
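A minimal sketch of this effect, representing an interval as a `(lo, hi)` pair (the representation and function name here are my own, not SICP's):

```python
def sub_interval(x, y):
    """Naive interval subtraction. It treats x and y as varying
    independently, so it must take the worst case even when both
    arguments are logically the same variable."""
    xlo, xhi = x
    ylo, yhi = y
    return (xlo - yhi, xhi - ylo)

x = (9.0, 11.0)            # an uncertain value: 10 +/- 1
print(sub_interval(x, x))  # not (0.0, 0.0): the identity X - X = 0 is lost
```

Passing the same interval twice yields `(-2.0, 2.0)` rather than the logically correct `(0.0, 0.0)`, which is exactly the worst-case bound in the formula above.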

This is roughly what's going on here, and it is almost certainly the view SICP is taking. If you use naive interval arithmetic operations, repeated uses of a variable will be treated independently, so errors cannot cancel out and the worst case must be assumed. Using the formulas from the interval arithmetic Wikipedia page and assuming $R_1$ and $R_2$ are represented by intervals that are strictly positive, you can simply calculate:
$$\frac{1}{\frac{1}{R_1}+\frac{1}{R_2}} = \left[\frac{1}{\frac{1}{R_1^{lo}}+\frac{1}{R_2^{lo}}},\frac{1}{\frac{1}{R_1^{hi}}+\frac{1}{R_2^{hi}}}\right]$$
while
$$\frac{R_1 R_2}{R_1 + R_2} = \left[\frac{R_1^{lo}R_2^{lo}}{R_1^{hi}+R_2^{hi}},\frac{R_1^{hi}R_2^{hi}}{R_1^{lo}+R_2^{lo}}\right]$$

It's clear that $$\frac{R_1^{lo}R_2^{lo}}{R_1^{hi}+R_2^{hi}} \leq \frac{R_1^{lo}R_2^{lo}}{R_1^{lo}+R_2^{lo}} = \frac{1}{\frac{1}{R_1^{lo}}+\frac{1}{R_2^{lo}}}$$ (the equality is just the algebraic identity $\frac{ab}{a+b} = \frac{1}{\frac{1}{a}+\frac{1}{b}}$ applied to the endpoints), and symmetrically for the upper bounds, so the $\frac{R_1R_2}{R_1+R_2}$ formula yields the looser interval.
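To make the two endpoint formulas concrete, here is a sketch (my own helper names, valid only for strictly positive intervals, as assumed above) that evaluates both on a sample pair of 5%-tolerance resistors:

```python
def par1(r1, r2):
    """R1*R2/(R1+R2) under naive interval arithmetic:
    each variable appears twice, so the bounds widen."""
    lo1, hi1 = r1
    lo2, hi2 = r2
    return (lo1 * lo2 / (hi1 + hi2), hi1 * hi2 / (lo1 + lo2))

def par2(r1, r2):
    """1/(1/R1 + 1/R2) under naive interval arithmetic:
    each variable appears once, so the bounds stay tight."""
    lo1, hi1 = r1
    lo2, hi2 = r2
    return (1 / (1 / lo1 + 1 / lo2), 1 / (1 / hi1 + 1 / hi2))

r1 = (95.0, 105.0)   # 100 ohms +/- 5%
r2 = (190.0, 210.0)  # 200 ohms +/- 5%

a_lo, a_hi = par1(r1, r2)
b_lo, b_hi = par2(r1, r2)
print(a_lo, a_hi)    # the wider interval
print(b_lo, b_hi)    # strictly contained in the par1 interval
assert a_lo < b_lo and b_hi < a_hi
```

The `par2` interval sits strictly inside the `par1` interval, matching the inequality: the product form pairs a low numerator with a high denominator (and vice versa), while the reciprocal form never mixes endpoints of the same variable.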

Context

StackExchange Computer Science Q#68921, answer score: 4
