HiveBrain v1.2.0

Are IEEE floating point numbers intervals or point values?

Submitted by: @import:stackexchange-cs

Problem

The context is IEEE 754-2008 floating point number systems. The systems defined by the standard comprise, as far as I understand it, a bit-level representation and a set of guarantees on the precision of a given set of computations.

Now, I am wondering whether floating point numbers, i.e., the objects represented and manipulated in such a system, should be looked at as intervals—a contiguous set of real numbers—or as point values—a unique real number. (Which interval? Crudely, with round-to-nearest, the nominal value plus-or-minus half a ulp.)
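For a concrete sense of scale, that crude half-ulp interval can be computed directly. A minimal Python sketch, using the standard library's math.ulp (Python 3.9+), and ignoring the complications at power-of-two boundaries:

```python
import math

# Under round-to-nearest, the real numbers that round to x span
# roughly x +/- half an ulp (power-of-two boundaries behave differently).
x = 1.5
half_ulp = math.ulp(x) / 2
print(x - half_ulp, "<= 'true' value <=", x + half_ulp)
```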

The arguments for intervals I see come mostly from the representational aspects of the system:

-
Numbers from ‘outside’ of the system—measurement data, say—are encoded with reduced precision and so the bit-level representation is in general clearly not the ‘true’ value, but—given the information about the rounding rule used—represents an interval in which the ‘true’ value lies.

-
Numbers from ‘inside’ of the system that are the result of a series of computations within the system have also been subject to the effects of intermediate rounding and so again—given the information about the rounding rule used—represent an interval in which the ‘true’ result of the computation lies.

The arguments for point values I see come from the computational aspects of the system and the typical discussions of floating point:

-
Computations are performed on the real value that nominally corresponds to the bit-level representation, using some extra bits to enable precision guarantees. They are not applied to the interval bounds implied by the bit-level representation (and information about the rounding rule used in its generation), which would be possible in principle using some extra bits and double the effort (each computational step…).
Put differently: interval analysis is not applied.

-
If interval analysis were used, rounding rules would not be needed.

-
In the discussions of floating point I have seen, they are always treated as point values.
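For contrast, the interval analysis mentioned above would look roughly like the following sketch. The iadd helper here is hypothetical, not a real library function: it widens each bound outward by one float to stay conservative, where a real interval library would use directed rounding modes instead. Note that each step carries two additions instead of one, the doubled effort described above.

```python
import math

# Hypothetical interval addition: operate on both bounds and widen
# outward by one float on each side so the true sum stays bracketed.
# Real interval libraries use directed rounding modes instead.
def iadd(a, b):
    lo = math.nextafter(a[0] + b[0], -math.inf)
    hi = math.nextafter(a[1] + b[1], math.inf)
    return (lo, hi)

lo, hi = iadd((0.1, 0.1), (0.2, 0.2))
print(lo <= 0.3 <= hi)  # the bracket contains 0.3
```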

Solution

This has long been established. Most IEEE 754 floating point numbers represent exactly one real number. The exceptions are +0 and -0, +Inf and -Inf, and NaN, which have special meanings. (Thanks to the commenter who noted that "In IEEE Std 754-2008, Table 3.1 states that floating point numbers are projected into the (extended) reals, which implies that indeed they are seen as point values".)
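This is easy to verify in any language that exposes exact rationals. For instance, a quick Python check with the standard fractions module shows that 0.1 is exactly one rational number with a power-of-two denominator; it just isn't 1/10:

```python
from fractions import Fraction

# Every finite binary64 float is exactly one rational number whose
# denominator is a power of two; 0.1 is the nearest such rational
# to 1/10, not 1/10 itself.
exact = Fraction(0.1)
print(exact)
print(exact == Fraction(1, 10))  # False: the float is a point value near 1/10
```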

If you claimed that IEEE 754 floating point numbers represented intervals, you would run into deep, deep trouble when you tried to define floating-point arithmetic.

We can see that in the comments: there, an assumption is made that a floating point number represents an interval x ± eps. That assumption is wrong. Let u be the value of the last bit in the mantissa of 1.0. If x = 1.5, all numbers in the interval [x - u/2, x + u/2] are rounded to x; but if x = 1.5 + u, that interval is (x - u/2, x + u/2) - an open interval instead of a closed one, because under round-to-nearest-even the ties at the endpoints go to the neighbors with an even last bit. And if x = 1.0, then the interval is [x - u/4, x + u/2], since the spacing of floats just below 1.0 is half the spacing just above it. Not even a symmetric interval!
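The asymmetry at 1.0 is easy to confirm with math.nextafter (Python 3.9+): the gap to the next float below 1.0 is half the gap to the next float above it.

```python
import math

x = 1.0
up = math.nextafter(x, math.inf) - x      # spacing above 1.0
down = x - math.nextafter(x, -math.inf)   # spacing below 1.0
# The gap below 1.0 is half the gap above it, which is why the set of
# reals rounding to 1.0 is the asymmetric interval [1 - u/4, 1 + u/2].
print(up == 2 * down)  # True
```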

And the suggested arithmetic with these intervals seems to be "we take the numbers in the middle of the intervals, calculate the result, and find the interval containing the result" - which is essentially arithmetic with "point" floating point numbers, with a bit of fuzz added at the beginning and the end.
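In other words, IEEE arithmetic is point arithmetic with a single correctly-rounded step at the end, and that is checkable with exact rationals (Python's fractions module again):

```python
from fractions import Fraction

# IEEE addition: take the two point values, add them exactly as reals,
# then round the exact result to the nearest representable float.
a, b = 0.1, 0.2
exact_sum = Fraction(a) + Fraction(b)   # exact real-number sum
print(a + b == float(exact_sum))        # True: hardware add == round(exact sum)
```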

When you try to convert between binary and decimal numbers, these intervals come into play for real if you want a high quality conversion, and from experience this is an absolute PITA. Don't go there unless you enjoy that kind of thing.
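One visible consequence in Python: many decimal strings fall inside the rounding interval of the same float, and repr picks the shortest one that round-trips.

```python
# Many decimal strings land in the rounding interval of one float;
# Python's repr chooses the shortest decimal that round-trips.
a = float("0.1")
b = float("0.10000000000000000555")
print(a == b)    # True: both strings round to the same binary64 value
print(repr(a))   # '0.1' - the shortest round-tripping decimal
```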

Context

StackExchange Computer Science Q#74047, answer score: 4
