Origins of misconception about using equality signs with Landau notation
Problem
As described in "Misconception 1" on Søren S. Pedersen's blog, and as many have seen before, a major misconception with Big-O (and related) notation is to say that a function is "equal" to Big-O of some other function:
$f(n) = O(g(n))$
where what we actually mean is this:
$f(n) \in O(g(n))$
What is the origin of this misconception? And why is this misconception perpetuated, even today?
Solution
$O$ is not only used in simple statements like $f(n)=O(g(n))$. It is also used to give error terms, as in $f(n) = 14\, n\log n + O(n)$. The interpretation is still the one @usul describes in his answer: there is a function $h \in O(n)$ such that $f(n) = 14\, n\log n + h(n)$.
Here, writing $f(n) \in 14\, n\log n + O(n)$ would emphasize that a set is involved in the statement, whereas the important point is that the leading term(s) are exactly as stated and the $O$-term merely bounds the error. So in this context using $=$ is a sensible choice, and I'd assume it has been carried over to the case with zero leading terms in order to keep the notation consistent.
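To make the set reading fully explicit, one can unpack the error-term statement into quantifiers (a standard expansion of the definition, not something stated in the answer itself):

```latex
% "f(n) = 14 n log n + O(n)" read via sets: the O-term names an
% anonymous function h that absorbs the error, which in turn means
% an explicit bound with witnesses C and n_0.
f(n) = 14\, n \log n + O(n)
\iff \exists\, h \in O(n):\ f(n) = 14\, n \log n + h(n)
\iff \exists\, C > 0,\ n_0:\ \bigl| f(n) - 14\, n \log n \bigr| \le C\, n
      \quad \text{for all } n \ge n_0.
```

On this reading, $f(n) = O(g(n))$ is simply the special case where every leading term is zero, which is why the $=$ convention carries over.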
Context
StackExchange Computer Science Q#29974, answer score: 8