Use of Big-O Notation: Size of Input vs Input
Problem
It is my understanding that, when one describes time complexity with $\mathcal{O}$, $\Theta$, and $\Omega$, one must be careful to provide expressions with regard to the size of the input, as opposed to the input itself (particularly in the case of numeric algorithms).
For instance, trial division for integer factorization takes up to $\sqrt N$ divisions to factor the integer $N$. However, the size of the input is the number of bits, $w = \lg(N)$. Thus, integer factorization takes $\mathcal{O}\left(2^{w/2}\right)$ time with respect to the size of the input ($w$ bits).
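To make the count concrete, here is a minimal trial-division sketch (the function name and structure are my own, not from any particular library). The outer loop runs at most $\sqrt{N}$ times, which is $2^{w/2}$ when measured against the bit length $w$:

```python
def trial_division(n):
    """Return the prime factorization of n as a sorted list of primes."""
    factors = []
    d = 2
    while d * d <= n:        # at most sqrt(n) iterations of this loop
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                # whatever remains is a prime factor
        factors.append(n)
    return factors

print(trial_division(60))    # [2, 2, 3, 5]
print((60).bit_length())     # w = 6: the input is only 6 bits long
```

Note that the same loop bound reads as polynomial ($\sqrt{N}$) in the value and exponential ($2^{w/2}$) in the bit length.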
My question: Given the above, would it be considered correct, in a relatively informal CS article (on the scale of a blog post), to write that "integer factorization takes $\mathcal{O}(\sqrt{N})$ time with respect to the input variable $N$," and assume that the reader realizes they should substitute $w=\lg(N)$ to obtain the time complexity with respect to the size of the input?
Solution
You have it backwards, but you are on to an important distinction.
$O(N)$ if $N$ is a number in the input is perfectly well-defined and can appear in, say, the $\Theta$-asymptotic of a runtime function.
In your example, note that $\sqrt{N} = 2^{w/2}$ and thus $O(\sqrt{N}) = O(2^{w/2})$: it's just another way of expressing the same function class, given the identity $w = \lg N$.
The distinction between $f(N)$ and $f(\langle N \rangle)$ is only important in the context of complexity classes. Many of those are defined along the lines of
Class $X$ contains all problems for which there is an algorithm with runtime in $O(f(|x|))$ where $x$ is an input string.
For examples, see pseudo-polynomial problems/algorithms.
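As a hedged illustration of the pseudo-polynomial distinction (a toy example of my own, not a standard benchmark): a loop that counts up to the value $N$ runs in time linear in $N$ but exponential in the input size $w = \lg N$.

```python
def count_up(n):
    """Perform n unit-cost steps: O(N) in the value, O(2^w) in the bit length."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

n = 1 << 10              # N = 1024
w = n.bit_length()       # w = 11 bits of input
print(count_up(n), w)    # 1024 steps for an 11-bit input
```

Doubling the input length (adding one bit to $N$) roughly doubles the runtime, which is exactly the behavior that keeps such an algorithm out of classes defined over $|x|$.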
Context
StackExchange Computer Science Q#28557, answer score: 6