Complexity classes: What if different algorithms exist for different sizes of input?
Problem
If I understand(*) correctly, a decision problem is a function $f(x)$ whose argument $x$ can be represented by data of finite length (so, e.g., not an arbitrary irrational number) and whose value is either 0 or 1.
If there exists an algorithm which can compute the value $f(x)$ in fewer than $\sum_{i=0}^{i_\text{max}}K_in^i$ steps (where $i_\text{max}$ and the $K_i$ are constants) for every argument of length $n$ (bits, bytes, digits, characters, ...), the decision problem belongs(*) to the complexity class $P$.
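As a concrete (hypothetical) illustration of such a polynomial step bound, consider the decision problem "is the bit string a palindrome": a single scan uses at most about $n$ comparisons, so the bound holds with $i_\text{max}=1$. The problem and function name below are my own toy example, not from the question.

```python
# Hypothetical illustration: a decision problem f(x) that lies in P.
# f(x) = 1 iff the bit string x is a palindrome; one scan takes at most
# n/2 comparisons, so some K_0 + K_1 * n bounds the number of steps.
def is_palindrome(x: str) -> int:
    n = len(x)
    for i in range(n // 2):          # at most n/2 iterations
        if x[i] != x[n - 1 - i]:
            return 0
    return 1

print(is_palindrome("0110"))  # 1
print(is_palindrome("011"))   # 0
```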
My actual questions:
Let's say that for every length $n$ there is an algorithm that takes fewer than $\sum_{i=0}^{i_\text{max}}K_in^i$ steps to compute $f(x)$ for an argument $x$ of length exactly $n$ (bits, bytes, ...).
(However, these algorithms do not necessarily work for every argument $x$; depending on the length $n$ of $x$, we may have to use a different algorithm.)
- Would this imply that there exists a single algorithm that can compute $f(x)$ for any argument $x$ in polynomial time?
- If the answer is "no": would the decision problem still belong to the complexity class $P$?
- If either of the two answers is "yes": what is the situation for complexity classes other than $P$?
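The hypothesis above can be sketched as a family of per-length procedures stitched together by dispatching on the input's length (all names and the toy problem below are hypothetical, chosen only to make the setup concrete):

```python
# Hypothetical sketch of the setup: one procedure A_n per input length n,
# each correct only on inputs of exactly that length. A "combined"
# algorithm dispatches on len(x). The catch, which the questions probe,
# is that the full table of algorithms is infinite and need not be
# producible by any single algorithm.
def make_parity_checker(n):
    # toy stand-in for "the algorithm for length n"
    def A_n(x):
        assert len(x) == n
        return x.count("1") % 2
    return A_n

algorithms = {n: make_parity_checker(n) for n in range(1, 8)}

def f(x):
    return algorithms[len(x)](x)   # pick the algorithm for this length

print(f("101"))  # 0 (an even number of 1s)
```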
(*) Please excuse me if I use wrong terminology or my premises are wrong. I'm not a mathematician and my interest about complexity theory comes from the field of computing device development.
Solution
What you are describing is known as non-uniform computation. You have to be careful when defining it. For example, consider the following "algorithm" for inputs of length $n$:
Look up the answer in a large table.
If this algorithm runs in polynomial time in your model, then the non-uniform version of your computation model can compute all decision functions.
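A minimal sketch of that lookup-table "algorithm" (the table contents below are a toy example of mine): for each length $n$ the table simply stores the answer to every length-$n$ input, so evaluation is a single lookup no matter how hard $f$ is to compute.

```python
# Hypothetical sketch: a non-uniform "algorithm" as one lookup table per
# input length. table[n] maps every length-n input to its answer, so
# deciding is a constant number of lookups -- yet such tables can encode
# an arbitrary (even uncomputable) decision function.
def decide(x, table):
    return table[len(x)][x]

# toy table for f = "the majority of bits are 1", lengths 1..3
table = {
    n: {format(i, f"0{n}b"): int(format(i, f"0{n}b").count("1") * 2 > n)
        for i in range(2 ** n)}
    for n in range(1, 4)
}

print(decide("110", table))  # 1 (two of the three bits are 1)
```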
There are two common ways to define non-uniform computation models in complexity theory: using advice and using circuits.
Advice. One possibility is to have a uniform algorithm (working for all input lengths) which gets an additional "advice" string that depends only on the input length. Let's see how such an algorithm can efficiently decide the following language:
$$ L_H = \{ w : \text{the $|w|$th Turing machine halts on the empty input} \}. $$
(Here $|w|$ is the length of $w$.)
The algorithm gets an auxiliary advice $a_n$, and just returns $a_n$. The advice $a_n$ is simply whether the $n$th Turing machine halts on the empty input.
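This advice-taking algorithm can be sketched as follows. The advice sequence used here is a made-up stand-in, since the true $a_n$ (whether the $n$th Turing machine halts) is uncomputable:

```python
# Hypothetical sketch of deciding L_H with advice: the uniform algorithm
# ignores the input's content entirely and returns the advice bit a_n
# for n = |w|. The true advice sequence is uncomputable; fake_advice is
# a stand-in for illustration only.
def decide_LH(w, advice):
    return advice[len(w)]   # the answer depends only on |w|

fake_advice = {0: 1, 1: 0, 2: 1, 3: 1}   # stand-in for the true a_n
print(decide_LH("abc", fake_advice))  # 1
```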
Incidentally, this example shows that non-uniform models can compute uncomputable functions.
One standard complexity class which can be defined in this way is $\mathsf{P/poly}$, which consists of all decision problems computable by polynomial time algorithms having access to polynomial-size advice.
Circuits. Cook's theorem shows (essentially) that if $L$ can be decided in polynomial time, then for every $n$ we can construct a polynomial size circuit deciding the length $n$ instances of $L$. These circuits are uniform in the sense that they can be constructed by an algorithm.
This suggests the following natural non-uniform model: polynomial size circuits. In this model, for every input length we can supply an arbitrary polynomial size circuit (that is, there is a polynomial $p(n)$ such that the $n$th circuit has size at most $p(n)$). Every problem in $\mathsf{P}$ can be decided in this model, as mentioned above. But this model is much more powerful — indeed, it can compute the uncomputable language $L_H$ mentioned above: the $n$th circuit either computes the constant 0 function or the constant 1 function. It turns out that this circuit model is exactly equivalent to the class $\mathsf{P/poly}$ defined above.
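The constant-circuit argument for $L_H$ can be sketched directly (the circuit representation and halting bits below are stand-ins): the $n$th "circuit" is just a constant gate, so the polynomial size bound holds trivially.

```python
# Hypothetical sketch: for L_H, the n-th circuit is a constant gate.
# A circuit family is a map n -> circuit; each circuit here ignores its
# input and outputs the (stand-in) bit a_n. Its size is O(1), which is
# certainly polynomial in n -- yet the family decides an uncomputable
# language.
fake_a = {0: 1, 1: 0, 2: 1, 3: 1}   # stand-in for the true halting bits

def circuit_family(n):
    return lambda x: fake_a[n]      # constant-0 or constant-1 circuit

def decide(w):
    return circuit_family(len(w))(w)

print(decide("ab"))  # 1
```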
Non-uniform models of computation are important in complexity theory, since many lower bound techniques apply directly to them. For example, one approach for proving $\mathsf{P} \neq \mathsf{NP}$ which used to be popular is proving superpolynomial lower bounds for circuits computing some NP-complete problem such as $\mathsf{SAT}$. Nowadays this approach seems less promising. But various weaker non-uniform models are the object of intensive study in complexity theory.
Context
StackExchange Computer Science Q#101895, answer score: 2