sequence of problems that take $\Theta(n^k)$ for increasing $k$?
Problem
Do we know an infinite sequence of decision problems where the most efficient algorithm for each problem takes $\Theta(n^k)$ time, where $k$ increases unboundedly?
Suppose, for example, that we knew that finding a $k$-clique takes $\Theta(n^k)$ time; then this sequence could be {1-clique, 2-clique, 3-clique, ...}. However, there may be more efficient algorithms for $k$-clique, so this candidate might not qualify.
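For concreteness, here is a minimal Python sketch of the naive $O(n^k)$ algorithm the candidate sequence has in mind (the function name and graph representation are my own choices, not from the question):

```python
from itertools import combinations

def has_k_clique(adj, k):
    """Brute force: try every k-subset of vertices.

    For fixed k there are O(n^k) subsets, each checked with
    O(k^2) adjacency lookups, giving the naive O(n^k) bound."""
    for subset in combinations(adj, k):
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            return True
    return False

# Vertices 0, 1, 2 form a triangle; there is no 4-clique.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}
```

This only shows the upper bound; asymptotically faster $k$-clique algorithms based on fast matrix multiplication are known, which is exactly the worry raised above.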
EDIT:
- By the Time Hierarchy Theorem (which implies that P does not collapse to DTIME($n^k$) for any fixed $k$), such a sequence should exist.
- After some discussion in the comments, common NP problems are not good candidates: if a certificate can be checked in $O(n)$ time (or $O(n^c)$, where $c$ is a constant independent of $k$), then proving such a lower bound would imply $P\neq NP$.
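The certificate-checking point can be made concrete. A claimed $k$-clique is verified with $O(k^2)$ adjacency lookups, so the exponent of the verification time does not grow with $k$ (a sketch under my own graph representation):

```python
from itertools import combinations

def verify_clique_certificate(adj, cert):
    """Certificate check for k-Clique: O(k^2) adjacency lookups
    for a size-k certificate. The polynomial's exponent is a
    constant independent of k, which is why a proven Omega(n^k)
    lower bound for the search problem would separate P from NP."""
    return all(v in adj[u] for u, v in combinations(cert, 2))

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}
```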
Solution
You leave the computational model open. I will assume that the question refers to random access machines (RAMs), as that is customary when one asks for the actual exponent in polynomial time.
Let $P_k$ be the set of (encodings of) RAMs $M$, such that $M$ halts in at most $|M|^k$ steps using at most the initial $|M|^k$ memory cells. A universal RAM can solve $P_k$ in $O(n^k)$.
On the other hand, $P_k$ is the universal problem for $DTIME(n^k)$ (in the RAM model and under linear time reductions). As a special case of the RAM version of the time hierarchy theorem, a simple diagonalization shows that all algorithms for $P_k$ have running time $\Omega(n^k)$. Thus the complexity of $P_k$ is $\Theta(n^k)$ as asked.
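The clocked-simulation idea behind the upper bound can be sketched as follows. This is a toy interpreter with an invented three-instruction machine, not a faithful RAM encoding; it only illustrates how a universal machine decides membership in $P_k$ by running the input on itself for at most $n^k$ steps:

```python
def clocked_run(program, k):
    """Decide a toy analogue of P_k: simulate `program` (a list of
    instructions for a made-up register machine) for at most n**k
    steps, where n = len(program). Accept iff it halts in budget."""
    n = max(len(program), 2)
    budget = n ** k
    regs = {}                 # sparse memory: address -> value
    pc = 0
    for _ in range(budget):
        if pc >= len(program):
            return True       # halted within n^k steps: accept
        op, *args = program[pc]
        if op == "set":       # ("set", reg, value)
            regs[args[0]] = args[1]
            pc += 1
        elif op == "dec":     # ("dec", reg)
            regs[args[0]] = regs.get(args[0], 0) - 1
            pc += 1
        elif op == "jnz":     # ("jnz", reg, target): jump if nonzero
            pc = args[1] if regs.get(args[0], 0) else pc + 1
    return False              # budget exhausted: reject
```

With a constant-overhead simulation (as on a RAM), the simulator itself runs in $O(n^k)$, which is where the tight upper bound comes from.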
The tightness heavily relies on the computational model. The Turing machine version of the time hierarchy theorem has a $\log n$ gap. Let $P'_k$ be the set of (encodings of) Turing machines $M$, such that $M$ halts in at most $\frac{|M|^k}{\log |M|}$ steps. Then $P'_k$ can be solved in $O(n^k)$ time by a universal Turing machine, and all algorithms have running time $\Omega(\frac{n^k}{\log n})$. I suspect this is the best one can do.
[EDIT]
In a comment, you ask for more conventional problems, like graph problems or logic.
First, let me point out why $k$-Clique (as suggested in the question) does not seem like a good candidate. If we could prove that $k$-Clique requires time $\Omega(n^k)$ (or $\Omega(n^{f(k)})$ for some unbounded $f$), we would have shown that Clique is not in P, and thus that P is different from NP. That is not likely to be easy. The same holds for the slices of any other problem known to be in NP, or in some other class, like PSPACE, not known to be different from P.
Every problem can be rephrased as a graph problem, by encoding the input as a graph. I don't know if you would call a graph version of $P_k$ conventional. I wouldn't call it natural.
As for logic, I can provide an example. It is not boolean logic, however, and there is a gap between the lower and upper bound. The example is based on the Immerman-Vardi theorem. Let $\mathcal L$ be first-order logic extended by least-fixed-point operators. Let ${\mathcal L}^k$ denote the fragment where only $k$ first-order variables are allowed. It is permitted to re-use variables, so the restriction is that each subformula has at most $k$ free variables. The problem $M_k$ is the model-checking problem for ${\mathcal L}^k$: on input of a formula $\varphi\in{\mathcal L}^k$ and a structure $\mathfrak A$ of matching vocabulary, the task is to decide whether $\mathfrak A\models\varphi$, that is, whether $\varphi$ is true in $\mathfrak A$.
$M_k$ can be solved in time $O(n^{2k+1})$. On the other hand, for some constant $c$, $M_{3k+c}$ is hard for $DTIME(n^k)$ (where we also require an $n^k$ bound on the contents of memory cells). I believe $c=2$ suffices. From diagonalization we obtain that $M_{3k+c}$ requires time $\Omega(n^k)$,
so $M_k$ requires time $\Omega(n^{\frac{k-c}3})$. The bound is not tight in the sense of $\Theta(n^{k'})$ for some $k'$, but at least we have $n^{\Theta(k)}$.
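The $n^{O(k)}$ upper bound comes from evaluating the formula over all assignments to its (at most $k$) variables. A minimal Python sketch of this evaluation, with an invented tuple-based AST and with the least-fixed-point operators omitted:

```python
def models(structure, formula, assignment=None):
    """Naive model checker for quantified formulas over a finite
    structure = (universe, relations). With at most k distinct
    variables, the recursion explores n^O(k) partial assignments,
    illustrating the n^{O(k)} upper bound (fixed points omitted)."""
    universe, rels = structure
    a = assignment or {}
    op = formula[0]
    if op == "rel":                   # ("rel", name, variables)
        _, name, vs = formula
        return tuple(a[v] for v in vs) in rels[name]
    if op == "not":
        return not models(structure, formula[1], a)
    if op == "and":
        return (models(structure, formula[1], a)
                and models(structure, formula[2], a))
    if op == "exists":                # ("exists", var, subformula)
        _, v, sub = formula
        # Rebinding v here is exactly the "re-use of variables"
        # allowed in the k-variable fragment.
        return any(models(structure, sub, {**a, v: e}) for e in universe)
    raise ValueError(op)

# Does the graph contain an edge?  phi = exists x exists y. E(x, y)
S = ({0, 1, 2}, {"E": {(0, 1), (1, 2)}})
phi = ("exists", "x", ("exists", "y", ("rel", "E", ("x", "y"))))
```

This sketch only shows why evaluation stays within $n^{O(k)}$; the tighter $O(n^{2k+1})$ bound and the hardness for $DTIME(n^k)$ require the fixed-point machinery described above.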
Context
StackExchange Computer Science Q#66662, answer score: 3