Can we multiply approximation hardness?
Problem
Say we have two (NP-)optimization problems $A$ and $B$ such that $A$ is NP-hard to approximate within a factor $\alpha$ and $B$ is NP-hard to approximate within a factor $\beta$.
Now define problem $C$ whose input is a pair consisting of an input to $A$ and an input to $B$, whose solutions are pairs of a solution to $A$ and a solution to $B$, and whose objective function is the product of the objective functions of $A$ and $B$.
Can we conclude that $C$ is NP-hard to approximate within a factor $\alpha\beta$? If not, can we prove anything stronger than that there is no approximation within $\max\{\alpha, \beta\}$? If yes, does it matter if the hardness depends on, for example, the unique games conjecture?
Intuitively, I would think: unless $P=NP$, for any polytime approximation algorithm for $A$ there is an input on which it gives a solution of cost more than $\alpha \cdot OPT$, and likewise for $B$. Any algorithm for $C$ can be used almost directly as an algorithm for either $A$ or $B$, so we can find an input on which that algorithm produces a solution of cost more than $\alpha \beta \cdot OPT$ as well. I am not sure whether this reasoning is sound, though.
Solution
When we say that a minimization problem A is NP-hard to approximate up to a factor $\alpha \geq 1$, we mean that there is a polynomial time reduction $f$ from instances $\varphi$ of SAT to instances $x$ of A annotated by a target value $\theta$ such that:
- If $\varphi$ is satisfiable then $A(x) \leq \theta$ (where $A(x)$ is the optimal value of $x$).
- If $\varphi$ is not satisfiable then $A(x) > \alpha \theta$.
Using this definition it is not hard to prove the multiplicative property.
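Concretely, composing the two gap reductions can be written out as follows (a sketch, assuming both objective functions take positive values, so the products behave as expected):

```latex
% Let f_A, f_B be the gap reductions for A and B, producing instances x, y
% with target values \theta_A, \theta_B. Give C the instance (x, y) with
% target \theta_A \theta_B. Then:
\begin{align*}
\varphi \text{ satisfiable}
  &\;\Rightarrow\; A(x) \le \theta_A \text{ and } B(y) \le \theta_B
  \;\Rightarrow\; C(x, y) = A(x)\,B(y) \le \theta_A \theta_B, \\
\varphi \text{ unsatisfiable}
  &\;\Rightarrow\; A(x) > \alpha\theta_A \text{ and } B(y) > \beta\theta_B
  \;\Rightarrow\; C(x, y) = A(x)\,B(y) > \alpha\beta\,\theta_A\theta_B.
\end{align*}
```

So the combined reduction has gap $\alpha\beta$, which is exactly the multiplicative property.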
You can also consider a more general notion of hardness: if there is a polynomial time $\alpha$-approximation algorithm for A, then P=NP. Even under this notion, an $\alpha$-hardness for A translates to an $\alpha^2$-hardness for the squared problem $A^2$ (A paired with itself). Indeed, given an instance of A, duplicate it and pass the pair to the assumed $\alpha^2$-approximation algorithm for $A^2$. The returned solution with the smaller objective value is an $\alpha$-approximation, since $\min(v_1, v_2)^2 \le v_1 v_2 \le \alpha^2 \cdot OPT^2$.
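The "duplicate the instance and take the smaller solution" argument can be checked numerically (an illustrative sketch only; the function name and the sample values are made up for this example, and the pairs stand in for outputs of a hypothetical $\alpha^2$-approximation on a duplicated instance):

```python
# Toy check: if a pair of objective values (v1, v2) satisfies
# v1 * v2 <= alpha^2 * OPT^2 (i.e. the pair is an alpha^2-approximation
# for the duplicated instance), then the smaller value is within
# alpha * OPT, because min(v1, v2)^2 <= v1 * v2.

def smaller_is_alpha_approx(v1, v2, alpha, opt):
    # Precondition: the pair really is an alpha^2-approximation.
    assert v1 * v2 <= (alpha * opt) ** 2, "not an alpha^2-approximation"
    return min(v1, v2) <= alpha * opt

# Sample value pairs with alpha = 2 and OPT = 10, so products <= 400:
examples = [(10, 40), (25, 16), (12, 30), (20, 20)]
print(all(smaller_is_alpha_approx(v1, v2, alpha=2, opt=10)
          for v1, v2 in examples))  # prints True
```

Note that the bound is tight: the pair $(20, 20)$ has product exactly $\alpha^2 \cdot OPT^2 = 400$, and its smaller value is exactly $\alpha \cdot OPT = 20$.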
Context
StackExchange Computer Science Q#69348, answer score: 3