Teaching NP-completeness - Turing reductions vs Karp reductions
Tags: completeness, teaching, reductions, turing, karp
Problem
I'm interested in the question of how best to teach NP-completeness to computer science majors. In particular, should we teach it using Karp reductions or using Turing reductions?
I feel that the concepts of NP-completeness and reductions are something that every computer science major ought to learn. However, when teaching NP-completeness, I've noticed that the use of Karp reductions has some downsides.
First of all, Karp reductions seem to be unnecessarily confusing for some students. The intuitive notion of a reduction is "if I have an algorithm to solve problem X, then I can use it to solve problem Y, too". That's very intuitive -- but it maps much better to Turing reductions than to Karp reductions. As a result, I see students who are trying to prove NP-completeness get led astray by their intuition and form an incorrect proof. Trying to teach both kinds of reductions and emphasizing this aspect of Karp reductions sometimes feels a little bit like needless formalism and takes up unnecessary class time and student attention on what feels like an inessential technical detail; it's not self-evident why we use this more restricted notion of reduction.
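To make the contrast concrete, here is a minimal sketch (with a brute-force decider standing in for the oracle; the function names are illustrative, not a standard API) of the classic Karp reduction from INDEPENDENT SET to VERTEX COVER: transform the instance once, ask once, and pass the yes/no answer through unchanged.

```python
from itertools import combinations

def has_vertex_cover(n, edges, k):
    """Brute-force VERTEX COVER decider, standing in for an oracle:
    does the graph have a vertex cover of size at most k?"""
    return any(all(u in c or v in c for u, v in edges)
               for size in range(k + 1)
               for c in combinations(range(n), size))

def has_independent_set(n, edges, k):
    """Karp (many-one) reduction: G has an independent set of size >= k
    iff G has a vertex cover of size <= n - k (the complement of an
    independent set is a vertex cover). Note the shape: one transformed
    instance, one oracle call, answer returned verbatim."""
    return has_vertex_cover(n, edges, n - k)
```

A Turing reduction would instead be any polynomial-time algorithm free to make many oracle calls and post-process the answers arbitrarily; the Karp discipline above, a single pass-through call, is the extra restriction students must internalize.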
I do understand the difference between Karp reductions and Turing (Cook) reductions, and how they lead to different notions of NP-completeness. I realize that Karp reductions give us a finer granularity of distinctions between complexity classes. So, for serious study of complexity theory, Karp reductions are obviously the right tool. But for computer science students who are just learning this and are never going to go into complexity theory, I'm uncertain whether this finer distinction is critical for them to be exposed to.
Finally, as a student, I remember feeling puzzled when I ran across a problem like "tautology" -- e.g., given a boolean formula, check whether it is a tautology. What was confusing was that this problem is clearly hard: any polynomial-time algorithm for it would imply that P = NP. Yet, because tautology is coNP-complete rather than NP-complete, there is no known way to show it NP-hard via a Karp reduction, even though it reduces to SAT trivially in the Turing sense.
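The tautology example can be sketched directly (again with a brute-force stand-in for the SAT oracle; the representation of formulas as boolean functions is just for illustration): a formula is a tautology exactly when its negation is unsatisfiable, which gives a one-call Turing reduction whose final step, complementing the oracle's answer, is precisely what a Karp reduction is forbidden to do.

```python
from itertools import product

def is_satisfiable(formula, num_vars):
    """Brute-force SAT oracle stand-in: formula is a Python function
    taking num_vars booleans and returning a boolean."""
    return any(formula(*bits)
               for bits in product([False, True], repeat=num_vars))

def is_tautology(formula, num_vars):
    """Turing (Cook) reduction of TAUTOLOGY to SAT: ask whether the
    negation is satisfiable, then flip the answer. The answer-flip
    makes this a Turing reduction but not a Karp reduction."""
    return not is_satisfiable(lambda *bits: not formula(*bits), num_vars)
```

This is why, under Turing reductions, tautology is "SAT-hard" even though no Karp reduction from SAT is known.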
Solution
I would say very definitely teach using Karp (many-one) reductions. Regardless of the benefits of using poly-time Turing reductions (Cook), Karp reductions are the standard model.
Everybody uses Karp and the main pitfall of teaching Cook is that you'll end up with a whole class of students who become pathologically confused whenever they read a textbook or try to discuss the subject with anyone who wasn't taught by you.
I agree that Cook reductions are in several ways more sensible and that there's no distinction between NP-hardness and coNP-hardness in practical terms, in the sense that they both mean "This problem is pretty hard and you're not going to get a general, efficient, exact algorithm that can cope with large instances." On the other hand, the distinction between NP and coNP isn't entirely an artifact of a theory based on Karp reductions: you don't often talk about graphs that are non-3-colourable or in which every set of $k$ vertices contains at least one edge. Somehow, the "natural" version of the problem often seems to be in NP rather than coNP.
Context
StackExchange Computer Science Q#16386, answer score: 12