HiveBrain v1.2.0

Why did Google not use an NP problem for their quantum supremacy experiment?

Submitted by: @import:stackexchange-cs

Problem

Reading discussions of the recent quantum supremacy experiment by Google, I noticed that a lot of time and effort (in the experiment itself, but also in the excellent blog posts by Scott Aaronson and others explaining the results) is spent on verifying that the quantum computer did indeed compute the thing we believe it to have computed.

From a naive point of view this is completely understandable: the essence of any quantum supremacy experiment is that you have the quantum computer perform a task that is hard for a classical computer to achieve, so surely it would also be hard for the classical computer to verify that the quantum computer did complete the task we gave it, right?

Well, no. About the first thing you learn when starting to read blogs or talk to people about computational complexity is that, counter-intuitive as it may seem, there exist problems that are hard to solve but for which it is easy to verify the validity of a given solution: the so-called NP problems.

Thus it seems that Google could have saved themselves and others a lot of time by using one of these problems for their quantum supremacy experiment rather than the one they did. So my question is why didn't they?

An answer for the special case of the NP problem factoring is given in this very nice answer to a different question: https://cs.stackexchange.com/a/116360/26301. Paraphrasing: the regime where the quantum algorithm starts to outperform the best known classical algorithm begins at a point that requires more than the 53 qubits currently available.

So my follow-up question is: does this answer for the special case extend to all NP problems where quantum speedups are expected, or is it specific to factoring? And if it does extend: is there a fundamental reason, related to the nature of NP, why quantum supremacy 'kicks in later' for NP problems than for sampling problems, or is it just that better classical algorithms are available for NP problems because they are more famous?

Solution

there exist problems that are hard to solve, but for which it is easy to verify the validity of a given solution: the so called NP problems.

This statement is wrong. There are many NP problems which are easy to solve. "NP" simply means "easy to verify". It does not mean hard to solve.

What you are probably thinking of is the NP-complete problems, a subset of the NP problems for which we have very, very good evidence that they are hard. However, quantum computers are not expected to be able to solve NP-complete problems significantly more "easily" than classical computers.
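To make "NP simply means easy to verify" concrete, here is a minimal sketch (in Python, with hypothetical names) of a verifier for SAT, the canonical NP-complete problem: checking a claimed satisfying assignment takes time linear in the formula size, even though finding one is believed to be hard.

```python
def verify_sat(clauses, assignment):
    """Check a claimed satisfying assignment for a CNF formula.

    clauses: list of clauses, each a list of ints where +k means
             variable k and -k means NOT variable k.
    assignment: dict mapping variable number -> bool.

    Runs in time linear in the formula size. This efficient
    verification is exactly what places SAT in NP.
    """
    for clause in clauses:
        # A clause is satisfied if at least one literal is true.
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is unsatisfied
    return True

# (x1 OR NOT x2) AND (x2 OR x3)
clauses = [[1, -2], [2, 3]]
print(verify_sat(clauses, {1: True, 2: True, 3: False}))   # True
print(verify_sat(clauses, {1: False, 2: True, 3: False}))  # False
```

Note the asymmetry: the verifier never searches; it only checks the witness it was handed.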

Factoring is also thought to be hard, but the evidence for this is only "very good" and not "very, very good" (in other words: factoring is likely not NP-complete). Factoring is one of very few natural problems believed to lie in between: neither NP-complete nor easy.
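Factoring illustrates the same verification asymmetry: a claimed factorization can be checked with a few multiplications, while finding it is believed to be classically hard. A sketch (function name is illustrative, not from any library):

```python
def verify_factorization(n, factors):
    """Verify a claimed factorization of n.

    Each claimed factor must be a nontrivial divisor, and the
    product of all factors must equal n. Verification is just
    multiplication (polynomial time), even though *finding* the
    factors of a large n is believed to be hard classically.
    """
    if any(f <= 1 or f >= n for f in factors):
        return False  # reject trivial "factors" like 1 and n
    product = 1
    for f in factors:
        product *= f
    return product == n

print(verify_factorization(15, [3, 5]))   # True
print(verify_factorization(15, [2, 7]))   # False
```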

The list of problems that we know are easy to verify, easy to solve on a quantum computer, but hard classically is even shorter. In fact, I do not know of any problem other than factoring (and the very closely related discrete logarithm problem) with this property.

Moreover, any easy-to-verify problem would likely have the same issue as factoring: $53$ qubits is not that many, and $2^{53}$ is huge but just within reach of classical computing. $2^{53}$ is less than $10^{16}$, and most classical computers can execute on the order of $10^9$ operations per second. We could run through all possibilities in about a third of a year on a single classical desktop computer.
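The back-of-the-envelope arithmetic above can be checked directly (assuming the stated rate of $10^9$ operations per second):

```python
ops = 2 ** 53            # brute-force search space for 53 qubits
rate = 10 ** 9           # ~1e9 ops/sec on a desktop CPU (assumed)
seconds_per_year = 365 * 24 * 3600

years = ops / rate / seconds_per_year
print(ops)    # 9007199254740992, just under 1e16
print(years)  # roughly 0.29, i.e. about a third of a year
```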

Quantum computers have very few applications which they're known to be good at, and are essentially useless for most hard NP problems.

Context

StackExchange Computer Science Q#116408, answer score: 74
