Use of Big O Notation in a recent paper by Khot et al
Problem
I'm reading a paper about Constraint Satisfaction Problems, specifically "A Characterization of Strong Approximation Resistance" by Subhash Khot, Madhur Tulsiani, and Pratik Worah (ECCC TR13-075).
The following sentence appears in the third paragraph of the very first page:
"... the fraction of satisfied constraints is at least $\rho(f) + \Omega(1)$"
At the bottom of page two we have a similar notation where GapCSP(f)$_{1-o(1), \rho(f) + o(1)}$ is defined.
I find it confusing where the "limit" is being taken here. Put another way: which quantity is varying inside the asymptotic notation?
My guess is that the limit is taken as the number of constraints goes to infinity, but the paper (page 2, paragraph 2) specifically mentions fixed CSP instances in which a $1 - o(1)$ fraction of constraints is satisfied. What could be varying within a fixed instance?
Solution
In general, when you see big-O notation in an algorithms paper, the default assumption is that the quantity varying is the size of the input. Here the input size is $n+m$: the number of variables plus the number of constraints.
In this case the claim is valid if you think of it in terms of $m$, the number of constraints.
The claim that is being made here is a very basic claim. It's basically a consequence of the fact that $\text{Var}[X] \to 0$ as $m \to \infty$, where the random variable $X$ is the fraction of constraints satisfied by a random assignment. (Heuristically, if you were willing to assume independence, it would follow from the Central Limit Theorem. In this case, you cannot assume independence, but a similar result follows nonetheless.)
If you are not sure how to prove this claim, or were unable to recognize this as a fairly standard concentration result, then you're probably going to find the rest of the paper very difficult going, and you might need to start by reading more basic textbooks and papers before diving into this paper.
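The concentration phenomenon is easy to see empirically. Below is a minimal sketch (not from the paper): it uses the 3-ary OR predicate, for which $\rho(f) = 7/8$, draws constraints over randomly chosen variables, and measures the fraction satisfied by a uniformly random assignment. As $m$ grows, the samples of $X$ cluster ever more tightly around $\rho(f)$. The function name and experimental parameters are illustrative choices, not anything defined in the paper.

```python
import random
import statistics

def satisfied_fraction(n, m, rng):
    """Fraction of m random 3-OR constraints over n Boolean variables
    that are satisfied by one uniformly random assignment.

    Each constraint picks 3 distinct variables and requires at least
    one of them to be True, so a random assignment satisfies it with
    probability rho(f) = 7/8.
    """
    assignment = [rng.random() < 0.5 for _ in range(n)]
    constraints = [rng.sample(range(n), 3) for _ in range(m)]
    satisfied = sum(any(assignment[i] for i in c) for c in constraints)
    return satisfied / m

rng = random.Random(0)
for m in (100, 10_000):
    # Resample the random assignment many times and look at the spread of X.
    samples = [satisfied_fraction(n=2 * m, m=m, rng=rng) for _ in range(200)]
    print(f"m={m:6d}  mean={statistics.mean(samples):.4f}  "
          f"stdev={statistics.stdev(samples):.4f}")
```

The mean stays near $7/8$ at every scale, while the standard deviation shrinks roughly like $1/\sqrt{m}$, which is exactly the $\text{Var}[X] \to 0$ behavior the answer appeals to. (The constraints are not fully independent, since they can share variables, but the concentration survives, as the answer notes.)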
Context
StackExchange Computer Science Q#31867, answer score: 3