Why does Karger's algorithm work "with high probability"
Problem
I'm reading Karger 1993's "Global Min-cuts in RNC, and Other Ramifications of a Simple Min-Cut Algorithm" (link).
It states that a single round of contractions yields a min-cut with probability $\Omega(n^{-2})$ and concludes that $O(n^2 \log n)$ independent contractions are sufficient to find a min-cut with high probability.
It makes sense to me that $O(n^2)$ independent contractions yield a min-cut with probability approximately 1. Anything above this increases the probability, but why was $\log n$ chosen?
Solution
Let us say that the algorithm succeeds if it produces a min-cut. First, it is proved that a single run of the algorithm succeeds with probability $\Omega(n^{-2})$, i.e. with probability $\ge \frac{c}{n^2}$ for some constant $c>0$.
This is, of course, very bad: for large $n$ the success probability of a single run tends to zero. However, if we run the algorithm independently $k$ times and keep the smallest cut found, then we fail only if every single run fails, so the failure probability is $\left(1-\frac{c}{n^2}\right)^k$. Using the inequality $1-x\le e^{-x}$, we have:
$\Pr\left[\text{failure}\right]\le \left(1-\frac{c}{n^2}\right)^k\le e^{-\frac{kc}{n^2}}.$
Taking $k=n^2\log n$ we get:
$\Pr\left[\text{failure}\right]\le e^{-c\log n} = \frac{1}{n^c},$
which tends to zero, as opposed to our failure probability after a single run of the algorithm.
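To see numerically why the $\log n$ factor matters, here is a small sketch (with $c=1$ assumed for illustration) comparing the failure bound $\left(1-\frac{c}{n^2}\right)^k$ for $k=n^2$ versus $k=n^2\log n$. With only $n^2$ runs the bound levels off at the constant $e^{-c}$, while the extra $\log n$ factor drives it to zero like $n^{-c}$:

```python
import math

def failure_bound(n, k, c=1.0):
    """Upper bound (1 - c/n^2)^k on the probability that all k runs fail."""
    return (1 - c / n**2) ** k

n = 100
# k = n^2 runs: the bound approaches e^{-c}, a constant -- not "high probability".
constant_runs = failure_bound(n, n**2)
# k = n^2 log n runs: the bound is about n^{-c}, which vanishes as n grows.
log_runs = failure_bound(n, n**2 * math.log(n))
```

For $n=100$ this gives roughly $0.368\approx e^{-1}$ for the first bound but about $0.01\approx 1/n$ for the second, which is exactly the distinction between "constant success probability" and "success with high probability".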
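Putting the pieces together, here is a minimal sketch of the repeated-contraction scheme: one run of the basic contraction algorithm (merge endpoints of a random edge until two super-nodes remain), repeated $k=\lceil n^2\log n\rceil$ times, keeping the smallest cut. This is an illustrative implementation of the contraction step, not the parallel RNC construction from Karger's paper:

```python
import math
import random

def contract(edges, n):
    """One run of the contraction algorithm on a graph with vertices 0..n-1:
    repeatedly merge the endpoints of a uniformly random crossing edge until
    two super-nodes remain; return the number of edges crossing the cut."""
    parent = list(range(n))  # union-find over the vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    remaining = n
    while remaining > 2:
        u, v = random.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:          # skip edges internal to a super-node
            parent[ru] = rv   # contract the edge
            remaining -= 1
    # Edges whose endpoints sit in different super-nodes form the cut.
    return sum(1 for u, v in edges if find(u) != find(v))

def karger_min_cut(edges, n):
    """Run the contraction k = ceil(n^2 log n) times and keep the smallest
    cut found, so the failure probability is at most n^{-c} as derived above."""
    k = math.ceil(n * n * math.log(n))
    return min(contract(edges, n) for _ in range(k))
```

For example, on two triangles joined by a single bridge edge (min cut 1), `karger_min_cut` finds the bridge with probability at least $1-n^{-c}$; each individual run may return a larger cut, which is exactly why the repetition is needed.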
Context
StackExchange Computer Science Q#57164, answer score: 6