HiveBrain v1.2.0

Proof for boosting success probability of a random algorithm with binary output

Submitted by: @import:stackexchange-cs

Problem

There is a theorem stating that, given a randomized algorithm with binary output whose success probability is $\geq 2/3$, you can always construct another algorithm that solves the same problem with success probability $\geq 1-\delta$ by simply repeating the original algorithm $\mathcal{O}(1/\log\delta)$ times and taking a majority vote (stated in this YouTube video at 29:14).

Is there a (preferably simple) proof for this?

Edit: As remarked by Nathaniel, there is a typo in my question: it should read $\mathcal{O}(\log(1/\delta))$. (I left the typo above so that the last part of his answer still makes sense in context.)
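For concreteness, the majority-vote construction described above can be sketched in a few lines of Python. This is a minimal simulation, not code from the question: `majority_boost` and `noisy_base` are hypothetical names, and `noisy_base` stands in for an arbitrary randomized algorithm that answers correctly with probability $2/3$.

```python
import random
from collections import Counter

def majority_boost(base_algorithm, n):
    """Run a randomized binary-output algorithm n times and
    return the answer that occurs most often (n odd avoids ties)."""
    votes = Counter(base_algorithm() for _ in range(n))
    return votes.most_common(1)[0][0]

# Hypothetical base algorithm: returns the correct answer (True)
# with probability 2/3, and the wrong answer otherwise.
def noisy_base():
    return random.random() < 2 / 3

random.seed(0)
trials = 2000
single = sum(noisy_base() for _ in range(trials)) / trials
boosted = sum(majority_boost(noisy_base, 41) for _ in range(trials)) / trials
print(f"single-run success ~ {single:.3f}, boosted (n=41) ~ {boosted:.3f}")
```

With 41 repetitions the empirical success rate jumps from roughly $2/3$ to well above $0.95$, which is the behavior the theorem predicts.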

Solution

The proof I know of uses a weak version of the Chernoff bound:

If $X_1, X_2, \dots, X_n$ are independent Bernoulli random variables with the same parameter $p$, and $X = \sum\limits_{i=1}^nX_i$, then:
$$\mathbb{P}(X \geqslant n/2)\leqslant \left(\frac{1+p}{\sqrt{2}}\right)^n$$
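This bound is easy to sanity-check numerically against the exact binomial tail. A small Python sketch (the function names are mine, not from the answer):

```python
from math import comb, sqrt

def exact_tail(n, p):
    """Exact P(X >= n/2) for X ~ Binomial(n, p)."""
    lo = -(-n // 2)  # smallest integer k with k >= n/2
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(lo, n + 1))

def weak_chernoff(n, p):
    """The bound ((1+p)/sqrt(2))**n from the statement above."""
    return ((1 + p) / sqrt(2)) ** n

for n in (5, 15, 45):
    t, b = exact_tail(n, 1 / 3), weak_chernoff(n, 1 / 3)
    print(f"n={n:2d}: exact tail = {t:.4f} <= bound = {b:.4f}")
```

The bound is loose for small $n$, but since $\frac{1+p}{\sqrt 2} < 1$ whenever $p < \sqrt 2 - 1$, it decays geometrically in $n$, which is all the proof needs.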

The idea is the following:

  • repeat the algorithm $n$ times;

  • for the $i$-th call of the algorithm, denote $X_i = \left\{\begin{array}{ll}0 & \text{if the result is correct}\\1&\text{otherwise.} \end{array}\right.$
These $(X_i)$ form a sequence of independent Bernoulli random variables with the same parameter $p\leqslant \frac13$ (because the probability of failure is at most one third).

The majority vote fails exactly when $\sum\limits_{i=1}^nX_i \geqslant \frac{n}2$. Using the Chernoff bound with $p \leqslant \frac13$, the probability of failure is at most:

$$\left(\frac{1+1/3}{\sqrt{2}}\right)^n = \left(\frac{4}{3\sqrt2}\right)^n$$

Now for the probability of failure to be less than $\delta$, it suffices that:

$$\begin{align*}\left(\frac{4}{3\sqrt2}\right)^n\leqslant \delta &\Leftrightarrow n\log_2\left(\frac{4}{3\sqrt2}\right) \leqslant \log_2 \delta\\
&\Leftrightarrow n\geqslant \frac{\log_2\delta}{\log_2\left(\frac{4}{3\sqrt2}\right)}\end{align*}$$

Note that the last inequality is reversed, because $\log_2\left(\frac{4}{3\sqrt2}\right)$ is negative.
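Solving the last inequality for the smallest valid integer $n$ is a one-liner. A quick Python check (`repetitions_needed` is my name for it, not from the answer):

```python
from math import ceil, log2, sqrt

def repetitions_needed(delta):
    """Smallest n with (4/(3*sqrt(2)))**n <= delta, i.e. the number
    of repetitions the derivation above asks for."""
    ratio = log2(4 / (3 * sqrt(2)))   # negative, about -0.085
    return ceil(log2(delta) / ratio)  # dividing flips the inequality

for delta in (1e-2, 1e-4, 1e-8):
    print(f"delta = {delta:g}: n = {repetitions_needed(delta)}")
```

Squaring $\delta$ (e.g. going from $10^{-4}$ to $10^{-8}$) roughly doubles $n$, which is the $\mathcal{O}(\log\frac1\delta)$ growth stated in the conclusion.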

In conclusion, one needs to repeat the algorithm $\mathcal{O}\left(\log \frac1{\delta}\right)$ times to bring the probability of failure below $\delta$. Note that this is different from the $\frac{1}{\log \delta}$ you asked about, but since that quantity is negative when $\delta < 1$, I think it was a mistake.

Context

StackExchange Computer Science Q#167131, answer score: 10
