Understanding Monte Carlo Probabilities
Problem
I am trying to get a good grasp on Monte Carlo (MC) algorithms, but I feel I am missing something fundamental.
What I don't understand is how an MC algorithm improves its confidence in giving the correct solution by running more executions.
Please correct me if I am wrong: assume that $Pr[\text{false}] \leq 0.5$, and that we want to find the number of executions needed so that $Pr[\text{false}] \leq 0.125$, thus improving confidence.
Is it true that the number of needed executions is $3$, since $0.5^3=0.125$?
Thanks in advance...
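For reference, the arithmetic in the question can be checked directly. A minimal sketch (the helper name `runs_needed` is my own, and it assumes independent runs with per-run failure probability at most `p_fail`):

```python
def runs_needed(p_fail, target):
    """Smallest k such that p_fail**k <= target, assuming independent runs."""
    k = 0
    bound = 1.0
    while bound > target:
        bound *= p_fail  # each extra run multiplies the failure bound by p_fail
        k += 1
    return k

print(runs_needed(0.5, 0.125))  # 3, since 0.5**3 == 0.125
```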
Solution
This is the idea of probability amplification, in which one improves the probability of correctness of the algorithm by running it multiple times. You are talking about Monte Carlo algorithms with one-sided error. Let's take an example (from Wikipedia) with exactly this feature:
The Solovay–Strassen primality test is used to determine whether a
given number is a prime number. It always answers true for prime
number inputs; for composite inputs, it answers false with probability
at least 1/2 and true with probability at most 1/2. Thus, false
answers from the algorithm are certain to be correct, whereas the true
answers remain uncertain; this is said to be a (1/2)-correct
false-biased algorithm.
The probability that the above algorithm returns an incorrect answer is thus at most 1/2. Now, the key idea is that successive runs of the algorithm produce answers that are independent of each other; hence, on running it twice, the probability that the algorithm returns an incorrect answer both times is at most $(\frac{1}{2})^2$, and after $k$ iterations the probability of an incorrect answer is at most $(\frac{1}{2})^k$. Thus, if we adopt the scheme that returns false if any of the first $k$ runs outputs false, and returns true otherwise, then the probability that our scheme returns an incorrect answer is at most $(\frac{1}{2})^k$, which can be driven down toward $0$ by taking $k$ large.
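As a concrete illustration, here is the amplification scheme in Python: run the one-sided test $k$ times and answer true only if every run answers true. This is a minimal sketch, not production code; the helper names `jacobi`, `solovay_strassen_round`, and `is_probably_prime` are my own.

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via the standard binary algorithm."""
    a %= n
    t = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a  # quadratic reciprocity: swap, flip sign if both are 3 mod 4
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

def solovay_strassen_round(n):
    """One round: True = 'probably prime', False = 'definitely composite'.
    For prime n this always returns True (Euler's criterion), so a False
    answer is certain: the test is false-biased with one-sided error."""
    a = random.randrange(2, n)
    x = jacobi(a, n)
    return x != 0 and pow(a, (n - 1) // 2, n) == x % n

def is_probably_prime(n, k):
    """Amplification: answer False as soon as any of the k runs says
    composite; answer True only if all k independent runs say prime.
    The probability of a wrong True is then at most (1/2)**k."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    return all(solovay_strassen_round(n) for _ in range(k))
```

For example, `is_probably_prime(221, 30)` returns False with error probability at most $2^{-30}$, even though a single round on the composite $221 = 13 \cdot 17$ may answer true with probability up to 1/2.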
Context
StackExchange Computer Science Q#25793, answer score: 4