HiveBrain v1.2.0

Random generator considerations in the design of randomized algorithms

Submitted by: @import:stackexchange-cs

Problem

It is well known that the efficiency of randomized algorithms (at least those in BPP and RP) depends on the quality of the random generator used. Perfect random sources are unavailable in practice. Although it has been proved that the identities BPP = $\delta$-BPP and RP = $\delta$-RP hold for all $0 < \delta \leq \frac{1}{2}$, the original algorithm designed for a perfect random source cannot be used directly with a $\delta$-random source. Instead, some simulation has to be done. This simulation is polynomial, but the resulting algorithm is not as efficient as the original one.
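To make the simulation overhead concrete, here is a minimal sketch of the classical von Neumann debiasing trick, which turns an i.i.d. biased coin into perfectly unbiased bits. (A true $\delta$-source is more adversarial than an independent biased coin, so this is only an illustration of the general point: the simulation works, but each output bit costs several source bits. The function names are mine, not from any standard library.)

```python
import random

def biased_bit(p=0.3):
    """A toy stand-in for a weak random source: an independent coin
    with bias p. (Only an illustration; a delta-source is worse.)"""
    return 1 if random.random() < p else 0

def von_neumann_bit(source):
    """Von Neumann debiasing: draw pairs of source bits until the two
    bits differ, then output the first one. For an i.i.d. biased coin
    this output is exactly unbiased, but each output bit consumes
    1/(2p(1-p)) source bits on average -- the efficiency loss of the
    simulation mentioned above."""
    while True:
        a, b = source(), source()
        if a != b:
            return a

bits = [von_neumann_bit(biased_bit) for _ in range(10000)]
print(sum(bits) / len(bits))  # close to 0.5 despite the biased source
```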

Moreover, to my knowledge, the random generators used in practice are usually not even $\delta$-sources, but pseudo-random sources that can behave extremely badly in the worst case.

According to Wikipedia:


In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior.

In fact, the implementations of randomized algorithms that I have seen so far were mere implementations of the algorithms designed for perfect random sources, run using pseudorandom sources.
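A typical instance of this practice is the Miller-Rabin primality test: its error bound is proved assuming uniformly random witnesses, yet implementations routinely draw them from a pseudorandom generator (here Python's Mersenne Twister) and simply trust that the bound still applies. A minimal sketch:

```python
import random

def miller_rabin(n, rounds=20, rng=random):
    """Standard Miller-Rabin compositeness test (an RP-style algorithm).
    The analysis assumes the witnesses a are uniformly random; here they
    come from a pseudorandom source, exactly the common practice the
    question asks about."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)   # pseudorandom witness
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True  # probably prime; error at most ~4^-rounds in theory

print(miller_rabin(2**61 - 1))  # a Mersenne prime -> True
print(miller_rabin(2**61 + 1))  # composite -> False
```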

My question is whether there is any justification for this common practice. Is there any reason to expect that in most cases the algorithm will return a correct result (with the probabilities as in BPP resp. RP)? How can the "approximation" mentioned in the quotation from Wikipedia be formalized? Can the deviation mentioned be somehow estimated, at least in the expected case? Is it possible to argue that a Monte-Carlo randomized algorithm run on a perfect random source will turn into a well-behaved stochastic algorithm when run on a pseudorandom source? Or are there any other similar considerations?

Solution

"It is well known that the efficiency of randomized algorithms (at least those in BPP and RP) depends on the quality of the random generator used." I have to disagree. What a good pseudorandom sequence ensemble gives you is a guarantee on the performance of the algorithm run on a random sequence from the ensemble, compared to a truly random sequence. If you don't have such a guarantee, you can't conclude that the algorithm will work poorly; you just don't know.

"Is there any justification for this common practice?" It works.
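"It works" can at least be observed empirically: running the same Monte-Carlo algorithm on a pseudorandom source and on the operating system's entropy pool produces statistically indistinguishable results. A small sketch using a Monte-Carlo estimate of $\pi$ (the function name and sample size are my own choices for illustration):

```python
import random

def estimate_pi(rng, n=200_000):
    """Monte-Carlo estimate of pi: the fraction of random points in the
    unit square that land inside the quarter circle, times 4."""
    inside = sum(rng.random()**2 + rng.random()**2 <= 1.0
                 for _ in range(n))
    return 4 * inside / n

# The same algorithm run on a pseudorandom source (Mersenne Twister)
# and on the OS entropy source; in practice the two estimates are alike.
print(estimate_pi(random.Random(0)))
print(estimate_pi(random.SystemRandom()))
```

Of course, such an experiment says nothing about the worst case; it only illustrates why practitioners are comfortable with the substitution.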

"Is it possible to argue that a Monte-Carlo randomized algorithm run on a perfect random source will turn into a well-behaved stochastic algorithm when run on a pseudorandom source?" Complexity theory suffers from two shortcomings. The first is that it is very hard to prove anything (for example, P vs. NP is open). The second is that it is mostly concerned with worst-case analysis. Put together, these two limitations rule out complexity theory as a good model for the behavior of algorithms in practice.

Context

StackExchange Computer Science Q#11726, answer score: 2

Revisions (0)

No revisions yet.