Example of an algorithm where a low-order term dominates the runtime for any practical input?
Problem
Big-O notation hides constant factors, so some $O(n)$ algorithms exist that are infeasible for any reasonable input size because the coefficient on the $n$ term is so huge.
Are there any known algorithms whose runtime is $O(f(n))$ but with some low-order $o(f(n))$ term that is so huge that for reasonable input sizes it completely dominates the runtime? I'd like to use an algorithm like this as an example in an algorithms course, as it gives a good reason why big-O notation isn't everything.
Thanks!
Solution
Cryptography is an example, if a degenerate one. For example, breaking AES encryption is $O(1)$: all you have to do is find the right key among a finite number of candidates, $2^{128}$, $2^{192}$, or $2^{256}$ depending on the key size (assume that enough of the plaintext is known to determine the key unambiguously). However, even $2^{128}$ operations would take all of the computers in the world today (a billion or thereabouts, each doing about a billion operations per second) more than the lifetime of the universe (about a billion billion seconds).
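The arithmetic is worth spelling out. Here is a back-of-the-envelope check using the round numbers above (a billion machines, a billion operations per second each, "a billion billion seconds" for the lifetime of the universe); these are order-of-magnitude figures, not exact values:

```python
# Rough figures from the text above, all order-of-magnitude estimates.
machines = 10**9                 # ~a billion computers
ops_per_machine_per_s = 10**9    # ~a billion operations each per second
total_ops_per_s = machines * ops_per_machine_per_s   # 10**18 ops/s

keyspace = 2**128                # smallest AES key size
seconds_needed = keyspace // total_ops_per_s

universe_lifetime_s = 10**18     # "a billion billion seconds"
print(seconds_needed // universe_lifetime_s)   # → 340 universe lifetimes
```

So even with the most optimistic round numbers, the "constant-time" key search takes hundreds of lifetimes of the universe.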
A slightly different way to illustrate why big-O isn't everything is to remark that we sometimes use a different algorithm for small input sizes. For example, take quicksort. With the right choice of pivot (which is a tricky business!), it's $O(n \lg n)$. Quicksort operates by divide-and-conquer: every instance involves doing a lot of sorting of small arrays. For small arrays, quadratic methods such as insertion sort perform better. So for best performance, a quicksort of a large array involves a lot of runs of insertion sort for small sizes.
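The hybrid approach can be sketched in a few lines. This is a minimal illustration, not production code: the cutoff of 16 is an assumed, illustrative threshold (real libraries tune it empirically), and the median-of-three pivot is one simple stand-in for the "tricky business" of pivot choice:

```python
CUTOFF = 16  # assumed threshold below which insertion sort takes over

def insertion_sort(a, lo, hi):
    """Sort a[lo..hi] in place; quadratic, but very fast on tiny ranges."""
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def quicksort(a, lo=0, hi=None):
    """Quicksort that hands small subarrays off to insertion sort."""
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        if hi - lo < CUTOFF:            # small range: switch algorithms
            insertion_sort(a, lo, hi)
            return
        # Median-of-three pivot: a cheap hedge against bad pivot choices.
        mid = (lo + hi) // 2
        if a[mid] < a[lo]: a[lo], a[mid] = a[mid], a[lo]
        if a[hi] < a[lo]:  a[lo], a[hi] = a[hi], a[lo]
        if a[hi] < a[mid]: a[mid], a[hi] = a[hi], a[mid]
        pivot = a[mid]
        i, j = lo, hi
        while i <= j:                   # Hoare-style partition
            while a[i] < pivot: i += 1
            while a[j] > pivot: j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        quicksort(a, lo, j)             # recurse on one half,
        lo = i                          # loop on the other (bounds stack depth)
```

Profiling a run on a large array would show most calls ending in `insertion_sort`: exactly the point that the asymptotically "worse" algorithm does much of the real work.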
Context
StackExchange Computer Science Q#14823, answer score: 2