Tip · principle · javascript
Big O notation: O(n log n) is the ceiling for comparison-based sorting
big o, time complexity, space complexity, asymptotic, performance analysis, algorithm complexity
Problem
Developers often misread Big O, treating O(n log n) and O(n^2) as similarly slow, or assuming O(1) is always fast. This leads to wrong data structure choices and missed optimization opportunities.
Solution
Internalize practical Big O tiers:
- O(1) constant — array index, hash lookup
- O(log n) — binary search, balanced BST
- O(n) — single pass, linear scan
- O(n log n) — merge sort, heap sort
- O(n^2) — nested loops, bubble sort
- O(2^n) — recursive subset enumeration
- O(n!) — permutations

For n = 1000: O(n log n) ≈ 10,000 ops, while O(n^2) = 1,000,000 ops. The gap grows fast as n grows.
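The n = 1000 comparison is easy to check directly. A minimal sketch (operation counts are illustrative, assuming log base 2):

```javascript
// Rough operation counts for n = 1000 — these are growth-rate estimates,
// not measured instruction counts.
const n = 1000;
const nLogN = Math.round(n * Math.log2(n)); // ≈ 9,966 "ops"
const nSquared = n * n;                     // 1,000,000 "ops"

// The quadratic cost is roughly 100x larger at n = 1000,
// and the ratio keeps widening: at n = 1e6 it is ~50,000x.
console.log(nLogN, nSquared, (nSquared / nLogN).toFixed(1));
```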
Why
Big O describes growth rate, not absolute speed. An O(n^2) algorithm with small constant factors can beat an O(n log n) one for small n. Always consider the actual n your code sees in production.
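This is why production sort implementations typically fall back to insertion sort for short runs despite its O(n^2) worst case: its constant factors are tiny and it is cache-friendly. A sketch of the small-n workhorse (the cutoff value below is illustrative, not tuned):

```javascript
// Insertion sort: O(n^2) comparisons, but minimal overhead per step.
// Hybrid sorts switch to it below some small cutoff (e.g. ~16-32 elements).
const SMALL_N_CUTOFF = 16; // illustrative threshold, not a measured optimum

function insertionSort(arr) {
  for (let i = 1; i < arr.length; i++) {
    const key = arr[i];
    let j = i - 1;
    // Shift larger elements right until key's slot is found.
    while (j >= 0 && arr[j] > key) {
      arr[j + 1] = arr[j];
      j--;
    }
    arr[j + 1] = key;
  }
  return arr;
}
```

For arrays under the cutoff, this routinely outruns merge sort because it does no allocation and no recursion.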
Gotchas
- O(1) amortized (like Array.push) can have occasional O(n) spikes from resizing
- Space complexity matters as much as time — O(n) extra memory can cause OOM
- Constant factors are hidden — O(n) with cache misses can be slower than O(n log n) cache-friendly code
- Big O is worst-case by convention unless stated otherwise (e.g. quicksort is O(n log n) on average but O(n^2) worst case)
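The amortized-O(1) gotcha can be made concrete with a toy dynamic array that doubles its capacity on overflow, a common growth strategy (this is a sketch of the technique, not how any particular engine implements Array.push):

```javascript
// Most pushes are O(1); the occasional resize copies all elements (O(n)).
// Doubling keeps the total copy work over n pushes below 2n, so the
// amortized cost per push is O(1).
class DynamicArray {
  constructor() {
    this.capacity = 4;
    this.length = 0;
    this.data = new Array(this.capacity);
    this.copies = 0; // counts elements moved during resizes
  }
  push(x) {
    if (this.length === this.capacity) {
      this.capacity *= 2; // geometric growth is what makes this amortized O(1)
      const bigger = new Array(this.capacity);
      for (let i = 0; i < this.length; i++) bigger[i] = this.data[i];
      this.copies += this.length; // the occasional O(n) spike
      this.data = bigger;
    }
    this.data[this.length++] = x;
  }
}

const a = new DynamicArray();
for (let i = 0; i < 1000; i++) a.push(i);
// Resizes copied 4 + 8 + ... + 512 = 1020 elements total: under 2n,
// even though the largest single resize copied 512 elements at once.
```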
Code Snippets
O(n^2) vs O(n) duplicate check

// O(n^2) — avoid for large n
function hasDuplicate_slow(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true;
    }
  }
  return false;
}

// O(n) — use a Set; has/add are O(1) on average
function hasDuplicate_fast(arr) {
  const seen = new Set();
  for (const x of arr) {
    if (seen.has(x)) return true;
    seen.add(x);
  }
  return false;
}

Context
When choosing between algorithms or data structures for a given problem size