Pattern: Caching strategies for different use cases
Tags: cache-aside, write-through, write-behind, stampede, stale-while-revalidate
Problem
Choosing the wrong caching strategy leads to stale data, cache stampedes, or poor hit rates. Different data access patterns need different strategies.
Solution
Match caching strategy to access pattern:
- Cache-Aside (Lazy Loading):

      def get_user(id):
          user = cache.get(f'user:{id}')
          if user is None:
              user = db.get_user(id)
              cache.set(f'user:{id}', user, ttl=300)
          return user

  Best for: General-purpose, read-heavy workloads
  Risk: Cache miss thundering herd
- Write-Through:

      def update_user(id, data):
          user = db.update_user(id, data)
          cache.set(f'user:{id}', user)  # Update cache immediately
          return user

  Best for: Data that's read immediately after write
  Risk: Higher write latency
- Write-Behind (Write-Back):

      def update_user(id, data):
          cache.set(f'user:{id}', data)     # Write to cache only
          queue.enqueue(f'sync_user:{id}')  # Async DB write

  Best for: Write-heavy workloads
  Risk: Data loss if cache fails before DB write
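The async half of write-behind is where the risk above materializes, so it is worth sketching. A minimal consumer for the queue, assuming hypothetical `queue`, `cache`, and `db` objects (a `dequeue` that returns None when the queue is drained is illustrative, not a specific library's API):

```python
def sync_worker(queue, cache, db):
    """Drain 'sync_user:<id>' messages and flush cached values to the DB."""
    while True:
        msg = queue.dequeue()               # blocks until a message arrives
        if msg is None:
            break                           # queue drained/closed (illustrative)
        user_id = msg.split(':', 1)[1]      # 'sync_user:42' -> '42'
        data = cache.get(f'user:{user_id}')
        if data is not None:
            db.update_user(user_id, data)   # if the entry was evicted or the
                                            # cache died first, the write is lost
```

Note that the message carries only the id; the worker reads the payload back from the cache, which is exactly why an eviction or cache failure between enqueue and dequeue loses the write.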
- Read-Through:
# Cache itself fetches on miss (cache acts as intermediary)
# Simplifies application code
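Since read-through moves the fetch logic into the cache itself, an in-process sketch makes the contrast with cache-aside concrete. Everything here is illustrative (the `ReadThroughCache` class, its `loader` callback, and the dict-backed store are assumptions, not a specific library's API):

```python
import time

class ReadThroughCache:
    """On a miss, the cache calls the loader itself; callers only ever get()."""

    def __init__(self, loader, ttl=300):
        self._loader = loader   # called with the key to fetch from the source
        self._ttl = ttl
        self._store = {}        # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.monotonic() < expires_at:
                return value    # fresh hit
        value = self._loader(key)   # miss: cache fetches on the caller's behalf
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value
```

Usage would look like `users = ReadThroughCache(lambda k: db.get_user(k))`; application code calls `users.get(id)` and never touches the DB directly.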
- Cache stampede prevention:

      def get_with_lock(key, fetch_fn, ttl=300):
          value = cache.get(key)
          if value is not None:
              return value
          lock = cache.lock(f'lock:{key}', timeout=5)
          if lock.acquire(blocking=False):  # Only one process fetches
              try:
                  value = fetch_fn()
                  cache.set(key, value, ttl)
                  return value
              finally:
                  lock.release()  # Release even if fetch_fn raises
          time.sleep(0.1)  # Wait for the lock holder to populate the cache
          return cache.get(key)
- Stale-while-revalidate:
# Return stale data immediately, refresh in background
# Best UX for data that changes slowly
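The two comments above can be turned into a minimal sketch, assuming a single-process dict cache and a daemon thread for the background refresh (all names are illustrative):

```python
import threading
import time

_store = {}  # key -> (value, fresh_until)

def get_swr(key, fetch_fn, ttl=300):
    entry = _store.get(key)
    now = time.monotonic()
    if entry is None:
        # Cold miss: nothing stale to serve, fetch synchronously once.
        value = fetch_fn()
        _store[key] = (value, now + ttl)
        return value
    value, fresh_until = entry
    if now >= fresh_until:
        # Stale: serve the old value immediately, refresh in the background.
        def _refresh():
            _store[key] = (fetch_fn(), time.monotonic() + ttl)
        threading.Thread(target=_refresh, daemon=True).start()
    return value
```

The caller never waits for a refresh after the first fetch, which is why this gives the best UX for slowly changing data; the trade-off is that one request per expiry window sees a value up to one refresh old.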
Why
No single caching strategy works for everything. Choose based on read/write ratio, consistency requirements, and failure tolerance.