pattern · javascript · Moderate

Multi-Level Caching (L1/L2/L3)

Submitted by: @seed
multi-level cache · l1 cache · l2 cache · in-process cache · lru cache · redis pub/sub invalidation

Problem

Redis is fast, but even a Redis round-trip adds ~1ms latency. For extremely hot data accessed thousands of times per second, Redis itself becomes a bottleneck.

Solution

Use a multi-level cache hierarchy: L1 = in-process memory cache (fastest, smallest, lost on restart), L2 = Redis (fast, shared across instances, survives restarts), L3 = database (slow, authoritative). Check L1 first, fall through to L2, then L3. Populate all levels on a miss.

Why

In-process caches serve data in microseconds without any network overhead. They are ideal for data that is the same for all users and changes infrequently (config, feature flags, lookup tables).

Gotchas

  • L1 cache is per-process: in a multi-instance deployment, each instance keeps its own copy, which can go stale independently of the others. Invalidation must propagate via a pub/sub channel (Redis pub/sub; see the invalidation snippet under Code Snippets) or the L1 TTL must be kept very short.
  • In-process cache increases memory usage per Node.js process. Monitor heap size.
  • L1 uses LRU eviction: once the cache reaches its max entry count, the least recently used keys are dropped, so a cache sized too small for the working set will evict keys that are still hot. Size the cache accordingly.

Code Snippets

Two-level cache (L1 in-process + L2 Redis) with fall-through to the source (L3)

// Assumes an ioredis client (matches the 'EX' set syntax below); lru-cache v10+ uses a named export.
import Redis from 'ioredis';
import { LRUCache } from 'lru-cache';

const redis = new Redis();
const l1 = new LRUCache({ max: 1000, ttl: 10_000 }); // L1: up to 1000 entries, 10s in-process TTL

async function getWithTieredCache(key, fetchFn) {
  // L1: in-process memory, microsecond reads, per-instance.
  const local = l1.get(key);
  if (local !== undefined) return local;

  // L2: Redis, shared across instances, ~1ms round-trip.
  const fromRedis = await redis.get(key);
  if (fromRedis !== null) {
    const parsed = JSON.parse(fromRedis);
    l1.set(key, parsed);
    return parsed;
  }

  // L3: authoritative source (e.g. the database); populate L2 and L1 on the way back.
  const value = await fetchFn();
  await redis.set(key, JSON.stringify(value), 'EX', 300); // 5-minute L2 TTL
  l1.set(key, value);
  return value;
}
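
A usage sketch for the feature-flag case mentioned under Why; loadFeatureFlagsFromDb is a hypothetical async function that performs the authoritative database read:

// loadFeatureFlagsFromDb is hypothetical; any async function returning a JSON-serialisable value works.
const flags = await getWithTieredCache('feature-flags', loadFeatureFlagsFromDb);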
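
Invalidating L1 entries via Redis pub/sub

As called out in the Gotchas, each instance's L1 copy can go stale after a write. A minimal sketch of cross-instance invalidation, assuming ioredis and reusing the redis client and l1 cache from the snippet above (a dedicated connection is needed for subscribing, because a subscribed connection cannot issue regular commands):

const INVALIDATION_CHANNEL = 'cache:invalidate'; // channel name is arbitrary
const subscriber = new Redis();                  // dedicated connection in subscriber mode

await subscriber.subscribe(INVALIDATION_CHANNEL);
subscriber.on('message', (channel, key) => {
  if (channel === INVALIDATION_CHANNEL) l1.delete(key); // drop this instance's L1 copy
});

// Call on every write to the underlying data.
async function invalidate(key) {
  l1.delete(key);                                  // local L1
  await redis.del(key);                            // shared L2
  await redis.publish(INVALIDATION_CHANNEL, key);  // other instances' L1, via pub/sub
}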
