Retry with exponential backoff and jitter for transient failures
Tags: retry, exponential backoff, jitter, thundering herd, rate limit, resilience, transient error
Problem
A service calls an external API that occasionally returns 429 or 503. Retrying immediately causes a thundering herd that worsens the outage.
Solution
Implement exponential backoff with jitter:
async function withRetry(fn, options = {}) {
  const {
    maxAttempts = 3,
    baseDelayMs = 100,
    maxDelayMs = 30000,
    retryOn = (err) => err.status >= 500 || err.status === 429,
  } = options;

  let lastError;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === maxAttempts - 1 || !retryOn(err)) throw err;

      // Exponential backoff with full jitter: delay is uniform in
      // [0, min(baseDelayMs * 2^attempt, maxDelayMs))
      const expDelay = Math.min(baseDelayMs * 2 ** attempt, maxDelayMs);
      const delay = Math.floor(Math.random() * expDelay);

      console.log(`Attempt ${attempt + 1} failed, retrying in ${delay}ms`, { error: err.message });
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
// Usage — note that fetch() resolves (does not reject) on 4xx/5xx
// responses, so surface the status on a thrown error for retryOn to see
const result = await withRetry(
  async () => {
    const r = await fetch('https://api.example.com/data');
    if (!r.ok) {
      const err = new Error(`HTTP ${r.status}`);
      err.status = r.status;
      throw err;
    }
    return r.json();
  },
  { maxAttempts: 4, baseDelayMs: 200 }
);
Why
Full jitter (a random delay drawn uniformly between 0 and the exponential cap) spreads retries out in time, so clients that failed together do not retry in sync and recreate the original spike.
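The effect is easy to see in isolation. A minimal sketch of just the delay calculation, where fullJitterDelay is a hypothetical helper mirroring the math inside withRetry above:

```javascript
// Compute the full-jitter delay for a given attempt (0-indexed).
// baseDelayMs and maxDelayMs mirror the options in withRetry.
function fullJitterDelay(attempt, baseDelayMs = 100, maxDelayMs = 30000) {
  const cap = Math.min(baseDelayMs * 2 ** attempt, maxDelayMs);
  return Math.floor(Math.random() * cap);
}

// With the defaults, the caps grow 100 → 200 → 400 → 800 ms across
// four attempts, but each actual delay is uniform in [0, cap), so
// clients that failed at the same moment drift apart.
for (let attempt = 0; attempt < 4; attempt++) {
  const cap = Math.min(100 * 2 ** attempt, 30000);
  console.log(`attempt ${attempt}: delay ${fullJitterDelay(attempt)}ms (cap ${cap}ms)`);
}
```

Without the jitter (i.e. always sleeping the full cap), every client that hit the outage at the same instant would wake and retry at the same instant too.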
Gotchas
- Only retry idempotent operations — retrying a POST that creates a record can create duplicates
- Respect Retry-After header from 429 responses instead of calculating your own delay
- Set a total timeout budget, not just per-attempt — don't retry indefinitely
- Log retry attempts with context so you can detect when retries are masking a real outage
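The Retry-After and timeout-budget gotchas can be folded into the same wrapper. A minimal sketch, assuming the thrown error carries a retryAfterMs field already parsed from the Retry-After header (a hypothetical name — adapt it to however your HTTP client exposes response headers):

```javascript
// Sketch: retries that honor Retry-After and a total time budget.
// Assumes errors expose `status` and, for 429s, `retryAfterMs`
// (hypothetical fields — adapt to your HTTP client).
async function withBudgetedRetry(fn, options = {}) {
  const {
    maxAttempts = 4,
    baseDelayMs = 100,
    maxDelayMs = 30000,
    budgetMs = 10000, // total wall-clock budget across all attempts
  } = options;

  const deadline = Date.now() + budgetMs;
  let lastError;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const retryable = err.status >= 500 || err.status === 429;
      if (attempt === maxAttempts - 1 || !retryable) throw err;

      // Prefer the server's own hint; fall back to full jitter.
      const cap = Math.min(baseDelayMs * 2 ** attempt, maxDelayMs);
      const delay = err.retryAfterMs ?? Math.floor(Math.random() * cap);

      // Give up if the next wait would blow the total budget.
      if (Date.now() + delay > deadline) throw err;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

The budget check runs before the sleep, so the wrapper fails fast with the last real error instead of spending its remaining time on a wait it can no longer afford.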