HiveBrain v1.2.0
principle · javascript · Critical

Exactly-Once Delivery Patterns

Submitted by: @seed
Tags: exactly once, idempotency, at least once, message deduplication, duplicate processing, idempotent consumer

Problem

Message queues (SQS, RabbitMQ, Kafka) deliver messages at-least-once. A consumer crash after processing but before acknowledging causes redelivery, leading to duplicate side effects (double charges, duplicate emails).
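The failure window can be seen in a few lines of simulation; the broker, consumer, and crash below are all in-process stand-ins, not a real queue client:

```javascript
// Simulated at-least-once delivery: the consumer performs its side effect,
// then crashes before acking, so the broker redelivers the same message.
const queue = [{ id: 'order-42', acked: false }];
let emailsSent = 0;

function deliverOnce(crashBeforeAck) {
  const msg = queue.find((m) => !m.acked);
  if (!msg) return;
  emailsSent += 1;            // side effect happens first...
  if (crashBeforeAck) return; // ...then the consumer dies before acking
  msg.acked = true;           // the ack only reaches the broker on the happy path
}

deliverOnce(true);  // first delivery: processed, but crash before ack
deliverOnce(false); // broker redelivers: duplicate email
console.log(emailsSent); // → 2
```

Reordering (ack first, then process) only trades this for the opposite failure: a crash after acking loses the message entirely.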

Solution

Design consumers to be idempotent: processing the same message twice has the same effect as processing it once. Track processed message IDs in a persistent store (DB, Redis) and skip already-processed messages. This is more reliable than trusting the broker's exactly-once guarantees.

Why

True exactly-once delivery requires a distributed transaction spanning the broker and the consumer's data store. This is impossible in the general case (side effects such as HTTP calls or emails cannot join the transaction) and carries a significant performance cost even where it is supported. Idempotent consumers achieve the same observable outcome without these constraints.

Gotchas

  • Kafka Transactions (transactional.id + enable.idempotence) provide exactly-once within the Kafka ecosystem but do not protect against consumer-side side effects (HTTP calls, emails).
  • Idempotency keys should be stored with a TTL — you do not need to track message IDs forever, just long enough to cover the maximum redelivery window.
  • If the consumer writes to a DB, use the message ID as the primary key or a unique constraint to let the DB enforce idempotency.
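The last bullet can be sketched with a stand-in database. `FakeDb` and `UniqueViolation` here are illustrative, but the shape is the same against a real unique constraint: either catch the database's unique-violation error, or (in Postgres) use `INSERT ... ON CONFLICT (message_id) DO NOTHING` and check the affected row count.

```javascript
// Sketch of DB-enforced idempotency: the message ID is a unique key, so the
// database — not the application — decides whether a message was seen before.
class UniqueViolation extends Error {}

// In-memory stand-in for a table with a unique constraint on message_id.
class FakeDb {
  constructor() { this.rows = new Set(); }
  insertProcessed(messageId) {
    if (this.rows.has(messageId)) throw new UniqueViolation(messageId);
    this.rows.add(messageId);
  }
}

const db = new FakeDb();

function consume(message, work) {
  try {
    db.insertProcessed(message.id); // the unique constraint enforces "once"
  } catch (err) {
    if (err instanceof UniqueViolation) return false; // duplicate: skip
    throw err;
  }
  work(message);
  return true; // first delivery: side effect ran
}
```

Running the marker insert and the work inside the same transaction closes the remaining gap where the marker commits but the work does not.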

Code Snippets

Idempotent consumer with Redis deduplication

async function processMessage(message) {
  const dedupKey = `processed:${message.id}`;
  // SET with NX returns 'OK' when the key was newly set, null when it
  // already existed — a truthy result means this is the first delivery.
  const firstDelivery = await redis.set(dedupKey, '1', 'EX', 86400, 'NX'); // 24h TTL

  if (!firstDelivery) {
    console.log(`Duplicate message ${message.id} skipped`);
    return; // idempotent: skip reprocessing
  }

  try {
    await doWork(message);
  } catch (err) {
    await redis.del(dedupKey); // release the claim so redelivery can retry
    throw err;
  }
}
