pattern · javascript · Major

Dead Letter Queue (DLQ) Processing

Submitted by: @seed
Tags: dead letter queue · dlq · poison message · failed jobs · queue monitoring · message replay

Problem

Poison messages (messages that always fail processing) get retried indefinitely, consuming worker capacity and blocking the queue.

Solution

Configure a Dead Letter Queue (DLQ) to receive messages after N failed delivery attempts. Monitor the DLQ, alert on growth, and build tooling to inspect and replay or discard DLQ messages. In BullMQ, use the failed event and a separate 'dead-jobs' queue.
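For the "after N failed delivery attempts" part to work, jobs on the main queue need an explicit retry budget. A minimal sketch of such options, assuming BullMQ-style job options (the shape matches BullMQ's JobsOptions; the queue and job names in the usage comment are hypothetical):

```javascript
// Retry options for jobs on the main queue. After `attempts`
// failures the job enters the 'failed' state, where a failed-event
// handler can archive it to a dead-jobs queue.
const retryOpts = {
  attempts: 5,                                   // N failed deliveries before giving up
  backoff: { type: 'exponential', delay: 1000 }, // wait 1s, 2s, 4s, ... between tries
};

// Hypothetical usage with an existing BullMQ queue:
// await ordersQueue.add('process-order', payload, retryOpts);
```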

Why

DLQs isolate bad messages so they stop blocking the main queue while preserving them for investigation. Without a DLQ, poison messages can halt entire processing pipelines.

Gotchas

  • A DLQ without alerting is useless — messages pile up silently. Set a CloudWatch alarm (SQS) or monitoring on DLQ depth.
  • When replaying DLQ messages, ensure idempotency — the original processing may have partially succeeded.
  • In RabbitMQ, DLQ routing is via the x-dead-letter-exchange argument on the original queue, not a separate config.
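The idempotency gotcha above can be sketched as a replay guard: record each job's unique id before re-running its side effects and skip ids already seen. A minimal, transport-agnostic illustration; in production the seen-set would live in Redis or a database rather than in process memory:

```javascript
// Idempotency guard for DLQ replays. `job.id` is assumed to be a
// stable unique identifier; `handler` is whatever function performs
// the job's side effects.
const processed = new Map();

async function replayJob(job, handler) {
  if (processed.has(job.id)) {
    return { skipped: true };          // already fully processed earlier
  }
  const result = await handler(job.data);
  processed.set(job.id, Date.now());   // mark done only after success
  return { skipped: false, result };
}
```

Marking the job done only after the handler resolves means a replay that crashes midway stays eligible for another attempt, which is the safe default when the handler itself is idempotent.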

Code Snippets

BullMQ: move failed jobs to a dead-jobs queue

const { Queue } = require('bullmq');

// Assumes an existing BullMQ Worker instance named `worker`.
const deadJobsQueue = new Queue('dead-jobs');

worker.on('failed', async (job, err) => {
  // `job` is undefined for errors not tied to a specific job;
  // `attempts` defaults to a single try when not set on the job.
  if (job && job.attemptsMade >= (job.opts.attempts ?? 1)) {
    // All retries exhausted — archive to the dead-jobs queue
    await deadJobsQueue.add('dead', {
      originalQueue: job.queueName,
      jobName: job.name,
      data: job.data,
      error: err.message,
      failedAt: new Date().toISOString(),
    });
  }
});
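The first gotcha (a DLQ without alerting is useless) can be sketched as a periodic depth check. `getDepth` and `alert` are hypothetical callbacks standing in for, say, BullMQ's `queue.getJobCounts()` and your paging system, so the check itself stays transport-agnostic:

```javascript
// Poll the DLQ's depth and fire an alert when it crosses a threshold.
// Run this on an interval (or wire the equivalent into CloudWatch for SQS).
async function checkDlqDepth(getDepth, threshold, alert) {
  const depth = await getDepth();
  if (depth > threshold) {
    await alert(`DLQ depth is ${depth} (threshold ${threshold})`);
  }
  return depth;
}
```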
