gotcha · php · laravel · Major

Laravel Queues: Job Failure, Retries, and Unique Jobs

Submitted by: @seed
Tags: queue, job, retry, failed, ShouldBeUnique, backoff, tries, failed_jobs, worker

Error Messages

Job has been attempted too many times or run too long

Problem

Queue jobs fail silently or retry indefinitely. Jobs are dispatched without $tries or $backoff configured, so a permanently failing job keeps flooding the queue with retries, and the same logical job dispatched concurrently from multiple code paths causes race conditions.

Solution

Set $tries and $maxExceptions on the job class. Use $backoff (a single delay or an array of per-attempt delays, in seconds) to space out retries. Implement failed(Throwable $e) to handle permanent failures: logging, alerting, compensating transactions. Implement the ShouldBeUnique interface, with a uniqueId() method, to prevent duplicate dispatches.
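
A minimal sketch of the dispatch side, assuming the SendInvoiceJob class defined under Code Snippets below lives in App\Jobs (a hypothetical namespace): while one unique job is pending, further dispatches with the same uniqueId() are no-ops.

use App\Jobs\SendInvoiceJob; // hypothetical namespace

$invoiceId = 42; // example value

// The first dispatch takes the uniqueness lock; the second is discarded
// rather than queued, so the invoice is only sent once.
SendInvoiceJob::dispatch($invoiceId);
SendInvoiceJob::dispatch($invoiceId); // no-op while the lock is held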

Why

Without retry limits, a permanently failing job consumes worker resources indefinitely. The failed_jobs table records each exhausted job so it can be inspected with php artisan queue:failed and re-dispatched with php artisan queue:retry. ShouldBeUnique takes an atomic cache lock at dispatch time to prevent concurrent duplicate execution.
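
Beyond per-job failed() methods, Laravel also fires a global event for every failed job; a sketch of fleet-wide alerting, assuming it is registered in a service provider's boot() method:

use Illuminate\Queue\Events\JobFailed;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Queue;

// Runs for every job that fails, in addition to the job's own failed().
Queue::failing(function (JobFailed $event) {
    Log::critical('Queue job failed', [
        'connection' => $event->connectionName,
        'job'        => $event->job->resolveName(),
        'error'      => $event->exception->getMessage(),
    ]);
});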

Gotchas

  • ShouldBeUnique requires a cache driver that supports atomic locks (Redis, Memcached, database)
  • failed() is called after all $tries are exhausted, not on every failure
  • Use $deleteWhenMissingModels = true to silently discard jobs whose related model was deleted (see the sketch after this list)
  • php artisan queue:work does not auto-reload code; restart workers after deploying (php artisan queue:restart signals each worker to exit after its current job so a process manager can relaunch it)
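
A sketch of the missing-model gotcha, using a hypothetical SendShipmentNotification job that serializes an Eloquent Order model:

use App\Models\Order; // hypothetical model
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\SerializesModels;

class SendShipmentNotification implements ShouldQueue
{
    use SerializesModels;

    // If the Order is deleted between dispatch and execution, quietly
    // drop the job instead of failing with a ModelNotFoundException.
    public bool $deleteWhenMissingModels = true;

    public function __construct(public Order $order) {}

    public function handle(): void { /* ... */ }
}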

Code Snippets

Job with retry and failure handling

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Support\Facades\Log;
use Throwable;

class SendInvoiceJob implements ShouldQueue, ShouldBeUnique
{
    use Dispatchable, Queueable;

    public int $tries = 3;                 // give up after three attempts
    public int $maxExceptions = 2;         // ...or after two unhandled exceptions
    public array $backoff = [30, 60, 120]; // seconds to wait before each retry

    public function __construct(public int $invoiceId) {}

    // Only one pending job per invoice; duplicate dispatches are discarded.
    public function uniqueId(): string
    {
        return (string) $this->invoiceId;
    }

    public function handle(): void { /* ... */ }

    // Called once, after the final attempt has failed.
    public function failed(Throwable $e): void
    {
        Log::error('Invoice send failed', ['invoice' => $this->invoiceId, 'error' => $e->getMessage()]);
    }
}
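
Note that the uniqueness lock is released when the job finishes processing or exhausts its retries; if a worker dies mid-job the lock can linger, so a $uniqueFor property (a timeout in seconds) on the job class caps how long duplicate dispatches stay suppressed.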
