You know the feeling: your PHP app hums along nicely—until it suddenly doesn’t. Monday morning hits, your inbox explodes, and your background jobs are all red. Emails are stuck, invoices half-sent, the queue blinking in silent protest.
This isn’t PHP “failing.” It’s PHP doing exactly what it’s told—handling one request at a time—while you’re asking it to juggle a dozen slow, background tasks on top.
This article isn’t about reinventing PHP. It’s about taming it. You’ll see how queues, workers, and long-running processes can keep your app responsive and your servers calm. No buzzwords, just patterns that actually hold up in production.
Let’s clear one myth right away: PHP itself isn’t asynchronous. It’s synchronous, start-to-finish, every time.
When developers say “async PHP,” they really mean moving slow stuff somewhere else—usually to a background queue.
The setup looks like this:
a job, describing the work;
a queue, holding that job;
a worker, quietly chewing through jobs in the background.
That’s it. Nothing magical. You free the web request from heavy lifting so users get instant responses, while the boring, slow jobs—like generating PDFs or sending receipts—finish a few seconds later.
Here’s a basic example you’ve probably seen:
dispatch(new SendWelcomeEmail($userId));
And somewhere, a worker handles it:
class SendWelcomeEmailHandler {
    public function __invoke(SendWelcomeEmail $job) {
        // The payload carries only an ID, so resolve the user here.
        $user = User::findOrFail($job->userId);
        Mail::to($user)->send(new WelcomeMail());
    }
}
Simple enough, but the details matter. Keep job payloads small; store large data elsewhere and just pass an ID. Make sure the code can still process old jobs after a deployment—schema changes shouldn’t break messages waiting in the queue.
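A minimal sketch of what that looks like, assuming a plain PHP 8 job class (the name and property are just illustrative):

class SendWelcomeEmail {
    // Keep the payload tiny: an ID the handler can resolve later,
    // not a serialized User model that can go stale or break after a deploy.
    public function __construct(public int $userId) {}
}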
And test your failure handling on purpose. Throw an exception in a local job and watch what happens. Does it retry? Does it hit your “failed_jobs” table? You’ll learn more from one controlled failure than a week of smooth runs.
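A throwaway job like this, purely for local testing (the class name is made up), is enough to watch the retry path in action:

class FailOnPurpose {
    public $tries = 3;

    public function __invoke(): void {
        // Deliberate failure: watch whether it retries, how it backs off,
        // and whether it lands in failed_jobs once the attempts run out.
        throw new \RuntimeException('Simulated failure, on purpose');
    }
}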
Every backend comes with trade-offs, and you’ll hit them faster than you think.
A database queue is fine for a side project or a small site, but it starts to choke when hundreds of jobs pile up. Redis is fast and simple, though once memory fills, eviction rules can make jobs disappear. SQS or RabbitMQ scale better, but force you to think about visibility timeouts, message retention, and connection tuning.
The golden rule: Set your queue’s visibility timeout longer than your job runtime.
Otherwise, the broker assumes the job’s lost and sends it again, giving you duplicates. Most queues guarantee at-least-once delivery, not exactly-once, so build handlers that can safely run twice.
When in doubt, stick with what your team can operate confidently. Scaling isn’t about fancy tech—it’s about stability under stress.
Workers are the core of your async setup, and PHP doesn’t manage them automatically, so they need careful supervision. You need a process manager—Supervisor, systemd, or, if you’re running containers, Kubernetes.
Here’s a simple Supervisor config that could save more than one team’s weekend:
[program:php-worker]
command=/usr/bin/php artisan queue:work --sleep=1 --tries=3 --max-jobs=500 --max-time=1800 --memory=256
autostart=true
autorestart=true
stopsignal=TERM
That setup restarts workers before memory leaks pile up. In Kubernetes, a short preStop hook (say, sleep 5) gives workers time to finish jobs gracefully before shutdown.
If your tasks sometimes run long, you’ve got two options: extend their visibility window on the fly (SQS lets you do this) or break them into smaller sub-jobs. Either way, keep the worker predictable.
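On SQS, for instance, extending the window looks roughly like this with the official AWS SDK for PHP (the queue URL and receipt handle are placeholders from your own setup):

use Aws\Sqs\SqsClient;

$sqs = new SqsClient(['region' => 'eu-west-1', 'version' => '2012-11-05']);

// Push the redelivery deadline further out while the long job keeps working,
// so the broker doesn't assume it's lost and hand it to another worker.
$sqs->changeMessageVisibility([
    'QueueUrl'          => $queueUrl,       // placeholder
    'ReceiptHandle'     => $receiptHandle,  // placeholder: arrives with the received message
    'VisibilityTimeout' => 600,             // seconds from now
]);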
And if your app both writes to a database and queues a job in the same request, use the Outbox pattern. It ensures the data and the job stay in sync—even if your app crashes halfway through.
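A minimal sketch of the idea, assuming a Laravel-style DB facade and an outbox table you define yourself; the full pattern also needs a small relay process that reads the table and publishes the rows to the queue:

DB::transaction(function () use ($order) {
    // The business row and the "please enqueue this" record commit together,
    // so a crash between the two can never leave them out of sync.
    DB::table('orders')->insert($order);

    DB::table('outbox')->insert([
        'type'       => 'SendInvoice',
        'payload'    => json_encode(['order_id' => $order['id']]),
        'created_at' => now(),
    ]);
});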
One long-running report can block a thousand tiny notifications. Separate workloads by queue type—short, default, and long—and let each have its own worker group.
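In Laravel terms that's just naming the queue at dispatch time and giving each lane its own workers; the job classes and queue names below are placeholders:

// Route slow work to its own lane so it can't starve the quick stuff.
dispatch(new GenerateMonthlyReport($accountId))->onQueue('long');
dispatch(new SendPasswordReset($userId))->onQueue('short');

// Then run separate worker pools, for example:
//   php artisan queue:work --queue=short
//   php artisan queue:work --queue=long --timeout=1700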
Set timeouts in layers. The job timeout inside PHP should be shorter than the queue’s visibility timeout, and the visibility timeout should always be a bit longer than your worst-case runtime.
Retries need backoff, not panic. Use exponential backoff so your system doesn’t hammer an unstable API. If a job keeps failing, move it to a Dead Letter Queue (DLQ) for later debugging.
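On a Laravel-style job, the layers and the backoff schedule might look like this; the numbers are illustrative, the ordering is what matters:

class GenerateReport {
    // Layering: this PHP-level timeout stays below the queue's retry_after /
    // visibility timeout, which in turn sits above the worst-case runtime.
    public $timeout = 110;
    public $tries   = 5;

    // Spaced-out retries: 10s, 1min, 5min, 15min between attempts,
    // so an unstable API gets room to recover instead of being hammered.
    public function backoff(): array {
        return [10, 60, 300, 900];
    }
}

Once the final attempt fails, a stock Laravel setup drops the job into failed_jobs, which effectively acts as your DLQ; on SQS or RabbitMQ you'd configure a dedicated dead letter queue instead.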
And above all, make jobs idempotent—check before doing anything irreversible. If you’ve already sent an email or charged a card, skip it the next time around.
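A guard like this turns a duplicate delivery into a no-op; the sent_receipts table and the job fields are hypothetical, and any durable "have I done this already?" record works just as well:

class SendReceiptHandler {
    public function __invoke(SendReceipt $job): void {
        // At-least-once delivery means this can run twice, so check first.
        $alreadySent = DB::table('sent_receipts')
            ->where('order_id', $job->orderId)
            ->exists();

        if ($alreadySent) {
            return; // duplicate delivery, nothing left to do
        }

        Mail::to(User::findOrFail($job->userId))->send(new ReceiptMail($job->orderId));

        DB::table('sent_receipts')->insert([
            'order_id' => $job->orderId,
            'sent_at'  => now(),
        ]);
    }
}

A unique constraint on order_id closes the small race between the check and the insert.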
Once your app starts running background work, the quiet parts can hurt you the most. You need visibility.
A few key things to watch:
Queue depth: Is it draining or filling up?
Job duration: Average and p95 times show trends early.
Retries and failures: Spikes hint at bad deployments or flaky services.
Worker uptime: Crashes or restarts often hide memory issues.
Laravel Horizon gives you these out of the box; Prometheus and Grafana make it visual. But logs are still your best friend. Include the job name, queue name, attempt count, duration, and a trace ID in every log line. Then pass that trace ID from the web request all the way to the job—it’s the cheapest form of observability you’ll ever build.
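With Laravel's logger that's one line per job; the field names here are just a suggestion:

Log::info('job.finished', [
    'job'      => 'SendWelcomeEmail',
    'queue'    => 'short',
    'attempt'  => 2,
    'duration' => 1.37,       // seconds
    'trace_id' => $traceId,   // carried over from the originating web request
]);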
Scaling background work shouldn’t be a guessing game. Watch how the queue behaves over time.
If depth grows steadily while CPU stays maxed, add workers. If the queue sits empty for hours, scale down.
Kubernetes users can automate this with a Horizontal Pod Autoscaler, tied to CPU or custom metrics like queue length. Smaller setups can do it manually—sometimes a single added worker pool makes all the difference.
Remember, cost comes from concurrency. The fewer retries and restarts, the cheaper the system runs.
Sooner or later, you’ll trip over one of these:
Visibility timeout too short → duplicate jobs.
Handlers not idempotent → double sends.
Workers killed mid-deploy → half-done work.
Long and short jobs sharing a lane → blocked queues.
Database queues under load → locked tables.
Each one ties back to something we’ve already covered: tune timeouts, split queues, handle shutdowns, and supervise processes.
Queues deserve the same care as your main app. Don’t put personal data in payloads; pass references instead. Use TLS for all queue traffic and enable encryption at rest if your broker supports it. Rotate keys regularly and use narrow permissions—your producer shouldn’t see the consumer’s credentials.
Security here is mostly discipline. Set it up once and you won’t think about it again.
Cron is your timekeeper; the queue is your executor. Use cron to enqueue jobs on a schedule (“every hour, queue report generation”), and let the queue handle execution at its own pace.
Always schedule in UTC, since daylight saving bugs are not worth it. And for large workloads, add a small random delay so every job doesn’t start on the same second. It’s a five-line change that prevents a world of spikes.
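With Laravel's scheduler, for instance, the hourly trigger plus a little jitter might look like this; the job class and the two-minute window are placeholders:

// In app/Console/Kernel.php: cron decides when, the queue decides how fast.
$schedule->call(function () {
    // Up to two minutes of random jitter so a fleet of scheduled jobs
    // doesn't all fire on the exact same second.
    dispatch(new GenerateHourlyReport())
        ->delay(now()->addSeconds(random_int(0, 120)));
})->hourly()->timezone('UTC');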
If you’re on shared hosting or Windows, you can fake background jobs with cron or task schedulers—but it’s fragile. Long-running workers thrive on Linux, where Supervisor or systemd can restart them predictably. Even a cheap VPS with one worker process beats a shared host that can’t run daemons.
Asynchronous PHP isn’t about fancy architecture. It’s about reliability—doing the same job every time, even when the lights flicker.
By giving slow tasks their own lane, watching your workers, and logging what matters, you turn PHP from a stop-and-go script runner into a calm, scalable system.
If you’re refining your queue setup or just tired of seeing “Failed Jobs: 97,” our team can review your async stack and help you tighten it up—quietly, behind the scenes, just like a good worker process should.