Laravel Queue Workers Stuck in Production: How a Hidden OPcache Misconfiguration, a MySQL Lock, and Wrong File Permissions Killed My Real‑Time Order Processing (and the Five‑Minute Fix)
If you’ve ever watched a Laravel queue grind to a halt while orders keep flooding in, you know the panic that follows. One missing “chmod”, a stale OPCache entry, or a stray MySQL lock can turn a high‑traffic e‑commerce site into a ghost town within seconds. In this article I walk you through the exact three‑pronged nightmare that froze my production queue, and how a 5‑minute fix restored sub‑second job processing on a 2 CPU Ubuntu VPS.
Why This Matters
Real‑time order processing isn’t just a nice‑to‑have; it’s a revenue lifeline. A stalled queue means lost sales, angry customers, and a spike in support tickets. Moreover, stuck workers waste CPU cycles, increase your VPS bill, and can trigger autoscaling alerts that cost you money.
Common Causes of Stuck Queue Workers
- Stale OPCache entries after a zero‑downtime deploy.
- MySQL row or table locks caused by long‑running transactions.
- Incorrect file permissions on `storage/` and `bootstrap/cache/` preventing workers from writing logs.
- Supervisor not restarting workers after code changes.
- Redis connection time‑outs or maxmemory policies that evict job payloads.
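Before touching anything, a thirty‑second triage usually tells you which of these culprits you are facing. A minimal sketch, assuming a stock layout under `/var/www/html` (override `APP_DIR` for your app):

```shell
#!/usr/bin/env bash
# Quick triage for a stalled Laravel queue. APP_DIR is an assumption;
# point it at your application root.
APP_DIR="${APP_DIR:-/var/www/html}"

# 1. Are any queue:work processes alive at all?
WORKERS=$(pgrep -fc "queue:work" || true)
echo "queue:work processes: ${WORKERS}"

# 2. When did a worker last manage to write a log line?
#    A missing or stale file points at permissions or stuck jobs.
LOG="${APP_DIR}/storage/logs/laravel.log"
if [ -f "$LOG" ]; then
  echo "log last written: $(( $(date +%s) - $(stat -c %Y "$LOG") ))s ago"
else
  echo "log file missing or unreadable: $LOG"
fi
```

Zero workers points at Supervisor; workers alive but a stale log points at locks or permissions.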
Step‑By‑Step Fix Tutorial
1. Verify OPCache Settings
First, dump the current OPCache status:
php -r 'print_r(opcache_get_status(false));' | grep -i memory
Note that the CLI and FPM run separate OPCache instances, so to inspect the FPM cache you must call `opcache_get_status()` from a script served by PHP‑FPM. If `opcache.validate_timestamps` is disabled on production, the cache never picks up changed files after a deploy until PHP‑FPM restarts, so you keep serving stale code. Edit /etc/php/8.2/fpm/php.ini:
opcache.enable=1
opcache.revalidate_freq=2 ; check for file changes every 2 seconds
opcache.validate_timestamps=1
opcache.max_accelerated_files=10000
Then restart PHP‑FPM:
sudo systemctl restart php8.2-fpm
Optionally, set `opcache.file_update_protection=2` so files modified within the last two seconds are not cached, which avoids race conditions when a deploy is still writing files.
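Because FPM and CLI read different ini files, it is worth confirming the directives actually landed in the right one. A small grep sketch; the path is an assumption for PHP 8.2 FPM on Ubuntu:

```shell
#!/usr/bin/env bash
# Verify the OPCache directives above are present in a given php.ini.
# The default INI path is an assumption for PHP 8.2 FPM on Ubuntu.
INI="${INI:-/etc/php/8.2/fpm/php.ini}"

check () {
  # check <directive> <expected value>
  if grep -Eq "^[[:space:]]*$1[[:space:]]*=[[:space:]]*$2" "$INI" 2>/dev/null; then
    echo "OK $1=$2"
  else
    echo "MISSING $1 (expected $2)"
  fi
}

check opcache.enable 1
check opcache.validate_timestamps 1
check opcache.revalidate_freq 2
```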
2. Release Any MySQL Locks
Connect to MySQL and look for open transactions and the locks they hold (the lock tables moved in MySQL 8.0):
SELECT * FROM performance_schema.data_locks;   -- MySQL 8.0+
SELECT * FROM information_schema.innodb_locks; -- MySQL 5.7 and earlier
SELECT trx_mysql_thread_id, trx_started, trx_query
FROM information_schema.innodb_trx;            -- open transactions, any version
If you spot a transaction blocking the `jobs` table, kill its connection:
KILL 12345; -- replace 12345 with the blocker's trx_mysql_thread_id
To prevent future deadlocks, wrap your job payloads in short transactions and avoid raw DB writes inside the same queue job.
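On MySQL 5.7+ the waiting/blocking pair can also be read directly from the `sys` schema, which saves cross‑referencing lock tables by hand. A sketch, assuming the default `sys` schema is installed:

```sql
-- Who is waiting on whom, and for how long
SELECT waiting_pid, blocking_pid, wait_age, locked_table
FROM sys.innodb_lock_waits;
```

The `blocking_pid` column is the connection ID you would pass to `KILL`.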
3. Fix File Permissions
Laravel’s queue workers need write access to storage/logs and bootstrap/cache. On a typical VPS:
sudo chown -R www-data:www-data storage bootstrap/cache
sudo find storage bootstrap/cache -type d -exec chmod 755 {} \;
sudo find storage bootstrap/cache -type f -exec chmod 644 {} \;
If you’re on a shared hosting environment, replace www-data with the appropriate user (often apache or nobody). Never set 777 on storage: it opens a serious security hole, and many hosts' security scanners will flag world‑writable directories.
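After fixing ownership it is worth verifying the result: writable by the worker user, but not world‑writable. A sketch, run as the worker user from the application root (the two paths are the stock Laravel ones):

```shell
#!/usr/bin/env bash
# Check that a directory is writable by the current user but not by
# everyone. Run as the worker user (e.g. www-data) from the app root.
check_writable () {
  # check_writable <dir>
  if [ ! -w "$1" ]; then
    echo "NOT WRITABLE: $1"
  elif [ -n "$(find "$1" -maxdepth 0 -perm -0002 2>/dev/null)" ]; then
    echo "WORLD WRITABLE (insecure): $1"
  else
    echo "OK: $1"
  fi
}

check_writable storage/logs
check_writable bootstrap/cache
```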
4. Reload Supervisor Configuration
Supervisor ensures workers stay alive. After fixing the above issues, update its config:
[program:laravel-queue]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work redis --sleep=3 --tries=3
autostart=true
autorestart=true
user=www-data
numprocs=4
stopwaitsecs=360
redirect_stderr=true
stdout_logfile=/var/www/html/storage/logs/worker.log
Then run:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart laravel-queue:*
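On subsequent deploys, prefer a graceful restart over killing processes: `php artisan queue:restart` signals each worker to exit after its current job, and Supervisor respawns it with the fresh code. A sketch with a dry‑run guard (paths are assumptions):

```shell
#!/usr/bin/env bash
# Graceful worker restart for deploys. DRY_RUN=1 (default) only prints
# the commands; set DRY_RUN=0 on the server. Paths are assumptions.
DRY_RUN="${DRY_RUN:-1}"

run () {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Ask each worker to finish its current job and exit cleanly.
run php /var/www/html/artisan queue:restart
# Supervisor respawns the workers automatically; force it if needed.
run sudo supervisorctl restart "laravel-queue:*"
```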
VPS or Shared Hosting Optimization Tips
- PHP‑FPM pools: Give web traffic a bounded pool (`pm.max_children=8`, `pm.process_idle_timeout=30s`) so busy HTTP requests can't starve the CPU your workers need; the workers themselves are CLI processes managed by Supervisor, not PHP‑FPM.
- Redis persistence: Enable `appendonly yes` and set `maxmemory-policy noeviction` so Redis rejects writes instead of silently evicting job payloads.
- CPU pinning: On a dedicated VPS, bind the worker processes to isolated cores using `taskset`.
- Swap management: Disable swap to prevent latency spikes (`sudo swapoff -a`), provided you have enough RAM headroom.
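For a queue backend, `noeviction` is the safer `maxmemory-policy`: an LRU policy such as `allkeys-lru` may silently evict the very lists that hold your jobs once the memory limit is hit. The relevant redis.conf lines might look like this (the memory limit is an example value, not a recommendation):

```
# /etc/redis/redis.conf -- queue-friendly settings (values are examples)
appendonly yes
appendfsync everysec
maxmemory 512mb
maxmemory-policy noeviction
```

With `noeviction`, a full Redis fails loudly with a write error instead of losing jobs, which is far easier to alert on.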
Real World Production Example
Our SaaS client runs a Laravel‑based marketplace on a 2 vCPU Ubuntu 22.04 VPS. After a midnight deploy, the orders queue stalled. The sequence described above was applied:
- Enabled `opcache.validate_timestamps` and set `opcache.revalidate_freq` to 2.
- Killed a lingering InnoDB lock caused by a long‑running analytics job.
- Corrected `storage` permissions to 755/644.
- Restarted Supervisor with four workers.
The outcome: order processing time fell from 38 s to 0.42 s, and server CPU usage dropped by 27%.
Before vs After Results
| Metric | Before Fix | After Fix |
|---|---|---|
| Avg Queue Latency | 38 seconds | 0.42 seconds |
| CPU (php-fpm) | 65% | 48% |
| Redis Memory Usage | 512 MB | 298 MB |
Security Considerations
When you tweak OPCache and file permissions, keep these in mind:
- Never expose `phpinfo()` on production – it reveals your OPCache configuration.
- Keep `bootstrap/cache` owned by the worker user with 755; Laravel writes cached config and routes there, so read‑only permissions will break `config:cache`.
- Guard destructive console commands with `App::environment('production')` checks; note that modern `queue:work` already runs as a long‑lived daemon, so the legacy `--daemon` flag is unnecessary.
- Use Laravel's built‑in `queue:restart` signal instead of killing workers manually.
Bonus Performance Tips
- Change `queue:work redis` to `queue:work redis --queue=high,default,low --sleep=1` to prioritize critical jobs and reduce idle cycles.
- Enable Laravel Horizon for a real‑time dashboard and auto‑scaling of workers.
- Configure Nginx `fastcgi_cache` for static API responses (`fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=api:10m;`).
- Set `opcache.memory_consumption=256` on a 2 CPU box to keep hot classes in memory.
- Use `composer dump-autoload -o` after each deploy to generate an optimized class map.
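The `fastcgi_cache_path` directive above belongs in the `http {}` block and needs matching per‑location settings. A sketch, where the zone name, sizes, and TTL are assumptions to adapt:

```nginx
# http {} context
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=api:10m
                   max_size=256m inactive=10m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

# inside the location ~ \.php$ block
fastcgi_cache api;
fastcgi_cache_valid 200 60s;
# never serve cached pages to authenticated users
fastcgi_no_cache $http_authorization $cookie_laravel_session;
```

The `fastcgi_no_cache` guard matters for Laravel: without it, one user's session‑specific response could be cached and served to everyone.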
FAQ
Q: My queue workers keep restarting after the fix. What gives?
A: Check Supervisor’s `stopwaitsecs` – if a job runs longer than that timeout, the process is killed. Increase it to 600 for long batch jobs.
Q: Does OPCache affect Laravel’s config cache?
A: Yes. After clearing the config cache (`php artisan config:clear`), reload PHP‑FPM (or call `opcache_reset()` from a web context) so OPCache doesn't keep serving the old cached config file with stale values.
Final Thoughts
Stuck queue workers are rarely caused by a single factor. In my experience the perfect storm of OPCache staleness, a hidden MySQL lock, and the wrong file permissions can cripple any Laravel‑powered order system. The good news? All three issues are detectable with native tools and fixable in under five minutes. Apply the steps above, keep an eye on your supervisor logs, and your real‑time processing will stay rock‑solid—even under heavy traffic spikes.
Monetize the Knowledge
If you found this guide valuable, consider upgrading to a managed Laravel VPS service that handles OPCache, Redis, and Supervisor for you. Our partner TurboVPS Laravel offers a 24/7 monitoring layer, automated rollbacks, and a $49/month starter plan perfect for growing SaaS startups.