Laravel 10 Queue Workers Stuck on VPS: Why My Cron Jobs Keep Crashing and How to Fix the Redis & OpCache Deadlock in Hours
Ever watched a Laravel queue worker die silently while your cron keeps firing? You push a job, a `php artisan queue:work` process spins up, and then nothing. Hours of debugging, countless `supervisorctl restart` attempts, and still the same “stuck” screen. If you’re on a VPS, juggling Redis, OpCache and a tight PHP‑FPM pool, you know the frustration is real.
Why This Matters
Queue workers are the backbone of any SaaS‑style Laravel app. Missed jobs mean missed emails, failed webhook deliveries, and a broken user experience. In production, a deadlocked Redis + OpCache combo can bring your entire API‑layer to a halt, while your cron keeps spawning new processes that immediately time out. The cost? Lost revenue, angry customers, and wasted engineering hours.
Common Causes
- OpCache locking the compiled PHP files while Redis holds a stale lock.
- Supervisor `numprocs` set too high for the VPS memory.
- PHP‑FPM request timeout (`request_terminate_timeout`) lower than the average job runtime.
- Redis persistence mode (RDB/AOF) causing I/O spikes during backups.
- Mis‑configured `queue:restart` signals colliding with cron.
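Several of the causes above are memory-sizing problems in disguise. Before picking `numprocs` or `pm.max_children`, it helps to measure the average worker footprint; a rough sketch (process names assume PHP 8.2 on Ubuntu, adjust to your build):

```shell
# Average RSS (in MB) of PHP-FPM children -- divide free RAM by this
# number to pick a safe pm.max_children value.
ps -C php-fpm8.2 -o rss= | awk '{sum+=$1; n++} END {if (n) printf "%.1f MB avg over %d procs\n", sum/n/1024, n}'

# Same for queue workers spawned by Supervisor, to size numprocs
ps aux | grep '[q]ueue:work' | awk '{sum+=$6; n++} END {if (n) printf "%.1f MB avg over %d workers\n", sum/n/1024, n}'
```

If the averages multiplied by the configured process counts exceed your VPS RAM, the OOM killer will eventually claim a worker.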
Step‑by‑Step Fix Tutorial
1. Verify the Deadlock
# Check Redis for lingering locks (SCAN is non-blocking, unlike KEYS)
redis-cli --scan --pattern "*queue*"
# Inspect OpCache status (opcache_get_status returns an array; echo it as JSON)
php -r 'echo json_encode(opcache_get_status(true));' | jq '.opcache_statistics'
2. Adjust PHP‑FPM Settings
# /etc/php/8.2/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 30 ; adjust to VPS memory
pm.start_servers = 5
pm.min_spare_servers = 3
pm.max_spare_servers = 10
request_terminate_timeout = 300
request_slowlog_timeout = 10
rlimit_cpu = 120
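Before reloading PHP‑FPM it is worth linting the pool file; the binary name and slowlog path below are assumptions for a stock Ubuntu PHP 8.2 install — match them to your pool's `slowlog` directive:

```shell
# Lint the FPM config before restarting; a typo here takes the whole pool down
sudo php-fpm8.2 -t
# After a few minutes of traffic, check which requests trip the slowlog
sudo tail -n 20 /var/log/php8.2-fpm.slow.log
```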
3. Tweak OpCache
# /etc/php/8.2/fpm/php.ini
opcache.enable=1
opcache.enable_cli=1
opcache.memory_consumption=256
opcache.max_accelerated_files=8000
opcache.validate_timestamps=1
opcache.revalidate_freq=2
opcache.file_update_protection=2 ; prevents lock race
Keep `opcache.validate_timestamps=1` with a low `revalidate_freq` during debugging; raise `revalidate_freq` back to 60 in production for performance.
4. Optimize Redis Persistence
# /etc/redis/redis.conf
# every 15 minutes if at least 1 key changed
save 900 1
# every 5 minutes if at least 10 keys changed
save 300 10
# disable AOF unless you need point-in-time recovery
appendonly no
maxmemory 256mb
maxmemory-policy allkeys-lru
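After restarting Redis, you can confirm the persistence settings took effect and that background saves are healthy (standard `redis-cli` commands against a local instance):

```shell
# Last background save should report "ok"; long fork times hint at I/O spikes
redis-cli INFO persistence | grep -E 'rdb_last_bgsave_status|rdb_last_bgsave_time_sec|aof_enabled'
# Confirm the eviction policy took effect
redis-cli CONFIG GET maxmemory-policy
```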
5. Reconfigure Supervisor
# /etc/supervisor/conf.d/laravel-queue.conf
[program:laravel-queue]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work redis --sleep=3 --tries=3 --timeout=120
autostart=true
autorestart=true
user=www-data
numprocs=4
stopwaitsecs=150 ; must exceed --timeout so jobs are not killed mid-run
stdout_logfile=/var/log/laravel/queue.log
stderr_logfile=/var/log/laravel/queue_error.log
environment=QUEUE_CONNECTION="redis"
6. Clean Up Cron
# /etc/cron.d/laravel
* * * * * www-data php /var/www/html/artisan schedule:run >> /dev/null 2>&1
# Remove any duplicate schedule:run entries that were added manually
Avoid calling `php artisan queue:restart` from a cron; it forces a graceful shutdown that can clash with Supervisor restarts.
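To hunt down duplicate entries, a quick sweep of the usual cron locations helps (paths are the Debian/Ubuntu defaults):

```shell
# List every place a schedule:run entry might hide
sudo grep -R "schedule:run" /etc/cron.d /etc/crontab /var/spool/cron 2>/dev/null
# And make sure no stray queue:restart is scheduled anywhere
sudo grep -R "queue:restart" /etc/cron.d /etc/crontab /var/spool/cron 2>/dev/null
```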
7. Restart Services
sudo systemctl restart php8.2-fpm
sudo systemctl restart redis
sudo supervisorctl reread && sudo supervisorctl update
sudo supervisorctl restart laravel-queue:*
VPS or Shared Hosting Optimization Tips
- Memory budgeting: Keep total RAM usage (PHP‑FPM + Redis + Nginx) under 80% of the VPS RAM.
- Use swap only as a safety net. High swap latency will kill queue latency.
- On shared hosting, run `php artisan schedule:run` via the provider’s cron UI, not system‑wide cron.
- Enable Cloudflare “Rocket Loader” only for front‑end assets; keep API endpoints raw for fastest response.
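The 80% memory budget is easy to check with a one-liner; a rough sketch that sums the resident size of the main stack processes (process names are assumptions for a stock Ubuntu setup):

```shell
# Sum RSS (KB) of the stack, compare against total RAM, keep the ratio under 0.8
total_kb=$(ps -C nginx,redis-server,php-fpm8.2 -o rss= | awk '{s+=$1} END {print s}')
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
awk -v u="$total_kb" -v t="$mem_kb" 'BEGIN {printf "stack uses %.0f%% of RAM\n", 100*u/t}'
```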
Real World Production Example
Acme SaaS runs a Laravel 10 API on a 2‑CPU, 4 GB Ubuntu 22.04 VPS. After the Redis/OpCache deadlock, the email:send queue stalled, causing a 30 % drop in daily revenue. By applying the steps above, they reduced average job runtime from 45 s to 9 s and eliminated all “stuck worker” alerts in NewRelic.
Before vs After Results
| Metric | Before | After |
|---|---|---|
| Avg Queue Latency | 45 s | 9 s |
| CPU Utilization | 92 % | 58 % |
| Redis Memory | 320 MB | 215 MB |
After the fix, `supervisorctl status` showed a stable `queue:work` process count, and the logs contained no “connection lost” errors.
Security Considerations
- Lock down Redis to `127.0.0.1` and enable `requirepass` in production.
- Set `opcache.protect_memory=1` to avoid PHP code injection via memory corruption.
- Use a non‑root user for Supervisor (`www-data`) and limit `sudo` rights.
- Enable UFW rules: allow only 80/443 and 22 (rate‑limited), and keep 6379 bound to localhost.
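The firewall and Redis bullets above can be sketched with stock `ufw` and `redis-cli` commands (illustrative only; review before running on a live box):

```shell
# Hypothetical UFW baseline matching the list above
sudo ufw default deny incoming
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw limit 22/tcp          # rate-limit SSH brute force
sudo ufw enable

# Redis should only listen on loopback -- verify:
redis-cli CONFIG GET bind
ss -ltn | grep 6379
```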
Bonus Performance Tips
- Enable Laravel Horizon (`php artisan horizon`) if you need real‑time dashboarding; it uses Redis more efficiently.
- Switch to Redis Streams for high‑throughput event pipelines.
- Run `php artisan config:cache` and `php artisan route:cache` after every deploy.
- Compress outbound API responses with `gzip` at the Nginx level.
- Consider Docker‑based deployment with `php-fpm` and `redis:alpine` containers for isolation.
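The gzip tip can be sketched as a small Nginx snippet (illustrative values, not tuned for any particular site):

```nginx
# /etc/nginx/conf.d/gzip.conf (illustrative)
gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types application/json application/javascript text/css text/plain;
gzip_vary on;
```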
FAQ
Q: My queue workers still restart after fixing OpCache. What else?
A: Check the systemd journal for OOM killer events. If the VPS is memory‑starved, lower pm.max_children or upgrade the plan.
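To confirm the OOM-killer theory on a systemd-based Ubuntu VPS:

```shell
# Kernel messages mention "Out of memory" when the OOM killer fires
sudo journalctl -k --since "24 hours ago" | grep -iE "out of memory|oom-killer"
# Which process it chose to kill
sudo dmesg | grep -i "killed process"
```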
Q: Can I use MySQL as a backup queue?
A: Yes, but Redis is 10‑20× faster for transient jobs. Use MySQL only for tasks that must survive a full Redis flush.
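A minimal sketch of a database fallback queue, assuming the stock Laravel 10 `database` queue driver and a hypothetical `ProcessPayout` job class:

```shell
# Create the jobs table once and migrate
php artisan queue:table
php artisan migrate
# Dispatch only the critical jobs onto the database connection, e.g. in PHP:
#   ProcessPayout::dispatch($payout)->onConnection('database');
# Then run a dedicated worker for that connection
php artisan queue:work database --tries=3 --timeout=120
```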
Q: Does Cloudflare cache Laravel API routes?
A: By default Cloudflare respects `Cache-Control`. For API endpoints, set `Cache-Control: no-store` to avoid stale data.
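At the Nginx edge, forcing the header can be sketched like this (illustrative `location` block; adapt the paths to your routing):

```nginx
# Force no-store on API routes before Cloudflare sees the response
location /api/ {
    add_header Cache-Control "no-store" always;
    try_files $uri /index.php?$query_string;
}
```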
Final Thoughts
Queue deadlocks on a VPS are rarely a Laravel bug; they’re usually a mis‑aligned stack of PHP‑FPM, OpCache, and Redis settings. By tightening each layer, you turn a chaotic “cron crash” into a reliable, scalable background processing engine. The payoff is measurable: lower latency, higher throughput, and a happier dev team.
Looking for Cheap, Secure Hosting?
For small teams or side‑projects, Hostinger offers lightning‑fast VPS plans with built‑in Redis and OpCache support. Use the referral code above for an extra discount and get your Laravel queues humming in minutes.