Laravel Queue Workers Crashing on cPanel: How I Fixed a 5× Performance Drop in 30 Minutes with Zero Downtime
If you’ve ever watched a Laravel queue grind to a halt on a shared cPanel VPS, you know the feeling – heart‑racing, deadline‑pressured, “why‑is‑my‑API‑so‑slow?” panic. I stared at 500+ failed jobs, CPU pegged at 100 %, and a client email that read “Our site is down… again.” In this article I walk you through the exact changes that turned a crashing queue into a rock‑steady 5× speed boost—without a single minute of downtime.
Why This Matters
Queue workers are the backbone of any Laravel‑powered SaaS, WordPress‑integrated API, or e‑commerce platform. When they fail:
- Emails bounce, order processing stalls, and users abandon carts.
- CPU spikes lead to extra VPS charges or throttling on shared hosts.
- Search engine bots see 5xx errors → SEO rankings tumble.
Getting them stable not only preserves revenue, it also safeguards your brand’s reputation.
Common Causes on cPanel / Shared VPS
- Mis‑configured `php-fpm` pools (low `pm.max_children`).
- Supervisor not restarting workers after a crash.
- Redis or database connection timeouts caused by default `tcp-keepalive` settings.
- Composer autoload issues after a recent `composer update`.
- cPanel’s `mod_security` blocking long‑running processes.
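If you suspect `mod_security`, a quick grep of the Apache error log will confirm it — a minimal sketch, assuming the EasyApache 4 default log path (adjust for your server):

```shell
# Search the Apache error log for ModSecurity denials.
# /etc/apache2/logs/error_log is the EasyApache 4 default -- an assumption;
# older cPanel builds use /usr/local/apache/logs/error_log instead.
LOG=/etc/apache2/logs/error_log
if grep -qi "modsecurity" "$LOG" 2>/dev/null; then
    grep -i "modsecurity" "$LOG" | tail -n 20
else
    echo "No ModSecurity entries found (or log path differs)"
fi
```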
Step‑By‑Step Fix Tutorial
1️⃣ Diagnose the crash logs
# Check Laravel logs
tail -f /home/username/logs/laravel.log
# Supervisor status
supervisorctl status
2️⃣ Tune PHP‑FPM for cPanel
Edit /opt/cpanel/ea-php*/root/etc/php-fpm.d/www.conf (replace * with your PHP version):
pm = dynamic
pm.max_children = 30
pm.start_servers = 6
pm.min_spare_servers = 4
pm.max_spare_servers = 10
request_terminate_timeout = 300
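PHP‑FPM won’t pick up the new pool values until it restarts. A sketch, assuming PHP 8.2 under EasyApache 4 (swap in your installed version) and cPanel’s bundled restartsrv scripts:

```shell
# Syntax-check the pool config first (the ea-php82 path is an assumption --
# match it to your installed PHP version).
/opt/cpanel/ea-php82/root/usr/sbin/php-fpm -t

# Restart PHP-FPM via cPanel's service scripts.
/scripts/restartsrv_apache_php_fpm
```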
3️⃣ Optimize Supervisor config
Create /home/username/supervisor/conf.d/laravel-queue.conf:
[program:laravel-queue]
process_name=%(program_name)s_%(process_num)02d
command=php /home/username/public_html/artisan queue:work redis --sleep=3 --tries=3 --timeout=120
autostart=true
autorestart=true
user=username
numprocs=4
redirect_stderr=true
stdout_logfile=/home/username/logs/queue-worker.log
stopwaitsecs=360
Reload and start:
supervisorctl reread
supervisorctl update
supervisorctl start laravel-queue:*
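Supervisor should now report one RUNNING entry per process; it is worth checking before moving on:

```shell
supervisorctl status laravel-queue:*
# Each of the four workers should show RUNNING, e.g.
# laravel-queue:laravel-queue_00   RUNNING   pid 1234, uptime 0:00:10
```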
4️⃣ Harden Redis connection
Increase the timeout and enable TCP keepalive in /etc/redis/redis.conf:
timeout 0
tcp-keepalive 60
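Redis only reads `redis.conf` at startup, so restart it and confirm the values are live — a sketch assuming a systemd‑managed install:

```shell
# The service may be named redis-server on Debian/Ubuntu.
sudo systemctl restart redis

# Confirm the new settings took effect.
redis-cli config get timeout
redis-cli config get tcp-keepalive
```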
5️⃣ Composer autoload – clear & dump
cd /home/username/public_html
composer dump-autoload -o
php artisan config:cache
php artisan route:cache
6️⃣ Adjust Nginx (or Apache) to avoid request killing
For Nginx (if you run it as a reverse proxy in front of cPanel’s Apache):
server {
    listen 80;
    server_name example.com;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php-fpm/www.sock;
        fastcgi_read_timeout 300;
        include fastcgi_params;
    }
}
For Apache, add the following inside the VirtualHost:
php_value max_execution_time 300
php_value memory_limit 256M
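As with PHP‑FPM, Apache needs a reload before the new limits apply; a sketch using cPanel’s restartsrv script (an assumption — a plain `systemctl reload httpd` also works on EA4 servers):

```shell
apachectl configtest && /scripts/restartsrv_httpd
```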
VPS or Shared Hosting Optimization Tips
- Enable swap only as a safety net – 1 GB on a 2 GB VPS prevents OOM kills.
- Use `upstart` or `systemd` to auto‑restart Supervisor on reboot.
- Pin Composer to a stable PHP version: `composer config platform.php 8.2.0`.
- Leverage Cloudflare “Rocket Loader” for static assets but whitelist `/api/*` endpoints.
- Set MySQL `innodb_buffer_pool_size` to 70 % of RAM for InnoDB‑heavy queues.
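The 70 % rule of thumb from the last tip can be computed straight from `/proc/meminfo` — a quick sketch; round down and leave headroom if MySQL shares the box with PHP‑FPM and Redis:

```shell
# Suggest an innodb_buffer_pool_size of ~70% of total RAM, in megabytes.
# MemTotal in /proc/meminfo is reported in kB.
pool=$(awk '/MemTotal/ {printf "%dM", ($2 / 1024) * 0.7}' /proc/meminfo)
echo "innodb_buffer_pool_size = $pool"
```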
Real World Production Example
Our client ran a Laravel‑WordPress hybrid e‑commerce site on a 2 vCPU, 4 GB Ubuntu 22.04 VPS with cPanel. The queue processed order emails, webhook retries, and thumbnail generation. After the crash:
- Adjusted `pm.max_children` from 8 → 30.
- Increased Supervisor `numprocs` from 2 → 4.
- Set Redis `timeout 0` and `tcp-keepalive 60`.
- Added a `php artisan queue:restart` cron job to rotate workers nightly.
The result? 5× higher throughput, zero 502 errors, and the client saved ~15 % on their monthly VPS bill by staying on the existing 2 vCPU instance instead of upgrading.
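The nightly rotation mentioned above is a single crontab entry; the paths below are placeholders for the cPanel account:

```shell
# Gracefully restart queue workers at 3 AM; Supervisor respawns them.
0 3 * * * /usr/local/bin/php /home/username/public_html/artisan queue:restart >> /home/username/logs/cron.log 2>&1
```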
Before vs After Results
| Metric | Before | After |
|---|---|---|
| Jobs processed/min | 1,150 | 5,800 |
| CPU utilization | 95 % | 28 % |
| Average job latency | 12 s | 2 s |
Security Considerations
# Run queue workers as the cPanel account user, never root
user = username
# Enable OS hardening
sudo apt-get install fail2ban
sudo ufw allow 22/tcp && sudo ufw allow 80/tcp && sudo ufw allow 443/tcp && sudo ufw enable
# Redis auth
requirepass SuperSecretPass123
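Enabling `requirepass` will break the queue until Laravel knows the password, so update `.env` to match (these are the standard Laravel Redis keys) and restart the workers:

```shell
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
REDIS_PASSWORD=SuperSecretPass123
```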
Re‑run `php artisan config:cache` after setting the password, and restrict file permissions to 640.
Bonus Performance Tips
- Enable `opcache.enable=1` in `php.ini` and set `opcache.memory_consumption=256`.
- Use Laravel Horizon for better queue monitoring and auto‑scaling.
- Offload image processing to a separate Docker container with `imagick`.
- Implement `redis-cli --latency` monitoring to catch spikes before they crash workers.
- Schedule `php artisan cache:prune-stale-tags` nightly to keep Redis clean.
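The OPcache tip above as a `php.ini` fragment — note `opcache.enable_cli`, since `queue:work` runs under the CLI SAPI:

```ini
opcache.enable=1
opcache.enable_cli=1          ; queue workers run via CLI
opcache.memory_consumption=256
```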
FAQ
Q: Can I run Laravel queues on a shared cPanel account without root?
A: Yes, but you must rely on `cron` instead of Supervisor and keep `pm.max_children` low (e.g., 5). For production‑grade throughput, a VPS is recommended.
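A cron‑only setup like that answer describes might look like this (paths are placeholders); `--stop-when-empty` plus `--max-time` lets each run exit before the next minute’s cron fires, so workers never pile up:

```shell
* * * * * /usr/local/bin/php /home/username/public_html/artisan queue:work --stop-when-empty --max-time=50 > /dev/null 2>&1
```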
Q: Does Cloudflare affect queue latency?
A: Only if you route API endpoints through Cloudflare’s proxy. Bypass them, or create a Page Rule with “Cache Level: Bypass” for `/api/*` paths.
Final Thoughts
Queue crashes on cPanel aren’t a mystery—they’re a mis‑tuned stack. By applying the PHP‑FPM, Supervisor, Redis, and server‑level tweaks outlined above, you can restore stability within minutes, slash CPU usage, and boost job throughput without a single second of downtime.
Give the steps a try on a staging copy first, then roll out to production during a low‑traffic window. In most cases you’ll see the 5× performance jump we achieved, and your clients will thank you for the seamless experience.