Laravel Queue Workers Crashing on Shared Hosting: 5 Hidden cPanel/PHP‑FPM Settings That Quietly Break Your Cron Jobs
You’ve spent hours watching your Laravel queues sputter, the logs scream “memory limit exceeded,” and the cPanel cron UI shows completed (0 secs). You’re not alone—every senior PHP developer has stared at a dead worker, ripped out a config line, and felt the sting of a lost deployment. This article is the antidote. We’ll expose the five sneaky cPanel/PHP‑FPM knobs that silently kill your queue workers, then give you a battle‑tested, step‑by‑step fix that works on both shared hosting and low‑cost VPS.
Why This Matters
Queue workers power everything from email notifications to real‑time API throttling. When they crash:
- Customers wait on delayed emails.
- Webhooks time out, breaking integrations.
- Server CPU spikes as cron keeps respawning dead processes.
- Hosting bills blow up because you’re forced onto a pricey VPS.
In short, a broken queue = lost revenue and a bad developer reputation.
Common Causes on Shared Hosting
Most shared‑hosting providers hide the real PHP‑FPM configuration behind cPanel. The default settings are tuned for low‑traffic WordPress blogs, not Laravel’s php artisan queue:work daemon. The five hidden culprits are:
- pm.max_children – caps how many PHP‑FPM workers can run concurrently.
- memory_limit – often set to 128M, far too low for heavy jobs.
- request_terminate_timeout – kills a process after a fixed number of seconds.
- pm.process_idle_timeout – terminates idle workers, forcing cron to restart them.
- request_slowlog_timeout – triggers slowlog spikes that fill disk space.
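Before touching anything, it helps to see what your pool currently sets. A minimal sketch: the ea-php version in the path is an assumption (adjust it to your install), and the fallback sample file simply lets you exercise the command on any machine.

```shell
# Print the directives this article tunes from the active pool config.
# Path is typical for cPanel/EA-PHP installs; adjust the version to match yours.
POOL_CONF="/opt/cpanel/ea-php82/root/etc/php-fpm.d/www.conf"

# Fallback sample so the grep can be tried anywhere, even off-server.
if [ ! -r "$POOL_CONF" ]; then
  POOL_CONF="$(mktemp)"
  cat > "$POOL_CONF" <<'EOF'
pm.max_children = 5
request_terminate_timeout = 30
request_slowlog_timeout = 5s
EOF
fi

# Show the tunables covered below; directives missing from the file stay silent.
grep -E '^(pm\.max_children|pm\.process_idle_timeout|request_terminate_timeout|request_slowlog_timeout)' "$POOL_CONF"
```

If a directive does not appear at all, PHP‑FPM is running with its compiled-in default for that setting.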
Step‑By‑Step Fix Tutorial
Where to edit: on shared hosting without root access, place PHP overrides in a .user.ini file in your site's document root. If you have root, edit /opt/cpanel/ea-php*/root/etc/php-fpm.d/www.conf directly.
1. Increase max_children
Set enough slots for your job volume. A safe starting point is 8 workers per CPU core.
# php-fpm pool config (www.conf) – pm.* directives are pool-level
# and cannot be set from .user.ini or .htaccess
pm.max_children = 16
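A RAM‑based sanity check on that number is worth doing before committing it. The figures below are assumptions; replace them with real measurements (per-worker resident size is visible via ps aux | grep php-fpm):

```shell
# Rough pm.max_children sizing: spare RAM divided by average worker footprint.
SPARE_MB=2048      # RAM you can dedicate to PHP-FPM (assumption)
AVG_WORKER_MB=128  # average resident size per worker (measure, don't guess)
MAX_CHILDREN=$((SPARE_MB / AVG_WORKER_MB))
echo "pm.max_children = $MAX_CHILDREN"   # prints 16 with these inputs
```

If the RAM-derived number is lower than the CPU-derived one, the RAM number wins; swapping workers is worse than queueing them.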
2. Raise memory_limit
Laravel jobs often need 256‑512M, especially when using spatie/laravel-medialibrary or heavy image processing.
# .user.ini
memory_limit = 512M
3. Extend request_terminate_timeout
Many cPanel builds set this to 30 seconds (stock PHP‑FPM defaults it to 0, i.e. disabled). If a job runs longer, PHP‑FPM kills the process mid‑flight.
# php-fpm pool config
request_terminate_timeout = 300
4. Disable idle timeouts for daemon mode
When you run php artisan queue:work (daemon mode is the default in modern Laravel), you want workers to stay alive. pm.process_idle_timeout only applies when pm = ondemand, so use a dynamic pool and stop recycling workers after a request count:
# php-fpm pool config
pm = dynamic
pm.max_requests = 0
5. Tame request_slowlog_timeout
Set it high enough to catch real bottlenecks, not every normal request.
# php-fpm pool config
slowlog = /var/log/php-fpm/www-slow.log
request_slowlog_timeout = 10s
After saving changes, reload the service: systemctl restart php-fpm on a VPS, or restart PHP‑FPM from WHM on a cPanel server.
VPS or Shared Hosting Optimization Tips
- Supervisor – keep workers alive. Example config:
[program:laravel-queue]
process_name=%(program_name)s_%(process_num)02d
command=php /home/username/laravel/artisan queue:work redis --sleep=3 --tries=3
autostart=true
autorestart=true
user=username
numprocs=4
redirect_stderr=true
stdout_logfile=/home/username/logs/queue.log
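If your plan doesn't allow Supervisor at all (common on shared hosting), a cron entry can approximate the keep‑alive behaviour. This is a sketch: the PHP binary path, project path, and log path are placeholders, and --max-time requires Laravel 8+ (drop it on older versions).

```shell
# crontab -e: run every minute, drain the queue, exit before the next tick fires
* * * * * /usr/local/bin/php /home/username/laravel/artisan queue:work --stop-when-empty --max-time=50 >> /home/username/logs/queue.log 2>&1
```

The --stop-when-empty flag prevents overlapping long-lived workers from piling up between cron ticks.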
- Redis – use a dedicated Redis instance, not the default cPanel cache.
# .env
QUEUE_CONNECTION=redis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
- MySQL tweaks – increase innodb_buffer_pool_size to as much as 70% of RAM for heavy DB jobs.
# /etc/my.cnf.d/server.cnf
[mysqld]
innodb_buffer_pool_size=1G # adjust to your RAM
max_connections=200
Real World Production Example
Acme SaaS runs a Laravel API on a 2 vCPU, 4 GB shared host. Their queue processes image thumbnails and sends transactional emails. Before the fix:
- Workers died after 30 seconds.
- Memory limit errors appeared every 5 minutes.
- CPU usage hovered at 90% because cron kept respawning.
After applying the five settings, plus Supervisor, they achieved:
- Zero worker crashes for 30 days.
- CPU dropped to 35%.
- Email latency cut from 45 s to 7 s.
Before vs After Results
| Metric | Before | After |
|---|---|---|
| Worker crashes | 12 / day | 0 |
| Avg job time | 42 s | 8 s |
| CPU usage | 88 % | 33 % |
| Disk I/O (slowlog) | 12 GB/day | 0.4 GB/day |
Security Considerations
Changing PHP‑FPM limits can expose your server to resource‑exhaustion attacks if you allow public job submissions. Mitigate by:
- Rate‑limiting API endpoints with Laravel Sanctum and the throttle:60,1 middleware.
- Restricting Redis access to localhost or a private network.
- Setting disable_functions for dangerous PHP functions. Note that this is a system-level directive, so it belongs in the pool config via php_admin_value, not in .user.ini.
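Because disable_functions is system-level, it is typically set in the pool config rather than .user.ini. A sketch of what that might look like; the function list is illustrative, and Laravel features built on Symfony Process need proc_open, so prune the list to what your app truly never calls:

```ini
; php-fpm pool config (www.conf)
; illustrative list only – verify nothing in your app calls these
php_admin_value[disable_functions] = exec,passthru,shell_exec,system,popen
```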
Bonus Performance Tips
- Set a retry backoff (queue:work --backoff=5, or --delay=5 on older Laravel) to avoid immediate re‑queue loops.
- Use queue:work --max-jobs=1000 --stop-when-empty for batch processing.
- Enable opcache.enable_cli=1 to speed up Artisan commands.
- Reserve dispatchSync() (dispatchNow() in older Laravel) for critical, single‑run tasks that must bypass the queue.
- Put Cloudflare “Cache‑Everything” in front of static assets; it reduces Apache/Nginx load.
FAQ
Q: My host doesn’t allow editing .user.ini. What now?
A: Create a custom PHP‑FPM pool via cPanel > “PHP-FPM Settings” and add the directives in the “Additional directives” field.
Q: Will increasing pm.max_children affect other shared sites?
A: Yes, on a true shared server you compete for RAM. Scale gradually and monitor top or cPanel’s “Resource Usage”.
Q: Do I need a separate Redis server for queues?
A: Not strictly. For low‑traffic sites, an in‑memory Redis instance on the same VPS is fine. For scaling, consider a managed Redis or a Docker container with persistence.
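For the Docker route, a minimal compose sketch with persistence might look like this. The image tag, password, and volume name are placeholders; binding the port to 127.0.0.1 keeps Redis off the public interface:

```yaml
# docker-compose.yml – minimal sketch, not production-hardened
services:
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes --requirepass change-me
    ports:
      - "127.0.0.1:6379:6379"   # localhost only
    volumes:
      - redis-data:/data        # AOF persistence survives restarts
volumes:
  redis-data:
```

Point QUEUE_CONNECTION=redis at it and set REDIS_PASSWORD in .env to match the --requirepass value.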
Final Thoughts
Queue stability is not a “nice‑to‑have”—it’s the backbone of any modern Laravel‑powered SaaS. The five hidden cPanel/PHP‑FPM settings outlined above are the silent killers that turn a smooth deployment into a nightly nightmare. Adjust them, pair with Supervisor, and you’ll convert crashes into reliable background processing without splurging on an expensive VPS.
Ready to future‑proof your Laravel queues and keep your shared‑hosting bill low? Start tweaking now.