Laravel Queue Worker Crashes on Shared Hosting: 5 Midnight‑Hour Fixes for CPU Limits, File Permissions, and FPM Mlock Issues
You’re staring at a blinking terminal at 2 AM, the queue worker keeps dying, and every log entry screams “CPU limit exceeded” or “mlock denied”. It’s the nightmare every Laravel dev on shared hosting lives through. This guide cuts through the noise, gives you five battle‑tested fixes, and shows you how to keep your queues humming even on a cheap VPS or a shared cPanel box.
Why This Matters
Queue workers are the backbone of email dispatch, webhook processing, and heavy data imports. When they crash, your users notice delayed emails, failed payments, and a damaged brand reputation. On shared hosting the limits are stricter, so a single mis‑configured setting can bring the whole pipeline to a halt.
Common Causes
- CPU throttling enforced by the host (often a 30‑second per‑request cap)
- Improper file permissions on storage and bootstrap/cache
- PHP‑FPM mlock restrictions that prevent memory locking
- Supervisor not respecting the provider's process limits
- Redis or MySQL connections timing out under load
Step‑By‑Step Fix Tutorial
1️⃣ Reduce CPU Load – Use --timeout and --sleep
php artisan queue:work --tries=3 --timeout=60 --sleep=3
Setting a lower timeout tells the worker to kill long‑running jobs before the host’s CPU watchdog intervenes.
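Beyond --timeout, recycling the worker process itself keeps CPU and memory accounting flat, because a fresh PHP process never accumulates leaked memory. A hedged sketch, assuming a recent Laravel version (8+) where the --max-jobs and --max-time flags exist:

```shell
# Recycle the worker periodically so resource usage stays flat:
#   --max-jobs=250  -> exit after 250 jobs (Supervisor or cron restarts it)
#   --max-time=3600 -> exit after one hour regardless of job count
php artisan queue:work --tries=3 --timeout=60 --sleep=3 --max-jobs=250 --max-time=3600
```

Because the worker exits cleanly instead of being killed mid-job, the host's watchdog has nothing to complain about.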
2️⃣ Fix File Permissions
# Set proper ownership (replace www-data with your user)
sudo chown -R $USER:www-data storage bootstrap/cache
# Limit permissions to 775 for directories, 664 for files
find storage bootstrap/cache -type d -exec chmod 775 {} \;
find storage bootstrap/cache -type f -exec chmod 664 {} \;
Some shared hosts run PHP as nobody or apache instead of www-data. Adjust the group accordingly.
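If you're nervous about running recursive chmod against a live app, you can rehearse the 775/664 scheme on a scratch directory first. A minimal sketch, assuming GNU coreutils (stat -c is Linux-specific; macOS uses stat -f):

```shell
# Rehearse the permission scheme on a throwaway tree before touching the app
tmp=$(mktemp -d)
mkdir -p "$tmp/storage/logs"
touch "$tmp/storage/logs/laravel.log"

# Same commands as above, pointed at the scratch tree
find "$tmp/storage" -type d -exec chmod 775 {} \;
find "$tmp/storage" -type f -exec chmod 664 {} \;

# Directories should report 775, files 664
stat -c '%a %n' "$tmp/storage/logs" "$tmp/storage/logs/laravel.log"
rm -rf "$tmp"
```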
3️⃣ Disable PHP‑FPM Mlock (or request it)
If your provider caps memory locking (the mlock system call), PHP‑FPM workers that try to lock pages can be killed with SIGKILL under load. Note that the memlock ceiling is an OS‑level limit (ulimit -l), not a PHP‑FPM pool directive, so on a VPS it has to be raised at the service level:
# Check the current memory-lock limit for the FPM user
ulimit -l
# On a systemd-based VPS, raise it with a unit override:
sudo systemctl edit php8.2-fpm
# then add:
#   [Service]
#   LimitMEMLOCK=infinity
# Restart PHP-FPM
sudo systemctl restart php8.2-fpm
On shared hosting you usually can't change these limits yourself, and asking the host to relax disable_functions or ulimits may violate their policies. Contact support first.
4️⃣ Tune Supervisor for Shared Limits
[program:laravel-queue]
process_name=%(program_name)s_%(process_num)02d
command=php /home/username/project/artisan queue:work --sleep=3 --tries=3
autostart=true
autorestart=true
user=username
numprocs=2
stopwaitsecs=3600
stdout_logfile=/home/username/logs/queue.log
stderr_logfile=/home/username/logs/queue-error.log
environment=PATH="/usr/local/bin:/usr/bin:/bin",HOME="/home/username"
Set numprocs low enough to stay under the host’s process cap (usually 5‑10).
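After saving the config (typically under /etc/supervisor/conf.d/laravel-queue.conf), Supervisor has to be told about the new program. A sketch using the standard supervisorctl workflow:

```shell
# Load the new program definition and start the workers
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start "laravel-queue:*"

# Confirm both numprocs workers show RUNNING
sudo supervisorctl status
```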
5️⃣ Offload Heavy Jobs to Redis Queue
# .env
QUEUE_CONNECTION=redis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
# In config/queue.php
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 90,
    ],
],
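Before flipping the driver, it's worth confirming Redis is actually reachable from the box. Also note that retry_after (90 here) should stay larger than the worker's --timeout, otherwise a slow job can be handed to a second worker and processed twice. A quick sanity check:

```shell
# Expect "PONG" if Redis is up and reachable on the configured host/port
redis-cli -h 127.0.0.1 -p 6379 ping

# Clear any cached config so the new QUEUE_CONNECTION takes effect
php artisan config:clear
```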
Install the phpredis extension for the fastest driver (for example, pecl install redis). If your host won't install PHP extensions, fall back to the pure‑PHP client: composer require predis/predis.
VPS or Shared Hosting Optimization Tips
- Enable OPcache in php.ini (opcache.enable=1)
- Set memory_limit to at least 256M for queue workers
- Use systemd on a VPS instead of Supervisor for tighter resource control
- Deploy Laravel Horizon on a VPS for real‑time queue metrics
- Run artisan config:cache and artisan route:cache after each deploy
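If you do move to systemd on a VPS, a minimal unit sketch looks like the following. The paths, user, and PHP version are placeholders; CPUQuota and MemoryMax are standard systemd resource-control directives for capping the worker:

```ini
# /etc/systemd/system/laravel-queue.service (sketch; adjust paths and user)
[Unit]
Description=Laravel queue worker
After=network.target redis.service

[Service]
User=username
ExecStart=/usr/bin/php /home/username/project/artisan queue:work --sleep=3 --tries=3 --max-time=3600
Restart=always
RestartSec=3
# Keep the worker inside shared/VPS resource caps
CPUQuota=50%
MemoryMax=256M

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now laravel-queue`; systemd then restarts the worker automatically whenever --max-time makes it exit.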
Real World Production Example
Acme SaaS runs 12,000 queued emails nightly on a 2‑CPU shared plan. After applying the five fixes, CPU spikes dropped from 95% to 38% and job failures fell from 17% to under 0.5%.
Before
2024-04-01 02:12:45 ERROR: Worker 1234 killed (CPU limit exceeded)
2024-04-01 02:13:01 INFO: Queue retry #5 for Job\SendEmail
...
After
2024-04-01 02:12:45 INFO: Worker 5678 started
2024-04-01 02:12:46 SUCCESS: Sent email to user@example.com
2024-04-01 02:12:48 INFO: Queue empty – sleeping 3s
...
Security Considerations
- Never run queue workers as root; use a limited user.
- Keep .env outside the web root and restrict it (chmod 640).
- Set APP_DEBUG=false in production to avoid leaking stack traces.
- Use Cloudflare "Bot Fight Mode" to block automated queue‑spamming attacks.
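The .env restriction can be rehearsed and verified on a scratch file before you touch the real one (the exact owner and group depend on your host):

```shell
# Demonstrate the 640 restriction on a scratch .env before applying it for real
tmp=$(mktemp -d)
printf 'APP_DEBUG=false\n' > "$tmp/.env"
chmod 640 "$tmp/.env"
stat -c '%a' "$tmp/.env"   # 640: owner read/write, group read, others nothing
rm -rf "$tmp"
```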
Bonus Performance Tips
- Leverage Laravel job batching to split massive imports.
- Set maxmemory-policy allkeys-lru on a cache‑only Redis instance; keep the queue's Redis on the default noeviction so pending jobs are never silently dropped.
- Run php artisan schedule:work on a separate low‑priority worker.
- Use Octane with Swoole on a VPS for ultra‑low‑latency queues.
- Compress large outgoing email attachments with gzip to lower bandwidth.
FAQ
Q: My host doesn't allow supervisorctl. What now?
A: Use a cron entry that runs php artisan queue:work --stop-when-empty every minute (the legacy --daemon flag has been removed from modern Laravel). It's less graceful but works on most cPanel accounts.
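A concrete crontab sketch for that fallback (the PHP binary and project paths are placeholders; flock prevents overlapping runs if a batch takes longer than a minute):

```shell
# crontab -e  (runs every minute; --max-time=50 makes the worker exit before the next tick)
* * * * * /usr/bin/flock -n /tmp/laravel-queue.lock /usr/bin/php /home/username/project/artisan queue:work --stop-when-empty --max-time=50 >> /home/username/logs/queue.log 2>&1
```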
Q: Should I switch to Laravel Horizon?
A: Only on a VPS or dedicated server. Horizon relies on Redis and requires the proc_open function, which many shared hosts disable.
Final Thoughts
Queue stability on shared hosting is a juggling act between CPU caps, permission quirks, and PHP‑FPM limits. By applying the five midnight‑hour fixes above, you turn a flaky worker into a reliable background engine—without spending a fortune on a massive VPS.
Looking for Cheap, Secure Hosting?
Grab affordable, Laravel‑friendly hosting and get the resources you need to run queues pain‑free: Hostinger – Cheap Secure Hosting