Laravel Queue Workers Hang on Shared Hosting: 5 Proven Fixes for Fatal 503 Errors, CMS Inertia & PHP‑FPM Crashes
You’ve watched the console explode, the 503 page flash, and your Laravel queue silently die while the rest of the site keeps humming. It’s the kind of nightmare that makes you question every “quick‑install” you ever did on shared hosting. In this article we cut through the noise, give you five battle‑tested fixes, and show how to prevent the dreaded PHP‑FPM crash from ever happening again.
Why This Matters
Queue workers are the backbone of any modern Laravel‑powered SaaS, API, or WordPress‑integrated app. When they stall you lose:
- Real‑time notifications
- API rate‑limiting enforcement
- Background image processing
- Payment webhook handling
On a shared environment a single 503 can cascade into lost revenue, bad user experience, and a bruised developer reputation.
Common Causes on Shared Hosting
- Insufficient PHP‑FPM child processes
- Mis‑configured Supervisor that restarts workers too aggressively
- Redis connection limits hit by multiple apps (WordPress + Laravel)
- Composer autoloader bloat causing memory exhaustion
- Apache/Nginx timeout mismatches with queue runtime
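Before applying fixes, it helps to confirm which cause you're actually hitting. One quick triage step is counting PHP‑FPM's own warnings about exhausted child processes; the log path below is the Ubuntu default and is an assumption about your host:

```shell
# Count "max_children" warnings in the PHP-FPM log; a non-zero count means
# the pool is running out of workers (log path is an assumption — adjust it).
LOG=${LOG:-/var/log/php8.2-fpm.log}
count=$(grep -c 'max_children' "$LOG" 2>/dev/null)
count=${count:-0}   # missing or unreadable log counts as zero
echo "max_children warnings: $count"
```

If the count is non-zero, start with the PHP‑FPM pool tuning in step 1; if it stays at zero, look at Supervisor and Redis first.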
Step‑By‑Step Fix Tutorial
1. Tune PHP‑FPM Pools
Open the pool file (usually /etc/php/8.2/fpm/pool.d/www.conf on Ubuntu) and adjust the following values:
pm = dynamic
pm.max_children = 30 ; increase based on RAM
pm.start_servers = 5
pm.min_spare_servers = 3
pm.max_spare_servers = 10
request_terminate_timeout = 300 ; protect against runaway jobs
After editing, restart PHP‑FPM:
sudo systemctl restart php8.2-fpm
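A sane starting point for pm.max_children can be derived from available RAM. The figures below (1 GB reserved for the OS and database, ~50 MB per child) are rough assumptions; replace them with measurements from your own server:

```shell
# Estimate pm.max_children: (total RAM - reserved) / memory per FPM child.
# 4096 MB total and 50 MB per child are illustrative values, not measurements.
total_mb=4096
reserved_mb=1024     # OS, MySQL, Redis, etc.
per_child_mb=50
max_children=$(( (total_mb - reserved_mb) / per_child_mb ))
echo "suggested pm.max_children: $max_children"
```

On a 4 GB box this yields 61, so the pm.max_children = 30 used above leaves comfortable headroom for the database and other tenants.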
2. Configure Supervisor Properly
Supervisor manages the long‑running Laravel workers. A common mistake is setting stopwaitsecs too low, causing workers to be killed before they finish.
[program:laravel-queue]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work redis --sleep=3 --tries=3 --timeout=300
autostart=true
autorestart=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/log/laravel-queue.log
stopwaitsecs=360
Update Supervisor and reload:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status laravel-queue*
As a fallback, add a cron entry (crontab -e) that runs php artisan queue:restart every 30 minutes.
3. Optimize Redis Connection Limits
Shared plans often cap Redis connections at 64. Set the Laravel Redis client to reuse connections:
'redis' => [

    'client' => env('REDIS_CLIENT', 'phpredis'),

    'options' => [
        'cluster' => env('REDIS_CLUSTER', 'redis'),
        'prefix' => env('REDIS_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_database_'),
    ],

    'default' => [
        // phpredis connection parameters belong on the connection,
        // not in 'options':
        'read_timeout' => 60,
        'persistent' => true, // reuse connections instead of opening new ones
        // …
    ],

],
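To see how close you are to a shared plan's connection cap, inspect connected_clients in the output of redis-cli INFO clients. The snippet below parses a captured sample of that output so the parsing is reproducible; in production, pipe the live redis-cli output instead (real INFO output has trailing carriage returns, so strip them with tr -d '\r'):

```shell
# Parse connected_clients from `redis-cli INFO clients` style output.
# `sample` stands in for live output: redis-cli INFO clients | tr -d '\r'
sample='# Clients
connected_clients:42
blocked_clients:0'
clients=$(printf '%s\n' "$sample" | awk -F: '/^connected_clients/ {print $2}')
echo "connected clients: $clients"
```

If that number hovers near your plan's cap (often 64), persistent connections and fewer worker processes are the levers to pull.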
4. Trim Composer Autoloader
Running composer install --optimize-autoloader --no-dev on production removes dev packages and creates a class map that reduces memory use.
composer install --no-dev --prefer-dist --optimize-autoloader
For shared hosting without SSH access, build the vendor directory on a local machine and upload it via SFTP.
5. Align Web Server Timeouts
If Nginx sits in front of PHP‑FPM, make sure fastcgi_read_timeout matches the queue --timeout value.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php8.2-fpm.sock;
    fastcgi_read_timeout 300;
}
If you’re on Apache with mod_php, increase Timeout in httpd.conf:
Timeout 300
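The alignment rule above is easy to bake into a deploy-time sanity script: the worker's --timeout should never exceed the web server's timeout. The values below mirror this article's configs; substitute whatever your deploy actually sets:

```shell
# Sanity check: queue --timeout must not exceed the server-side timeout
# (fastcgi_read_timeout on Nginx, Timeout on Apache). Values from this article.
queue_timeout=300
server_timeout=300
if [ "$queue_timeout" -le "$server_timeout" ]; then
  result=aligned
else
  result=mismatch
fi
echo "timeouts: $result"
```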
VPS or Shared Hosting Optimization Tips
- Swap space: Add a 1 GB swap file on low‑RAM VPS to avoid OOM kills.
- OPcache: Enable opcache.enable=1 and set opcache.memory_consumption=256.
- MySQL tuning: Set innodb_buffer_pool_size to roughly 70% of RAM for DB‑heavy apps.
- Cloudflare cache‑level: Set “Cache Everything” for static assets, but bypass for /api/* and queue endpoints.
- File descriptor limits: ulimit -n 4096 on a VPS; ask your host for an increase on shared plans.
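For the file-descriptor tip, check what your current shell actually allows before asking the host for more:

```shell
# Print the current soft limit on open file descriptors.
current=$(ulimit -n)
echo "open file limit: $current"
# `ulimit -n 4096` raises it for the current session on a VPS; on shared
# plans the hard limit is usually fixed and only the host can increase it.
```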
Warning: Never set pm.max_children higher than your RAM can support. Each child can consume 30‑50 MB; oversizing will crash PHP‑FPM.
Real World Production Example
Acme SaaS runs a Laravel API + a WordPress blog on a 2 CPU, 4 GB VPS. After the first 24 h of traffic they saw a spike in 503 errors during peak hours. Applying the five fixes above reduced the average queue latency from 12 s to 0.8 s and eliminated the fatal 503s completely.
Before vs After Results
| Metric | Before | After |
|---|---|---|
| Avg Queue Time | 12 s | 0.8 s |
| 503 Errors/Day | 27 | 0 |
| PHP‑FPM Restarts | 5/hr | 0 |
Security Considerations
- Never run queue workers as root; use the web user (www-data).
- Lock down Redis with a strong password and bind it only to localhost or a private network.
- Add exec, system, and shell_exec to disable_functions in php.ini if they aren't needed.
- Keep Composer dependencies up to date; run composer audit weekly.
Bonus Performance Tips
// In the job class's handle() method: mark a whole batch of IDs in one query
public function handle()
{
    DB::table('events')
        ->whereIn('id', $this->ids)
        ->update(['processed_at' => now()]);
}
Bundle 100 IDs per job instead of dispatching one job per ID. Combined with Redis pipelining, you can shave milliseconds off every request.
FAQ Section
Q: My host doesn’t allow Supervisor. Can I still run queue workers?
A: Yes. Use a cron entry that runs php artisan queue:work --once every minute. It’s less efficient, but it avoids needing root access to install a process manager.
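If you go the cron route, the entry looks like the fragment below (added via crontab -e). The project path matches the Supervisor config earlier in this article and is an assumption about your layout:

```
# Process one queued job every minute (crontab config fragment)
* * * * * cd /var/www/html && php artisan queue:work --once >> /dev/null 2>&1
```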
Q: Do I need Redis if I only have a WordPress blog?
A: Not mandatory, but adding the object-cache.php plugin and pointing it to a small Redis instance can cut DB load by 30 % and keep the Laravel queue from saturating the MySQL connection pool.
Q: Why does PHP‑FPM restart after a few minutes even after the fixes?
A: Check the systemd watchdog and the pm.max_requests setting. Setting pm.max_requests = 0 stops PHP‑FPM from recycling each child after a fixed number of requests, which can look like spontaneous restarts on shared hosts.
Final Thoughts
Queue workers hanging on shared or low‑end VPS environments is rarely a “Laravel bug”; it’s a server‑configuration problem. By aligning PHP‑FPM, Supervisor, Redis, and web‑server timeouts you eliminate the 503 nightmare and give your users a buttery‑smooth experience. Apply the five fixes, monitor the metrics, and you’ll see the same performance boost Acme SaaS enjoyed.