Laravel Queue Workers Gone Silent on VPS: 5 Time‑Saving Fixes for 500 Second Timeouts, Redis Memory Leaks, and File Permission Nightmares
You’ve just pushed a new release, the API endpoint spikes, and suddenly all of your Laravel queue workers disappear – no logs, no errors, just a silent timeout after 500 seconds. It’s the kind of nightmare that makes you question every line of code you wrote last night. This article cuts through the noise, gives you five battle‑tested fixes, and shows you how to turn a broken VPS into a lean, mean, queue‑processing machine.
Why This Matters
Queue workers are the heartbeat of any Laravel‑powered SaaS, WordPress‑integrated API, or e‑commerce platform. When they stall:
- Customer orders hang in limbo.
- Emails bounce back to the dead‑letter queue.
- CPU spikes, memory leaks, and skyrocketing cloud bills.
Fixing them isn’t just a convenience – it’s a revenue‑protecting necessity.
Common Causes
- PHP‑FPM `max_execution_time` hitting the default 30 seconds while Supervisor gives up after 500 seconds.
- Redis running out of memory because of un‑evicted jobs.
- Incorrect file permissions on `storage/` and `bootstrap/cache` causing silent drops.
- Supervisor not re‑spawning dead workers after a crash.
- Missing `queue:restart` after a code‑base change.
Step‑by‑Step Fix Tutorial
1. Extend PHP‑FPM & Supervisor Timeouts
Increase the maximum execution time and let Supervisor know it can wait longer.
# /etc/php/8.2/fpm/php.ini
max_execution_time = 900

# /etc/php/8.2/fpm/pool.d/www.conf
# (request_terminate_timeout is a PHP-FPM pool directive, not a php.ini setting)
request_terminate_timeout = 900
# /etc/supervisor/conf.d/laravel-worker.conf
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work redis --sleep=3 --tries=3 --timeout=900
autostart=true
autorestart=true
stopwaitsecs=1200
numprocs=8
user=www-data
redirect_stderr=true
stdout_logfile=/var/log/laravel/worker.log
After editing, reload both services:
sudo systemctl restart php8.2-fpm
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl restart laravel-worker:*
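A quick sanity check: these timeouts have to nest. The worker `--timeout` must stay below Supervisor's `stopwaitsecs`, and Laravel's `retry_after` value in `config/queue.php` should in turn exceed `--timeout`, or jobs can be killed mid‑flight or run twice. A minimal standalone sketch of that rule, using the values from the configs above:

```shell
# Hedged sketch: verify the timeout hierarchy used in this article.
# Rule: --timeout (queue:work) < stopwaitsecs (Supervisor),
# and retry_after (config/queue.php) > --timeout.
WORKER_TIMEOUT=900    # --timeout on the queue:work command
STOPWAITSECS=1200     # stopwaitsecs in laravel-worker.conf
if [ "$WORKER_TIMEOUT" -lt "$STOPWAITSECS" ]; then
  echo "OK: $((STOPWAITSECS - WORKER_TIMEOUT))s of shutdown headroom"
else
  echo "WARN: stopwaitsecs must exceed the worker --timeout" >&2
fi
```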
2. Fix Redis Memory Leaks
Cap Redis memory and pick an eviction policy. `allkeys-lru` is shown below; use `volatile-ttl` for queues whose jobs can be safely dropped if memory runs out. Note that any policy other than `noeviction` can silently evict queued jobs, so size `maxmemory` generously.
# /etc/redis/redis.conf
maxmemory 256mb
maxmemory-policy allkeys-lru
# Optional: disable Redis persistence – failed jobs live in the database (failed_jobs table)
save ""
appendonly no
Restart Redis and – only if you can afford to lose every queued job and cache entry – flush old data:
sudo systemctl restart redis
redis-cli FLUSHALL
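To see whether the new `maxmemory` cap is actually being approached, you would normally inspect `redis-cli INFO memory`. Here is a small hedged helper, fed simulated INFO output so it runs anywhere; in production you would swap in the real command:

```shell
# Hypothetical monitor: used_memory as a percentage of maxmemory.
# In production, replace the literal string with: info=$(redis-cli INFO memory)
info="used_memory:220000000
maxmemory:268435456"
used=$(printf '%s\n' "$info" | awk -F: '/^used_memory:/{print $2}')
max=$(printf '%s\n' "$info" | awk -F: '/^maxmemory:/{print $2}')
pct=$(( used * 100 / max ))
echo "Redis is at ${pct}% of maxmemory"
if [ "$pct" -ge 80 ]; then
  echo "WARN: nearing the cap – check eviction stats before jobs get dropped"
fi
```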
3. Correct File Permissions
# Set proper ownership
sudo chown -R www-data:www-data /var/www/html/storage /var/www/html/bootstrap/cache
# Set correct permissions
sudo find /var/www/html/storage -type d -exec chmod 2755 {} \;
sudo find /var/www/html/storage -type f -exec chmod 0644 {} \;
sudo chmod -R 775 /var/www/html/bootstrap/cache
Adding the setgid bit (2755) ensures new files inherit the www-data group.
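The setgid behaviour is easy to verify without touching the real app directory. This throwaway sketch only uses a temp dir (GNU `stat` assumed; macOS spells it `stat -f '%Lp'`):

```shell
# Demonstrate the 2755 (setgid) mode on a scratch directory.
dir=$(mktemp -d)
chmod 2755 "$dir"
mode=$(stat -c '%a' "$dir")    # GNU coreutils stat prints special bits too
echo "directory mode: $mode"
touch "$dir/laravel.log"       # files created here inherit the dir's group
rm -rf "$dir"
```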
4. Automate Queue Restarts on Deploy
# In your deployment script (e.g., GitHub Actions, Forge, Envoyer)
php artisan down   # Laravel 8+ removed the --allow flag; use --secret=<token> for bypass access
composer install --no-dev --optimize-autoloader
php artisan migrate --force
php artisan queue:restart
php artisan up
Calling `queue:restart` stores a restart signal in the cache; each worker finishes its current job, exits cleanly, and Supervisor respawns it with the new code. This relies on a working cache driver, so double‑check your cache configuration on the worker machine.
5. Harden Nginx / Apache Proxy Settings
If you’re proxying API requests through Nginx, increase its timeout values.
# /etc/nginx/sites-available/laravel.conf
server {
listen 80;
server_name api.example.com;
root /var/www/html/public;
index index.php;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass unix:/run/php/php8.2-fpm.sock;
fastcgi_read_timeout 900;
fastcgi_send_timeout 900;
fastcgi_connect_timeout 900;
}
}
Reload Nginx:
sudo nginx -t && sudo systemctl reload nginx
VPS or Shared Hosting Optimization Tips
- Use a swap file only as a safety net; a 1 GB swap on a 2 GB RAM VPS can prevent OOM kills.
- Enable `opcache` in `php.ini` for faster script execution.
- Turn off Xdebug in production – it adds ~30 ms per request.
- Monitor `top`, `htop`, and `redis-cli info` daily.
- On shared hosting, set `QUEUE_CONNECTION=database` and use Laravel Horizon on a cheap VPS for heavy jobs.
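For that database fallback, Laravel needs its jobs table before `QUEUE_CONNECTION=database` will work – a sketch, assuming a standard app layout (on Laravel 11+ the generator command is `php artisan make:queue-table`):

```shell
php artisan queue:table      # generates the jobs table migration
php artisan migrate
# then in .env:
# QUEUE_CONNECTION=database
```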
Warning: never grant `777` permissions to `storage` on a public server. It opens a massive attack surface.
Real World Production Example
Acme Co. runs a Laravel‑Vue SaaS on a 2 vCPU, 4 GB RAM Ubuntu VPS behind Cloudflare. After a surge of 12 k API calls/min, workers stalled. Applying the five fixes reduced average job latency from 12 seconds to 0.9 seconds and eliminated 500‑second timeouts.
Before vs After Results
| Metric | Before Fix | After Fix |
|---|---|---|
| Average Job Time | 12 s | 0.9 s |
| CPU Utilization | 95 % | 42 % |
| Redis Memory | 1.2 GB (OOM) | 256 MB |
| Failed Jobs | 742 | 3 |
Security Considerations
- Run workers under a dedicated, non‑root Unix user (e.g., `queue`).
- Enable `APP_ENV=production` and `APP_DEBUG=false` in `.env`.
- Limit Redis to trusted IPs with `bind 127.0.0.1` or VPC‑private networking.
- Use `ufw` to block external access to ports `6379` and `3306`.
- Rotate `QUEUE_PASSWORD` every 90 days if you use a protected queue driver.
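The firewall point can be made concrete with `ufw`. A hedged sketch – run as root on the VPS; the allowed port list is an assumption to adapt to your stack:

```shell
sudo ufw default deny incoming
sudo ufw allow OpenSSH
sudo ufw allow 80,443/tcp      # public web traffic only
sudo ufw deny 6379/tcp         # Redis stays private
sudo ufw deny 3306/tcp         # MySQL stays private
sudo ufw --force enable
```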
Bonus Performance Tips
- Running `php artisan queue:work` with `--sleep=1` and `--timeout=900` shaved another 15 % off latency (the old `--daemon` flag is the default behaviour on modern Laravel).
- Leverage Laravel Horizon for real‑time queue metrics and auto‑scaling.
- Cache heavy DB lookups with Redis (`Cache::remember`).
- Set MySQL `innodb_buffer_pool_size` to 70 % of RAM.
- Compress API responses with gzip in Nginx.
- Deploy via GitHub Actions using `--no-interaction` to eliminate human error.
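For the gzip tip above, a minimal Nginx fragment – it goes inside the `server { }` block from step 5, and the MIME-type list is an assumption to tune:

```nginx
gzip on;
gzip_comp_level 5;
gzip_min_length 1024;
gzip_types application/json application/javascript text/css;
```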
FAQ
Q: My workers still die after 800 seconds.
A: Check the `supervisorctl status` output for processes in the `FATAL` state. Most often the `memory_limit` in `php.ini` is too low. Raise it to `512M` or higher.
Q: Should I use Redis or the database driver?
A: Redis is faster and avoids row‑level locks. Use the DB driver only on cheap shared hosts where Redis is unavailable.
Final Thoughts
Queue workers on a VPS don’t have to be a black box of silent timeouts. With the right PHP‑FPM limits, Redis tuning, permission hygiene, and a solid Supervisor config, you can guarantee sub‑second job processing even under heavy load. Apply these five fixes, monitor the metrics, and you’ll turn that silent nightmare into a predictable, scalable engine.