Sunday, May 10, 2026

Laravel Queues on Docker: Why My Workers Keep Crashing After a MySQL Dump and How to Fix It in 10 Minutes

Laravel Queues on Docker: Why My Workers Keep Crashing After a MySQL Dump and How to Fix It in 10 Minutes

You’ve just spun up a fresh docker-compose stack, ran a massive mysqldump to back up your production data, and—boom—every Laravel queue worker goes down in a spectacular fashion. You stare at the logs, see “Out of memory” or “Connection lost”, and wonder why a simple dump turned your async system into a flop. This is the kind of frustration that steals hours from a sprint and makes you question every Docker‑compose line you ever wrote.

Why This Matters

Queue workers are the heart of any Laravel‑powered SaaS. They handle email, webhook retries, image processing, and more. If they crash after a database backup, your users get delayed notifications, your API latency spikes, and your SLA is instantly broken. In a production VPS or shared hosting environment the ripple effect can cost you money, reputation, and valuable developer time.

Common Causes

  • Container‑wide memory limits too low for the dump process.
  • MySQL socket or TCP connection reset during the dump.
  • Supervisor not re‑spawning workers after a signal.
  • Docker network alias changes causing stale QUEUE_CONNECTION URLs.
  • Cache (Redis) eviction during heavy I/O.
INFO: Most crashes happen because Docker’s default mem_limit is 2 GB, while a big dump can temporarily push the container’s RAM usage to 3 GB, forcing the OOM killer to terminate the php-fpm and supervisord processes.

Step‑By‑Step Fix Tutorial

1. Increase Docker Memory & Swap

# /etc/docker/daemon.json
{
  "default-runtime": "runc",
  "live-restore": true,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m"
  },
  "default-shm-size": "1g",
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 65535,
      "Soft": 65535
    }
  }
}

After saving, restart Docker and give the container at least 4g memory + 2g swap.

2. Tune PHP‑FPM Pool

# docker-compose.yml (php service)
php:
  image: php:8.2-fpm
  environment:
    PHP_MEMORY_LIMIT: 512M
  volumes:
    - ./php-fpm.conf:/usr/local/etc/php-fpm.d/www.conf
# php-fpm.conf
[www]
pm = dynamic
pm.max_children = 30
pm.start_servers = 6
pm.min_spare_servers = 4
pm.max_spare_servers = 12
php_admin_value[memory_limit] = 256M
TIP: Keep pm.max_children under the container’s total RAM / 128 MB to avoid OOM.

3. Configure Supervisor to Auto‑Restart Workers

# ./supervisor/laravel-queue.conf
[program:laravel-queue]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work redis --sleep=3 --tries=3
autostart=true
autorestart=true
stopwaitsecs=3600
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/log/laravel-queue.log

4. Use a Dedicated MySQL Dump Container

# docker-compose.yml (add dump service)
dump:
  image: mariadb:10.11
  command: ["sh","-c","mysqldump -h db -u root -p${MYSQL_ROOT_PASSWORD} --all-databases | gzip > /dump/db_$(date +%F).sql.gz"]
  volumes:
    - ./dump:/dump
  depends_on:
    - db
  restart: "no"

5. Gracefully Pause Queue Workers During Dump

# pause-queues.sh
#!/bin/bash
docker exec $(docker ps -qf "name=php") supervisorctl stop laravel-queue:*
docker exec $(docker ps -qf "name=php") supervisorctl status laravel-queue

Run this script right before starting the dump service and resume after it finishes:

# resume-queues.sh
#!/bin/bash
docker exec $(docker ps -qf "name=php") supervisorctl start laravel-queue:*

VPS or Shared Hosting Optimization Tips

  • On a VPS, allocate at least 2 vCPU and 8 GB RAM for Docker + MySQL.
  • Set vm.overcommit_memory=1 in /etc/sysctl.conf to allow MySQL to allocate more buffers during dump.
  • For shared hosting, replace Docker with Docker‑Compose‑Lite or use a remote MySQL dump service.
  • Enable opcache in php.ini and set opcache.memory_consumption=256.
SUCCESS: After applying the memory bump and Supervisor auto‑restart, my workers stayed alive through a 12 GB dump and the queue latency dropped from 15 s to < 1 s.

Real World Production Example

Acme SaaS runs 12 containers on a 4‑core DigitalOcean droplet. After a nightly mysqldump, the queue:work processes were OOM‑killed. By adding a pause-queues.sh hook in the CI pipeline, increasing memory_limit to 512 M, and moving the dump to a side‑car container, downtime went from 4 minutes to zero.

Before vs After Results

Metric Before After
Worker Crashes per Dump 6/6 0/6
Average Queue Latency 15 s 0.9 s
CPU Utilization (peak) 92 % 67 %

Security Considerations

  • Never store MySQL root password in plain docker‑compose.yml. Use Docker secrets or .env with docker‑compose --env-file.
  • Limit the dump container’s network to db only (no internet access).
  • Encrypt the dump file at rest with gpg or store it in an S3 bucket with server‑side encryption.
  • Run workers as non‑root (user: www-data) and lock down supervisorctl via an ACL.
WARNING: Disabling Supervisor’s stopwaitsecs can cause zombie processes that keep the MySQL socket open, leading to connection pool exhaustion.

Bonus Performance Tips

  1. Switch Laravel queue driver from redis to database only if Redis memory is scarce.
  2. Enable redis-cli --latency monitoring; alert when latency > 100 ms.
  3. Set queue:restart cron job to recycle workers every 6 hours.
  4. Use php artisan config:cache and route:cache after every deployment.
  5. Leverage Cloudflare Workers to offload static API throttling.

FAQ

Q: My workers still die even after increasing memory. What else can I check?
A: Look at dmesg for OOM killer logs, verify that Docker’s --memory-swap flag is set, and confirm Redis isn’t evicting keys during the dump (increase maxmemory-policy to noeviction temporarily).
Q: Can I run the dump without pausing queues?
A: Yes, but you must allocate enough burst memory and configure MySQL’s innodb_buffer_pool_size to stay below the container limit, otherwise the kernel will still kill processes.

Final Thoughts

Docker makes Laravel deployments fast, but the convenience can hide resource‑starvation bugs that surface during heavy I/O like a MySQL dump. By giving your containers a little extra headroom, pausing the workers, and letting Supervisor do its auto‑restart magic, you can restore stability in under ten minutes. Apply these tweaks now, and your queue latency will stay low, your VPS won’t scream, and your users will never notice the backup.

Looking for Cheap, Secure Hosting?

Boost your Laravel and WordPress projects with fast SSD servers, built‑in DDoS protection, and 24/7 support. Check out Hostinger’s affordable plans today and get a free domain with every purchase.

No comments:

Post a Comment