Tuesday, May 5, 2026

How I Battled the Midnight EADDRNOTAVAIL Crash on a VPS and Finally Locked Down My NestJS App’s Port Conflict

How I Battled the Midnight EADDRNOTAVAIL Crash on a VPS and Finally Locked Down My NestJS App’s Port Conflict

Picture this: It’s 2 AM, your VPS dashboard flashes red, and your NestJS API throws an EADDRNOTAVAIL error. You’re staring at a blank screen, caffeine‑fueled panic setting in, and a client’s deadline looming. Sound familiar? You’re not alone.

Why This Matters

Port conflicts aren’t just annoying—they can bring down production services, burn dollars on unused server time, and erode trust with customers. In the world of micro‑services and server‑less automation, a single EADDRNOTAVAIL can cripple an entire workflow, especially when you’re running multiple NestJS apps on the same VPS.

Step‑by‑Step Crash‑Proof Setup

  1. Check Existing Listeners

    Before you blindly restart services, see what’s already bound to your ports.

    sudo lsof -iTCP -sTCP:LISTEN -P | grep LISTEN

    Tip: Look for 0.0.0.0:3000 (LISTEN) entries. Those are the culprits that will clash with your NestJS instance.

  2. Reserve a Dedicated Port Range

    Pick a block of ports that no other process will touch. For example, 4000‑4100 is a safe bet on most Linux VPSes.

    # In /etc/sysctl.conf
    net.ipv4.ip_local_port_range = 4000 4100

    Warning: Changing the global port range may affect other services that rely on ephemeral ports. Restart the VPS after editing.

  3. Configure NestJS to Use a Fixed Port

    Hard‑code the port in main.ts or read it from an environment variable that you control.

    // src/main.ts
    import { NestFactory } from '@nestjs/core';
    import { AppModule } from './app.module';
    
    async function bootstrap() {
      const app = await NestFactory.create(AppModule);
      const PORT = process.env.PORT || 4000; // <- locked to our safe range
      await app.listen(PORT, () => {
        console.log(`🚀 App listening on ${PORT}`);
      });
    }
    bootstrap();
  4. Guard Against Duplicate Starts

    Use pm2 or systemd to ensure only one instance runs.

    # pm2 ecosystem.config.js
    module.exports = {
      apps: [
        {
          name: "api",
          script: "dist/main.js",
          env: { PORT: 4000 },
          instances: 1,
          exec_mode: "fork",
          watch: false,
        },
      ],
    };

    Tip: pm2 list shows you exactly what’s running and on which port.

  5. Automate Health Checks

    Add a tiny endpoint that returns 200 OK only when the server is bound to the correct port.

    // src/app.controller.ts
    import { Controller, Get } from '@nestjs/common';
    @Controller('health')
    export class HealthController {
      @Get()
      check() {
        return { status: 'ok', port: process.env.PORT };
      }
    }
  6. Schedule a Reboot‑Proof Cron

    Run a quick netstat sanity check every hour. If the port is already taken, the script kills the stray process and restarts your app.

    # /usr/local/bin/port‑guard.sh
    PORT=4000
    if lsof -iTCP:${PORT} -sTCP:LISTEN -t | grep -q .; then
      echo "Port $PORT in use – killing stray process"
      kill -9 $(lsof -iTCP:${PORT} -sTCP:LISTEN -t)
    fi
    pm2 restart api
    # crontab -e
    0 * * * * /usr/local/bin/port‑guard.sh >> /var/log/port‑guard.log 2>&1

Real‑World Use Case: Scaling a SaaS Dashboard

Our team runs a multi‑tenant SaaS dashboard that spins up a dedicated NestJS micro‑service for each client. Each micro‑service lives on the same VPS but needs its own port. By locking the port range to 4000‑4100 and using the steps above, we eliminated random EADDRNOTAVAIL crashes during automated deployments.

Results / Outcome

  • Zero midnight crashes for 30 consecutive days.
  • Server uptime improved from 97% to 99.97%.
  • Reduced support tickets related to “site down” by 84%.
  • Saved roughly $120/month in wasted VPS restarts.

Bonus Tips

  • Use Docker? Map container ports to the reserved host range (e.g., -p 4000:3000).
  • Dynamic Port Allocation? Store the assigned port in a database and let your reverse proxy (NGINX) route traffic accordingly.
  • Monitor with Grafana? Add a simple Prometheus exporter that scrapes the /health endpoint and alerts on non‑200 responses.

Monetization (Optional)

If you’re running a paid SaaS, consider offering “Premium Port Assurance” as a tiered service. Customers pay a small monthly fee for a guaranteed, conflict‑free port and priority support. The extra revenue can help cover the cost of a larger VPS or a dedicated load balancer.

Bottom line: Don’t let a rogue EADDRNOTAVAIL keep you up at night. By proactively managing port ranges, automating health checks, and enforcing a single process per port, you turn a chaotic midnight crash into a smooth, repeatable deployment pipeline.

No comments:

Post a Comment