The Frustrating 30‑Minute Error: Why My NestJS App Crashes on a VPS After Deployment (And How to Fix It Forever)
Picture this: you spend an hour polishing a NestJS micro‑service, push it to a fresh Ubuntu VPS, hit npm start… and the whole thing detonates with a cryptic “Error: listen EADDRINUSE”. You stare at the log, heart pounding, wondering if you just wasted precious development time.
Welcome to the 30‑minute nightmare that haunts nearly every Node.js developer deploying to a virtual private server. In this article you’ll discover the exact cause, see a step‑by‑step fix, and walk away with a bullet‑proof setup that never crashes again.
Why This Matters
Every minute your app is down means lost traffic, frustrated users, and, if you’re selling a service, lost revenue. The “quick‑and‑dirty” VPS deployment method is popular because it’s cheap, but a single mis‑configuration can turn a $5/month server into a costly debugging session.
Fixing the issue once is good, but automating the fix guarantees that your next project—or a teammate’s project—won’t repeat the same mistake. That’s the difference between a hobbyist tinkerer and a professional developer who can reliably ship code.
Step‑by‑Step Tutorial
Step 1: Check the Port Conflict

Most VPSes come with services (SSH, Apache, etc.) already listening on common ports. NestJS defaults to port 3000. Run the following command to see whether something else owns it:

```bash
sudo lsof -i :3000
```

If the command prints any output, you either need a new port or you must stop the offending service.

Tip: Use 8080, 4000, or any port above 1024 that isn't reserved.
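The same check can be done from Node itself, which is handy if `lsof` isn't installed. A minimal sketch (plain Node, no NestJS required; `isPortFree` is an illustrative helper, not a standard API) that tries to bind the port and reports whether it's free:

```typescript
import * as net from 'net';

// Try to bind the port: if the bind succeeds the port is free;
// if it fails (EADDRINUSE, EACCES, ...) something else owns it.
function isPortFree(port: number, host = '0.0.0.0'): Promise<boolean> {
  return new Promise((resolve) => {
    const probe = net.createServer();
    probe.once('error', () => resolve(false));
    probe.once('listening', () => probe.close(() => resolve(true)));
    probe.listen(port, host);
  });
}
```

Calling something like `isPortFree(3000)` before bootstrapping lets you fail with a clear message instead of a raw EADDRINUSE stack trace.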
Step 2: Make the Port Configurable

Hard‑coding the port in main.ts is a recipe for disaster. Replace the static value with an environment variable:

```typescript
// src/main.ts
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  const port = process.env.PORT || 3000;
  await app.listen(port);
  console.log(`🚀 App listening on ${port}`);
}
bootstrap();
```

Now you can set PORT=8080 in your .env file or systemd service without touching code.
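Because process.env values are always strings (or undefined), it's worth validating the port instead of letting `|| 3000` paper over typos. A hedged sketch — `resolvePort` is an illustrative name, not a Nest API:

```typescript
// Validate PORT from the environment: a typo like PORT=80b0 should fail
// loudly at startup rather than produce NaN and a confusing bind error.
function resolvePort(raw: string | undefined, fallback = 3000): number {
  if (raw === undefined || raw === '') return fallback;
  const port = Number(raw);
  if (!Number.isInteger(port) || port < 1 || port > 65535) {
    throw new Error(`Invalid PORT value: "${raw}"`);
  }
  return port;
}
```

Then `await app.listen(resolvePort(process.env.PORT))` fails fast on a bad .env instead of crashing later.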
Step 3: Create a Robust .env File

Install dotenv (if you haven't) and load the variables at the very top of main.ts:

```typescript
import 'dotenv/config'; // <-- add this line
```

Then, on the server, store the file securely:

```bash
# .env (keep this out of git!)
PORT=8080
NODE_ENV=production
DATABASE_URL=postgres://user:pass@localhost:5432/mydb
```
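There's no magic in dotenv: it reads the file and copies KEY=value pairs into process.env. A simplified sketch of that parsing (the real library also handles quoting, multiline values, and `export` prefixes):

```typescript
// Simplified version of what dotenv does: split lines, skip blanks and
// comments, and take everything after the first '=' as the value.
function parseEnv(src: string): Record<string, string> {
  const vars: Record<string, string> = {};
  for (const line of src.split(/\r?\n/)) {
    const trimmed = line.trim();
    if (trimmed === '' || trimmed.startsWith('#')) continue; // blank or comment
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue; // malformed line, ignore
    vars[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return vars;
}
```

Knowing this also explains why values with `=` in them (like connection strings) work: only the first `=` splits key from value.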
Step 4: Set Up a Systemd Service (Forever‑Up)

Running npm run start:prod manually will kill your app when you close the SSH session. Create a systemd unit file that restarts on failure (note: systemd doesn't support trailing comments after a value, so keep comments on their own lines):

```ini
# /etc/systemd/system/nestapp.service
[Unit]
Description=NestJS Application
After=network.target

[Service]
Type=simple
# Replace "ubuntu" with your VPS user
User=ubuntu
WorkingDirectory=/var/www/nestapp
EnvironmentFile=/var/www/nestapp/.env
ExecStart=/usr/bin/npm run start:prod
Restart=on-failure
RestartSec=5
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
```

Enable and start the service:

```bash
sudo systemctl daemon-reload
sudo systemctl enable nestapp
sudo systemctl start nestapp
```

Warning: Forgetting EnvironmentFile will make your app fall back to the default port and crash again.
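One caveat worth knowing: Restart=on-failure is itself rate‑limited, so an app that dies instantly on every start will eventually be left dead by systemd. If you want more headroom, a drop‑in override can widen the limit (the path follows systemd's drop‑in convention; the values here are illustrative, not recommendations):

```ini
# /etc/systemd/system/nestapp.service.d/override.conf
# Allow up to 10 restart attempts within a 5-minute window
# before systemd gives up on the unit.
[Unit]
StartLimitIntervalSec=300
StartLimitBurst=10
```

Apply it with `sudo systemctl daemon-reload && sudo systemctl restart nestapp`.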
Step 5: Verify with curl and systemctl status

After the service is up, run:

```bash
curl -I http://localhost:8080
sudo systemctl status nestapp
```

You should see a 200 OK header and a green "active (running)" status.
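The curl check can also be scripted as a tiny uptime probe. A sketch using only Node's built-in http module (`probe` is a hypothetical helper, not part of NestJS):

```typescript
import * as http from 'http';

// Mirror `curl -I`: issue a GET, discard the body, report the status code.
function probe(url: string): Promise<number> {
  return new Promise((resolve, reject) => {
    http
      .get(url, (res) => {
        res.resume(); // drain the body so the socket is released
        resolve(res.statusCode ?? 0);
      })
      .on('error', reject);
  });
}
```

Anything other than 200 from `probe('http://localhost:8080')` means the service is up but the app isn't answering, which narrows debugging considerably.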
Real‑World Use Case: SaaS Dashboard on a $5/mo VPS
Imagine you’re building a small SaaS dashboard for freelance clients. The backend is a NestJS API handling authentication, billing, and WebSocket notifications. You spin up a cheap DigitalOcean droplet, deploy the code with the steps above, and set PORT=4000 to avoid clashing with anything else already listening on the box.
Because the port is now environment‑driven and the service auto‑restarts, you never have to SSH in again for a simple crash. Your clients experience 99.9% uptime, and you can focus on adding features instead of firefighting.
Results / Outcome
- Zero “EADDRINUSE” crashes after the first deployment.
- Automatic restart on crash saves hours of manual SSH work.
- Environment‑driven ports let you run multiple services on the same VPS without conflict.
- Systemd logs give you a single source of truth for debugging.
Bonus Tips
- Use a Process Manager: If you prefer pm2 over systemd, run pm2 startup systemd and save the ecosystem file.
- Enable HTTP/2 with Nginx: Put Nginx in front of NestJS, terminate TLS, and proxy to the internal port. This adds caching and rate‑limiting for free.
- Health Checks: Add a /health endpoint that returns { status: 'ok' }. Configure your load balancer or uptime monitor to ping it.
- Log Rotation: Install logrotate for /var/log/syslog, or use journalctl --rotate (plus a vacuum policy) to keep logs tidy.
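In NestJS the health-check tip is usually a one-method controller (a @Get('health') handler returning { status: 'ok' }). To make the response shape explicit, here's the same behavior framework-free, with Node's http module (a sketch, not the NestJS wiring itself):

```typescript
import * as http from 'http';

// Minimal health endpoint: 200 + JSON body for /health, 404 for anything else.
const healthServer = http.createServer((req, res) => {
  if (req.url === '/health') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'ok' }));
  } else {
    res.writeHead(404);
    res.end();
  }
});
```

Point your uptime monitor at this path and it will catch the app being down even when the VPS itself is reachable.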
Monetization (Optional)
If you’re turning your NestJS API into a product, consider these quick wins:
- Offer a “Premium API” tier with higher rate limits and a dedicated sub‑domain.
- Integrate Stripe billing directly into your NestJS service using a community module such as @golevelup/nestjs-stripe (there is no official @nestjs Stripe package).
- Use a referral program: every new user who signs up with your affiliate link nets you $5/month.
Deploying a NestJS app to a VPS no longer has to be a 30‑minute nightmare. With a configurable port, a clean .env, and a systemd service that watches your process, you get rock‑solid uptime and the freedom to focus on what really matters—building features and earning money.
Ready to upgrade your deployment pipeline? Grab the full starter repo, clone it, and replace the placeholder logic with your own. In less than an hour you’ll have a production‑grade NestJS app that never crashes again.