Crushing the 502 Bad Gateway Nightmare: How I Fixed NestJS Connection Refused Errors on a Budget VPS in 45 Minutes
Picture this: you’ve just pushed a brand‑new NestJS microservice to your cheap VPS, your heart’s racing because the demo is due in an hour, and then—boom—502 Bad Gateway stares back at you. The error log screams “connection refused,” and every minute feels like a tiny eternity. If you’ve ever been there, you know the panic, the wasted coffee, and the creeping doubt that maybe you should’ve paid for a fancy server instead.
Good news: you don’t need a high‑end cloud provider to get past this roadblock. In this hands‑on tutorial I’ll walk you through exactly how I diagnosed and solved the NestJS “connection refused” problem on a $5/month VPS—all in under 45 minutes. By the end you’ll have a rock‑solid deployment, a cleaner network stack, and a few money‑saving tricks you can reuse on any Node.js project.
Why This Matters
502 errors are more than just an annoying UI glitch; they signal a broken connection between your web server (NGINX, Apache, Caddy…) and the process that actually runs your code. For startups and freelancers, every minute of downtime translates directly into lost revenue, missed demos, and a bruised reputation. Fixing the root cause—rather than repeatedly restarting services—lets you:
- Maintain uptime for paying customers.
- Keep your VPS costs low by avoiding over‑provisioning.
- Improve debugging confidence for future deployments.
Step‑by‑Step Tutorial
1️⃣ Verify the 502 Source
Log into your VPS and check the NGINX error log:
```bash
sudo tail -n 20 /var/log/nginx/error.log
```

You’ll likely see something like:

```
connect() failed (111: Connection refused) while connecting to upstream, client: 203.0.113.5, server: api.example.com, request: "GET /health HTTP/1.1"
```

Warning: If the log mentions “upstream timed out” instead of “connection refused,” the cause is different (a slow response, not a port issue).
2️⃣ Confirm NestJS Is Running
Run `pm2 list` (or `docker ps` if you’re containerized) to see whether the app process is alive:

```bash
pm2 list
```

If the app is stopped, start it:

```bash
pm2 start dist/main.js --name my-nest-api
```
3️⃣ Check the Listening Port
NestJS defaults to port 3000. Verify that the process is bound to the expected port:

```bash
sudo lsof -iTCP -sTCP:LISTEN -P | grep node
```

If you see `*:3000 (LISTEN)`, you’re good. If the port is different, note it for the next step.
4️⃣ Align NGINX upstream configuration
Open the site config (usually /etc/nginx/sites-available/yourdomain.conf) and make sure the `proxy_pass` URL matches the NestJS port:

```nginx
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```

Tip: Use `127.0.0.1` instead of `localhost` to avoid IPv6 resolution issues on cheap VPS images.
5️⃣ Restart NGINX & PM2
Apply the changes:
```bash
sudo systemctl restart nginx
pm2 restart my-nest-api
```
6️⃣ Test the Endpoint Directly
Skip NGINX and curl the app to ensure it answers:
```bash
curl -i http://127.0.0.1:3000/health
```

Expect a `200 OK` with a JSON payload. If you get a response here but still see a 502 through the domain, the issue lies in the reverse proxy.
7️⃣ Firewall Check (UFW/Iptables)
On low‑cost VPSes, a default firewall may block internal traffic. List UFW rules:
```bash
sudo ufw status verbose
```

If port 3000 isn’t allowed, add it:

```bash
sudo ufw allow 3000/tcp
sudo ufw reload
```

Tip: If you’re security-conscious, keep the port open only for loopback traffic with `sudo ufw allow from 127.0.0.1 to any port 3000`.
8️⃣ Verify DNS & SSL (Optional)
If you’re using Cloudflare or Let’s Encrypt, make sure the DNS A record points to the VPS IP and that SSL termination (if handled by Cloudflare) isn’t forcing HTTP/2 onto a non‑TLS backend. A quick `dig` confirms the IP.
9️⃣ Celebrate 🎉
Open your browser, hit https://api.example.com/health, and watch the 502 vanish. Your NestJS app is now reachable through NGINX, and you’ve saved a handful of dollars by staying on a budget VPS.
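For the next incident, the log check from step 1 can be condensed into a tiny triage helper. This is a minimal sketch, not a tool from the fix itself: the `classify_502` function and its advice strings are illustrative, based only on the two error messages discussed above.

```shell
#!/usr/bin/env bash
# triage.sh -- rough first-pass classifier for the NGINX error log.
# The log path and advice strings are assumptions; adjust for your setup.

# Map one NGINX error-log line to a likely cause.
classify_502() {
  case "$1" in
    *"Connection refused"*) echo "app down or wrong port: check pm2 list and proxy_pass" ;;
    *"upstream timed out"*) echo "app alive but slow: a timeout, not a port issue" ;;
    *)                      echo "unclear: read the full error log" ;;
  esac
}

# Classify the most recent error, if the log is readable.
LOG=/var/log/nginx/error.log
if [ -r "$LOG" ]; then
  classify_502 "$(tail -n 1 "$LOG")"
fi
```

Run it right after a 502 appears; it tells you whether to jump to step 2 (dead process) or to investigate slow responses instead.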
Real‑World Use Case: Tiny SaaS Billing Service
Imagine you run a SaaS that charges $9.99/month per user. Your billing microservice, built with NestJS, lives on a $5 DigitalOcean droplet. During a weekend rollout, the payment gateway throws 502 errors, causing failed invoices and angry customers.
Following the steps above, you discover the app had silently crashed after a sudden Node.js memory spike, and NGINX kept trying to forward traffic to a dead port. By restarting the process with `pm2 restart` and adding a simple auto‑restart rule via `pm2 start ecosystem.config.js --env production`, the service stayed up for the next 30 days without a single manual touch.
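That auto‑restart rule might look like the ecosystem file below. This is a hedged sketch, not the exact config from the incident: the `max_memory_restart` ceiling of 300M is an assumed value sized for a $5 droplet, and the app name simply matches the `pm2` commands used earlier.

```shell
# Write a minimal PM2 ecosystem file (values here are illustrative).
cat > ecosystem.config.js <<'EOF'
module.exports = {
  apps: [
    {
      name: 'my-nest-api',
      script: 'dist/main.js',
      autorestart: true,          // bring the process back after a crash
      max_memory_restart: '300M', // restart before a memory spike kills it
      env_production: { NODE_ENV: 'production', PORT: 3000 },
    },
  ],
};
EOF
```

Then launch with `pm2 start ecosystem.config.js --env production`; PM2 restarts the process after crashes and whenever it crosses the memory ceiling.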
Results / Outcome
After the 45‑minute rescue mission:
- Uptime: 99.9% for the next month (only scheduled maintenance).
- Cost: Stayed under the $5 VPS budget—no need to upgrade to a $20 plan.
- Time saved: Instead of hours spent hunting logs, you fixed the issue in under an hour and got back to building features.
More importantly, you now have a repeatable checklist you can embed into your deployment scripts.
Bonus Tips & Automation Hacks
- Health‑Check Endpoint: Add a `@Get('health')` route in a dedicated controller, and set `proxy_next_upstream` in NGINX so failed requests are retried against a healthy upstream instead of surfacing a 502.
- Zero‑Downtime Deploys: Use `pm2 reload ecosystem.config.js --env production` to reload workers without dropping connections.
- Watchdog Script: Create a tiny Bash script that runs every minute via cron to verify the port is listening; if not, it triggers `pm2 restart`.
- Log Aggregation: Ship NGINX and NestJS logs to a free Loggly or CloudWatch tier for quick pattern detection on future 502 spikes.
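The watchdog idea could be sketched like this. Assumptions are called out in the comments: the port and app name come from the earlier steps, and `ss` (from iproute2) is assumed available, as it is on most Linux images.

```shell
#!/usr/bin/env bash
# watchdog.sh -- restart the app if nothing is listening on its port.
# Assumes the app name my-nest-api and port 3000 from the steps above.

# Return success if some process is listening on TCP port $1.
port_is_listening() {
  ss -ltn 2>/dev/null | grep -q ":$1 "
}

PORT=3000
if port_is_listening "$PORT"; then
  echo "ok: port $PORT is listening"
else
  echo "down: nothing on port $PORT, restarting"
  # Guard the call in case pm2 isn't on cron's minimal PATH.
  if command -v pm2 >/dev/null; then
    pm2 restart my-nest-api
  fi
fi
```

Schedule it with a crontab entry such as `* * * * * /usr/local/bin/watchdog.sh >> /var/log/watchdog.log 2>&1`.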
Monetization Suggestion (Optional)
If you found this guide useful, consider offering a “VPS Health‑Check” service for other devs. A one‑time $19 audit includes:
- Full firewall audit.
- NGINX & Node.js config review.
- Automated PM2/Ecosystem setup.
It’s a tiny upsell that can turn a free tutorial into a steady side‑income stream.