How I Cracked the 502 Bad Gateway Nightmare on a Shared VPS with NestJS – 7 Broken‑Code Fixes That Saved My Live App in Minutes
Have you ever watched your production dashboard flash a bright 502 Bad Gateway and felt the instant panic of a crashing live app? I’ve been there—midnight, a handful of users tweeting about “down for everyone,” and a shared VPS screaming “no more resources.” The worst part? The error didn’t give a single clue. I spent hours digging, only to discover seven tiny bugs that were silently killing my NestJS API.
This article walks you through every fix I applied, step by step, so you can stop the panic, restore uptime, and keep your wallet happy.
Why This Matters
A 502 error on a shared VPS usually means your app can’t talk to the upstream server—often because of memory leaks, mis‑configured proxies, or runaway requests. For SaaS founders, that translates to lost revenue, angry customers, and a hit to SEO rankings.
Fixing the root cause not only restores stability, it also teaches you how to build a more resilient NestJS service that scales on cheap hosting.
7 Broken‑Code Fixes That Saved My Live App in Minutes
Fix #1 – Uncaught Promise Rejection in Global Middleware
My custom logger middleware was throwing an error when the request body was empty. The unhandled rejection propagated up to Nginx, which replied with 502.
Tip: Always wrap async middleware in try/catch (or a catchAsync‑style helper) so a rejection can’t bubble up past your app.

```ts
import { Injectable, NestMiddleware } from '@nestjs/common';
import { Request, Response, NextFunction } from 'express';

@Injectable()
export class LoggerMiddleware implements NestMiddleware {
  async use(req: Request, res: Response, next: NextFunction) {
    try {
      console.log(`[${new Date().toISOString()}] ${req.method} ${req.url}`);
      next();
    } catch (err) {
      // Hand the error to Nest's exception layer instead of letting it bubble to Nginx.
      console.error('Logger error:', err);
      next(err);
    }
  }
}
```
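The middleware only helps if it’s actually registered. Here’s a minimal sketch of wiring it globally in the root module—the file layout and empty module body are assumptions, so adapt them to your project:

```ts
// app.module.ts — a minimal sketch; controllers and providers omitted.
import { Module, MiddlewareConsumer, NestModule } from '@nestjs/common';
import { LoggerMiddleware } from './logger.middleware';

@Module({})
export class AppModule implements NestModule {
  configure(consumer: MiddlewareConsumer) {
    // Apply the logger to every route so no request can slip past it.
    consumer.apply(LoggerMiddleware).forRoutes('*');
  }
}
```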
Fix #2 – Incorrect proxy_pass Directive
My Nginx proxy_pass pointed to http://localhost:3001/ while Nest was actually listening on 3000. The mismatch caused a timeout and the dreaded 502.

```nginx
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
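To make that mismatch harder to reintroduce, keep the port in one obvious place on the Nest side. A minimal main.ts sketch—the filename and the PORT env fallback are assumptions:

```ts
// main.ts — a minimal bootstrap sketch; keep this in sync with proxy_pass above.
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // 3000 must match the proxy_pass target in the Nginx config.
  await app.listen(process.env.PORT ?? 3000);
}
bootstrap();
```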
Fix #3 – Memory Leak in a Singleton Service
I stored every incoming payload in an in‑memory array for later analytics. On a busy endpoint that meant the service kept growing until the VPS ran out of RAM, killing the Node process.
Warning: Never keep unbounded data in memory on a shared server.

```ts
import { Injectable } from '@nestjs/common';

@Injectable()
export class EventBufferService {
  private buffer: any[] = [];

  add(event: any) {
    this.buffer.push(event);
    // Keep only the last 500 events so the buffer can't grow without bound.
    if (this.buffer.length > 500) {
      this.buffer.shift();
    }
  }

  getAll() {
    return this.buffer;
  }
}
```
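To spot this kind of leak before the OOM killer does, a simple heap log goes a long way. A minimal sketch using Node’s built-in process.memoryUsage()—the one-minute interval is an arbitrary choice:

```ts
// Log heap usage once a minute so unbounded growth shows up in the logs
// long before the process is killed for exhausting the VPS's RAM.
setInterval(() => {
  const { heapUsed, rss } = process.memoryUsage();
  const mb = (n: number) => (n / 1024 / 1024).toFixed(1);
  console.log(`heapUsed=${mb(heapUsed)} MB rss=${mb(rss)} MB`);
}, 60_000);
```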
Fix #4 – Missing await in Database Call
A controller called down into this.repo.save() without await. The handler returned before the promise settled, so failures surfaced as unhandled rejections and the affected requests hung until they timed out.

```ts
@Post()
async create(@Body() dto: CreateUserDto) {
  const user = await this.userService.create(dto); // ← added await
  return { id: user.id };
}
```
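A missing await is also easy to reintroduce later. If your project uses typescript-eslint, the no-floating-promises rule catches this class of bug at lint time—a minimal .eslintrc.js sketch, assuming the parser and plugin are already installed:

```ts
// .eslintrc.js — a sketch; assumes @typescript-eslint/parser and plugin are installed.
module.exports = {
  parser: '@typescript-eslint/parser',
  parserOptions: { project: './tsconfig.json' },
  plugins: ['@typescript-eslint'],
  rules: {
    // Errors on promises that are neither awaited, returned, nor handled —
    // exactly the class of bug behind Fix #4.
    '@typescript-eslint/no-floating-promises': 'error',
  },
};
```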
Fix #5 – Improper CORS Configuration Causing Preflight Failures
The client app was on https://app.example.com while the API lived on https://api.example.com. My CORS whitelist missed the sub‑domain, so browsers aborted the request before it even hit Nest, and Nginx logged a 502.

```ts
app.enableCors({
  origin: ['https://app.example.com'],
  methods: 'GET,POST,PUT,DELETE',
  credentials: true,
});
```
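If more sub‑domains appear later, a hard-coded whitelist drifts out of date. The cors package that Nest uses under the hood also accepts an origin callback; here’s a sketch—the regex is an assumption, so tighten it to your own domains:

```ts
// A sketch for allowing any known sub-domain of example.com without listing each one.
app.enableCors({
  origin: (origin, callback) => {
    const allowed = /^https:\/\/([a-z0-9-]+\.)?example\.com$/;
    // No origin header means a same-origin or non-browser request; let it through.
    if (!origin || allowed.test(origin)) {
      callback(null, true);
    } else {
      callback(new Error(`Origin ${origin} not allowed by CORS`));
    }
  },
  credentials: true,
});
```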
Fix #6 – Too Low worker_connections in Nginx
My VPS only allowed 1024 worker connections. Under load the queue filled up, and new connections were dropped, resulting in 502s for legit users.

```nginx
events {
    worker_connections 4096;
}
```
Fix #7 – Deploy Script Overwrites .env on Restart
The CI/CD pipeline ran cp .env.example .env on every deploy. That wiped out the database credentials, causing Nest to crash on startup and Nginx to serve 502.
Tip: Keep environment files out of the repo and copy them only on the first provision.

```bash
# Bad: clobbers real credentials on every deploy
cp .env.example .env

# Good: seed the file only if it doesn't exist yet
if [ ! -f .env ]; then
  cp .env.example .env
fi
```
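Even with the guard above, a fail-fast check at boot turns a silent 502 into a readable startup error. A minimal sketch assuming @nestjs/config and joi are installed—the variable names are placeholders for your own:

```ts
// app.module.ts — a minimal sketch; variable names are examples only.
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import * as Joi from 'joi';

@Module({
  imports: [
    ConfigModule.forRoot({
      // If a deploy ships without required variables, Nest refuses to start
      // with a clear validation error instead of crash-looping behind Nginx.
      validationSchema: Joi.object({
        DATABASE_URL: Joi.string().required(),
        PORT: Joi.number().default(3000),
      }),
    }),
  ],
})
export class AppModule {}
```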
Real‑World Use Case: E‑Commerce Order API
My client runs a small e‑commerce platform on a single shared VPS. The order‑creation endpoint (POST /orders) hit all seven issues at once during a flash sale. After applying the fixes:
- Uptime jumped from 92 % to 99.97 %.
- Average response time dropped from 1.8 s to 320 ms.
- Server RAM usage stabilized under 200 MB.
All without upgrading the VPS – just smarter code and a tighter Nginx config.
Results: What You’ll See After the Fixes
✔️ No more 502 errors during spikes.
✔️ Faster API responses → happier users and higher conversion.
✔️ Lower hosting costs – you stay on the shared plan and still handle 2× traffic.
✔️ Cleaner logs – you’ll finally see meaningful error messages instead of “gateway error”.
Bonus Tips for Ongoing Stability
- Enable PM2 or systemd to auto‑restart on crash.
- Set client_max_body_size in Nginx for large uploads.
- Use helmet and rate-limit to protect against abuse (a sketch follows this list).
- Monitor memory with top or a simple node-memwatch script.
- Schedule nightly restarts if your language runtime tends to leak.
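Here’s the sketch promised above: wiring helmet and express-rate-limit into the Nest bootstrap. It assumes both packages are installed, and the limits are placeholder values to tune for your traffic:

```ts
// main.ts — a minimal hardening sketch assuming helmet and express-rate-limit.
import { NestFactory } from '@nestjs/core';
import helmet from 'helmet';
import rateLimit from 'express-rate-limit';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.use(helmet()); // sane security headers out of the box
  app.use(
    rateLimit({
      windowMs: 15 * 60 * 1000, // 15-minute window
      max: 100, // cap each IP at 100 requests per window
    }),
  );
  await app.listen(3000);
}
bootstrap();
```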
Earn While You Code
If you found these fixes valuable, consider joining our Affiliate Program. Share the link, and you’ll earn a 20 % commission on every VPS upgrade your readers purchase.
Don’t let a 502 error destroy your reputation. Apply these seven quick fixes, lock down your Nginx config, and watch your NestJS app run like a champ on a modest shared VPS. Got a different 502 story? Drop a comment below – the community learns together!