NGINX
HTTP web server, reverse proxy, content cache, load balancer, TCP/UDP proxy server, and mail proxy server.
Open-source reverse proxy and web server, honestly reviewed. Not for everyone — but if you’re self-hosting anything, you’ll end up here eventually.
TL;DR
- What it is: Open-source (BSD-2-Clause) web server, reverse proxy, load balancer, API gateway, and content cache — originally written by Igor Sysoev, now maintained by F5, Inc [README][3].
- Who it’s for: Developers, sysadmins, and self-hosters who need to route web traffic, terminate SSL, serve static files at scale, or front multiple backend apps on a single public IP [1][5].
- Cost: Free to run yourself. Enterprise distributions with commercial support are sold by F5 [README].
- Key strength: Handles absurd concurrency with minimal memory — 10,000 idle HTTP keep-alive connections consume about 2.5MB RAM [README][nginx.org]. That’s why it’s the world’s most popular web server by Netcraft’s count and a perennial top Docker image by DataDog’s count [README].
- Key weakness: The configuration DSL has a steep learning curve. A single missing http:// prefix on a proxy_pass directive will silently break your reverse proxy [1]. There’s no GUI. You configure it in text files and reload. That’s it.
- Ownership risk: F5 acquired NGINX in 2019. The open-source community had legitimate concerns about a networking company owning the most popular web server. F5 has so far kept the BSD license intact, but the question of long-term stewardship is real [3].
What is NGINX
NGINX (pronounced “engine x”) is a web server that does a lot more than serve files. The official description is accurate: it’s a high-performance web server, reverse proxy, load balancer, API gateway, and content cache [README]. In practice, most people self-hosting anything — a Node.js app, a Dockerized service, a homelab full of tools — use NGINX as the front door that handles incoming traffic and routes it to whatever’s running behind it.
The core design is event-driven and asynchronous. One master process manages several worker processes; the workers run under an unprivileged user and handle thousands of simultaneous connections without spawning a thread per connection [nginx.org]. This is the architectural decision that made NGINX famous: Igor Sysoev wrote NGINX as an answer to the C10K problem — how do you handle 10,000 simultaneous connections on a single server? Apache, which spawned a new process or thread per connection, struggled. NGINX, first released publicly in 2004 and built on epoll and kqueue, did not.
The project now sits at 29,757 GitHub stars. It’s distributed under the 2-clause BSD license, meaning you can use, modify, redistribute, and build commercial products with it — no restrictions beyond attribution [3][README]. F5, Inc. acquired NGINX Inc. in 2019 for reportedly $670M. The open-source codebase remains BSD-licensed; F5 sells NGINX Plus (an enterprise distribution with additional modules, active health checks, a dashboard, and commercial support) on top of it [3][README].
Why people choose it
The honest answer: most people don’t choose NGINX — they inherit it. If you follow any self-hosting tutorial for a Node app, a Django service, a Ghost blog, or a Docker stack, you will encounter an NGINX configuration file at some point. It’s the de facto standard for reverse proxying on Linux servers [1][2][5].
That said, there are specific reasons it wins on its merits:
Performance under concurrency. The event-driven architecture means NGINX handles traffic spikes gracefully. It’s not the fastest server in microbenchmarks — specialized setups can beat it — but at real-world traffic volumes with mixed static and dynamic content, it’s consistently at the top [nginx.org].
Reverse proxy simplicity. Routing a domain to a backend running on localhost:3000 is a dozen lines of config. Adding SSL termination via Let’s Encrypt is another handful. The Red Hat blog [2] puts it plainly: NGINX separates the “proxy” concept cleanly — it sits in front, handles TLS, passes requests upstream, and your app doesn’t need to care about any of that.
Multi-site hosting on a single IP. This is why self-hosters love it. You have one public IP address and five different services running in Docker on different ports. NGINX reads the Host header, matches it to a server block, and sends the request to the right container. A practical self-hosting guide [5] demonstrates routing site1.example.com to 192.168.1.10:8080 and site2.example.com to 192.168.1.11:8080 — all from a single public-facing port 80/443. Without a reverse proxy, you’d be forcing users to type port numbers in URLs.
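The pattern from [5] can be sketched as two server blocks sharing port 80 — NGINX matches the Host header against server_name and proxies to the corresponding backend. The domains and backend addresses follow the guide’s example and are placeholders:

```nginx
# Name-based virtual servers: one public IP, multiple sites.
server {
    listen 80;
    server_name site1.example.com;

    location / {
        proxy_pass http://192.168.1.10:8080;     # backend for site1
        proxy_set_header Host $host;             # preserve original Host header
        proxy_set_header X-Real-IP $remote_addr; # pass the client IP upstream
    }
}

server {
    listen 80;
    server_name site2.example.com;

    location / {
        proxy_pass http://192.168.1.11:8080;     # backend for site2
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```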
Proven at scale. It powers Kubernetes ingress controllers — including F5’s own — and is one of the most-pulled Docker images on Docker Hub [README]. When the stakes are high, NGINX’s track record matters.
The F5 acquisition concern. The 2019 acquisition by F5 caused genuine anxiety in the web hosting community [3]. F5 is a NetOps company, not a developer tools company, and the fear was that NGINX would go the way of many open-source acquisitions: neglected, paywalled, or slowly killed. Three years later, the BSD license is intact, the community edition is still actively developed, and the GitHub repository shows recent commits. But the concern isn’t irrational — when a commercial entity with a vested interest in selling NGINX Plus controls the open-source roadmap, the incentives aren’t perfectly aligned with the community [3]. ReviewHell noted at acquisition time: “the real effect that this acquisition may have on the 2-clause BSD License that NGINX currently comes with is yet to be seen.” [3]. So far so good, but it’s worth monitoring.
Features
Web server:
- Static file serving with autoindexing [nginx.org]
- Open file descriptor cache (keeps frequently-accessed file handles open, reduces syscalls) [nginx.org]
- HTTP/1.1, HTTP/2 with weighted and dependency-based prioritization, HTTP/3 support [nginx.org]
- Name-based and IP-based virtual servers (multiple sites per server) [nginx.org]
- Gzip compression, byte-range responses, chunked transfer [nginx.org]
- XSLT and SSI (Server-Side Includes) filters [nginx.org]
- Embedded Perl scripting; njs (NGINX JavaScript) scripting language [nginx.org]
Reverse proxy:
- Accelerated reverse proxying with caching [nginx.org]
- Support for FastCGI, uwsgi, SCGI, and memcached backends [nginx.org]
- WebSocket proxying [1]
- Custom header management (pass client IP, modify upstream headers) [1]
- SSL termination — NGINX handles TLS so your backend app doesn’t have to [1][2]
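As one illustration, WebSocket proxying [1] needs the HTTP/1.1 Upgrade handshake passed through explicitly — a minimal sketch, with the backend address as a placeholder:

```nginx
location /ws/ {
    proxy_pass http://127.0.0.1:3000;        # placeholder backend
    proxy_http_version 1.1;                  # the Upgrade mechanism requires HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;  # forward the client's Upgrade header
    proxy_set_header Connection "upgrade";   # mark the connection as upgradable
}
```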
Load balancing:
- Round-robin, least-connections, and IP-hash strategies [nginx.org][1]
- Fault tolerance with passive health checks [nginx.org]
- Upstream server weight configuration [1]
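A sketch combining the features above — least-connections balancing, per-server weights, and passive health-check parameters. The upstream name and server addresses are placeholders:

```nginx
upstream app_backend {
    least_conn;                      # route to the server with the fewest active connections
    server 10.0.0.11:8080 weight=3;  # receives roughly 3x the traffic of an unweighted peer
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;  # passive check: marked down
                                                         # for 30s after 3 failures
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
    }
}
```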
Rate limiting and access control:
- Per-IP rate limiting (limit requests and simultaneous connections) [nginx.org]
- Access control by client IP, HTTP Basic Auth, or subrequest result [nginx.org]
- HTTP referer validation [nginx.org]
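The per-IP limits above are configured with shared-memory zones. A hedged sketch (zone names, rates, and the backend address are illustrative choices, not recommendations):

```nginx
# Zones keyed on the client IP: one for request rate, one for connection count.
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

server {
    listen 80;
    location /api/ {
        limit_req zone=per_ip burst=20 nodelay;  # allow short bursts of 20, reject beyond
        limit_conn conn_per_ip 10;               # at most 10 simultaneous connections per IP
        proxy_pass http://127.0.0.1:3000;        # placeholder backend
    }
}
```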
Content caching:
- Proxy cache for upstream responses [nginx.org]
- FastCGI and uwsgi caching [nginx.org]
- SiteGround’s SuperCacher is built on NGINX’s reverse proxy caching layer [4]
Mail proxy (less commonly used):
- IMAP, POP3, SMTP proxying with external HTTP authentication [nginx.org]
- SSL/TLS, STARTTLS support [nginx.org]
TCP/UDP proxying:
- Generic TCP/UDP stream proxying (not just HTTP) [nginx.org]
- Load balancing for TCP workloads [nginx.org]
Architecture:
- One master process + multiple worker processes under unprivileged users [nginx.org]
- Zero-downtime config reload (nginx -s reload) and binary upgrades without dropping connections [nginx.org]
- epoll (Linux), kqueue (FreeBSD), /dev/poll (Solaris) — picks the best async I/O primitive per platform [nginx.org]
- 10,000 idle keep-alive connections ≈ 2.5MB RAM [nginx.org]
- Dynamic modules — extend functionality without recompiling [README]
Pricing: self-hosted vs. managed
NGINX Open Source: Free. BSD-2-Clause license. You can use it on any number of servers, for any commercial purpose, without paying anyone [README][3].
NGINX Plus (F5’s commercial edition): Pricing is not publicly listed — you contact F5 sales. NGINX Plus adds active health checks, a real-time dashboard, advanced load balancing algorithms, JWT authentication, OIDC support, and enterprise support SLAs. For most self-hosters, NGINX Plus is irrelevant — the open-source version covers 95% of use cases.
Managed NGINX hosting — if you want NGINX without managing it yourself, several hosts build it in:
- SiteGround: starts at $2.99/mo, NGINX built into all shared and cloud plans [4]
- Liquid Web: custom pricing, NGINX with easy customization, auto-updates [4]
- DigitalOcean: custom pricing droplets, full root access, NGINX by package [4]
The self-hosted math: A Hetzner VPS at €3.29/mo (2 vCPU, 4GB RAM, 40GB SSD) running NGINX can front dozens of Docker services simultaneously. Compared to paying for individual SaaS for each service, the server cost is a fixed line item while the SaaS savings compound. This is the fundamental self-hosting value proposition — NGINX is the infrastructure layer that makes it work.
There’s no SaaS pricing to compare directly because NGINX is infrastructure. You don’t buy NGINX instead of a SaaS subscription — you use NGINX to reduce or eliminate SaaS subscriptions by self-hosting the apps behind it.
Deployment reality check
NGINX is available via package managers on every major Linux distribution: apt install nginx on Debian/Ubuntu, dnf install nginx on Fedora/RHEL, pacman -S nginx on Arch. The official packages from nginx.org are preferred over distro packages — they’re more current and include the latest security patches [README].
What a basic reverse proxy setup looks like:
A working configuration for routing a domain to a local service is genuinely simple — a server block, a location block, a proxy_pass directive, a few header-passing lines [5]. If you’ve installed Certbot (certbot --nginx), it auto-modifies your config for HTTPS. That’s the happy path, and it works cleanly.
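For concreteness, that happy path looks roughly like this — the domain and port are placeholders, and Certbot appends the TLS directives when you run it:

```nginx
server {
    listen 80;
    server_name app.example.com;   # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:3000;                 # the local service
        proxy_set_header Host $host;                      # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;          # pass the client IP upstream
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;       # tell the app if it was http or https
    }
}
```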
Where it gets hard:
- The config DSL. NGINX configuration is its own language, and it’s not forgiving. Context matters — directives valid in http {} are invalid in stream {}. Blocks nest in non-obvious ways. The DigitalOcean tutorial [1] flags a classic trap: if your proxy_pass value is missing the http:// prefix, NGINX will silently fail or throw a cryptic error. These footguns are everywhere for beginners.
- nginx -t is your best friend. Before reloading, always run nginx -t to validate syntax. The error messages are terse but point to the right line. Most tutorial guides skip this.
- No GUI. Everything is text files, shell commands, and log files. /var/log/nginx/error.log is where you debug. If you need a GUI for your reverse proxy, look at Nginx Proxy Manager (a separate project layered on top of NGINX) or Caddy.
- Two branches to track. NGINX maintains a Stable branch and a Mainline branch [README]. Mainline gets new features first and is what you want for production unless you’re deeply conservative. Stable lags on features but receives critical fixes.
- Dynamic modules. Extending NGINX beyond its built-in functionality (say, adding ModSecurity for WAF capabilities or Brotli compression) requires either building from source with the module compiled in, or loading a pre-compiled dynamic module. This is more involved than apt install and can break on version upgrades if the module ABI isn’t compatible [README].
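Once a dynamic module is built, loading it is a one-line directive in the main context of nginx.conf. The module filename and the brotli directives below assume the third-party ngx_brotli module is installed — they are illustrative, and the path must match what your build actually produced:

```nginx
# load_module must appear in the main (top-level) context, before http {}.
load_module modules/ngx_http_brotli_filter_module.so;

http {
    brotli on;            # directive provided by the loaded module
    brotli_comp_level 6;  # compression level 1-11; 6 is a common middle ground
}
```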
Realistic time estimate: Getting a basic reverse proxy working on a fresh VPS with NGINX, Certbot, and one service behind it: 20–30 minutes if you’ve done it before, 1–3 hours if you’re following a guide for the first time. Getting it production-hardened (proper rate limiting, security headers, log rotation, monitoring): another few hours.
Pros and cons
Pros
- BSD-2-Clause license. True open source — use it commercially, modify it, ship it inside your product, redistribute it. No catch [README][3].
- Exceptional performance under load. The event-driven async architecture handles tens of thousands of concurrent connections with minimal memory. 10K idle connections = 2.5MB RAM is a famous benchmark [nginx.org].
- Zero-downtime reloads. nginx -s reload applies new config without dropping a single active connection [nginx.org]. This matters in production.
- Multi-site reverse proxying. Route unlimited domains to different backends from a single public IP. The backbone of any homelab or self-hosted stack [5].
- Enormous community and documentation. Every possible configuration question has been answered on Stack Overflow, Reddit, and DigitalOcean’s community tutorials [1][2]. The official documentation at nginx.org is thorough.
- HTTP/2 and HTTP/3 support. Not all web servers have shipped HTTP/3 yet. NGINX has [nginx.org].
- Ubiquitous in hosting. SiteGround, Liquid Web, DigitalOcean all build on it [4]. Learning NGINX means understanding what’s running underneath a significant portion of the internet.
- Module architecture. Dynamic modules let you extend without forking [README].
Cons
- Config DSL learning curve. The configuration syntax is unique to NGINX and has sharp edges. Silent failures on misconfiguration are common for beginners [1].
- No built-in GUI. Text files only. If your team isn’t comfortable in a terminal, you’ll need to layer Nginx Proxy Manager or similar on top.
- F5 ownership risk. An enterprise networking company owns the open-source roadmap. They’ve behaved well so far, but the incentive misalignment is real — F5 profits from NGINX Plus adoption, and that can subtly shape what features make it into open source vs. Plus [3].
- Static module compilation. Adding non-bundled modules (ModSecurity, Brotli) still requires building from source in many cases, which adds maintenance burden on upgrades [README].
- No active health checks in OSS. Automatic removal of unhealthy upstream servers requires NGINX Plus. The open-source version does passive health checking only — it detects failures after they happen, not before [nginx.org].
- Verbose for complex configurations. Large NGINX configs get unwieldy fast. There’s no high-level abstraction — you’re writing raw directives across potentially dozens of included files. Caddy’s Caddyfile format is meaningfully simpler for common tasks.
- Cryptic error messages. nginx: [emerg] unknown directive tells you what broke, not why it doesn’t make sense in context.
Who should use this / who shouldn’t
Use NGINX if:
- You’re self-hosting multiple services and need a reverse proxy to route traffic from a single public IP.
- You need SSL termination at the edge so your backend apps don’t deal with TLS.
- You’re serving static files at volume (documentation sites, asset CDN for your app).
- You’re load balancing multiple backend instances and need passive health checks.
- You want the de facto standard that every hosting tutorial assumes you’re using.
- You’re comfortable in a terminal and reading log files.
Skip NGINX (use Caddy instead) if:
- You want automatic HTTPS with zero configuration — Caddy handles Let’s Encrypt automatically without Certbot setup.
- Your team is non-technical and you need something readable without deeply understanding the config DSL.
- You want simpler config syntax — Caddy’s Caddyfile is dramatically less verbose for common reverse proxy setups.
Skip NGINX (use Nginx Proxy Manager) if:
- You want NGINX’s capabilities but need a web GUI for managing proxy hosts, SSL, and access control — Nginx Proxy Manager is a Docker-based GUI that sits on top of NGINX and removes most of the config-file work.
Skip NGINX (use Traefik) if:
- You’re running a Docker or Kubernetes environment and want automatic service discovery — Traefik reads container labels and configures itself dynamically. NGINX needs manual config updates when services change.
Don’t use NGINX if:
- You need something that configures itself — NGINX doesn’t watch Docker events, it serves config files you write.
- You’re looking for an application firewall out of the box — that requires ModSecurity or similar, which means custom compilation or a third-party module.
Alternatives worth considering
- Caddy — also open source, written in Go, automatically provisions and renews Let’s Encrypt certificates, dramatically simpler Caddyfile syntax. The trade-off: smaller community, fewer modules, less documentation. Honest assessment: for simple reverse proxy setups, Caddy is easier. For complex production environments with fine-grained tuning needs, NGINX gives you more control.
- Traefik — designed specifically for containerized environments. Reads Docker/Kubernetes metadata and configures routes automatically. No manual config files when services come and go. Heavier resource footprint than NGINX.
- Nginx Proxy Manager — NGINX under the hood, GUI on top. For self-hosters who want NGINX’s performance without hand-editing config files. Managed via a web dashboard.
- HAProxy — a dedicated TCP/HTTP load balancer with more sophisticated load balancing algorithms and better observability than NGINX. If load balancing is your primary concern (not web serving), HAProxy is worth evaluating.
- Apache HTTP Server — the other legacy giant. Process- or thread-per-connection model in its classic MPMs, .htaccess support, broader PHP integration. Still widely used in shared hosting. Slower under high concurrency than NGINX, but some workloads (dynamic PHP via mod_php) integrate more cleanly.
- LiteSpeed — proprietary, but dramatically outperforms both NGINX and Apache on WordPress/PHP workloads. Common in shared hosting [4]. Not for self-hosters who want open source.
For most self-hosters choosing a reverse proxy for the first time: NGINX vs. Caddy is the real decision. NGINX if you want more control and are willing to learn the config syntax. Caddy if you want automatic HTTPS and cleaner config files. Both are production-grade.
Bottom line
NGINX is infrastructure, not a product — it’s the layer that everything else sits behind. If you’re self-hosting more than one or two services, you’ll end up using it or something built on it. The BSD license is genuine, the performance is proven, and the community documentation is extensive enough that almost every problem you’ll encounter has already been solved on a forum somewhere. The F5 acquisition hasn’t visibly degraded the open-source project, but it’s a factor worth monitoring over a 5-year horizon. The real friction is the config DSL — it takes time to learn and punishes typos — but once you’ve written a few server blocks and understand how location matching works, it becomes second nature. For non-technical founders, the honest recommendation is Nginx Proxy Manager (GUI on top of NGINX) or Caddy first, then graduate to raw NGINX when you need the control it offers.
Sources
1. Tony Tran, DigitalOcean — “How To Configure Nginx as a Reverse Proxy on Ubuntu” (Sep 16, 2022, modified Oct 6, 2025). https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-reverse-proxy-on-ubuntu-22-04
2. Seth Kenlon, Red Hat Blog — “Setting up reverse proxies with NGINX” (Jul 10, 2019, modified Nov 20, 2025). https://www.redhat.com/en/blog/setting-reverse-proxies-nginx
3. ReviewHell — “NGINX Acquired by F5 - What Does it Mean for Web Hosts?” https://www.reviewhell.com/blog/nginx-acquired-by-f5-what-does-it-mean-for-web-hosts/
4. WhoIsHostingThis — “Best Nginx Hosting of 2024”. https://whoishostingthis.com/best-web-hosting/nginx-hosting/
5. Matthew Pick — “How to self-host a website using NGINX as a Reverse Proxy” (Jan 2016). https://www.matthewpick.com/2016/01/how-to-self-host-a-website-using-nginx-as-a-reverse-proxy/
Primary sources:
- Official NGINX website: https://nginx.org/en/
- GitHub repository: https://github.com/nginx/nginx (29,757 stars, BSD-2-Clause license)
- NGINX documentation: https://nginx.org/en/docs/
- F5 enterprise distributions: https://www.f5.com/products/nginx