Nginx use cases that every engineer must know

Nginx is one of the most influential and versatile web servers, widely used for handling high-performance applications. Initially designed as a web server, it has evolved into a multi-functional tool used for various engineering tasks. Whether you’re a DevOps engineer, a site reliability engineer, or a backend developer, understanding Nginx’s capabilities can significantly enhance your infrastructure.
In this article, we’ll explore the unique and essential use cases of Nginx that every engineer must know.
1. Web Server (Serving Static & Dynamic Content)
This configuration serves static files from /var/www/html and forwards PHP requests to a PHP-FPM backend.
Listens on port 80 for requests to example.com.
Serves files from the /var/www/html directory.
If a user visits the site, Nginx looks for index.html, index.htm, or index.php as the default page.
Use this setup if you are hosting a PHP-based website like WordPress.
server {
    listen 80;
    server_name example.com;
    root /var/www/html;
    index index.html index.htm index.php;

    # Tries to serve the requested file or directory.
    # If the file doesn't exist, it returns a 404 Not Found error.
    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
2. Reverse Proxy (Forwarding Requests to Backend Server)
This configuration forwards requests to a backend application running on port 5000.
server {
    listen 80;
    server_name api.example.com;
    # Listens on port 80 for api.example.com.

    location / {
        proxy_pass http://127.0.0.1:5000; # Forwards all requests to a backend application running on port 5000
        proxy_set_header Host $host; # Pass important headers like the real IP and scheme to the backend.
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
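A common follow-on need for the proxy above is WebSocket support, since WebSockets depend on the HTTP/1.1 Upgrade handshake being passed through to the backend. A minimal sketch (the /ws/ path is illustrative; the backend address matches the example above):

```nginx
location /ws/ {
    proxy_pass http://127.0.0.1:5000;
    # WebSockets require HTTP/1.1 plus the Upgrade and Connection
    # headers forwarded explicitly; plain proxying strips them.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```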
3. Load Balancer (Distributing Traffic Among Multiple Servers)
This setup distributes traffic among three backend servers.
By default, Nginx uses the following load-balancing method:
Round Robin (Default)
Requests are distributed sequentially across all available servers; each incoming request is forwarded to the next backend server in turn.
upstream backend_servers {
    server 192.168.1.101;
    server 192.168.1.102;
    server 192.168.1.103;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://backend_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Each backend (192.168.1.101, 192.168.1.102, 192.168.1.103) must be running a web server (Nginx or any other) that can answer the forwarded requests.
The upstream block defines a named group of backend servers that can be used for load balancing and proxying requests.
👉 How does Round Robin work?
1st request → 192.168.1.101
2nd request → 192.168.1.102
3rd request → 192.168.1.103
4th request → 192.168.1.101
(cycle repeats)
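Round Robin treats every backend as equal. If one server is more powerful than the others, Nginx's weight parameter skews the rotation toward it. A sketch (the weight of 3 is illustrative):

```nginx
upstream backend_servers {
    # Weighted Round Robin: 192.168.1.101 receives roughly three
    # requests for every one sent to each of the other servers.
    server 192.168.1.101 weight=3;
    server 192.168.1.102;
    server 192.168.1.103;
}
```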
What is a Sticky Session in Load Balancing?
A sticky session (also called session persistence) ensures that a user’s requests always go to the same backend server during their session. This is useful when backend servers store session-specific data (e.g., user authentication, shopping carts).
Without sticky sessions, requests from a single user might be routed to different backend servers, leading to inconsistent session behavior.
upstream backend_servers {
    ip_hash; # Basic sticky sessions: each client IP is hashed to a fixed backend
    server 192.168.1.101;
    server 192.168.1.102;
    server 192.168.1.103;
}
When Should You Use Sticky Sessions?
✅ Required when:
Your backend servers store session-specific data (e.g., user authentication).
You cannot use a shared session storage (like Redis or a database).
🚫 Not needed when:
Your app stores sessions in a centralized database (e.g., PostgreSQL, MySQL).
You use distributed session storage (e.g., Redis, Memcached).
For scalability, it’s better to store sessions in Redis instead of enabling sticky sessions.
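If you do need stickiness but client-IP hashing is too coarse (for example, many users behind one NAT share an IP), NGINX Plus — the commercial build, not open-source Nginx — offers cookie-based persistence via the sticky directive. A sketch of that Plus-only syntax, which will not load on open-source builds:

```nginx
upstream backend_servers {
    server 192.168.1.101;
    server 192.168.1.102;
    server 192.168.1.103;
    # NGINX Plus only: set a "srv_id" cookie so each client keeps
    # returning to the backend that handled its first request.
    sticky cookie srv_id expires=1h path=/;
}
```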
Types of Routing in Nginx (With Examples)
1. Path-Based Routing
Route different paths to different backends.
location /api/ { proxy_pass http://backend_api; }
location /admin/ { proxy_pass http://backend_admin; }
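The two snippets above assume backend_api and backend_admin are already defined. A fuller sketch with everything in place (ports 5001 and 5002 are assumptions):

```nginx
upstream backend_api   { server 127.0.0.1:5001; }
upstream backend_admin { server 127.0.0.1:5002; }

server {
    listen 80;
    server_name example.com;

    # Requests under /api/ go to the API pool, /admin/ to the admin pool.
    location /api/   { proxy_pass http://backend_api; }
    location /admin/ { proxy_pass http://backend_admin; }
}
```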
2. Host-Based Routing
Route based on domain name.
server { server_name api.example.com; location / { proxy_pass http://backend_api; } }
server { server_name www.example.com; location / { proxy_pass http://backend_web; } }
3. Header-Based Routing
Route based on HTTP headers (e.g., User-Agent).
if ($http_user_agent ~* "Mobile") { proxy_pass http://backend_mobile; }
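Nginx's documentation discourages if for routing where an alternative exists; a map directive expresses the same idea declaratively at the http level. A sketch (the variable and upstream names are illustrative, and backend_desktop / backend_mobile must exist as upstream blocks):

```nginx
# Map the User-Agent header to a backend name once, at http level.
map $http_user_agent $ua_backend {
    default    http://backend_desktop;
    "~*Mobile" http://backend_mobile;
}

server {
    listen 80;
    location / {
        proxy_pass $ua_backend; # proxy_pass accepts variables
    }
}
```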
4. Load Balancing Routing
Distribute traffic across multiple servers.
upstream backend_servers {
    least_conn;
    server 192.168.1.101;
    server 192.168.1.102;
    server 192.168.1.103;
}
location / { proxy_pass http://backend_servers; }
5. Geolocation Routing
Route users based on country. Nginx does not expose a $geo_country variable out of the box; country lookup requires the GeoIP module (geoip_country directive), which sets $geoip_country_code.
if ($geoip_country_code = "US") { proxy_pass http://backend_us; }
6. Cookie-Based Routing
A/B testing, feature rollouts.
if ($cookie_experiment_group = "beta") { proxy_pass http://backend_beta; }
7. Rate-Based Routing
Limit API calls to prevent abuse.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
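The limit_req_zone directive above only allocates the shared-memory zone (10 MB, keyed by client IP, at 10 requests per second); on its own it limits nothing. To enforce it, attach limit_req where the traffic arrives. A sketch (the /api/ path and burst value are illustrative):

```nginx
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 80;
    location /api/ {
        # Permit short bursts of up to 20 extra requests,
        # served immediately rather than queued.
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://127.0.0.1:5000;
    }
}
```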
8. Query Parameter Routing
Route based on URL parameters.
if ($arg_version = "beta") { proxy_pass http://backend_beta; }
9. File Extension-Based Routing
Serve static files separately.
location ~* \.(jpg|css|js)$ { root /var/www/static; }
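Since these assets rarely change, the same location block is a natural place for cache headers. A sketch (the 30-day lifetime is an assumption; tune it per asset type):

```nginx
location ~* \.(jpg|css|js)$ {
    root /var/www/static;
    # Let browsers and CDNs cache static assets for 30 days.
    expires 30d;
    add_header Cache-Control "public";
}
```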
Conclusion
Nginx is more than just a web server — it’s a powerful tool for engineers handling complex infrastructure needs. Whether you’re optimizing performance, securing applications, or managing API traffic, mastering these use cases will help you build scalable and resilient systems.
Do you use Nginx in any unique way? Let me know in the comments! 🚀