
Nginx use cases that every engineer must know



Nginx | Web Server | Load Balancer | Reverse Proxy

Nginx is one of the most popular and versatile web servers, built for high-performance workloads. Initially designed as a pure web server, it has evolved into a multi-purpose tool: reverse proxy, load balancer, and more. Whether you're a DevOps engineer, a site reliability engineer, or a backend developer, understanding Nginx's capabilities can significantly improve your infrastructure.

In this article, we’ll explore the unique and essential use cases of Nginx that every engineer must know.

1. Web Server (Serving Static & Dynamic Content)

This configuration serves static files from /var/www/html and forwards PHP requests to a PHP-FPM backend over FastCGI.

  • Listens on port 80 for requests to example.com.

  • Serves files from the /var/www/html directory.

  • If a user visits the site, Nginx will look for index.html, index.htm, or index.php as the default page.

  • Use this setup if you are hosting a PHP-based website like WordPress.

server {
    listen 80;
    server_name example.com;

    root /var/www/html;
    index index.html index.htm index.php;

    location / {
        # Try the requested file, then a directory of that name;
        # if neither exists, return a 404 Not Found error.
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

2. Reverse Proxy (Forwarding Requests to Backend Server)

This configuration forwards requests to a backend application running on port 5000.


server {
    listen 80;                       # Listens on port 80 for api.example.com
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:5000;             # Forward all requests to the backend on port 5000
        proxy_set_header Host $host;                  # Preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;      # Pass the client's real IP to the backend
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;   # Tell the backend whether the client used http or https
    }
}

3. Load Balancer (Distributing Traffic Among Multiple Servers)

This setup distributes traffic among three backend servers.

Nginx supports several load-balancing methods; round robin is the default.

Round Robin (Default)

  • Requests are distributed sequentially across all available servers, cycling back to the first server after the last one is reached.

upstream backend_servers {
    server 192.168.1.101;
    server 192.168.1.102;
    server 192.168.1.103;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://backend_servers;  
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Each backend (192.168.1.101, 192.168.1.102, 192.168.1.103) must be running a server (Nginx or any application server) that can answer the forwarded requests.

The upstream block defines a named group of backend servers that can be used for load balancing and proxying requests.

👉 How round robin works:

  • 1st request → 192.168.1.101

  • 2nd request → 192.168.1.102

  • 3rd request → 192.168.1.103

  • 4th request → 192.168.1.101 (cycle repeats)
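The cycling behavior described above can be sketched in a few lines of Python. This is a toy model for illustration only, not Nginx's actual implementation (which is written in C):

```python
# Minimal sketch of round-robin selection, mirroring Nginx's default
# upstream behavior: each request goes to the next server in rotation.
from itertools import cycle

servers = ["192.168.1.101", "192.168.1.102", "192.168.1.103"]
_picker = cycle(servers)

def next_server() -> str:
    """Return the next backend in strict rotation."""
    return next(_picker)

# The fourth request wraps back to the first server.
first_four = [next_server() for _ in range(4)]
```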

What is a Sticky Session in Load Balancing?

A sticky session (also called session persistence) ensures that a user’s requests always go to the same backend server during their session. This is useful when backend servers store session-specific data (e.g., user authentication, shopping carts).

Without sticky sessions, requests from a single user might be routed to different backend servers, leading to inconsistent session behavior.

upstream backend_servers {
    ip_hash;     # Basic Sticky Session
    server 192.168.1.101;
    server 192.168.1.102;
    server 192.168.1.103;
}
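The stickiness that ip_hash provides can be modeled as a deterministic hash of the client address. This is a simplified sketch (Nginx's real algorithm hashes the first three octets of an IPv4 address; CRC32 stands in here as a stable hash):

```python
# Toy model of ip_hash: the same client IP always maps to the same
# backend, which is what makes the session "sticky".
import zlib

servers = ["192.168.1.101", "192.168.1.102", "192.168.1.103"]

def pick_server(client_ip: str) -> str:
    """Deterministically map a client IP to one backend."""
    return servers[zlib.crc32(client_ip.encode()) % len(servers)]
```

Because the mapping depends only on the IP, repeated requests from one client always land on the same server, without any shared state between requests.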

When Should You Use Sticky Sessions?

✅ Required when:

  • Your backend servers store session-specific data (e.g., user authentication).

  • You cannot use a shared session storage (like Redis or a database).

🚫 Not needed when:

  • Your app stores sessions in a centralized database (e.g., PostgreSQL, MySQL).

  • You use distributed session storage (e.g., Redis, Memcached).

For scalability, it’s better to store sessions in Redis instead of enabling sticky sessions.

Types of Routing in Nginx (With Examples)

1. Path-Based Routing

Route different paths to different backends.

location /api/ { proxy_pass http://backend_api; }
location /admin/ { proxy_pass http://backend_admin; }

2. Host-Based Routing

Route based on domain name.

server { server_name api.example.com; location / { proxy_pass http://backend_api; } }
server { server_name www.example.com; location / { proxy_pass http://backend_web; } }

3. Header-Based Routing

Route based on HTTP headers (e.g., User-Agent).

if ($http_user_agent ~* "Mobile") { proxy_pass http://backend_mobile; }

4. Load Balancing Routing

Distribute traffic across multiple servers.

upstream backend_servers {
    least_conn;
    server 192.168.1.101;
    server 192.168.1.102;
    server 192.168.1.103;
}
location / { proxy_pass http://backend_servers; }
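The least_conn method shown above sends each new request to the backend with the fewest in-flight connections. A toy sketch of that policy (illustrative only, not Nginx's implementation):

```python
# Toy model of least_conn: pick the backend with the fewest active
# connections (ties broken by listed order), then count the new one.
active = {"192.168.1.101": 0, "192.168.1.102": 0, "192.168.1.103": 0}

def pick_least_conn() -> str:
    """Choose the least-loaded backend and account for the new request."""
    choice = min(active, key=active.get)  # fewest active connections wins
    active[choice] += 1                   # the new request occupies a slot
    return choice
```

Unlike round robin, this adapts to uneven request durations: a server stuck on slow requests stops receiving new ones until it catches up.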

5. Geolocation Routing

Route users based on country (requires the GeoIP module, which provides the $geoip_country_code variable; there is no built-in $geo_country).

if ($geoip_country_code = "US") { proxy_pass http://backend_us; }

6. Cookie-Based Routing

A/B testing, feature rollouts.

if ($cookie_experiment_group = "beta") { proxy_pass http://backend_beta; }

7. Rate-Based Routing

Limit API calls to prevent abuse.

limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
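The limit_req_zone directive only defines the shared memory zone; it takes effect where a matching limit_req directive is applied. A sketch of the full pairing (backend_api is a placeholder upstream name):

```nginx
http {
    # 10 MB shared zone keyed by client IP, refilled at 10 requests/second.
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    server {
        listen 80;

        location /api/ {
            # Absorb short bursts of up to 20 extra requests; anything
            # beyond that is rejected (503 by default, tunable via
            # limit_req_status).
            limit_req zone=api_limit burst=20 nodelay;
            proxy_pass http://backend_api;
        }
    }
}
```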

8. Query Parameter Routing

Route based on URL parameters.

if ($arg_version = "beta") { proxy_pass http://backend_beta; }

9. File Extension-Based Routing

Serve static files separately.

location ~* \.(jpg|css|js)$ { root /var/www/static; }

Conclusion

Nginx is more than just a web server — it’s a powerful tool for engineers handling complex infrastructure needs. Whether you’re optimizing performance, securing applications, or managing API traffic, mastering these use cases will help you build scalable and resilient systems.

Do you use Nginx in any unique way? Let me know in the comments! 🚀