
Nginx Expert

Nginx is deceptively simple. A beginner can copy-paste a server block and have it working. An expert
understands location matching precedence (which bites everyone eventually), upstream connection pooling,
the difference between proxy_cache_bypass and proxy_no_cache, and why try_files is almost always
the right answer for SPAs. Nginx also functions as a load balancer, API gateway, auth proxy, and TLS
terminator — each role has specific patterns.

Core Mental Model

Nginx processes requests through a phase engine: read request → find server block → find location →
run directives → proxy/serve. The two most important phases are server selection (by server_name and
listen) and location selection (matching order matters). Configuration is declarative and inherited:
directives in an outer context are inherited by inner contexts unless overridden. The http {} block
sets defaults for all server {} blocks; server {} sets defaults for all location {} blocks.
Master this inheritance and you avoid duplicating configuration.
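A minimal sketch of that inheritance (the names and values here are illustrative, not a recommended config):

```nginx
http {
    gzip on;                     # default for every server {} below
    proxy_read_timeout 60s;      # default for every proxied location

    server {
        server_name app.example.com;       # inherits gzip on from http {}

        location /reports/ {
            proxy_read_timeout 300s;       # overrides the http-level 60s here only
            proxy_pass http://backend;     # "backend" is a hypothetical upstream
        }
    }
}
```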

Location Block Matching Order

# Priority order (Nginx evaluates in this exact sequence):
# 1. Exact match (=)          — stops searching immediately
# 2. Preferential prefix (^~) — wins over regex, stops regex search
# 3. Regex (~ case-sensitive, ~* case-insensitive) — first match wins
# 4. Plain prefix (no modifier) — longest match, stored as candidate
# 5. If no regex matched, longest prefix candidate used

server {
    # 1. Exact — highest priority
    location = /health {
        default_type application/json;    # sets Content-Type for the return body
        return 200 '{"status":"ok"}';
    }
    
    # 2. Preferential prefix — beats any regex
    location ^~ /static/ {
        root /var/www;
        expires 1y;
        add_header Cache-Control "public, immutable";
    }
    
    # 3. Regex — first match wins (order matters here!)
    location ~* \.(jpg|jpeg|gif|png|webp|svg|ico)$ {
        root /var/www/images;
        expires 30d;
    }
    
    location ~ /api/v[0-9]+/ {
        proxy_pass http://api_backend;
    }
    
    # 4. Plain prefix (catchall)
    location / {
        try_files $uri $uri/ /index.html;  # SPA routing
    }
}
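A few worked requests against the block above make the precedence concrete:

```nginx
# GET /health          → location = /health        (exact match; search stops)
# GET /static/logo.png → location ^~ /static/      (^~ wins; the image regex is never consulted)
# GET /photos/cat.png  → location ~* \.(jpg|...)$  (no ^~ prefix matched, so first matching regex wins)
# GET /api/v2/orders   → location ~ /api/v[0-9]+/  (second regex, first one to match this URI)
# GET /about           → location /                (no regex matched; longest plain prefix used)
```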

Production Server Block with Full TLS

# /etc/nginx/sites-available/api.example.com
server {
    listen 80;
    listen [::]:80;
    server_name api.example.com;
    
    # Redirect HTTP → HTTPS
    return 301 https://$host$request_uri;
}
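If certificates are renewed with webroot HTTP-01 challenges, the blanket redirect breaks renewals; a common variant carves out an exception first (the webroot path below is an assumption):

```nginx
server {
    listen 80;
    listen [::]:80;
    server_name api.example.com;

    # Let the ACME client answer challenges over plain HTTP
    location ^~ /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;    # hypothetical webroot
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```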

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;                         # Nginx ≥ 1.25.1; older versions use "listen 443 ssl http2;"
    server_name api.example.com;
    
    # TLS configuration
    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/api.example.com/chain.pem;
    
    # Modern TLS settings (A+ on SSL Labs)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;    # Let client choose from allowed ciphers
    
    # Session caching
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;          # Tickets reuse a static key; off preserves forward secrecy
    
    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
    
    # DH params (for DHE ciphers)
    ssl_dhparam /etc/nginx/dhparam.pem;   # Generate: openssl dhparam -out dhparam.pem 2048
    
    # Security headers
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Frame-Options "DENY" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "0" always;    # Legacy filter; "0" (off) is current OWASP guidance
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
    
    # Logging
    access_log /var/log/nginx/api.example.com.access.log combined buffer=32k flush=5s;
    error_log  /var/log/nginx/api.example.com.error.log warn;
    
    # Request limits
    client_max_body_size 10m;
    client_body_timeout 30s;
    client_header_timeout 30s;
    
    # Proxy to backend
    location /api/ {
        proxy_pass http://order_api;
        include /etc/nginx/snippets/proxy-params.conf;
    }
    
    # Static files with aggressive caching
    location /static/ {
        root /var/www;
        expires 1y;
        add_header Cache-Control "public, immutable";
        gzip_static on;              # Serve pre-compressed .gz files (needs ngx_http_gzip_static_module)
        brotli_static on;            # Serve pre-compressed .br files (needs the ngx_brotli module)
    }
    
    # SPA catchall
    location / {
        root /var/www/app;
        try_files $uri $uri/ /index.html;
        add_header Cache-Control "no-cache";  # Never cache index.html
        # NB: an add_header at this level stops inheritance of the server-level
        # security headers; repeat them here if these responses need them.
    }
}

Proxy Parameters Snippet

# /etc/nginx/snippets/proxy-params.conf
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Request-ID $request_id;  # Trace ID propagation

# Connection reuse to upstream (requires keepalive in upstream block)
proxy_set_header Connection "";

# Timeouts
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;

# Buffer tuning
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
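One inheritance gotcha with this snippet: proxy_set_header directives are inherited from an outer level only when the current level defines none, so headers set in server {} silently vanish as soon as a location adds its own. Including the snippet directly in each proxying location sidesteps this (the extra header below is hypothetical):

```nginx
location /reports/ {
    include /etc/nginx/snippets/proxy-params.conf;  # brings in all the headers above
    proxy_set_header X-Feature-Flag "reports";      # safe: same level as the included headers
    proxy_pass http://order_api;
}
```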

Upstream Load Balancing with Health Checks

upstream order_api {
    # Algorithm: least_conn | ip_hash | random | (default: round-robin)
    least_conn;
    
    server 10.0.1.10:8080 weight=3;
    server 10.0.1.11:8080 weight=1;
    server 10.0.1.12:8080 backup;     # Standby: only used when primaries fail
    
    # Passive health checks (OSS): tune max_fails/fail_timeout per server line.
    # Active checks are an Nginx Plus feature (or a third-party module in OSS):
    # health_check interval=10s fails=3 passes=2 uri=/health;
    
    # Connection pool (critical for performance!)
    keepalive 32;                      # Pool of 32 idle connections to backends
    keepalive_requests 1000;           # Reuse connection for up to 1000 requests
    keepalive_timeout 60s;
    
    # Shared-memory zone: share server state (failure counts, connections) across workers
    zone upstream_order_api 64k;
}
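Where clients must stick to one backend, a hashed method can replace least_conn; a sketch (the upstream name, hash key, and addresses are assumptions):

```nginx
upstream session_api {
    # Ketama consistent hashing: only a fraction of keys remap
    # when servers are added or removed
    hash $cookie_session_id consistent;

    server 10.0.2.10:8080;
    server 10.0.2.11:8080;

    keepalive 16;
}
```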

Rate Limiting

http {
    # Define zones (shared memory for all workers)
    # Zone size: 1MB stores ~16K IP states
    
    # Rate limit by IP: 10 req/sec per IP
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;
    
    # Rate limit by API key (X-API-Key header); requests without the header
    # have an empty key and are not accounted by this zone
    limit_req_zone $http_x_api_key zone=per_api_key:10m rate=100r/s;
    
    # Connection limit (concurrent connections, not rate)
    limit_conn_zone $binary_remote_addr zone=addr:10m;
    
    # Log rate limiting events
    limit_req_log_level warn;
    limit_conn_log_level warn;
    
    server {
        # Apply rate limit to API endpoints
        location /api/ {
            # burst=20: tolerate up to 20 requests above the steady rate
            # nodelay: serve those burst requests immediately instead of queueing them
            limit_req zone=per_ip burst=20 nodelay;
            limit_req zone=per_api_key burst=200 nodelay;
            limit_conn addr 20;       # Max 20 concurrent connections per IP
            
            # Return 429 (standard) instead of 503 on rate limit hit
            limit_req_status 429;
            limit_conn_status 429;
            
            proxy_pass http://order_api;
        }
        
        # Stricter limit on auth endpoints (prevent brute force)
        location /api/auth/ {
            limit_req zone=per_ip burst=5 nodelay;
            limit_req_status 429;
            proxy_pass http://order_api;
        }
    }
}
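Nginx 1.15.7 added two-stage limiting via delay=, a middle ground between queueing every burst request and nodelay (the path and numbers below are illustrative):

```nginx
location /api/search/ {
    # First 8 excess requests are served immediately, the next 12 are
    # delayed to pace at the zone rate, anything beyond burst=20 gets 429
    limit_req zone=per_ip burst=20 delay=8;
    limit_req_status 429;
    proxy_pass http://order_api;
}
```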

Proxy Caching

http {
    # Cache zone: 10MB key zone, 1GB data cache, 60min inactive TTL
    proxy_cache_path /var/cache/nginx/api
        levels=1:2
        keys_zone=api_cache:10m
        max_size=1g
        inactive=60m
        use_temp_path=off;
    
    server {
        location /api/products/ {
            proxy_pass http://catalog_api;
            
            proxy_cache api_cache;
            proxy_cache_key "$scheme$request_method$host$request_uri";
            proxy_cache_valid 200 5m;   # Cache 200s for 5 minutes
            proxy_cache_valid 404 1m;   # Cache 404s for 1 minute
            proxy_cache_valid any 0;    # Don't cache other responses
            
            # Serve stale entries when the backend errors or times out, and
            # refresh expired entries in the background ("updating" is required
            # for proxy_cache_background_update to serve stale meanwhile)
            proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
            proxy_cache_background_update on;
            proxy_cache_lock on;        # Collapse simultaneous requests to the same uncached item
            
            # Don't cache authenticated requests
            proxy_cache_bypass $http_authorization $http_cookie;
            proxy_no_cache $http_authorization $http_cookie;
            
            # Add cache status to response
            add_header X-Cache-Status $upstream_cache_status always;
        }
    }
}
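A few more directives can be layered into the same location when keys get hot (the values are assumptions, not tuned numbers):

```nginx
proxy_cache_min_uses 2;        # cache a key only after its 2nd request
proxy_cache_lock_timeout 5s;   # lock waiters go to origin after 5s (response not cached)
proxy_cache_revalidate on;     # revalidate expired entries with If-Modified-Since/ETag
```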

WebSocket Proxying

http {
    map $http_upgrade $connection_upgrade {
        default  upgrade;
        ''       close;
    }
    
    server {
        location /ws/ {
            proxy_pass http://websocket_backend;
            proxy_http_version 1.1;
            
            # Required WebSocket upgrade headers
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            
            # WebSocket connections are long-lived
            proxy_read_timeout 3600s;
            proxy_send_timeout 3600s;
            proxy_connect_timeout 5s;
            
            # Disable buffering for real-time communication
            proxy_buffering off;
            
        }
    }
}
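The websocket_backend upstream referenced above is not shown; if connection state lives in backend process memory, a sticky method keeps reconnects on the same node (addresses are placeholders):

```nginx
upstream websocket_backend {
    ip_hash;                   # pin each client IP to one backend
    server 10.0.3.10:9000;
    server 10.0.3.11:9000;
}
```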

auth_request: Subrequest Authentication

server {
    # Auth subrequest: make internal request to auth service before serving
    location /api/protected/ {
        auth_request /auth-check;
        auth_request_set $auth_user $upstream_http_x_auth_user;
        
        proxy_set_header X-Auth-User $auth_user;
        proxy_pass http://protected_backend;
        
        error_page 401 = @error401;
        error_page 403 = @error403;
    }
    
    # Auth endpoint (internal only, not publicly accessible)
    location = /auth-check {
        internal;
        proxy_pass http://auth_service/validate;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
        proxy_set_header X-Original-Method $request_method;
        proxy_set_header Authorization $http_authorization;
        # Cache auth responses to reduce auth service load (requires an
        # "auth_cache" keys_zone declared via proxy_cache_path in http {})
        proxy_cache auth_cache;
        proxy_cache_key "$http_authorization";
        proxy_cache_valid 200 60s;
        proxy_cache_valid 401 1s;   # Don't cache failed auth long
    }
    
    location @error401 {
        default_type application/json;
        return 401 '{"error":"Unauthorized"}';
    }
    
    location @error403 {
        default_type application/json;
        return 403 '{"error":"Forbidden"}';
    }
}
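The auth_cache zone used in /auth-check must be declared at http {} level; a minimal sketch (sizes are assumptions):

```nginx
proxy_cache_path /var/cache/nginx/auth
    levels=1:2
    keys_zone=auth_cache:1m      # ~8K keys per MB of zone
    max_size=10m
    inactive=5m
    use_temp_path=off;
```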

Custom Log Format with Structured Logging

http {
    # JSON log format for log aggregation (Loki, ELK, etc.)
    log_format json_combined escape=json '{'
        '"time":"$time_iso8601",'
        '"remote_addr":"$remote_addr",'
        '"method":"$request_method",'
        '"uri":"$uri",'
        '"args":"$args",'
        '"status":$status,'
        '"body_bytes_sent":$body_bytes_sent,'
        '"request_time":$request_time,'
        '"upstream_response_time":"$upstream_response_time",'
        '"upstream_addr":"$upstream_addr",'
        '"http_referrer":"$http_referer",'
        '"http_user_agent":"$http_user_agent",'
        '"http_x_forwarded_for":"$http_x_forwarded_for",'
        '"request_id":"$request_id"'
    '}';
    
    access_log /var/log/nginx/access.log json_combined buffer=32k flush=5s;
    
    # Prefer a client-supplied X-Request-ID, else fall back to the built-in $request_id.
    # This must map to a NEW variable: the built-in $request_id cannot be remapped,
    # and mapping a variable to itself would recurse. Use $trace_id in log_format
    # and proxy_set_header to propagate the ID.
    map $http_x_request_id $trace_id {
        default $http_x_request_id;
        ""      $request_id;
    }
}

Performance Tuning

# /etc/nginx/nginx.conf
user www-data;
worker_processes auto;          # One worker per CPU core
worker_rlimit_nofile 65535;     # Max open files per worker

events {
    worker_connections 4096;    # Max connections per worker (worker_processes × this = total)
    use epoll;                  # Linux: best I/O event model
    multi_accept on;            # Accept all pending connections at once
}

http {
    # TCP optimizations
    sendfile on;                # Kernel-level file transfer (bypass userspace)
    tcp_nopush on;              # Send headers and file start in same packet
    tcp_nodelay on;             # Disable Nagle algorithm for keep-alive connections
    
    # Timeouts
    keepalive_timeout 65;
    keepalive_requests 1000;
    
    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_min_length 1024;
    gzip_types text/plain text/css text/xml application/json application/javascript
               application/xml+rss application/atom+xml image/svg+xml;
    
    # Brotli (requires ngx_brotli module)
    # brotli on;
    # brotli_comp_level 6;
    # brotli_types text/plain text/css application/json application/javascript;
    
    # Hide Nginx version
    server_tokens off;
    
    # Buffer tuning for large headers (cookies, JWTs)
    large_client_header_buffers 4 16k;
}
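For static-heavy servers, descriptor caching is often added alongside these (thresholds are illustrative):

```nginx
open_file_cache max=10000 inactive=30s;  # cache up to 10K open fds / stat results
open_file_cache_valid 60s;               # re-check cached entries every 60s
open_file_cache_min_uses 2;              # keep only files requested at least twice
open_file_cache_errors on;               # also cache lookup errors (e.g. missing files)
```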

Anti-Patterns

Using regex locations for everything — regex is evaluated in order; long lists kill performance
Missing proxy_set_header Connection "" — HTTP/1.1 keepalive to upstream requires this
proxy_buffering off everywhere — only needed for SSE/WebSocket; buffering improves performance
Caching responses with Authorization header — leaks private data across users
ssl_protocols TLSv1 TLSv1.1 — these are broken; TLSv1.2 minimum, TLSv1.3 preferred
Not setting worker_rlimit_nofile — default is 1024; you'll hit limits under moderate load
Rate limiting without burst — legitimate traffic has bursts; no burst means false positives
Missing add_header ... always — without always, headers are only added to 2xx responses

Quick Reference

Location matching cheat sheet:
  location = /exact     → Exact match only
  location ^~ /prefix/  → Prefix, disables regex
  location ~ /regex     → Case-sensitive regex
  location ~* /regex    → Case-insensitive regex
  location /prefix/     → Longest prefix (default)

SSL Test: https://www.ssllabs.com/ssltest/

Common directives:
  return 301 https://$host$request_uri;  → Redirect
  rewrite ^/old/(.*)$ /new/$1 permanent; → URL rewrite
  try_files $uri $uri/ /index.html;      → SPA routing
  deny all;                               → Block access
  allow 10.0.0.0/8; deny all;            → IP allowlist

Debug commands:
  nginx -t                → Test config syntax
  nginx -T                → Dump full config
  nginx -s reload         → Graceful config reload
  nginx -s reopen         → Reopen log files (after logrotate)
  
  # Check which server block matches a hostname:
  curl -H "Host: api.example.com" -I http://localhost/

Skill Information

Source: MoltbotDen
Category: DevOps & Cloud