Nginx is a powerful and widely used open-source web server that has gained popularity
due to its high performance, stability, rich feature set, simple configuration, and low
resource consumption. Beyond serving static content like traditional web servers, Nginx
excels as a reverse proxy, load balancer, HTTP cache, and mail proxy. Its event-driven
architecture allows it to handle a large number of concurrent connections efficiently.
Nginx Request Handling
The process of how Nginx handles a client request can be broken down as follows:
Request: A client (e.g., a web browser) sends a request to the Nginx server.
Master Process: The Nginx server has a master process that is responsible for reading the
configuration and managing worker processes.
Event Loop: The master process creates and manages multiple worker processes. Each worker process
contains an event loop. This event loop efficiently manages multiple client connections within a single
process, using non-blocking I/O operations.
Worker Process: When a request comes in, it is picked up and processed by one of the worker
processes within its event loop.
Response: After processing the request (which might involve serving static content, proxying to another
server, etc.), the worker process sends the response back to the client.
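The master/worker model described above is controlled from the main configuration file. A minimal sketch (the directive values below are common illustrative defaults, not taken from the source):

```nginx
# /etc/nginx/nginx.conf (illustrative fragment)
worker_processes auto;    # spawn one worker per CPU core

events {
    worker_connections 1024;  # max simultaneous connections per worker's event loop
}
```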
Nginx Use Cases
Nginx is versatile and can be employed in various scenarios:
Load Balancer: Nginx can distribute incoming client requests across multiple backend servers. This
improves performance, scalability, and reliability by preventing any single server from being overwhelmed.
Source mentions the upstream directive, which is fundamental to configuring Nginx as a load balancer.
Reverse Proxy: In this role, Nginx acts as an intermediary for requests from clients seeking resources
from one or more servers. The client sends the request to the Nginx server, which then forwards the request
to the appropriate backend server. The backend server's response is then sent back to the client by Nginx.
This can enhance security, provide SSL termination, and improve performance through caching. Source
explicitly lists "reverse proxy" as a use case.
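A minimal reverse proxy configuration might look like the following sketch (the backend address and header choices are assumptions, not from the source):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;          # forward requests to the backend
        proxy_set_header Host $host;               # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;   # pass the client IP to the backend
    }
}
```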
Forward Proxy: While less commonly configured for this primary purpose than tools like Squid, Nginx
can also act as a forward proxy, allowing clients on a private network to connect to servers on the internet
through it. Source lists "forward proxy" as a use case.
Caching: Nginx can cache responses from backend servers, serving subsequent identical requests
directly. This reduces the load on backend servers and improves response times for clients. Source
discusses the "nginx caching server" and related directives such as proxy_cache_path, proxy_cache_key, and
proxy_cache_valid.
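Putting those directives together, a caching setup might be sketched as follows (the zone name, cache path, and validity times are assumptions for illustration):

```nginx
# Define a cache: 10 MB of key metadata, up to 1 GB of cached responses
proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m max_size=1g inactive=60m;

server {
    location / {
        proxy_cache my_cache;
        proxy_cache_key "$scheme$request_method$host$request_uri";
        proxy_cache_valid 200 302 10m;   # cache successful responses for 10 minutes
        proxy_cache_valid 404 1m;        # cache 404s only briefly
        proxy_pass http://127.0.0.1:8080;  # backend address is an assumption
    }
}
```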
Nginx Installation
Installation methods vary depending on the operating system (e.g., using package managers like apt on
Debian/Ubuntu or yum on CentOS/RHEL).
Important Nginx Directories
Understanding the key directories used by Nginx is crucial for configuration and management:
/etc/nginx/nginx.conf: This is the main configuration file for Nginx. It includes global settings and
directives, and often references other configuration files.
/etc/nginx/sites-available/: This directory typically contains configuration files for individual
websites or applications hosted on the server. These files define virtual host settings.
/etc/nginx/sites-enabled/: This directory contains symbolic links to the configuration files in
sites-available that are currently active. Nginx only loads the configurations of the files that are linked here.
The command ln -s /etc/nginx/sites-available/helloworld /etc/nginx/sites-enabled demonstrates how to
enable a site configuration by creating a symbolic link.
/etc/nginx/conf.d/: This directory often contains additional configuration snippets that can be
included in the main nginx.conf or within server block configurations. For example, source mentions
/etc/nginx/conf.d/.htpasswd as the password file used for basic authentication.
/var/www/: This is a common root directory for web content. However, the actual location for website
files is defined in the server block configuration.
/etc/nginx/mime.types: This file defines the MIME types that Nginx uses to determine the Content-Type
header for responses based on file extensions.
/etc/nginx/nginx.pid: This file stores the process ID (PID) of the Nginx master process.
/var/log/nginx/: This directory is where Nginx access logs (recording client requests) and error logs
(recording any issues encountered) are typically stored. Source mentions "logs and log format options"
under monitoring and troubleshooting.
Nginx Commands
These are some essential Nginx commands for managing the server:
nginx -h: Displays help information about the Nginx command-line options.
nginx -v: Shows the Nginx version.
nginx -V: Shows the Nginx version and configuration arguments that were used during the build
process. This can be useful for determining compiled-in modules.
nginx -t: Tests the Nginx configuration files for syntax errors. It's crucial to run this command before
reloading or restarting Nginx.
nginx -T: Similar to -t, but it also dumps the entire Nginx configuration as seen by the server. This is
useful for debugging.
nginx -s stop: Forcefully stops the Nginx server immediately.
nginx -s quit: Gracefully stops the Nginx server. It waits for worker processes to finish processing
current requests before exiting.
nginx -s reload: Reloads the Nginx configuration without interrupting the processing of current
requests. The master process re-reads the configuration files and starts new worker processes with the
updated configuration while the old ones finish their work.
nginx -s reopen: Reopens the log files. This is useful after log rotation.
systemctl restart nginx: Restarts the Nginx service (if managed by systemd). This typically involves
stopping and then starting the Nginx server.
systemctl start nginx: Starts the Nginx service.
systemctl reload nginx: Reloads the Nginx configuration (if managed by systemd), similar to nginx
-s reload.
systemctl status nginx: Shows the current status of the Nginx service (active, inactive, failed, etc.).
curl and its Options
curl is a command-line tool used for transferring data with URLs. It's often used to interact with web
servers, including Nginx, for testing and debugging.
curl --header "Host: nasser.com" localhost: This command sends an HTTP request to localhost (the local
machine) and includes a custom Host header with the value nasser.com. The Host header is crucial for
name-based virtual hosting, allowing a single IP address to serve multiple websites. Nginx uses this header
to determine which server block should handle the request.
curl -sI -H "Host: example1.com" http://localhost: This command also sends an HTTP request to
localhost.
-s: This option makes curl silent; it suppresses the progress meter and error messages (combine with -S
if you want errors shown while otherwise staying silent).
-I: This option tells curl to only retrieve the HTTP headers of the response, without the actual content.
-H "Host: example1.com": This again sets a custom Host header to example1.com, useful for testing
virtual host configurations.
curl -k --head https://example.com: This command interacts with the HTTPS version of example.com.
-k: This option tells curl to disable SSL certificate verification. This is generally not recommended for
production environments but can be useful for testing with self-signed certificates or when troubleshooting
SSL/TLS issues. Source mentions "ssl vs tls" and creating certs.
--head: This option is similar to -I and requests only the HTTP headers of the response.
Nginx Return Rule (Redirects)

The return directive in Nginx allows you to stop processing the current request and send a specified status
code and optional URL to the client. This is commonly used for redirects:
location / { return 301 https://$host$request_uri; }: This configuration within a server block will
redirect all requests (/) to the HTTPS version of the same URL.
301: This is the HTTP status code for a permanent redirect. It indicates that the requested resource has
moved permanently to the new URL.
https://: This specifies the new protocol for the redirect.
$host: This is an Nginx built-in variable that holds the value of the Host header in the client request.
$request_uri: This is another Nginx built-in variable that contains the full original request URI
(including the path and query string).
return 301: A return directive with only a status code is syntactically valid, but for a redirect code it
sends no Location header, so the client has nowhere to go; when redirecting, always supply the target URL.
Status Codes: These are codes sent by the server to the client indicating the outcome of the request.
Source lists a few:
301 Moved Permanently: This status code is used in Nginx with the return directive to permanently
redirect a request to a new URI. For example, return 301 https://$host$request_uri; will permanently
redirect HTTP requests to their HTTPS counterparts.
404 Not Found: This status code indicates that the server cannot find the requested resource. In Nginx,
you can explicitly return a 404 error using =404 within the try_files directive in a location block if none of
the specified files or directories are found.
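For illustration, a location block using try_files with that fallback might look like this sketch:

```nginx
location / {
    # Try the exact file, then a directory of that name; otherwise return 404
    try_files $uri $uri/ =404;
}
```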
200 OK: This is a standard HTTP status code that signifies the request has been successful. While not
extensively detailed in the sources, its mention implies that Nginx returns this code when a request is
successfully processed and the requested content is served.
302 Found (Moved Temporarily): This status code indicates that the requested resource has been
temporarily moved to a different URI. The client should continue to use the original URI for future
requests. This specific status code is not explicitly mentioned in the provided sources in the context of
Nginx.
401 Unauthorized: This status code indicates that the client request has not been completed because it
lacks valid authentication credentials for the requested resource. The sources discuss basic
authentication in Nginx using the auth_basic and auth_basic_user_file directives. If a client tries to access
content protected by basic authentication without providing the correct credentials, the server would
typically respond with a 401 status code (though this specific code is not explicitly stated in the
authentication sections).
403 Forbidden: This status code indicates that the server understands the request but refuses to
authorize it. The sources mention blocking traffic based on IPs or IP ranges using the access
module (ngx_http_access_module). If a client whose IP is denied tries to access the server, Nginx would likely return a 403 Forbidden
status code (although this is not explicitly stated in the access-module section).
500 Internal Server Error: This is a generic error response indicating that the server encountered an
unexpected condition that prevented it from fulfilling the request. The provided sources do not specifically
detail scenarios within Nginx configurations that would directly lead to a 500 error. This type of error
often arises from issues in the server's configuration or problems with the application being served.
502 Bad Gateway: This status code indicates that the server, while acting as a gateway or proxy, received
an invalid response from an upstream server it accessed to fulfill the request. The sources mention
Nginx as a reverse proxy and the directive pass-proxy (likely a typo for proxy_pass). In a reverse proxy
setup, if Nginx cannot establish a connection with a backend server or receives an invalid response from it,
it might return a 502 Bad Gateway error to the client.
503 Service Unavailable: This status code indicates that the server is temporarily unable to handle the
request. This could be due to the server being overloaded, under maintenance, or temporarily unavailable
for other reasons. The sources discuss rate limiting, and while exceeding rate limits results in a 429 Too
Many Requests status code, a server experiencing very high traffic might also return a 503 Service
Unavailable status code to new requests as it's temporarily overloaded.
Rewrite Directive
The rewrite directive in Nginx allows you to modify the request URI based on regular expressions. It's a
powerful tool for URL manipulation. Source mentions:
server { rewrite ... }: This indicates that rewrite directives are typically placed within a server block to
apply to requests handled by that virtual host. However, rewrite directives can also be used within location
blocks.
REGEX: This refers to regular expressions, which are patterns used to match strings. rewrite directives
use regular expressions to match parts of the request URI and can then replace or modify them.
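As an illustrative sketch (the URI paths here are assumptions), a rewrite that maps an old path prefix to a new one could look like:

```nginx
server {
    listen 80;
    server_name example.com;

    # (.*) captures everything after /old/, and $1 reuses it in the target;
    # "permanent" makes Nginx issue a 301 redirect
    rewrite ^/old/(.*)$ /new/$1 permanent;
}
```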
Nginx as Load Balancer
Nginx can function as a load balancer to distribute traffic to multiple backend servers:
upstream: This directive in the nginx.conf file is used to define a group of backend servers that Nginx
will distribute requests to. You can specify the IP addresses and ports of these servers, as well as load
balancing methods.
pass-proxy: This appears to be a typo for proxy_pass, which is a directive used within a location
block to forward requests to the backend servers defined in an upstream block. For example:
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
}

server {
    location /app/ {
        proxy_pass http://backend;
    }
}
In this example, requests to /app/ will be forwarded to either backend1.example.com or
backend2.example.com based on the load balancing method configured in the upstream block.
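The load-balancing method and per-server weights can be set inside the upstream block. A sketch (method and weight values are illustrative assumptions):

```nginx
upstream backend {
    least_conn;                            # pick the server with the fewest active connections
    server backend1.example.com weight=3;  # receives roughly 3x the traffic of the next server
    server backend2.example.com;
}
```

With no method specified, Nginx defaults to round-robin.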
SSL vs TLS
SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are cryptographic protocols designed
to provide secure communication over a network. TLS is the successor to SSL, and while the term "SSL" is
still often used, most modern systems use TLS. They encrypt data exchanged between a client and a server,
ensuring confidentiality and integrity.
Create cert using mkcert and certbot: These are tools used to obtain and manage SSL/TLS certificates.
mkcert: A simple tool for creating locally trusted development certificates.
Certbot: A widely used, free, and open-source tool provided by the Electronic Frontier Foundation (EFF)
for automating the process of obtaining and installing Let's Encrypt certificates, which are trusted by most
web browsers.
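Once a certificate exists, it is wired into a server block with the ssl_* directives. A sketch, assuming the certificate paths follow Certbot's usual Let's Encrypt layout:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Paths assume Certbot's default layout for this domain
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;   # disable legacy SSL and early TLS versions
}
```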
HTTP Headers
HTTP headers are key-value pairs that carry additional information about the HTTP request and response.
They are essential for communication between clients and servers. Source categorizes HTTP headers:
General Header: These headers apply to both request and response messages (e.g., Cache-Control,
Connection).
Request Header: These headers provide information about the client making the request (e.g., User-Agent,
Accept-Language, Host).
Response Header: These headers provide information about the server's response (e.g., Server, Content-Type, Content-Length).
Security Header: These headers help enhance the security of web applications by providing instructions to
the browser (e.g., Strict-Transport-Security, X-Frame-Options, Content-Security-Policy).
Authentication Header: These headers are used for client-server authentication (e.g., Authorization,
WWW-Authenticate). Sources discuss basic authentication.
Caching Header: These headers control how responses are cached by clients and proxies (e.g., Cache-Control, Expires, ETag). Source mentions caching-related directives.

CORS Header (Cross-Origin Resource Sharing): These headers control whether a web page running
under one domain can request resources from another domain (e.g., Access-Control-Allow-Origin).
Proxy Header: These headers provide information when requests and responses pass through proxies (e.g.,
X-Forwarded-For, X-Forwarded-Proto). Source mentions proxy_set_header X-Proxy-Cache
$upstream_cache_status.
Custom Header: Applications can define their own custom headers to exchange specific information.
Nginx Built-in Variable
Nginx provides a rich set of built-in variables that contain information about the server, the request, and
the connection. These variables can be used in Nginx configuration to make it more dynamic and flexible,
for example, when setting headers or in log_format directives. Source mentions using them with headers.
For example, $host and $request_uri were used in the return directive example.
add_header vs. proxy_set_header
Both add_header and proxy_set_header directives are used to manipulate HTTP headers, but they operate
in different contexts:
add_header: This directive adds a header to the HTTP response that Nginx sends directly to the client.
It is typically used within http, server, or location blocks. Source provides examples of using add_header to
set security headers like Strict-Transport-Security, X-Frame-Options, Content-Security-Policy, and
Referrer-Policy.
Note: the index directive in source (index index.html index.htm index.nginx-debian.html;)
specifies the default files to serve if a directory is requested.
proxy_set_header: This directive sets or modifies a header that Nginx sends to a backend server when
acting as a reverse proxy. It is typically used within location blocks that are configured with proxy_pass.
Source shows an example: proxy_set_header X-Proxy-Cache $upstream_cache_status. This header informs
the backend server about the cache status of the request.
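To contrast the two directives in one place, here is an illustrative sketch (the backend address and header values are assumptions):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;

    # Sent upstream to the backend server:
    proxy_set_header X-Proxy-Cache $upstream_cache_status;

    # Sent downstream to the client in the response:
    add_header X-Frame-Options "SAMEORIGIN" always;
}
```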
Nginx Basic Authentication
Nginx allows you to implement basic HTTP authentication to restrict access to certain parts of your
website. It's important to note, as mentioned in source, that basic authentication is not recommended for
external access websites due to its lack of strong security. HTTPS should always be used in conjunction
with basic authentication.
There are two main options for setting up basic authentication:
Option 1: htpasswd utility: This utility (usually provided by Apache HTTP Server utils) is used to create
and manage password files in a specific format that Nginx can understand. Source shows the command
sudo htpasswd -c /etc/nginx/conf.d/.htpasswd admin, which creates a new password file (-c) at the
specified path and adds the user admin. You will be prompted to enter a password for this user. The
configuration in /etc/nginx/conf.d/ (or within a server or location block in nginx.conf or a linked site
configuration) would then use these credentials:
auth_basic "Restricted Content";
auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
auth_basic "Restricted Content"; sets the authentication realm (the message displayed in the login dialog),
and auth_basic_user_file /etc/nginx/conf.d/.htpasswd; specifies the path to the password file.
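Wrapped in a location block, protecting a single path might look like this sketch (the /admin path is an assumption):

```nginx
location /admin {
    auth_basic "Restricted Content";                    # realm shown in the login dialog
    auth_basic_user_file /etc/nginx/conf.d/.htpasswd;   # file created with htpasswd -c
}
```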
Option 2: openssl utility: You can also use openssl commands to create password hashes that Nginx can
use. The specific commands and file format would need to be configured appropriately in Nginx.
Blocking Traffic
Nginx provides ways to block unwanted traffic based on IP addresses, bots, or network traffic:
access module (ngx_http_access_module): This Nginx module allows you to allow or deny access based on client IP
addresses or IP address ranges. You can use the allow and deny directives within http, server, or location
blocks. For example:
location /admin {
    allow 192.168.1.0/24;  # Allow access from this IP range
    deny all;              # Deny access from all other IPs
}
fail2ban: This is a separate intrusion prevention software framework that can monitor log files (like
Nginx access or error logs) for suspicious activity, such as repeated authentication failures, bad bots, or
excessive requests, and automatically block the offending IP addresses by updating firewall rules.
sudo fail2ban-client status nginx-http-auth: This command checks the status of the nginx-http-auth
jail in Fail2ban, which is likely configured to monitor Nginx logs for authentication failures.
/etc/fail2ban/jail.local: This is a configuration file for Fail2ban where you can define and enable jails for
different services, including Nginx.
/etc/fail2ban/filter.d/: This directory contains filter definitions used by Fail2ban to identify patterns of
malicious activity in log files. You can find nginx-http-auth.conf (or similar) here, as well as other filters.
fail2ban-client set <jail> unbanip your_ip: If an IP address has been blocked by Fail2ban and you need to
unblock it, you can use this command, replacing <jail> with the jail name (e.g., nginx-http-auth) and
your_ip with the actual IP address. Recent Fail2ban versions also accept fail2ban-client unban your_ip.
Performance
Optimizing Nginx performance is crucial for handling high traffic and providing a good user experience:
Rate Limiting: This technique is used to control the number of requests a client can make within a
specific time period. This can help protect your server from denial-of-service (DoS) attacks and prevent
abusive usage.
Request Rate Limiting: Limits the number of HTTP requests. Source shows an example:

limit_req_zone $binary_remote_addr zone=limit_per_ip:10m rate=10r/s;
limit_req_status 429;

server {
    location /api/ {
        limit_req zone=limit_per_ip;
        limit_req_status 429;  # Set the status code for rejected requests
        # ...
    }
}
limit_req_zone defines a shared memory zone (limit_per_ip of 10MB) to store the state of request rates for
each IP address ($binary_remote_addr). The rate=10r/s specifies a limit of 10 requests per second.
limit_req zone=limit_per_ip; applies this limit to the /api/ location. limit_req_status 429; sets the "Too
Many Requests" status code for when the limit is exceeded.
Connection Rate Limiting: Limits the number of concurrent connections from a single IP address. Source
provides an example:
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

server {
    location / {
        limit_conn conn_per_ip 10;
        try_files $uri $uri/ =404;
    }
}
limit_conn_zone defines a shared memory zone (conn_per_ip of 10MB) to track the number of connections
per IP. limit_conn conn_per_ip 10; limits the number of concurrent connections from a single IP to 10 for
the / location.
Apache Benchmark (ab): This is a command-line tool used for benchmarking HTTP servers. The
example ab -n 1000 https://example.com/ sends 1000 requests (-n 1000) to the specified URL to test its
performance.
Compression: Nginx can compress responses before sending them to clients, reducing bandwidth usage
and improving load times. Two methods are available:
Method 1: gzip: This is the standard and widely supported compression method in Nginx. You can
configure gzip compression using directives like gzip on;, gzip_types text/plain application/xml ...;, and
gzip_comp_level.
Method 2: brotli: If supported (e.g., with the ngx_brotli module or in Nginx Plus), brotli can offer better
compression ratios than gzip.
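A minimal gzip setup for the http block might look like this sketch (the level, size threshold, and type list are common choices, not from the source):

```nginx
gzip on;
gzip_comp_level 5;     # 1 (fastest) to 9 (best compression); 5 is a common balance
gzip_min_length 256;   # skip very small responses where compression adds overhead
gzip_types text/plain text/css application/json application/javascript application/xml;
```

Note that text/html is always compressed when gzip is on and need not be listed in gzip_types.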
Keepalive: HTTP keepalive (or persistent connections) allows multiple HTTP requests and responses
to be sent over the same TCP connection, reducing the overhead of establishing new connections for each
request. HTTP/1.1 uses keepalive by default.
HTTP versions (and why to prefer HTTP/1.1 over HTTP/1.0): HTTP/1.1 is the widely
used version that supports keepalive by default. You generally don't need to explicitly "configure"
HTTP/1.1 in Nginx unless you need to restrict to an older version for specific reasons (which is rare).
HTTP/1.0, by default, did not have keepalive, requiring a Connection: keep-alive header for persistent
connections. HTTP/1.1 offers performance advantages due to keepalive and other improvements.
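Keepalive can be tuned on both the client and upstream sides. A sketch (all values and the backend address are illustrative assumptions):

```nginx
http {
    keepalive_timeout 65s;      # how long to keep idle client connections open
    keepalive_requests 1000;    # requests allowed per client connection

    upstream backend {
        server 127.0.0.1:8080;
        keepalive 32;           # pool of idle keepalive connections to the backend
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
            proxy_set_header Connection "";  # clear Connection so it isn't "close"
        }
    }
}
```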
sendfile: This is a Linux kernel feature that allows the operating system to efficiently copy data from a
file directly to a socket without the need for the data to be copied into user-space memory first. Enabling
sendfile on; in Nginx can improve performance when serving static files.
tcp_nopush: This TCP option, when enabled (tcp_nopush on; in Nginx), delays sending small packets of
data, waiting until a full-sized packet is ready or a certain timeout occurs. This can reduce network
congestion and improve performance in some cases.
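These static-file optimizations are usually enabled together in the http block; a common illustrative combination:

```nginx
sendfile on;      # kernel copies file data straight to the socket, skipping user space
tcp_nopush on;    # with sendfile, send full packets rather than partial ones
tcp_nodelay on;   # don't delay small writes on keepalive connections
```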
Monitoring and Troubleshooting
Effective monitoring and troubleshooting are essential for maintaining a healthy Nginx server:
Logs and log format options: Nginx generates access logs (recording details of client requests) and error
logs (recording any issues encountered). You can configure the format of these logs using the log_format
directive in the http block of your nginx.conf file. This allows you to customize the information recorded in
the logs. Directives like access_log and error_log specify the paths and formats of these log files. Regularly
reviewing these logs is crucial for identifying and resolving issues.
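A log_format definition and its use might be sketched as follows (the field selection mirrors the common "combined" style; paths are typical defaults, not from the source):

```nginx
# Define a named format in the http block, then reference it in access_log
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" "$http_user_agent"';

access_log /var/log/nginx/access.log main;
error_log  /var/log/nginx/error.log warn;   # log warnings and above
```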