Getting Started with Nginx

Introduction:

Nginx is a lightweight, highly extensible open-source HTTP server. It is estimated to serve a large share of websites worldwide, with particularly high adoption among large-scale web services. This is related to the fact that nginx was created to solve the C10K problem: handling 10,000 simultaneous client connections on a single server, which traditional process-per-connection servers could not keep up with. By combining techniques such as event-driven processing, I/O multiplexing, non-blocking I/O, and asynchronous I/O, its architecture withstands the C10K problem robustly.

Nginx basic design

Directives are the basic building blocks of the configuration file.
There are three main ways to write directives: a type that specifies a single parameter, a type that specifies multiple parameters, and a directive with a block. The contents of a block directive are called a context.

  • Directive with a single parameter:
worker_processes 1;
  • Directive with multiple parameters:
error_log /var/log/nginx/error.log error;
  • Directive with a block:
server {
    server_name some-website.example.com;
    root /var/www/html;
}
  • Virtual server (server directive): nginx can run multiple HTTP servers with different settings per IP address, port, and host name (specifically, the Host header of the HTTP request).
    These are called virtual servers and are defined with the server directive, that is, as a server context inside the http context.
    For example, you can serve www.some-website.example.com and www.some-website.example2.com from the same nginx instance using separate server blocks.
  • Public directory (root directive): the directory path specified by the root directive is mapped to the root (/) of the URI space.
  • Access log output (log_format directive, access_log directive): log_format defines the log format, and access_log specifies the output destination (and which named format to use). The predefined combined format that nginx provides is convenient for typical use.
  • Error log output (error_log directive): the first parameter is the output destination and the second is the error level; messages at or above the specified level are written.
  • Error control for missing files (log_not_found directive): by default, an error-level message is logged when a requested file does not exist. On servers that deliver large numbers of files, requests for
    non-existent files may be expected and harmless; in that case you can turn this logging off for the relevant locations.
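The directives above can be combined into a minimal configuration sketch; the hostnames, document-root paths, and log locations are illustrative assumptions:

```nginx
http {
    # Define a named log format and use it for the access log.
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer"';
    access_log  /var/log/nginx/access.log  main;
    error_log   /var/log/nginx/error.log   warn;

    # First virtual server, selected by the Host header.
    server {
        listen      80;
        server_name www.some-website.example.com;
        root        /var/www/site1;

        # Suppress error-log noise for an expected missing file.
        location = /favicon.ico {
            log_not_found off;
        }
    }

    # Second virtual server on the same address and port.
    server {
        listen      80;
        server_name www.some-website.example2.com;
        root        /var/www/site2;
    }
}
```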

Design related to process operation

  • pid directive: specifies where the PID file of the nginx master process is stored.
  • user directive: specifies the user that runs the worker processes.
  • worker_processes directive: specifies the number of worker processes. Matching the number of CPU cores is generally appropriate.
    By specifying auto, one worker process per CPU core is started automatically.
  • worker_rlimit_nofile directive: the number of file descriptors a process can open is limited by the OS (nofile), but
    it can also be set on the nginx side. If you set it, keep it at or below the OS limit.
  • events directive: configures the workers' event-driven connection handling. It cannot be omitted. It takes a block, and the following directives are set inside it.
  • worker_connections directive: specifies the number of connections each worker can handle. If connections run short, "worker_connections are not enough" is written to the error log.
  • use directive: specifies the connection-processing method to use (e.g. epoll), but nginx automatically selects the one best suited to the system, so it normally does not need to be specified.

Example:

pid /run/nginx.pid;
user nginx;
worker_processes auto;
worker_rlimit_nofile 512;
events {
    worker_connections 1200;
}

Performance-related design in Nginx

  • keepalive_timeout directive: how long an idle keep-alive connection from a client is kept open before nginx closes it.
  • sendfile directive: uses the OS sendfile() system call to read the file and send the response. The file contents are sent to the client directly from the open file descriptor in kernel space, so files can be delivered efficiently without copying through user space.
  • tcp_nopush directive: effective only when sendfile is enabled; on Linux it uses the TCP_CORK socket option. It lets nginx send the response headers and the beginning of the file in full-sized packets, minimizing the number of packets sent. It is generally worth enabling.
  • open_file_cache directive: when enabled, nginx caches information about opened files (file descriptor, size, modification time) for a certain period. It is worth setting in environments where files are opened and closed frequently.
  • open_file_cache_errors directive: when open_file_cache is enabled, controls whether file-lookup errors are cached as well.

Example:

keepalive_timeout 90s;
sendfile on;
tcp_nopush on;
open_file_cache max=1200 inactive=90;
open_file_cache_errors on;

Static website construction in Nginx

Publish static content

  • location directive: individual settings can be defined for each specified URI prefix or pattern. For example, if the URI matches /sites, a different document root can be used. The matching mechanism is as follows:

Among matching prefix locations, the most specific one (longest matching: the one matching the greater number of characters) is prioritized.

When a location with the ^~ modifier (location ^~ /AAAAA) matches, regular-expression locations are not evaluated afterwards. Locations of otherwise equal priority (regular-expression locations) are evaluated in order from the top, and the first match wins.

  • index directive: defines the page served when a directory URI is accessed. For example, with index main.html index.html; a directory request serves main.html if it exists, otherwise index.html.
  • error_page directive: defines the web page served when an error occurs, in the form error_page <status code> <URI>. For example, error_page 404 /404.html serves that page when the HTTP status code is 404 (resource does not exist); the status code returned to the client remains 404.

Example:
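A minimal sketch combining location, index, and error_page; the /sites path, document roots, and the 404.html page name are assumptions:

```nginx
server {
    listen      80;
    server_name www.some-website.example.com;
    root        /var/www/html;

    # Serve main.html for directory requests, falling back to index.html.
    index       main.html index.html;

    # Longest-prefix match: requests under /sites use a different root.
    location /sites {
        # The URI is appended to root: /sites/a.html -> /var/www/sites/sites/a.html
        root /var/www/sites;
    }

    # Show a custom page for 404 while keeping the 404 status code.
    error_page 404 /404.html;
}
```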

Nginx Access control

  • allow, deny directives: control access by source IP address. allow lists permitted IP addresses or CIDR ranges; deny lists rejected ones. Rules are evaluated in order, so a whitelist (allow specific sources first, then deny all) permits access only from the listed sources, while a blacklist (deny specific sources first, then allow all) rejects only the listed sources.
  • auth_basic, auth_basic_user_file directives: define Basic authentication, the simplest password-based method. Almost all browsers support it, so it can be used without building a login screen.

auth_basic <authentication realm name> (optional);
auth_basic_user_file <password file path>;

Password file format:
username:password
username:password:comment
The password field is a hashed password, e.g. the output of openssl passwd [password].
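As a sketch, a password file for auth_basic_user_file can be generated with openssl; the user name alice, the example password, and the file path are assumptions:

```shell
# Hash the password with Apache's apr1 (MD5-based) scheme, which nginx accepts.
HASH=$(openssl passwd -apr1 'secretpassword')

# Write a "username:hashed-password" line to the password file.
printf 'alice:%s\n' "$HASH" > /tmp/nginx_passwd
cat /tmp/nginx_passwd
```

Point auth_basic_user_file at the generated file; each additional user gets one more line in the same format.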

  • limit_conn_zone, limit_conn directives: limit the maximum number of simultaneous connections, either per key (such as a specific client IP address) or for the nginx server as a whole.
    In limit_conn_zone, define the key to count, the zone name, and the table size.
    In the example below, a 10 MB table keyed by remote address is allocated in shared memory under the zone name addr_limit, and up to 100 connections per remote address are allowed. Beyond 100, a response with HTTP status code 503 (Service Unavailable) is returned.
  • limit_req_zone, limit_req directives: limit the number of requests per unit of time, e.g. per client IP address for that nginx server. In limit_req_zone, define the maximum request rate, specified as requests per second in the form rate=Nr/s. If the burst parameter is given to limit_req, up to that many excess requests are queued.
    Requests that exceed the queue allowance receive a response with HTTP status code 503 (Service Unavailable).

Example:
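A sketch combining the access-control directives above; the IP range, zone sizes, paths, and limits are assumptions:

```nginx
http {
    # Count concurrent connections per client address (10 MB shared table).
    limit_conn_zone $binary_remote_addr zone=addr_limit:10m;
    # Cap the request rate per client address at 10 requests/second.
    limit_req_zone  $binary_remote_addr zone=req_limit:10m rate=10r/s;

    server {
        # Whitelist: only 192.168.0.0/24 may reach /whitelist.
        location /whitelist {
            allow 192.168.0.0/24;
            deny  all;
        }

        # Basic authentication for a protected area.
        location /secret {
            auth_basic           "restricted";
            auth_basic_user_file /etc/nginx/.htpasswd;
        }

        # At most 100 concurrent connections per address; up to 20
        # excess requests are queued before returning 503.
        limit_conn addr_limit 100;
        limit_req  zone=req_limit burst=20;
    }
}
```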

Communication security in Nginx

Configure nginx to use HTTPS.

HTTPS is HTTP carried over the encrypted channel provided by TLS. Using HTTPS not only encrypts the communication to prevent eavesdropping and tampering, but also prevents spoofing by authenticating that the server is the correct one.

Example of encryption flow:

https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/

To enable TLS, add ssl to the listen directive. Also, specify the SSL certificate with the ssl_certificate directive and the SSL private key with the ssl_certificate_key directive.
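A minimal sketch of enabling TLS; the hostname and the certificate and key paths are assumptions:

```nginx
server {
    # "ssl" on the listen directive enables TLS on this port.
    listen 443 ssl;
    server_name www.some-website.example.com;

    ssl_certificate     /etc/nginx/tls/server.crt;  # server certificate (with chain)
    ssl_certificate_key /etc/nginx/tls/server.key;  # matching private key
}
```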

HTTPS communication

Other SSL/TLS-related parameters worth tuning are listed below.

  • Protocol versions: since vulnerabilities have been found in SSL and in TLS 1.0/1.1 (which many websites still allowed), TLS 1.2 or later is currently appropriate (ssl_protocols directive).
  • Cipher suites: HTTPS combines multiple elements, such as key exchange, server authentication, symmetric encryption, and message authentication codes, and an algorithm can be chosen for each. The security of HTTPS varies greatly depending on which cipher suites are allowed.

ssl_ciphers <cipher suite list>; (listed in order of priority; suites to be refused are listed first, prefixed with !)

  • Server cipher-suite preference: in TLS, the cipher suite to use is negotiated between client and server, but by default the client's preference order wins. Prefer the server's order (ssl_prefer_server_ciphers on) to avoid a client choosing a weak suite.
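The protocol and cipher-suite points above might look like this; the exact cipher list is an assumption and should be chosen for your clients:

```nginx
# Allow only TLS 1.2 (and 1.3 where the nginx/OpenSSL build supports it).
ssl_protocols TLSv1.2 TLSv1.3;

# Exclusions (!) first, then the suites to offer, in priority order.
ssl_ciphers   '!aNULL:!MD5:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';

# Use the server's preference order rather than the client's.
ssl_prefer_server_ciphers on;
```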

Speeding up HTTPS communication by minimizing TTFB

  1. HTTP/2: formulated as a new version of HTTP, it shortens TTFB through techniques such as multiplexing many streams over a single connection and header compression. HTTP/2 is available with nginx 1.9.5 and above. To use it with nginx, add http2 to the listen directive.
  2. Speedup by TLS session resumption (session cache): during the TLS handshake, the client and server exchange a session ID. The session cache uses the session ID as a key to cache session information on the server side, so the full TLS handshake can be skipped on the next connection.
    With the OpenSSL built-in cache (builtin), the cache is separate for each worker, but using the shared-memory cache (shared) lets all workers share one session cache. The cache size can also be specified.
  3. Optimization of buffer size: for TLS communication, nginx buffers the response and encrypts it in fixed-size chunks.
    By default the buffer size is 16k, which suits relatively large responses, but for standard web pages TTFB can be reduced by making it smaller.
  4. OCSP stapling: when connecting to a server, the client must check the validity of the server certificate using a certificate revocation list (CRL) or the Online Certificate Status Protocol (OCSP). The problem with CRLs is that the list is huge and takes a very long time to download. OCSP is lightweight, retrieving only a single record, but as a side effect the client must query an OCSP responder when connecting to the server. OCSP stapling solves this by letting the server send a cached OCSP record during the TLS handshake, bypassing the responder and eliminating the round trip between the client and the OCSP responder. With OCSP stapling, the server, rather than the client, makes the OCSP request and caches the response; the cached OCSP response is then sent to the client along with the server certificate.
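The four tuning points can be sketched together; the sizes, timeouts, file paths, and the resolver address are assumptions:

```nginx
server {
    # 1. HTTP/2 (nginx 1.9.5+): add http2 to the listen directive.
    listen 443 ssl http2;

    ssl_certificate     /etc/nginx/tls/server.crt;
    ssl_certificate_key /etc/nginx/tls/server.key;

    # 2. Share the TLS session cache across all workers
    #    (roughly 4000 sessions fit per megabyte).
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    # 3. A smaller TLS record buffer lowers TTFB for typical web pages
    #    (the default 16k is sized for large responses).
    ssl_buffer_size 4k;

    # 4. OCSP stapling: nginx fetches and caches the OCSP response
    #    and sends it to clients during the handshake.
    ssl_stapling        on;
    ssl_stapling_verify on;
    resolver 192.0.2.53;  # DNS resolver used to reach the OCSP responder
}
```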
