Migrating From NGINX to Envoy Proxy

Maciej
7 min read · Sep 11, 2020


Introduction

In this article I would like to touch on the topic of migrating an NGINX configuration to Envoy. My Kubernetes cluster, currently a dev environment, uses the NGINX Ingress Controller, and I wanted to switch to Envoy if it offered advantages over NGINX and the migration was straightforward.

Katacoda provides a simple scenario for this:

Migrating from NGINX to Envoy

This scenario is intended to support the migration from NGINX to Envoy. This will help you apply your previous experience and understanding of NGINX to Envoy.

We will learn how to:

  • Configure the Envoy server and its settings
  • Configure Envoy to proxy traffic to external services
  • Set up access logs and error logs

At the end of the scenario, you’ll understand the core features of Envoy and how to migrate your existing NGINX configuration to the platform.

NGINX Example Configuration

NGINX configuration usually has three main components.

  1. NGINX server, logging structure, Gzip feature configuration. It is defined globally across all instances.
  2. Configure NGINX to accept requests from the one.example.com host on port 8080.
  3. Configure the target location for how to handle traffic to different parts of the URL.

Not every NGINX setting has a direct Envoy equivalent, and some do not need to be configured at all. Envoy Proxy has four main components that cover the core infrastructure NGINX provides.

  • Listeners: Define how Envoy Proxy accepts incoming requests. Currently, Envoy Proxy only supports TCP-based listeners. Once a connection is established, it is passed through a set of filters for processing.
  • Filters: Part of a pipeline architecture that processes inbound and outbound data. Filters such as Gzip compress data before sending it to the client.
  • Routers: Forward traffic to the required destination, defined as a cluster.
  • Clusters: Define the target endpoints and configuration settings for traffic.

Use these four components to create an Envoy proxy configuration that matches the defined NGINX configuration.
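To see how the four components nest, here is a minimal skeleton of an Envoy configuration. This is only a sketch with placeholder values; the concrete settings are filled in section by section later in the article.

```yaml
static_resources:
  listeners:                 # Listeners: how Envoy accepts incoming requests
  - address: { ... }
    filter_chains:           # Filters: the processing pipeline for each connection
    - filters:
      - name: envoy.http_connection_manager
        config:
          route_config: { ... }   # Routers: forward traffic to a cluster
  clusters:                  # Clusters: the target endpoints for traffic
  - name: targetCluster
```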

NGINX Configuration Explanation

Worker Connections

The following settings focus on defining the number of worker processes and connections. This shows how NGINX scales to handle demand.

worker_processes  2;

events {
  worker_connections 2000;
}

Envoy spawns worker threads for every hardware thread in the system. Each worker thread runs a non-blocking event loop.

  1. Listens for incoming connections on all listeners
  2. Accepts new connections
  3. Instantiates the filter stack for each connection
  4. Processes all I/O for the lifetime of the connection

All subsequent connection processing, including forwarding operations, is completely handled within the worker thread.

All Envoy connection pools are per worker thread. The HTTP/2 connection pool creates only one connection to each upstream host at a time, but if you have four workers, there are four HTTP/2 connections per upstream host in steady state. By keeping everything in a single worker thread, you can write almost any code as if it were a single thread, with no locks. Unnecessarily large numbers of workers waste memory, increase idle connections, and reduce connection pool hit rates.
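The worker count is therefore controlled at launch rather than in the configuration file. The rough equivalent of `worker_processes 2` is Envoy’s `--concurrency` command-line flag; note that whether extra arguments need to repeat the full envoy invocation depends on the Docker image’s entrypoint, so treat this as a sketch:

```shell
$ docker run --name envoyproxy -p 80:8080 \
    -v /root/envoy.yaml:/etc/envoy/envoy.yaml \
    envoyproxy/envoy envoy -c /etc/envoy/envoy.yaml --concurrency 2
```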

HTTP Configuration

The next block of NGINX settings defines HTTP settings such as:

  • Supported MIME types
  • Default timeout
  • Gzip settings

These are set within the Filters component within the Envoy Proxy, which we will discuss later.

Server Configuration:

Within the http block of the NGINX configuration below, the server listens on port 8080 and responds to requests for the domains one.example.com and www.one.example.com.

server {
  listen 8080;
  server_name one.example.com www.one.example.com;

Envoy sets these with the listeners component.

Envoy Listeners

The most important setting when starting Envoy is to define a listener. You need to create a configuration file that describes how to run your Envoy instance.

The following snippet creates a new listener and binds it to port 8080.

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }

In Envoy, the NGINX server_name directive is handled by the filters component, so it does not need to be defined here.

Configuration Location

NGINX location blocks define how traffic is handled and where it is forwarded. With the settings below, all traffic to the site (/) is proxied to http://targetCluster/.

location / {
  proxy_pass http://targetCluster/;
  proxy_redirect off;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
}

In Envoy, this is set in the filters component.

Envoy Filters

For static settings, filters define how requests are handled. Here we match the server_names from the previous step. When a request matching the defined domains is received, the route is evaluated and the traffic is forwarded to the cluster.

filter_chains:
- filters:
  - name: envoy.http_connection_manager
    config:
      codec_type: auto
      stat_prefix: ingress_http
      route_config:
        name: local_route
        virtual_hosts:
        - name: backend
          domains:
          - "sample.mars.com"
          - "www.sample.mars.com"
          routes:
          - match:
              prefix: "/"
            route:
              cluster: targetCluster
      http_filters:
      - name: envoy.router

As the name implies, http_connection_manager is the filter that handles HTTP traffic. Other filters exist for protocols such as Redis, Mongo, and TCP. For more information, including other load balancing policies, please see the Envoy documentation.
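Routes are not limited to a single prefix. As an illustration (the /api prefix and the api_cluster name here are hypothetical, not part of the scenario), traffic could be split across clusters by listing multiple matches; routes are evaluated in order, so the more specific prefix should come first:

```yaml
routes:
- match:
    prefix: "/api"          # hypothetical path prefix
  route:
    cluster: api_cluster    # hypothetical second cluster
- match:
    prefix: "/"
  route:
    cluster: targetCluster
```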

Proxy and Upstream Configuration

Upstream configuration in NGINX defines the set of target servers that handle traffic. Here, two servers are assigned to the cluster:

upstream targetCluster {
  server 172.22.0.5:80;
  server 172.22.0.6:80;
}

In Envoy, this is set by the clusters component.

Envoy Clusters

The upstream equivalent is defined as clusters. Here, the hosts that handle the traffic are defined, along with how those hosts are accessed. This gives you more control over aspects such as timeouts and load balancing.

clusters:
- name: targetCluster
  connect_timeout: 0.25s
  type: STRICT_DNS
  dns_lookup_family: V4_ONLY
  lb_policy: ROUND_ROBIN
  hosts: [
    { socket_address: { address: 172.22.0.5, port_value: 80 }},
    { socket_address: { address: 172.22.0.6, port_value: 80 }}
  ]

When using STRICT_DNS service discovery, Envoy continuously and asynchronously resolves the specified DNS targets. Each IP address returned in the DNS results is considered an explicit host in the upstream cluster. So if a query returns two IP addresses, Envoy assumes there are two hosts in the cluster and balances the load across both. When a host is removed from the results, Envoy considers the host non-existent and drains traffic from the existing connection pool.
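If DNS resolution is not needed, e.g. when the upstream addresses are fixed IPs as in this example, the cluster type could instead be STATIC. This is a sketch using the same v2-era configuration syntax as the snippet above:

```yaml
clusters:
- name: targetCluster
  connect_timeout: 0.25s
  type: STATIC              # no DNS resolution; addresses are used as given
  lb_policy: ROUND_ROBIN
  hosts: [
    { socket_address: { address: 172.22.0.5, port_value: 80 }},
    { socket_address: { address: 172.22.0.6, port_value: 80 }}
  ]
```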

Logging Access and Errors

  • Error log: Instead of piping error logs to disk, Envoy follows the cloud-native approach: all application logs are output to stdout and stderr.
  • Access log: Access logging is optional and disabled by default. To enable it, set an access_log clause within envoy.http_connection_manager. The output path can be a device such as stdout, or a file on disk, depending on your requirements. In the following example, all access logs are output to stdout. The default log format is as follows.
- name: envoy.http_connection_manager
  config:
    codec_type: auto
    stat_prefix: ingress_http
    access_log:
    - name: envoy.file_access_log
      config:
        path: "/dev/stdout"
    route_config:

Example log format:

[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%"
%RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION%
%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%"
"%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"\n

Example output:

[2020-09-10T20:37:02.221Z] "GET / HTTP/1.1" 200 - 0 58 4 1 "-" "curl/7.47.0" "f21ebd42-6770-4aa5-88d4-e56118165a7d" "one.example.com" "172.22.0.5:80"
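Because the default format is space-delimited, the log lines are easy to slice with standard tools. A small sketch extracting the response code (field 5) and request duration (field 9) from the sample line above:

```shell
# The sample access-log line shown above, in Envoy's default format.
line='[2020-09-10T20:37:02.221Z] "GET / HTTP/1.1" 200 - 0 58 4 1 "-" "curl/7.47.0" "f21ebd42-6770-4aa5-88d4-e56118165a7d" "one.example.com" "172.22.0.5:80"'

# Counting whitespace-separated fields: field 5 is %RESPONSE_CODE%,
# field 9 is %DURATION% (milliseconds).
echo "$line" | awk '{print $5, $9}'
# prints: 200 4
```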

You can change the output by customizing the fields.

access_log:
- name: envoy.file_access_log
  config:
    path: "/dev/stdout"
    format: '[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% "%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"\n'

You can also output the log as JSON by using the json_format field.

access_log:
- name: envoy.file_access_log
  config:
    path: "/dev/stdout"
    json_format: {"protocol": "%PROTOCOL%", "duration": "%DURATION%", "request_method": "%REQ(:METHOD)%"}

Launch Envoy

Combining the Envoy settings covered so far gives the complete configuration.
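Assembled from the snippets above (with the access_log section from the logging step included, and indentation reconstructed), the full envoy.yaml looks like this:

```yaml
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          access_log:
          - name: envoy.file_access_log
            config:
              path: "/dev/stdout"
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "sample.mars.com"
              - "www.sample.mars.com"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: targetCluster
          http_filters:
          - name: envoy.router
  clusters:
  - name: targetCluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [
      { socket_address: { address: 172.22.0.5, port_value: 80 }},
      { socket_address: { address: 172.22.0.6, port_value: 80 }}
    ]
```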

Start Envoy based on this setting.

Envoy Run As User

The line user www www; at the top of the NGINX configuration file indicates that NGINX should be run as a low-privileged user for added security.

Envoy uses a cloud-native approach to manage process owners. You can specify a low privileged user when launching Envoy through the container.

Starting Envoy Proxy

The following command launches Envoy in a Docker container on the host and exposes it to listen for requests on port 80. Envoy itself listens on port 8080, as specified by the listener. The --user option runs the process as a low-privileged user.

$ docker run --name envoyproxy -p 80:8080 --user 1000:1000 -v /root/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy

Start testing

The following curl command makes a request using the host header defined in envoy.yaml.

$ curl -H "Host: sample.mars.com" localhost -i
HTTP/1.1 503 Service Unavailable
content-length: 57
content-type: text/plain
date: Thu, 10 Sep 2020 18:38:44 GMT
server: envoy
upstream connect error or disconnect/reset before headers

The result of this request is a 503 error. This is simply because the upstream services are not running and cannot be reached; from Envoy’s perspective, no target destinations are available for the request.
Let’s launch a pair of HTTP services that match the defined configuration.

$ docker run -d katacoda/docker-http-server
$ docker run -d katacoda/docker-http-server

Envoy can now successfully proxy traffic to the target destination using the available services.

$ curl -H "Host: sample.mars.com" localhost -i
HTTP/1.1 200 OK
date: Thu, 10 Sep 2020 18:43:14 GMT
content-length: 58
content-type: text/html; charset=utf-8
x-envoy-upstream-service-time: 0
server: envoy
<h1>This request was processed by host: 792123c0e13f</h1>
$

You will see a response indicating which Docker container processed the request.

Additional HTTP Headers

An additional HTTP header appears in the response headers of a successful request. The header shows the time in milliseconds the upstream host spent processing the request.

x-envoy-upstream-service-time: 0
server: envoy


Maciej

DevOps Consultant. I’m strongly focused on automation, security, and reliability.