Introduction
I'm currently running Kubernetes in my development environment, but I haven't created a service mesh because the number of services is still small. It seems better to consider a service mesh once the number of services grows.
Looking at services for building a service mesh, there are Istio, AWS App Mesh, and so on, but these are control planes for the Envoy proxy, and it is Envoy that actually controls the communication. For that reason, I would like to study the basics of what Envoy can do.
The Envoy site has learning content called Try Envoy, so I would like to continue studying with it. Try Envoy embeds browser-based Katacoda learning content.
Getting Started with Envoy
1. Create Proxy Config
Envoy is configured using a YAML definition file to control the proxy's behavior. In this step, you configure it using the static configuration API, which means that all settings are predefined in the definition file.
Envoy also supports dynamic configuration, which allows settings to be discovered via an external source.
Resources
The first line of the config defines which configuration API is used. This time we want to use the static API, so the first line must be static_resources.
static_resources:
Listeners
Next, define the listener. Listeners are network-related settings, such as the IP address and port on which Envoy listens for requests. Envoy runs inside a Docker container, so it needs to listen on the IP address 0.0.0.0. In the following configuration, Envoy listens on port 10000.
listeners:
- name: listener_0
address:
socket_address: { address: 0.0.0.0, port_value: 10000 }
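As a conceptual aside (plain Python sockets, nothing Envoy-specific), binding to 0.0.0.0 means "listen on all network interfaces", which is why it's needed inside a Docker container:

```python
import socket

# Conceptual sketch: binding to 0.0.0.0 accepts connections on all of the
# container's network interfaces. Port 0 asks the OS for any free port;
# Envoy's config above pins port 10000 instead.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("0.0.0.0", 0))
s.listen()
addr, port = s.getsockname()
print(addr, port)  # 0.0.0.0 plus the OS-assigned port
s.close()
```

Binding to 127.0.0.1 instead would only accept connections from inside the container itself, so the published Docker port would not work.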
Filter Chains and Filters
Next, define how to handle the request. Each listener has a set of filters, and each listener can have a different set of filters.
In this example, we proxy all traffic to google.com. So when you request the Envoy endpoint, you should see the Google homepage while the URL stays at the Envoy endpoint. (A reverse proxy rather than a redirect.)
Filters are defined using filter_chains. The purpose of each filter is to find a match for the incoming request and route it to the target destination.
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains: ["*"]
routes:
- match: { prefix: "/" }
route: { host_rewrite: www.google.com, cluster: service_google }
http_filters:
- name: envoy.router
The filter used is envoy.http_connection_manager, a built-in filter designed for HTTP connections.
Clusters
If the request matches a filter, it is passed to a cluster.
The cluster shown below defines the host as google.com, running over HTTPS. If multiple hosts are defined, Envoy applies a round-robin strategy.
clusters:
- name: service_google
connect_timeout: 0.25s
type: LOGICAL_DNS
dns_lookup_family: V4_ONLY
lb_policy: ROUND_ROBIN
hosts: [{ socket_address: { address: google.com, port_value: 443 }}]
tls_context: { sni: www.google.com }
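The round-robin behavior can be sketched in a few lines of Python (a hypothetical illustration with made-up host names, not Envoy's actual implementation): the load balancer simply hands out the defined hosts in rotation.

```python
from itertools import cycle

# Hypothetical host list, standing in for multiple hosts in a cluster.
hosts = ["host-a:443", "host-b:443", "host-c:443"]

# Round robin: each new request gets the next host in the rotation.
picker = cycle(hosts)

selections = [next(picker) for _ in range(6)]
print(selections)
# → ['host-a:443', 'host-b:443', 'host-c:443',
#    'host-a:443', 'host-b:443', 'host-c:443']
```

With only one host defined, as in the google.com cluster above, every request naturally goes to that single host.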
Admin
Finally we need an admin section. The administration section is described in detail in the next step.
admin:
access_log_path: /tmp/admin_access.log
address:
socket_address: { address: 0.0.0.0, port_value: 9901 }
This structure defines the boilerplate for Envoy Static Configuration. The listener defines the Envoy port and IP address. The listener has a set of filters that match incoming requests. If the request matches, it will be forwarded to the cluster.
2. Start Proxy
Start Envoy
Start Envoy bound to port 80, using the envoy.yaml created in Create Proxy Config.
docker run --name=proxy -d \
-p 80:10000 \
-v $(pwd)/envoy/envoy.yaml:/etc/envoy/envoy.yaml \
envoyproxy/envoy:latest
View Envoy
If you run curl localhost in the terminal on Katacoda, the source of Google.com is displayed. (Hard to tell at a glance, since it's HTML...)
You can also hit the URL issued by Katacoda from the browser of your own PC. Hit it and you'll see that the request was proxied to Google.com as configured.
3. Admin View
Envoy provides an admin view for inspecting configuration, statistics, logs, and other internal Envoy data. The admin view is defined by adding an additional resource definition that specifies the admin view's port.
The port must not conflict with other listener settings.
admin:
access_log_path: /tmp/admin_access.log
address:
socket_address: { address: 0.0.0.0, port_value: 9901 }
Start Admin
This Docker container exposes its management port to the outside world. The above resource settings expose the admin view to the public. Use it for demonstration purposes only. See the documentation on how to secure the Admin Portal.
To publish the Management Portal, run the following command:
docker run --name=proxy-with-admin -d \
-p 9901:9901 \
-p 10000:10000 \
-v $(pwd)/envoy/envoy.yaml:/etc/envoy/envoy.yaml \
envoyproxy/envoy:latest
In addition to allowing destructive operations (such as shutting down the server), the admin interface in its current form may also expose private information (statistics, cluster names, certificate information, etc.). It is therefore important that access to the admin interface is only allowed over a secure network.
When I opened the administrator view from the browser, the following screen was displayed. You can get various information from here.
4. Route to Docker Containers
The last example uses Envoy to proxy traffic to various Python services based on the requested URL path.
Configuration
The application configuration is defined as a Docker Compose file. We want to run multiple containers at the same time, so Docker Compose is used: one container for the proxy and one for each individual service.
version: '2'
services:
front-envoy:
build:
context: .
dockerfile: Dockerfile-frontenvoy
volumes:
- ./front-envoy.yaml:/etc/front-envoy.yaml
networks:
- envoymesh
expose:
- "80"
- "8001"
ports:
- "8000:80"
- "8001:8001"
service1:
build:
context: .
dockerfile: Dockerfile-service
volumes:
- ./service-envoy.yaml:/etc/service-envoy.yaml
networks:
envoymesh:
aliases:
- service1
environment:
- SERVICE_NAME=1
expose:
- "80"
service2:
build:
context: .
dockerfile: Dockerfile-service
volumes:
- ./service-envoy.yaml:/etc/service-envoy.yaml
networks:
envoymesh:
aliases:
- service2
environment:
- SERVICE_NAME=2
expose:
- "80"
networks:
envoymesh: {}
Application
Each service is a Python web application, with an Envoy inside the container that forwards traffic to the Python application. You don't strictly need to have Envoy in front of your application.
from flask import Flask
from flask import request
import socket
import os
import sys
import requests
app = Flask(__name__)
TRACE_HEADERS_TO_PROPAGATE = [
'X-Ot-Span-Context',
'X-Request-Id',
# Zipkin headers
'X-B3-TraceId',
'X-B3-SpanId',
'X-B3-ParentSpanId',
'X-B3-Sampled',
'X-B3-Flags',
# Jaeger header (for native client)
"uber-trace-id"
]
@app.route('/service/<service_number>')
def hello(service_number):
    return ('Hello from behind Envoy (service {})! hostname: {} resolved '
'hostname: {}\n'.format(os.environ['SERVICE_NAME'],
socket.gethostname(),
socket.gethostbyname(socket.gethostname())))
@app.route('/trace/<service_number>')
def trace(service_number):
headers = {}
# call service 2 from service 1
    if int(os.environ['SERVICE_NAME']) == 1:
for header in TRACE_HEADERS_TO_PROPAGATE:
if header in request.headers:
headers[header] = request.headers[header]
ret = requests.get("http://localhost:9000/trace/2", headers=headers)
    return ('Hello from behind Envoy (service {})! hostname: {} resolved '
'hostname: {}\n'.format(os.environ['SERVICE_NAME'],
socket.gethostname(),
socket.gethostbyname(socket.gethostname())))
if __name__ == "__main__":
app.run(host='127.0.0.1', port=8080, debug=True)
Envoy Frontend Proxy
Envoy proxy settings are as follows.
static_resources:
listeners:
- address:
socket_address:
address: 0.0.0.0
port_value: 80
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: backend
domains:
- "*"
routes:
- match:
prefix: "/service/1"
route:
cluster: service1
- match:
prefix: "/service/2"
route:
cluster: service2
http_filters:
- name: envoy.router
config: {}
clusters:
- name: service1
connect_timeout: 0.25s
type: strict_dns
lb_policy: round_robin
http2_protocol_options: {}
hosts:
- socket_address:
address: service1
port_value: 80
- name: service2
connect_timeout: 0.25s
type: strict_dns
lb_policy: round_robin
http2_protocol_options: {}
hosts:
- socket_address:
address: service2
port_value: 80
admin:
access_log_path: "/dev/null"
address:
socket_address:
address: 0.0.0.0
port_value: 8001
routes
Routes are matched based on the URL of the request.
routes:
- match:
prefix: "/service/1"
route:
cluster: service1
- match:
prefix: "/service/2"
route:
cluster: service2
The Envoy settings forward traffic to the endpoints service1 and service2. These are DNS entries provided by the Docker network configured by Docker Compose.
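The routing above can be sketched in Python (a simplified illustration of the idea, not Envoy's actual matcher): routes are evaluated in order, and the first one whose prefix matches the request path wins.

```python
from typing import Optional

# Ordered route table mirroring the routes above; Envoy evaluates routes
# in order and uses the first whose prefix matches the request path.
ROUTES = [
    ("/service/1", "service1"),
    ("/service/2", "service2"),
]

def pick_cluster(path: str) -> Optional[str]:
    """Return the cluster of the first route whose prefix matches."""
    for prefix, cluster in ROUTES:
        if path.startswith(prefix):
            return cluster
    return None  # no route matched; Envoy would answer 404

print(pick_cluster("/service/1/abc"))  # service1
print(pick_cluster("/service/2"))      # service2
print(pick_cluster("/other"))          # None
```

Because matching is prefix-based, a request like /service/1/anything is still routed to service1.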
Deploy Envoy
docker-compose -f ~/envoy/examples/front-proxy/docker-compose.yml up -d
Admin View
You can check various information by looking at the management view.
- https://<admin URL and port>/ — management interface
- https://<admin URL and port>/config_dump — view the routing configuration as JSON
- https://<admin URL and port>/clusters — additional information such as available clusters and their metrics
- https://<admin URL and port>/stats — various metrics
5. Application Routing
Envoy is listening on port 8000. So if you run curl localhost:8000 with a service path appended in the terminal on Katacoda, the different services respond according to the configured routing.
Conclusion
For now, you’re Getting Started with Envoy
done. You could reverse proxy the request to an external service, or you could do path-based routing to multiple applications. Well, it's only basic settings, so it's like "I can do it with Nginx." I will definitely continue with Envoy as I can see that it is a powerful tool
Try Envoy to work to find out how good Envoy is !