Hello, Habr! I bring to your attention a translation of the post "Migrating from NGINX to Envoy Proxy".
Envoy is a high-performance distributed proxy server (written in C++) designed for individual services and applications, as well as a communication bus and "universal data plane" for large microservice "service mesh" architectures. It was built on the lessons learned from servers such as NGINX, HAProxy, hardware load balancers, and cloud load balancers. Envoy runs alongside every application and abstracts the network, providing common features in a platform-agnostic way. When all service traffic in an infrastructure flows through an Envoy mesh, it becomes easy to visualize problem areas thanks to consistent observability, to tune overall performance, and to add core features in a single place.
This walkthrough uses a specially created nginx.conf file, based on the complete example from the NGINX Wiki. The full configuration is listed below.
Source nginx config
user www www;
pid /var/run/nginx.pid;
worker_processes 2;

events {
  worker_connections 2000;
}

http {
  gzip on;
  gzip_min_length 1100;
  gzip_buffers 4 8k;
  gzip_types text/plain;

  log_format main '$remote_addr - $remote_user [$time_local] '
    '"$request" $status $bytes_sent '
    '"$http_referer" "$http_user_agent" '
    '"$gzip_ratio"';

  log_format download '$remote_addr - $remote_user [$time_local] '
    '"$request" $status $bytes_sent '
    '"$http_referer" "$http_user_agent" '
    '"$http_range" "$sent_http_content_range"';

  upstream targetCluster {
    server 172.18.0.3:80;
    server 172.18.0.4:80;
  }

  server {
    listen 8080;
    server_name one.example.com www.one.example.com;

    access_log /var/log/nginx.access_log main;
    error_log /var/log/nginx.error_log info;

    location / {
      proxy_pass http://targetCluster/;
      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}
NGINX configurations usually have three key elements:
1. Configuring the NGINX server itself, its logging structure, and Gzip functionality.
2. Configuring the server to accept requests for the host one.example.com on port 8080.
3. Configuring the target location, i.e. where and how to proxy the traffic.
Not all of this configuration carries over to Envoy Proxy, and some settings do not need to be configured at all. Envoy Proxy has four key concepts that cover the core infrastructure offered by NGINX: Listeners, Filters, Routes, and Clusters.
We will use these four components to create an Envoy Proxy configuration that matches the given NGINX configuration. Envoy is designed around APIs and dynamic configuration; in this case, however, the basic configuration will use static, hard-coded parameters taken from NGINX.
The first part of nginx.conf defines some of the internal NGINX components that need to be configured.
The configuration below defines the number of worker processes and connections. This indicates how NGINX will scale to meet demand.
worker_processes 2;

events {
  worker_connections 2000;
}
Envoy Proxy manages worker threads and connections differently.
Envoy creates a worker thread for every hardware thread in the system. Each worker thread runs a non-blocking event loop that is responsible for listening on every listener, accepting new connections, and processing all I/O for those connections.
All further connection processing happens entirely within that worker thread, including any forwarding behavior.
Each worker thread in Envoy has its own connection pool. HTTP/2 connection pools therefore establish only one connection per external host at a time: with four worker threads, there will be four HTTP/2 connections per external host in steady state. Keeping everything within one worker thread allows almost all code to be written without locks, as if it were single-threaded. Allocating more worker threads than necessary wastes memory, creates a large number of idle connections, and lowers the connection pool hit rate.
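As a rough analogue of NGINX's worker_processes directive, the number of Envoy worker threads can be capped with the --concurrency command-line flag. A minimal sketch (the config file path is an assumption for this example):

```shell
# Cap Envoy at two worker threads, mirroring "worker_processes 2" in NGINX.
# Without --concurrency, Envoy starts one worker per hardware thread.
envoy --concurrency 2 -c /etc/envoy/envoy.yaml
```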
For more information, visit the Envoy Proxy blog .
The next NGINX configuration block defines HTTP-level settings, such as gzip compression and the access log formats (main and download).
You can configure these aspects using filters in Envoy Proxy, which we will discuss later.
In the HTTP configuration block, the configuration instructs NGINX to listen on port 8080 and respond to incoming requests for the domains one.example.com and www.one.example.com.
server {
  listen 8080;
  server_name one.example.com www.one.example.com;
In Envoy, this is controlled by Listeners.
The most important first step with Envoy Proxy is defining listeners. You need a configuration file describing how you want the Envoy instance to run.
The snippet below creates a new listener and binds it to port 8080. The configuration tells Envoy Proxy which ports to bind for incoming requests.
Envoy Proxy uses YAML notation for its configuration. To familiarize yourself with this notation, see the link here.
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
There is no need to define server_name; Envoy Proxy filters handle this instead.
When a request arrives at NGINX, the location block determines how to process and where to direct traffic. In the following fragment, all traffic to the site is transmitted to an upstream (translator's note: upstream is usually an application server) cluster named targetCluster . The upstream cluster defines the nodes that should process the request. We will discuss this in the next step.
location / {
  proxy_pass http://targetCluster/;
  proxy_redirect off;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
}
In Envoy, this is handled by Filters.
With a static configuration, filters determine how incoming requests are handled. Here we set up filters that match the server_names from the previous step. When incoming requests match the given domains and routes, traffic is routed to the cluster. This is the equivalent of NGINX's upstream configuration.
filter_chains:
- filters:
  - name: envoy.http_connection_manager
    config:
      codec_type: auto
      stat_prefix: ingress_http
      route_config:
        name: local_route
        virtual_hosts:
        - name: backend
          domains:
          - "one.example.com"
          - "www.one.example.com"
          routes:
          - match:
              prefix: "/"
            route:
              cluster: targetCluster
      http_filters:
      - name: envoy.router
envoy.http_connection_manager is a built-in Envoy Proxy filter. Other filters include Redis, Mongo, and TCP; you can find the full list in the documentation.
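For comparison, a plain TCP proxy listener might look like this. This sketch is not part of the original walkthrough: the listener name and port are made up, and the field names follow the same deprecated "config:" style used elsewhere in this article.

```yaml
# Hypothetical TCP listener forwarding raw connections to targetCluster.
- name: listener_tcp
  address:
    socket_address: { address: 0.0.0.0, port_value: 9000 }
  filter_chains:
  - filters:
    - name: envoy.tcp_proxy
      config:
        stat_prefix: ingress_tcp
        cluster: targetCluster
```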
For more information about other load balancing policies, visit the Envoy Documentation .
In NGINX, the upstream configuration defines the pool of target servers that will handle the traffic. In this case, two servers were assigned.
upstream targetCluster {
  server 172.18.0.3:80;
  server 172.18.0.4:80;
}
In Envoy, this is managed by Clusters.
The equivalent of an upstream is defined as a cluster. Here, the hosts that will serve the traffic are listed. How the hosts are accessed, for example the connect timeout, is defined in the cluster configuration. This gives finer-grained control over aspects such as latency and load balancing.
clusters:
- name: targetCluster
  connect_timeout: 0.25s
  type: STRICT_DNS
  dns_lookup_family: V4_ONLY
  lb_policy: ROUND_ROBIN
  hosts: [
    { socket_address: { address: 172.18.0.3, port_value: 80 }},
    { socket_address: { address: 172.18.0.4, port_value: 80 }}
  ]
With STRICT_DNS service discovery, Envoy continuously and asynchronously resolves the specified DNS targets. Each IP address returned by DNS is treated as an explicit host in the upstream cluster. This means that if a query returns two IP addresses, Envoy assumes the cluster has two hosts and load-balances across both. If a host disappears from the result, Envoy assumes it no longer exists and drains traffic from any existing connection pools.
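To see this in action, the hard-coded IP addresses above could be replaced with a DNS name that Envoy re-resolves continuously. A sketch, where backend.example.com is a hypothetical hostname resolving to the backend containers:

```yaml
clusters:
- name: targetCluster
  connect_timeout: 0.25s
  type: STRICT_DNS
  dns_lookup_family: V4_ONLY
  lb_policy: ROUND_ROBIN
  hosts: [
    { socket_address: { address: backend.example.com, port_value: 80 }}
  ]
```

As hosts are added to or removed from the DNS record, Envoy adjusts the upstream cluster membership without a restart.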
For more information, see Envoy Proxy Documentation .
The final piece of configuration is logging. Instead of writing error logs to disk, Envoy Proxy takes a cloud-native approach: all application logs go to stdout and stderr.
Access logs for user requests are optional and disabled by default. To enable access logs for HTTP requests, enable the access_log configuration of the HTTP Connection Manager. The path can be either a device, such as /dev/stdout, or a file on disk, depending on your requirements.
The following configuration redirects all access logs to stdout (translator's note: stdout is needed when running Envoy inside Docker; if you run it without Docker, replace /dev/stdout with a path to a regular log file). Copy the snippet into the connection manager's configuration section:
access_log:
- name: envoy.file_access_log
  config:
    path: "/dev/stdout"
The results should look like this:
- name: envoy.http_connection_manager
  config:
    codec_type: auto
    stat_prefix: ingress_http
    access_log:
    - name: envoy.file_access_log
      config:
        path: "/dev/stdout"
    route_config:
By default, Envoy has a format string that includes the details of the HTTP request:
[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%" "%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"\n
The result of this format string:
[2018-11-23T04:51:00.281Z] "GET / HTTP/1.1" 200 - 0 58 4 1 "-" "curl/7.47.0" "f21ebd42-6770-4aa5-88d4-e56118165a7d" "one.example.com" "172.18.0.4:80"
The contents of the output can be customized by setting the format field. For example:
access_log:
- name: envoy.file_access_log
  config:
    path: "/dev/stdout"
    format: "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\"\n"
The log string can also be output in JSON format by setting the json_format field. For example:
access_log:
- name: envoy.file_access_log
  config:
    path: "/dev/stdout"
    json_format: {"protocol": "%PROTOCOL%", "duration": "%DURATION%", "request_method": "%REQ(:METHOD)%"}
For more information on Envoy's logging facilities, see the Envoy Proxy documentation.
Logging is not the only way to gain insight into requests handled by Envoy Proxy. It has advanced tracing and metrics features built in. You can learn more in the tracing documentation or through the interactive tracing scenario.
Now you have transferred the configuration from NGINX to Envoy Proxy. The final step is to run an instance of Envoy Proxy to test it.
At the top of the NGINX configuration, the line user www www; indicates that NGINX runs as a low-privileged user to improve security.
Envoy Proxy takes a cloud-native approach to managing process ownership. When running Envoy Proxy in a container, we can specify a low-privileged user.
The command below launches Envoy Proxy in a Docker container on the host. It exposes Envoy to incoming requests on port 80, while, as specified in the listener configuration, Envoy Proxy itself listens for incoming traffic on port 8080. This allows the process to run as a user with low privileges.
docker run --name proxy1 -p 80:8080 --user 1000:1000 -v /root/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy
With the proxy running, we can now send test requests. The following cURL command issues a request with the Host header defined in the proxy configuration.
curl -H "Host: one.example.com" localhost -i
The HTTP request will fail with a 503 error. This is because the upstream connections are not available: Envoy Proxy has no target destination for the request. The following commands launch a pair of HTTP services matching the configuration defined for Envoy.
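Related to this, Envoy can actively health-check upstream hosts so that dead hosts are taken out of rotation instead of producing 503s on live traffic. A hedged sketch, not part of the original walkthrough; the thresholds are arbitrary example values:

```yaml
clusters:
- name: targetCluster
  # ... type, lb_policy and hosts as defined earlier ...
  health_checks:
  - timeout: 1s
    interval: 5s
    unhealthy_threshold: 3
    healthy_threshold: 2
    http_health_check:
      path: "/"
```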
docker run -d katacoda/docker-http-server; docker run -d katacoda/docker-http-server;
With the services available, Envoy can successfully proxy traffic to its destination.
curl -H "Host: one.example.com" localhost -i
You should see a response indicating which Docker container handled the request. In the Envoy Proxy logs, you should also see a corresponding access log line.
You will also see additional HTTP headers in the response. The x-envoy-upstream-service-time header shows, in milliseconds, how long the upstream host spent processing the request. This is useful if the client wants to separate service time from network latency.
x-envoy-upstream-service-time: 0
server: envoy
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "one.example.com"
              - "www.one.example.com"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: targetCluster
          http_filters:
          - name: envoy.router
  clusters:
  - name: targetCluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [
      { socket_address: { address: 172.18.0.3, port_value: 80 }},
      { socket_address: { address: 172.18.0.4, port_value: 80 }}
    ]

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9090 }
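The admin block at the end of this configuration exposes Envoy's administration interface on port 9090. Once the proxy is running, it can be queried, for example (endpoint names are from the standard Envoy admin API):

```shell
# Inspect upstream cluster membership and health status:
curl http://localhost:9090/clusters
# Dump runtime statistics:
curl http://localhost:9090/stats
```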
Envoy Proxy installation instructions can be found at https://www.getenvoy.io/
The rpm package does not include a systemd service config by default.
Add the systemd service config /etc/systemd/system/envoy.service:
[Unit]
Description=Envoy Proxy
Documentation=https://www.envoyproxy.io/
After=network-online.target
Requires=envoy-auth-server.service
Wants=nginx.service

[Service]
User=root
Restart=on-failure
ExecStart=/usr/bin/envoy --config-path /etc/envoy/config.yaml

[Install]
WantedBy=multi-user.target
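Once the unit file is in place, it can be activated in the usual way (assuming the dependencies referenced in the unit, such as envoy-auth-server.service, exist on your system):

```shell
# Reload systemd so it picks up the new unit, then enable and start Envoy:
systemctl daemon-reload
systemctl enable --now envoy
systemctl status envoy
```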
You need to create the directory /etc/envoy/ and put the config.yaml file there.
There is a Telegram chat about Envoy Proxy: https://t.me/envoyproxy_ru
Envoy Proxy does not support serving static content. Those who want it can vote for the feature request: https://github.com/envoyproxy/envoy/issues/378