
X-Request-Id gets overwritten by Istio (Envoy) #10875

Closed
shovelend opened this issue Jan 10, 2019 · 14 comments

@shovelend

Describe the bug

When implementing request tracing in our clusters, we noticed that the x-request-ids generated by nginx get overwritten by the Envoy proxy.

Our internal nginx gateway (outside the k8s cluster) proxies requests to Istio's port 31380, passing along all request headers, x-forwarded-for with nginx's IP appended, and an x-request-id generated by the nginx gateway.

Having checked Envoy's header sanitizing, we are trying to modify the proxy configuration for our Istio ingress and egress gateways so that requests coming from our nginx gateway are recognised as internal requests, and nginx, Istio and Envoy all use the same x-request-id for a request, which can then be traced easily. Once we get the gateways right, we'll move on to the sidecar proxies.

After experimenting with EnvoyFilters, we can see that the specified filter gets appended to the existing ones, which can be verified with:

istioctl proxy-config listeners istio-ingressgateway-6755b9bbf6-mx6f8 -n istio-system -ojson

When issuing the command, we see that another filter is created next to the default http_connection_manager.

We can also see that the Envoy config option use_remote_address in the ingress gateway is set to true by Istio (although we couldn't find anything in the repo that sets it).

Is there a way to amend the ingress gateway proxy's http_connection_manager filter config using the custom k8s objects provided by Istio, without creating a new filter with the same name, or is the only option a messier direct modification of the Envoy config? We've found envoy-rev0.json under /etc/istio/proxy inside the ingress gateway container, but it looks a bit different from the output of the istioctl proxy-config listeners command, so we suppose Envoy gets its configuration from somewhere else (too?).

Expected behavior
The Envoy proxy configuration is easily adjustable via Istio objects. Requests coming from an internal nginx gateway outside the k8s cluster can be recognised as internal requests, so x-request-ids don't get overwritten.

Steps to reproduce the bug
Apply the following manifest to modify the envoy config used by the ingress gateway:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: gateway-filter
  namespace: istio-system
spec:
  workloadLabels:
    app: istio-ingressgateway
  filters:
  - filterName: envoy.http_connection_manager
    filterType: NETWORK
    listenerMatch:
      portNumber: 80
      listenerType: GATEWAY
    filterConfig:
      stat_prefix: "0.0.0.0_80"
      generate_request_id: false
      use_remote_address: false
      rds: 
        config_source: 
          ads: {}
        route_config_name: http.80

Use the following command to see the listeners:

$ istioctl proxy-config listeners istio-ingressgateway-6755b9bbf6-mx6f8 -n istio-system -ojson
[
    {
        "name": "0.0.0.0_80",
        "address": {
            "socketAddress": {
                "address": "0.0.0.0",
                "portValue": 80
            }
        },
        "filterChains": [
            {
                "filters": [
                    {
                        "name": "envoy.http_connection_manager",
                        "config": {
                            "generate_request_id": false,
                            "rds": {
                                "config_source": {
                                    "ads": {}
                                },
                                "route_config_name": "http.80"
                            },
                            "stat_prefix": "0.0.0.0_80",
                            "use_remote_address": false
                        }
                    },
                    {
                        "name": "envoy.http_connection_manager",
                        "config": {
                            "access_log": [
                                {
                                    "config": {
                                        "format": "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%%PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS%\n",
                                        "path": "/dev/stdout"
                                    },
                                    "name": "envoy.file_access_log"
                                }
                            ],
                            "generate_request_id": true,
                            "http_filters": [
                                {
                                    "config": {
                                        "default_destination_service": "default",
                                        "forward_attributes": {
                                            "attributes": {
                                                "source.uid": {
                                                    "string_value": "kubernetes://istio-ingressgateway-6755b9bbf6-mx6f8.istio-system"
                                                }
                                            }
                                        },
                                        "mixer_attributes": {
                                            "attributes": {
                                                "context.reporter.kind": {
                                                    "string_value": "outbound"
                                                },
                                                "context.reporter.uid": {
                                                    "string_value": "kubernetes://istio-ingressgateway-6755b9bbf6-mx6f8.istio-system"
                                                },
                                                "source.namespace": {
                                                    "string_value": "istio-system"
                                                },
                                                "source.uid": {
                                                    "string_value": "kubernetes://istio-ingressgateway-6755b9bbf6-mx6f8.istio-system"
                                                }
                                            }
                                        },
                                        "service_configs": {
                                            "default": {}
                                        },
                                        "transport": {
                                            "attributes_for_mixer_proxy": {
                                                "attributes": {
                                                    "source.uid": {
                                                        "string_value": "kubernetes://istio-ingressgateway-6755b9bbf6-mx6f8.istio-system"
                                                    }
                                                }
                                            },
                                            "check_cluster": "outbound|9091||istio-policy.istio-system.svc.cluster.local",
                                            "network_fail_policy": {
                                                "policy": "FAIL_CLOSE"
                                            },
                                            "report_cluster": "outbound|9091||istio-telemetry.istio-system.svc.cluster.local"
                                        }
                                    },
                                    "name": "mixer"
                                },
                                {
                                    "name": "envoy.cors"
                                },
                                {
                                    "name": "envoy.fault"
                                },
                                {
                                    "name": "envoy.router"
                                }
                            ],
                            "rds": {
                                "config_source": {
                                    "ads": {}
                                },
                                "route_config_name": "http.80"
                            },
                            "stat_prefix": "0.0.0.0_80",
                            "stream_idle_timeout": "0.000s",
                            "tracing": {
                                "client_sampling": {
                                    "value": 100
                                },
                                "operation_name": "EGRESS",
                                "overall_sampling": {
                                    "value": 100
                                },
                                "random_sampling": {
                                    "value": 5
                                }
                            },
                            "upgrade_configs": [
                                {
                                    "upgrade_type": "websocket"
                                }
                            ],
                            "use_remote_address": true
                        }
                    }
                ]
            }
        ]
    }
]

Check the proxy status and observe its output:

$ istioctl proxy-status istio-ingressgateway-6755b9bbf6-mx6f8.istio-system
Stderr when execute [/usr/local/bin/pilot-discovery request GET /debug/config_dump?proxyID=istio-ingressgateway-6755b9bbf6-mx6f8.istio-system ]: gc 1 @0.327s 0%: 0.033+1.7+1.6 ms clock, 0.13+0/1.5/1.3+6.6 ms cpu, 4->4->1 MB, 5 MB goal, 4 P
gc 2 @0.340s 1%: 0.008+1.7+1.8 ms clock, 0.035+0.11/1.3/1.5+7.4 ms cpu, 4->4->2 MB, 5 MB goal, 4 P

Clusters Match
Listeners Match
Routes Match

Pilot's discovery container doesn't log any warnings.
Observe that requests to your cluster hang and report a stream timeout after a while.

Version

$ istioctl version
Version: 1.0.5
GitRevision: c1707e45e71c75d74bf3a5dec8c7086f32f32fad
User: root@6f6ea1061f2b
Hub: docker.io/istio
GolangVersion: go1.10.4
BuildStatus: Clean
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Installation

istio_version=1.0.5
tracing_enabled=true
trace_sampling_rate=5
istio_proxy_include_ip_ranges=our internal nginx ip address

helm template /tmp/istio-{{ istio_version }}/install/kubernetes/helm/istio --name istio --namespace istio-system --set tracing.enabled={{ tracing_enabled }} --set pilot.traceSampling={{ trace_sampling_rate }} --set servicegraph.enabled={{ tracing_enabled }} --set global.proxy.includeIPRanges="{{ istio_proxy_include_ip_ranges }}" > $HOME/istio.yaml
kubectl apply -f $HOME/istio.yaml

Environment
Kubernetes cluster running on VMs hosted internally.

Cluster state
istio-dump.zip

@gerasym

gerasym commented Jan 15, 2019

Found similar behavior with a deployment on AWS using kops. The reason was that by default kops uses 100.64.0.0/10 for the pod network, which is not compatible with https://www.envoyproxy.io/docs/envoy/latest/configuration/http_conn_man/headers#x-forwarded-for

In your case, I think the problem is much the same:

XFF is what Envoy uses to determine whether a request is internal origin or external origin. If use_remote_address is set to true, the request is internal if and only if the request contains no XFF and the immediate downstream node’s connection to Envoy has an internal (RFC1918 or RFC4193) source address. If use_remote_address is false, the request is internal if and only if XFF contains a single RFC1918 or RFC4193 address.

So if you want Envoy not to overwrite your XFF and x-request-id, you have to set use_remote_address=false and ensure that your nginx IP address is an RFC1918 or RFC4193 address.
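
For illustration only, a minimal, untested sketch of that change using the newer configPatches-style EnvoyFilter API that later comments in this thread rely on (it is not available in the Istio 1.0.x used above, and the resource name here is made up):

# Hypothetical sketch: merge use_remote_address: false into the gateway's
# HTTP connection manager via the newer EnvoyFilter configPatches API.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: gateway-use-remote-address   # hypothetical name
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
    patch:
      operation: MERGE
      value:
        typed_config:
          '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          use_remote_address: false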

@shovelend
Author

Thank you for your answer, @gerasym! That's what we're trying to do here: setting use_remote_address to false for Envoy's default http connection manager filter. However, we haven't found a way, or an interface, to do that. How did you manage to modify use_remote_address in your case?

@dacappo

dacappo commented Jan 24, 2019

We are running into the same problem and cluster behaviour when setting the server_name within the Istio gateway.

@defat

defat commented Feb 11, 2019

We hit the same problem after deciding to generate x-request-id on the client side. This approach enables tracing of timed-out requests by using the client-generated request id.

@stale

stale bot commented May 12, 2019

This issue has been automatically marked as stale because it has not had activity in the last 90 days. It will be closed in the next 30 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

@ShannonHickey

I believe what is needed is a way to set Envoy's generate_request_id property:
https://www.envoyproxy.io/docs/envoy/v1.5.0/api-v2/filter/network/http_connection_manager.proto
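
For illustration, a minimal, untested sketch of setting that property with an EnvoyFilter merge patch (assuming the newer configPatches-style API; the resource name is made up):

# Hypothetical sketch: merge generate_request_id: false into the gateway's
# HTTP connection manager so Envoy does not mint its own x-request-id.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: gateway-generate-request-id   # hypothetical name
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
    patch:
      operation: MERGE
      value:
        typed_config:
          '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          generate_request_id: false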

@stale

stale bot commented Aug 19, 2019

This issue has been automatically marked as stale because it has not had activity in the last 90 days. It will be closed in the next 30 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

The stale bot added the stale label on Aug 19, 2019
@RemusRD

RemusRD commented Aug 27, 2019

We hit the same problem after deciding to generate x-request-id on the client side. This approach enables tracing of timed-out requests by using the client-generated request id.

We are facing the same problem; we are going to map that header to another one that does not get removed by the Istio ingress... We are looking forward to another solution because we really need this for idempotency purposes (hurray PSD2).

@PedroMsTavares

If you define the Trace Span template instance in Istio, you can set rewriteClientSpanId: false.

Example:

apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: default
  namespace: istio-system
spec:
  compiledTemplate: tracespan
  params:
    traceId: request.headers["x-b3-traceid"]
    spanId: request.headers["x-b3-spanid"] | ""
    parentSpanId: request.headers["x-b3-parentspanid"] | ""
    spanName: request.path | "/"
    startTime: request.time
    endTime: response.time
    clientSpan: (context.reporter.kind | "inbound") == "outbound"
    rewriteClientSpanId: false
    spanTags:
      http.method: request.method | ""
      http.status_code: response.code | 200
      http.url: request.path | ""
      request.size: request.size | 0
      response.size: response.size | 0
      source.principal: source.principal | ""
      source.version: source.labels["version"] | ""

You can find more info in the Istio docs at the following link:
https://istio.io/docs/reference/config/policy-and-telemetry/templates/tracespan/

@elvizlai

elvizlai commented Sep 10, 2019

Ping. x-client-trace-id also does not work.

@cdmurph32

Repeating my comment from #12549:

I was able to preserve the external request id with the following EnvoyFilter. I'm running Istio 1.3.0.

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: ingressgateway-settings
  namespace: istio-system
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
    patch:
      operation: MERGE
      value:
        typed_config:
          '@type': type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          preserve_external_request_id: true

Thanks to envoyproxy/envoy#7140

@rshriram
Member

Thanks @cdmurph32. Also, folks on this thread, very sorry about the trouble caused by our settings in the gateway - they were meant as general-purpose settings. The whole reason for creating the new EnvoyFilter API (not the old one specified in the OP's comment) is to overcome some of these issues. As @cdmurph32 points out, with the new API you can customize the generated configuration to your heart's content. If there are deficiencies in the API (such as being unable to set certain fields in the generated config), please file an issue and we would be happy to address them.

@elvizlai

elvizlai commented Nov 5, 2019

Is there any guide to configuring the HttpConnectionManager, or to setting preserve_external_request_id, when using Istio 1.2.x?
