
HTTP2 traffic on port 8080 via envoy-proxy is being forwarded as HTTP1.1 #17102

Open
sanamsarath opened this issue Sep 15, 2019 · 17 comments
@sanamsarath commented Sep 15, 2019

Bug description
In our deployment cluster (deployed with Istio 1.1.11), all traffic to/from the application container is routed through the envoy-proxy sidecar. The following issue is observed: HTTP2 traffic on port 8080 originating from the application container towards upstream servers (e.g., foo.com) is being forwarded by envoy-proxy as HTTP1.1. We don't have any service port defined in the cluster for the destination server.

Observation
We see that the HTTP2 request to destination port 8080 is forwarded to listener "0.0.0.0_8080" in Envoy, which has some filters defined to support Istio internal services (in this case Mixer). Our suspicion is that one of the attributes in the defined HTTP filters, the websocket upgrade, is modifying the HTTP2 header to HTTP1.1. I have attached the config dump of listener 8080 below.

Affected product area (please put an X in all that apply)

[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[X] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Expected behavior
HTTP2 traffic to an upstream via the proxy sidecar shouldn't be downgraded/upgraded unless the destination endpoint has a service port defined with name http*.

Steps to reproduce the bug
Deploy Istio 1.1.4 or above, then issue a curl HTTP2 request to any server running on port 8080, or on any other port that has a matching HTTP listener defined for Istio internal servicing purposes (e.g., you can try 8085, 15010, 9901, ...).

Version (include the output of istioctl version --remote and kubectl version)
1.1.11

How was Istio installed?
Using Helm Charts

Environment where bug was observed (cloud vendor, OS, etc)
Bare metal, Kubernetes, CentOS

config dump from envoy-sidecar:
listener 8080:

[kuberadmin@sarath-reddy-k8-master-1-a00031-4b2c09f7b029c1b8 ~]$ istioctl proxy-config listeners -n sleep sleep-778f6b54f9-spc5w --port 8080 -o json
[
    {
        "name": "0.0.0.0_8080",
        "address": {
            "socketAddress": {
                "address": "0.0.0.0",
                "portValue": 8080
            }
        },
        "filterChains": [
            {
                "filters": [
                    {
                        "name": "envoy.http_connection_manager",
                        "config": {
                            "access_log": [
                                {
                                    "config": {
                                        "format": "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% \"%DYNAMIC_METADATA(istio.mixer:status)%\" %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME%\n",
                                        "path": "/dev/stdout"
                                    },
                                    "name": "envoy.file_access_log"
                                }
                            ],
                            "generate_request_id": true,
                            "http_filters": [
                                {
                                    "config": {
                                        "default_destination_service": "default",
                                        "forward_attributes": {
                                            "attributes": {
                                                "source.uid": {
                                                    "string_value": "kubernetes://sleep-778f6b54f9-spc5w.sleep"
                                                }
                                            }
                                        },
                                        "mixer_attributes": {
                                            "attributes": {
                                                "context.reporter.kind": {
                                                    "string_value": "outbound"
                                                },
                                                "context.reporter.uid": {
                                                    "string_value": "kubernetes://sleep-778f6b54f9-spc5w.sleep"
                                                },
                                                "source.namespace": {
                                                    "string_value": "sleep"
                                                },
                                                "source.uid": {
                                                    "string_value": "kubernetes://sleep-778f6b54f9-spc5w.sleep"
                                                }
                                            }
                                        },
                                        "service_configs": {
                                            "default": {
                                                "disable_check_calls": true
                                            }
                                        },
                                        "transport": {
                                            "check_cluster": "outbound|9091||istio-policy.fed-istio.svc.cluster.local",
                                            "network_fail_policy": {
                                                "base_retry_wait": "0.080s",
                                                "max_retry_wait": "1s",
                                                "policy": "FAIL_CLOSE"
                                            },
                                            "report_cluster": "outbound|9091||istio-telemetry.fed-istio.svc.cluster.local"
                                        }
                                    },
                                    "name": "mixer"
                                },
                                {
                                    "name": "envoy.cors"
                                },
                                {
                                    "name": "envoy.fault"
                                },
                                {
                                    "name": "envoy.router"
                                }
                            ],
                            "rds": {
                                "config_source": {
                                    "ads": {}
                                },
                                "route_config_name": "8080"
                            },
                            "stat_prefix": "0.0.0.0_8080",
                            "stream_idle_timeout": "0s",
                            "tracing": {
                                "client_sampling": {
                                    "value": 100
                                },
                                "operation_name": "EGRESS",
                                "overall_sampling": {
                                    "value": 100
                                },
                                "random_sampling": {
                                    "value": 100
                                }
                            },
                            "upgrade_configs": [
                                {
                                    "upgrade_type": "websocket"
                                }
                            ],
                            "use_remote_address": false
                        }
                    }
                ]
            }
        ],
        "deprecatedV1": {
            "bindToPort": false
        }
    }
]

route_config 8080:

    {
     "version_info": "2019-09-05T20:05:10Z/29",
     "route_config": {
      "name": "8080",
      "virtual_hosts": [
       {
        "name": "httpbin.httpbin.svc.cluster.local:8080",
        "domains": [
         "httpbin.httpbin.svc.cluster.local",
         "httpbin.httpbin.svc.cluster.local:8080",
         "httpbin.httpbin",
         "httpbin.httpbin:8080",
         "httpbin.httpbin.svc.cluster",
         "httpbin.httpbin.svc.cluster:8080",
         "httpbin.httpbin.svc",
         "httpbin.httpbin.svc:8080",
         "10.109.158.59",
         "10.109.158.59:8080"
        ],
        "routes": [
         {
          "match": {
           "prefix": "/"
          },
          "route": {
           "cluster": "outbound|8080||httpbin.httpbin.svc.cluster.local",
           "timeout": "0s",
           "retry_policy": {
            "retry_on": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
            "num_retries": 2,
            "retry_host_predicate": [
             {
              "name": "envoy.retry_host_predicates.previous_hosts"
             }
            ],
            "host_selection_retry_max_attempts": "3",
            "retriable_status_codes": [
             503
            ]
           },
           "max_grpc_timeout": "0s"
          },
          "decorator": {
           "operation": "httpbin.httpbin.svc.cluster.local:8080/*"
          },
          "per_filter_config": {
           "mixer": {
            "forward_attributes": {
             "attributes": {
              "destination.service.namespace": {
               "string_value": "httpbin"
              },
              "destination.service.name": {
               "string_value": "httpbin"
              },
              "destination.service.host": {
               "string_value": "httpbin.httpbin.svc.cluster.local"
              },
              "destination.service.uid": {
               "string_value": "istio://httpbin/services/httpbin"
              }
             }
            },
            "mixer_attributes": {
             "attributes": {
              "destination.service.host": {
               "string_value": "httpbin.httpbin.svc.cluster.local"
              },
              "destination.service.uid": {
               "string_value": "istio://httpbin/services/httpbin"
              },
              "destination.service.namespace": {
               "string_value": "httpbin"
              },
              "destination.service.name": {
               "string_value": "httpbin"
              }
             }
            },
            "disable_check_calls": true
           }
          }
         }
        ]
       },
       {
        "name": "istio-pilot.fed-istio.svc.cluster.local:8080",
        "domains": [
         "istio-pilot.fed-istio.svc.cluster.local",
         "istio-pilot.fed-istio.svc.cluster.local:8080",
         "istio-pilot.fed-istio",
         "istio-pilot.fed-istio:8080",
         "istio-pilot.fed-istio.svc.cluster",
         "istio-pilot.fed-istio.svc.cluster:8080",
         "istio-pilot.fed-istio.svc",
         "istio-pilot.fed-istio.svc:8080",
         "10.106.24.160",
         "10.106.24.160:8080"
        ],
        "routes": [
         {
          "match": {
           "prefix": "/"
          },
          "route": {
           "cluster": "outbound|8080||istio-pilot.fed-istio.svc.cluster.local",
           "timeout": "0s",
           "retry_policy": {
            "retry_on": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
            "num_retries": 2,
            "retry_host_predicate": [
             {
              "name": "envoy.retry_host_predicates.previous_hosts"
             }
            ],
            "host_selection_retry_max_attempts": "3",
            "retriable_status_codes": [
             503
            ]
           },
           "max_grpc_timeout": "0s"
          },
          "decorator": {
           "operation": "istio-pilot.fed-istio.svc.cluster.local:8080/*"
          },
          "per_filter_config": {
           "mixer": {
            "disable_check_calls": true,
            "forward_attributes": {
             "attributes": {
              "destination.service.host": {
               "string_value": "istio-pilot.fed-istio.svc.cluster.local"
              },
              "destination.service.uid": {
               "string_value": "istio://fed-istio/services/istio-pilot"
              },
              "destination.service.namespace": {
               "string_value": "fed-istio"
              },
              "destination.service.name": {
               "string_value": "istio-pilot"
              }
             }
            },
            "mixer_attributes": {
             "attributes": {
              "destination.service.namespace": {
               "string_value": "fed-istio"
              },
              "destination.service.name": {
               "string_value": "istio-pilot"
              },
              "destination.service.host": {
               "string_value": "istio-pilot.fed-istio.svc.cluster.local"
              },
              "destination.service.uid": {
               "string_value": "istio://fed-istio/services/istio-pilot"
              }
             }
            }
           }
          }
         }
        ]
       },
       {
        "name": "allow_any",
        "domains": [
         "*"
        ],
        "routes": [
         {
          "match": {
           "prefix": "/"
          },
          "route": {
           "cluster": "PassthroughCluster"
          },
          "per_filter_config": {
           "mixer": {
            "forward_attributes": {
             "attributes": {}
            },
            "mixer_attributes": {
             "attributes": {}
            },
            "disable_check_calls": true
           }
          }
         }
        ]
       }
      ],
      "validate_clusters": false
     },
     "last_updated": "2019-09-06T19:30:57.819Z"
    }
@hzxuzhonghu (Member) commented Sep 16, 2019

Try #17073 to see if it works.

@hzxuzhonghu (Member) commented Sep 16, 2019

The current implementation of upgrade headers does not work with HTTP/2 upstreams, so it should not be related to that.

	// Allow websocket upgrades
	websocketUpgrade := &http_conn.HttpConnectionManager_UpgradeConfig{UpgradeType: "websocket"}
	connectionManager.UpgradeConfigs = []*http_conn.HttpConnectionManager_UpgradeConfig{websocketUpgrade}

@sanamsarath (Author) commented Sep 16, 2019

@hzxuzhonghu
The listener '0.0.0.0_8080' was defined with the 'websocket upgrade' HTTP filter by Istio to support control plane traffic from the envoy sidecars to Istio Mixer. Currently, this filter is generalized to all traffic (Istio control plane + application) running on port 8080. Probably this filter should have a match condition (e.g., match: {domain: "istio*"}) to differentiate Istio traffic from application traffic.

Will try #17073 and update this thread with the results. Thanks!!

@rshriram (Member) commented Sep 16, 2019

> The listener '0.0.0.0_8080' was defined with http filter 'web socket upgrade' by istio to support control plane traffic to istio mixer from the envoy sidecars. Currently, this filter

The websocket upgrade should not impact anything; it is there to support websocket connections on the listener in case it gets one.

@yxue (Member) commented Sep 16, 2019

@sanamsarath could you please paste the config of outbound|8080||httpbin.httpbin.svc.cluster.local (or the target cluster the h2 traffic is going to hit)? I think the cluster definition is missing the http2 protocol options.

I don't think Istio 1.1.x should depend on PR #17073 to generate a correct upstream cluster.

@sanamsarath (Author) commented Sep 16, 2019

@yxue
The traffic is forwarded via the "PassthroughCluster".

PassthroughCluster config:

    {
     "version_info": "2019-09-05T20:05:10Z/29",
     "cluster": {
      "name": "PassthroughCluster",
      "type": "ORIGINAL_DST",
      "connect_timeout": "10s",
      "lb_policy": "ORIGINAL_DST_LB"
     },
     "last_updated": "2019-09-05T20:05:12.543Z"
    }

Additional info: instead of a service name, we are currently using a hard-coded IP address. The curl request looks like this:

 curl -H "Content-Type: application/json" -X PUT "http://10.71.33.249:8080/nudm-uecm/v1/imsi-456123000000586/registrations/smf-registrations/15" -d '{"dnn":"dnn1.att","pduSessionId":15,"plmnId":{"mcc":"456","mnc":"123"},"singleNssai":{"sst":1},"smfInstanceId":"46bb3328-41da-4662-8523-e1b6b84ee19a","supportedFeatures":"1"}' --http2-prior-knowledge -vvv
@yxue (Member) commented Sep 18, 2019

@sanamsarath The problem is the PassthroughCluster. The PassthroughCluster uses HTTP1 as the HTTP proxy. Envoy creates the connection pool according to the features of the cluster:

https://github.com/envoyproxy/envoy/blob/90419781c39d6576206f0b2e21a8fe3db7be874f/source/common/router/router.cc#L540

Since the PassthroughCluster doesn't have the HTTP2 option, the default HTTP1 codec client is used to encode the request. That's why you see an HTTP1 request on the receiver side.
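For illustration, enabling HTTP2 towards an upstream is done with the http2_protocol_options field in the cluster definition. A minimal sketch of what the PassthroughCluster would look like with that option set, in the same dump format as the config above (hypothetical; Istio 1.1.x does not generate this for the PassthroughCluster):

    {
     "cluster": {
      "name": "PassthroughCluster",
      "type": "ORIGINAL_DST",
      "connect_timeout": "10s",
      "lb_policy": "ORIGINAL_DST_LB",
      "http2_protocol_options": {}
     }
    }

With an empty http2_protocol_options block present, Envoy selects the HTTP2 codec for upstream connections to this cluster instead of the default HTTP1 codec.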

@yxue (Member) commented Sep 18, 2019

BTW, if you define the service in the mesh, route_config 8080 should contain one entry whose virtual host contains the IP and hostname of the service. You can check the Pilot log to see if the virtual host is generated correctly.
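For example, if a service foo in namespace default were defined on port 8080, route_config 8080 would gain a virtual host along these lines (a hypothetical sketch following the dump format above; the names and the 10.0.0.10 cluster IP are illustrative):

    {
     "name": "foo.default.svc.cluster.local:8080",
     "domains": [
      "foo.default.svc.cluster.local",
      "foo.default.svc.cluster.local:8080",
      "foo.default",
      "foo.default:8080",
      "10.0.0.10",
      "10.0.0.10:8080"
     ],
     "routes": [
      {
       "match": {
        "prefix": "/"
       },
       "route": {
        "cluster": "outbound|8080||foo.default.svc.cluster.local"
       }
      }
     ]
    }

Requests whose authority or destination IP matches one of these domains are routed to the service's own outbound cluster (which can carry the http2 protocol options) rather than falling through to allow_any and the PassthroughCluster.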

@sanamsarath (Author) commented Sep 18, 2019

> @sanamsarath The problem is the PassthroughCluster. The PassthroughCluster uses HTTP1 as the HTTP proxy. Envoy creates the connection pool according to the features of the cluster:
>
> https://github.com/envoyproxy/envoy/blob/90419781c39d6576206f0b2e21a8fe3db7be874f/source/common/router/router.cc#L540
>
> Since the PassthroughCluster doesn't have the HTTP2 option, the default HTTP1 codec client is used to encode the request. That's why you see an HTTP1 request on the receiver side.

@yxue
Hmmm... I tried HTTP2 requests on some random ports that have no listeners defined in the envoy proxy, and they seem to work fine even though they are hitting the "PassthroughCluster".
For example, I tried HTTP2 traffic on port 15021 and the request was not downgraded to HTTP1.1.

The call flow for this traffic is:

listener "0.0.0.0_15001" -> "PassthroughCluster" -> upstream server
@yxue (Member) commented Sep 18, 2019

@sanamsarath because when you try on port 15021, the traffic is forwarded as a TCP proxy.

---> 15001 (listener with TCP proxy) ---> PassthroughCluster ---> TCP connection pool ---> upstream server
---> 8080 (listener with HCM) ---> PassthroughCluster (no h2 option) ---> HTTP1 connection pool ---> upstream server
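For reference, the 15001 listener hands unmatched traffic to the PassthroughCluster through a plain TCP proxy filter, so Envoy never parses or re-encodes HTTP frames on that path. A simplified sketch of that filter chain in the same dump format as the 8080 listener above (fields trimmed for brevity):

    {
     "filters": [
      {
       "name": "envoy.tcp_proxy",
       "config": {
        "stat_prefix": "PassthroughCluster",
        "cluster": "PassthroughCluster"
       }
      }
     ]
    }

Because envoy.tcp_proxy copies bytes end to end, the client's HTTP2 frames reach the upstream untouched, whereas the 8080 listener terminates HTTP in the HTTP connection manager and re-encodes the request with the upstream cluster's codec.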
@sanamsarath reopened this Sep 18, 2019

@sanamsarath (Author) commented Sep 18, 2019

> BTW, if you define the service in the mesh, route_config 8080 should contain one entry whose virtual host contains the IP and hostname of the service. You can check the Pilot log to see if the virtual host is generated correctly.

Adding a service in the mesh will definitely create a route_config, but our use case is that we don't know the URL or destination address of the application beforehand to create the service; the application learns it during runtime.

@sanamsarath (Author) commented Sep 18, 2019

> ---> 8080 (listener with HCM) ---> PassthroughCluster (no h2 option) ---> HTTP1 connection pool ---> upstream server

@yxue Is it possible to add a match condition in the listener "0.0.0.0_8080" to apply the HTTP filters only to Istio traffic or any other known service defined in the mesh for port 8080, and let all other traffic go via the "PassthroughCluster" as TCP proxy connections? The same could be done with the other listeners that are created for Istio internal service purposes.

Applications running HTTP2 traffic on port 8080 is a general requirement, and there is no guarantee that the destination address of the upstream server is known while the service mesh is being deployed.

@hzxuzhonghu (Member) commented Sep 19, 2019

The allow_any virtual host is there to allow access to any unknown service, such as mesh-external ones.

@sanamsarath (Author) commented Sep 19, 2019

@hzxuzhonghu

I have allow_any enabled in my config. The issue is that the requests are being forwarded by the proxy sidecar as an HTTP proxy instead of a TCP proxy, which is causing the downgrade of HTTP2 to HTTP1.1.

@baracoder commented Feb 19, 2020

Is there a way to enable http2 for the PassthroughCluster? I have a workload which requires direct pod-to-pod communication using gRPC, and it stops working when Istio is enabled because the requests are sent using http/1.1. It works, though, if I declare the port name as tcp-.. instead of grpc-..
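For context, Istio's protocol selection in these versions is driven by the Service port name prefix: ports named grpc-* or http2-* are handled by the HTTP connection manager, while ports named tcp-* are forwarded by the TCP proxy, which leaves the HTTP2 frames untouched. A minimal sketch of the workaround described above, as a Kubernetes Service manifest in JSON form (names and port numbers are illustrative):

    {
     "apiVersion": "v1",
     "kind": "Service",
     "metadata": {
      "name": "my-grpc-service"
     },
     "spec": {
      "selector": {
       "app": "my-grpc-app"
      },
      "ports": [
       {
        "name": "tcp-grpc",
        "port": 8080,
        "targetPort": 8080
       }
      ]
     }
    }

The tcp- prefix makes the sidecar treat the traffic as opaque TCP, so gRPC works end to end, at the cost of losing HTTP-level telemetry and routing for that port.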

@hobbytp commented Mar 21, 2020

@baracoder do you mean "port name" instead of "pod name" in your last statement?

@baracoder commented Mar 21, 2020

@hobbytp sorry, yes, that should have been "port" instead of "pod"
