
TLS Origination does not require ServiceEntry for external host over port 80 to work #21914

Closed
Janesee3 opened this issue Mar 6, 2020 · 4 comments
Labels: area/networking · kind/need more info · lifecycle/automatically-closed · lifecycle/stale

Janesee3 commented Mar 6, 2020

Bug description
In our current setup, we do TLS origination directly from the proxy sidecar instead of at the egress gateway. We have two external hosts that we want to do TLS origination for: edition.cnn.com and github.com.
We found that TLS origination only works as long as we have at least one ServiceEntry or Kubernetes Service in the cluster that listens on port 80 (regardless of hostname).

Steps to reproduce

  1. Create the required ServiceEntry, VirtualService and DestinationRule resources for TLS origination of each host. The VirtualService routes plain-HTTP traffic arriving on port 80 to port 443 of the external host, where the DestinationRule's port-level setting originates TLS:

For edition.cnn.com:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: cnn-egress
spec:
  hosts:
  - edition.cnn.com
  ports:
  # To reproduce the bug, we deliberately leave out the port definition for port 80 (HTTP protocol)
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-for-cnn
spec:
  host: edition.cnn.com
  trafficPolicy:
    portLevelSettings:
      - port:
          number: 443
        tls:
          mode: SIMPLE
          sni: edition.cnn.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: cnn-egress-through-sidecar
spec:
  hosts:
    - edition.cnn.com
  gateways:
    - mesh
  http:
    - match:
        - gateways:
            - mesh
          port: 80
      route:
        - destination:
            host: edition.cnn.com
            port:
              number: 443

For github.com:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: github-egress
spec:
  hosts:
  - github.com
  ports:
  # To reproduce the bug, we deliberately leave out the port definition for port 80 (HTTP protocol)
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-for-github
spec:
  host: github.com
  trafficPolicy:
    portLevelSettings:
      - port:
          number: 443
        tls:
          mode: SIMPLE
          sni: github.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: github-egress-through-sidecar
spec:
  hosts:
    - github.com
  gateways:
    - mesh
  http:
    - match:
        - gateways:
            - mesh
          port: 80
      route:
        - destination:
            host: github.com
            port:
              number: 443
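
For reference, the manifests above can be applied in the usual way; the file names here are illustrative:

kubectl -n my-namespace apply -f cnn-egress.yaml
kubectl -n my-namespace apply -f github-egress.yaml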
  2. If we create a curl pod (with the proxy sidecar injected) and run a curl command against http://edition.cnn.com or http://github.com, we receive a curl: (56) Recv failure: Connection reset by peer error (see the example below).
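
A minimal reproduction from inside the mesh (pod, namespace and container names here are illustrative):

kubectl -n my-namespace exec -it curl-pod -c curl -- curl -sS http://edition.cnn.com/politics
curl: (56) Recv failure: Connection reset by peer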

  3. Run istioctl pc cluster curl-pod.my-namespace, and we can see that the clusters for the two external hosts exist:

...
edition.cnn.com                                   443       -              outbound      STRICT_DNS
github.com                                        443       -              outbound      STRICT_DNS
...

However, looking through the proxy config dump of this curl pod, we cannot find a dynamic active listener for 0.0.0.0:80, and there are no routes for routing edition.cnn.com:80 or github.com:80 to the two clusters listed above.
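
For anyone reproducing this, the absence can also be confirmed with istioctl's proxy-config filters rather than reading the raw dump (a sketch; the pod name is illustrative):

istioctl proxy-config listeners curl-pod.my-namespace --port 80
istioctl proxy-config routes curl-pod.my-namespace --name 80

Both should come back empty at this point.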

  4. Now, add the port 80 definition to ONE of the ServiceEntry resources above. For this example, we add it to the cnn-egress ServiceEntry:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: cnn-egress
spec:
  hosts:
  - edition.cnn.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL

  5. If we check the config dump of the curl pod again, we can see these new additions:

Listener for 0.0.0.0:80 is added:

"dynamic_active_listeners": [
  {
    "version_info": "2020-03-04T02:31:20Z/13",
    "listener": {
     "name": "0.0.0.0_80",
     "address": {
      "socket_address": {
       "address": "0.0.0.0",
       "port_value": 80
      }
     },
     "filter_chains": [
      {
       "filter_chain_match": {
        "prefix_ranges": [
         {
          "address_prefix": "172.23.26.185",
          "prefix_len": 32
         }
        ]
       },
       "filters": [
        {
         "name": "envoy.tcp_proxy",
         "typed_config": {
          "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
          "stat_prefix": "BlackHoleCluster",
          "cluster": "BlackHoleCluster"
         }
        }
       ]
      },
      {
       "filters": [
        {
         "name": "envoy.http_connection_manager",
         "typed_config": {
          "@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
          "stat_prefix": "outbound_0.0.0.0_80",
          "http_filters": [
           {
            "name": "mixer",
            "typed_config": {
             "@type": "type.googleapis.com/istio.mixer.v1.config.client.HttpClientConfig",
             "transport": {
              "network_fail_policy": {
               "policy": "FAIL_CLOSE",
               "base_retry_wait": "0.080s",
               "max_retry_wait": "1s"
              },
              "check_cluster": "outbound|15004||istio-policy.istio-system.svc.cluster.local",
              "report_cluster": "outbound|15004||istio-telemetry.istio-system.svc.cluster.local",
              "report_batch_max_entries": 100,
              "report_batch_max_time": "1s"
             },
             "service_configs": {
              "default": {
               "disable_check_calls": true
              }
             },
             "default_destination_service": "default",
             "mixer_attributes": {
              "attributes": {
               "context.proxy_version": {
                "string_value": "1.4.6"
               },
               "context.reporter.kind": {
                "string_value": "outbound"
               },
               "context.reporter.uid": {
                "string_value": "kubernetes://my-test-pod.my-namespace"
               },
               "source.namespace": {
                "string_value": "my-namespace"
               },
               "source.uid": {
                "string_value": "kubernetes://my-test-pod.my-namespace"
               }
              }
             },
             "forward_attributes": {
              "attributes": {
               "source.uid": {
                "string_value": "kubernetes://my-test-pod.my-namespace"
               }
              }
             }
            }
           },
           {
            "name": "envoy.cors"
           },
           {
            "name": "envoy.fault"
           },
           {
            "name": "envoy.router"
           }
          ],
          "tracing": {
           "operation_name": "EGRESS",
           "client_sampling": {
            "value": 100
           },
           "random_sampling": {
            "value": 100
           },
           "overall_sampling": {
            "value": 100
           }
          },
          "access_log": [
           {
            "name": "envoy.file_access_log",
            "typed_config": {
             "@type": "type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog",
             "path": "/dev/stdout",
             "json_format": {
              "authority": "%REQ(:AUTHORITY)%",
              "bytes_received": "%BYTES_RECEIVED%",
              "bytes_sent": "%BYTES_SENT%",
              "downstream_local_address": "%DOWNSTREAM_LOCAL_ADDRESS%",
              "downstream_remote_address": "%DOWNSTREAM_REMOTE_ADDRESS%",
              "duration": "%DURATION%",
              "istio_policy_status": "%DYNAMIC_METADATA(istio.mixer:status)%",
              "method": "%REQ(:METHOD)%",
              "path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",
              "protocol": "%PROTOCOL%",
              "request_id": "%REQ(X-REQUEST-ID)%",
              "requested_server_name": "%REQUESTED_SERVER_NAME%",
              "response_code": "%RESPONSE_CODE%",
              "response_flags": "%RESPONSE_FLAGS%",
              "route_name": "%ROUTE_NAME%",
              "start_time": "%START_TIME%",
              "upstream_cluster": "%UPSTREAM_CLUSTER%",
              "upstream_host": "%UPSTREAM_HOST%",
              "upstream_local_address": "%UPSTREAM_LOCAL_ADDRESS%",
              "upstream_service_time": "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%",
              "upstream_transport_failure_reason": "%UPSTREAM_TRANSPORT_FAILURE_REASON%",
              "user_agent": "%REQ(USER-AGENT)%",
              "x_forwarded_for": "%REQ(X-FORWARDED-FOR)%"
             }
            }
           }
          ],
          "use_remote_address": false,
          "generate_request_id": true,
          "upgrade_configs": [
           {
            "upgrade_type": "websocket"
           }
          ],
          "stream_idle_timeout": "0s",
          "normalize_path": true,
          "rds": {
           "config_source": {
            "ads": {}
           },
           "route_config_name": "80"
          }
         }
        }
       ]
      }
     ],
     "deprecated_v1": {
      "bind_to_port": false
     },
     "listener_filters_timeout": "0.100s",
     "traffic_direction": "OUTBOUND",
     "continue_on_listener_filters_timeout": true
    },
    "last_updated": "2020-03-04T02:31:20.481Z"
   }
  ]

The previously missing dynamic routes for port 80 are also present now, including the one for github.com, even though we did not declare port 80 in its ServiceEntry:

"dynamic_route_configs": [
  {
    "name": "edition.cnn.com:80",
    "domains": [
      "edition.cnn.com",
      "edition.cnn.com:80"
    ],
    "routes": [
      {
      "match": {
        "prefix": "/",
        "case_sensitive": true
      },
      "route": {
        "cluster": "outbound|443||edition.cnn.com",
        "timeout": "0s",
        "retry_policy": {
        "retry_on": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
        "num_retries": 2,
        "retry_host_predicate": [
          {
          "name": "envoy.retry_host_predicates.previous_hosts"
          }
        ],
        "host_selection_retry_max_attempts": "5",
        "retriable_status_codes": [
          503
        ]
        },
        "max_grpc_timeout": "0s"
      },
      "metadata": {
        "filter_metadata": {
        "istio": {
          "config": "/apis/networking/v1alpha3/namespaces/my-namespace/virtual-service/cnn-egress-through-sidecar"
        }
        }
      },
      "decorator": {
        "operation": "edition.cnn.com:443/*"
      },
      "typed_per_filter_config": {
        "mixer": {
        "@type": "type.googleapis.com/istio.mixer.v1.config.client.ServiceConfig",
        "disable_check_calls": true,
        "mixer_attributes": {},
        "forward_attributes": {}
        }
      }
      }
    ]
  },
  {
    "name": "github.com:80",
    "domains": [
      "github.com",
      "github.com:80"
    ],
    "routes": [
      {
      "match": {
        "prefix": "/",
        "case_sensitive": true
      },
      "route": {
        "cluster": "outbound|443||github.com",
        "timeout": "0s",
        "retry_policy": {
        "retry_on": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
        "num_retries": 2,
        "retry_host_predicate": [
          {
          "name": "envoy.retry_host_predicates.previous_hosts"
          }
        ],
        "host_selection_retry_max_attempts": "5",
        "retriable_status_codes": [
          503
        ]
        },
        "max_grpc_timeout": "0s"
      },
      "metadata": {
        "filter_metadata": {
        "istio": {
          "config": "/apis/networking/v1alpha3/namespaces/my-namespace/virtual-service/github-egress-through-sidecar"
        }
        }
      },
      "decorator": {
        "operation": "github.com:443/*"
      },
      "typed_per_filter_config": {
        "mixer": {
        "@type": "type.googleapis.com/istio.mixer.v1.config.client.ServiceConfig",
        "disable_check_calls": true,
        "mixer_attributes": {},
        "forward_attributes": {}
        }
      }
      }
    ]
  }
]
  6. TLS origination now works when we curl http://edition.cnn.com and http://github.com from the curl pod.
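
For example (pod and namespace names are illustrative):

kubectl -n my-namespace exec -it curl-pod -- curl -sL -o /dev/null -D - http://edition.cnn.com/politics
HTTP/1.1 200 OK

kubectl -n my-namespace exec -it curl-pod -- curl -sL -o /dev/null -D - http://github.com/
HTTP/1.1 200 OK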

Expected Behaviour
After adding the port 80 definition to the cnn-egress ServiceEntry, only TLS origination for http://edition.cnn.com should work; github.com should continue to be rejected.

Also, instead of declaring port 80 on one of the ServiceEntry resources, adding a dummy Kubernetes Service that listens on port 80 produced the same outcome (see the sketch below).
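
A minimal sketch of such a dummy Service (the name and selector are hypothetical; the selector does not even need to match any pods, since merely exposing port 80 appears to be enough to trigger the 0.0.0.0_80 listener):

apiVersion: v1
kind: Service
metadata:
  name: dummy-http        # hypothetical name
  namespace: my-namespace
spec:
  selector:
    app: no-such-app      # need not match any pods
  ports:
  - name: http
    port: 80
    protocol: TCP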

Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)
Istio 1.4.6

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-502bfb", GitCommit:"502bfb383169b124d87848f89e17a04b9fc1f6f0", GitTreeState:"clean", BuildDate:"2020-02-07T01:31:02Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

How was Istio installed?
Via the Istio Helm charts

Environment where bug was observed (cloud vendor, OS, etc)
AWS EKS


hobbytp commented Mar 17, 2020

@Janesee3 I tried step 1 of your example with Istio 1.5 and it works; the 0.0.0.0_443 listener is also generated. After I remove the two ServiceEntry and two DestinationRule resources, the listener is gone.

kubectl -n hobby exec -it $SOURCE_POD -c sleep -- curl -sL -o /dev/null -D - http://edition.cnn.com/politics
HTTP/1.1 200 OK

kubectl -n hobby exec -it $SOURCE_POD -c sleep -- curl -sL -o /dev/null -D - http://github.com/
HTTP/1.1 200 OK

Janesee3 (Author) commented

Hi @hobbytp, do you mean that the 0.0.0.0_80 listener is created even with just the resources in Step 1? Have you made sure that you do not have any Kubernetes Service or ServiceEntry resources that listen on port 80 in the cluster?

Also, if the resources I've listed in Step 1 are sufficient for TLS origination, why does the Istio TLS origination documentation example state that the ServiceEntry should also include a port 80 declaration?

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: edition-cnn-com
spec:
  hosts:
  - edition.cnn.com
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: https-port-for-tls-origination
    protocol: HTTPS
  resolution: DNS

howardjohn (Member) commented

> github.com should continue to be rejected.

Why do you think this is the desired behavior? You have defined a route for github.com on port 80 (an opaque hostname) to the github ServiceEntry on port 443. If anything, the expected behavior would be that it works for both hosts even if there is no config on port 80 at all.

howardjohn added the kind/need more info label on Jul 28, 2020
istio-policy-bot added the lifecycle/stale label on Oct 26, 2020
istio-policy-bot commented

🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2020-07-28. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.

Created by the issue and PR lifecycle manager.

istio-policy-bot added the lifecycle/automatically-closed label on Nov 10, 2020