
Istio 0.8.0 VirtualService missmatch hosts when port is added #6469

Closed
prune998 opened this issue Jun 21, 2018 · 65 comments

@prune998
Contributor

Describe the bug
When using the IngressGateway and defining a VirtualService, the hosts list distinguishes between <host> and <host>:<port>.

Expected behavior
Per the RFC, the HTTP Host: header can be either the plain <host> or <host>:<port>. The VirtualService should treat the two as being the same.
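The requested normalization can be sketched in shell (a toy illustration only, not Istio code; the hostname is the one from this report):

```shell
# Naive Host-header normalization: drop an optional trailing :port.
# (Does not handle IPv6 literals like [::1]:80 — illustration only.)
normalize_host() {
  printf '%s\n' "${1%:*}"
}

normalize_host "authd.test.run1.k8s.xxx.ca:80"   # prints authd.test.run1.k8s.xxx.ca
normalize_host "authd.test.run1.k8s.xxx.ca"      # unchanged
```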

Steps to reproduce the bug
I started with the howto at https://istio.io/docs/tasks/traffic-management/ingress/

my domain name is authd.test.run1.k8s.xx.ca
I defined a Gateway :

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: authd-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "authd.test.run1.k8s.xxx.ca"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "authd.test.run1.k8s.xxx.ca"

And a VirtualService :

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: authd-http
  namespace: test
spec:
  gateways:
  - authd-gateway
  hosts:
  - authd.test.run1.k8s.xxx.ca
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: authd-http
        port:
          number: 1080

Then try with curl:

curl -kvs -HHost:authd.test.run1.k8s.xxx.ca https://authd.test.run1.k8s.xxx.ca -I
HTTP/1.1 200 OK

while :

curl -kvs -HHost:authd.test.run1.k8s.xxx.ca:80 https://authd.test.run1.k8s.xxx.ca -I
HTTP/1.1 404 Not Found

In the 2nd case, I see in the logs :

ingressgateway [2018-06-21T15:57:28.956Z] "HEAD / HTTP/1.1" 404 NR 0 0 2 - "10.132.0.5" "curl/7.47.0" "98c6f630-3d4e-93a1-870b-6b620b50504a" "authd.test.run1.k8s.xxx.ca:80" "-"

To make it work I had to change the VirtualService to :

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: authd-http
  namespace: test
spec:
  gateways:
  - authd-gateway
  hosts:
  - authd.test.run1.k8s.xxx.ca
  - authd.test.run1.k8s.xxx.ca:80
  - authd.test.run1.k8s.xxx.ca:443
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: authd-http
        port:
          number: 1080

Either there is another way to match the hosts (like using *) or the documentation should warn about this.
Note that Go's http/gRPC clients seem to always send the port along with the hostname.
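For illustration, the wildcard variant mentioned above would look like this (a sketch based on the VirtualService in this issue; * matches any Host value, with or without a port, so narrower matching must happen in the route rules):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: authd-http
  namespace: test
spec:
  gateways:
  - authd-gateway
  hosts:
  - "*"          # matches any Host header, with or without :port
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: authd-http
        port:
          number: 1080
```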

Version
k8s 1.10.2
Istio 0.8.0

Is Istio Auth enabled or not?
No mTLS active

Environment
GKE

@sakshigoel12
Contributor

cc @andraxylia

@wansuiye

wansuiye commented Jun 24, 2018

I met the same problem, but I cannot make it work by adding a "host:port" entry to the VirtualService hosts section. When creating the VirtualService, the error message is
"Error: configuration is invalid: 2 errors occurred:

  • domain name "xxx.xxx:9090" invalid (label "io:9090" invalid)
  • xxx.xxx:9090 is not a valid IP"

Environment
Kubernetes

@prune998
Contributor Author

@wansuiye please give the full virtualService Manifest.

@wansuiye

@prune998
virtual service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: consul
spec:
  hosts:
  - "config.xxx.yyy.io"
  - "config.xxx.yyy.io:80"
  gateways:
  - config-gateway
  - mesh
  http:
  - route:
    - destination:
        port:
          number: 9090
        host: config

When creating it, the error message is:

# istioctl create -f config.yml
Error: configuration is invalid: 2 errors occurred:

* domain name "xxx.yyy.io:80" invalid (label "io:80" invalid)
* config.xxx.yyy.io:80 is not a valid IP

env:
kubernetes 1.9.5
istio 0.8.0

@prune998
Contributor Author

It's working with kubectl apply -f config.yml but not with istioctl create -f config.yml.
It may be a bug then... but it's not related to my issue.
Please open another issue.

@Richard87

Richard87 commented Jul 11, 2018

Hi everyone, I was also hit by this bug when following the httpbin tutorial... I ended up with this config that works for me (also, I don't have a load balancer, so I'm accessing the ingress gateway on the NodePort):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/citizenstig/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 8000
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.no"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  gateways:
    - httpbin-gateway
  hosts:
    - "httpbin.example.no"
    - "httpbin.example.no:31380"
  http:
  - route:
    - destination:
        port:
          number: 8000
        host: httpbin
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: httpbin
  name: httpbin
spec:
  selector:
    app: httpbin
  ports:
  - name: http
    port: 8000
    protocol: TCP
    targetPort: 8000
  type: NodePort

Without the extra hosts I would receive this error in the ingressgateway pod:

[2018-07-11T12:02:34.478Z] "GET / HTTP/1.1" 404 - 0 0 0 - "10.244.1.1" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36" "6825cfdb-2699-9d1f-b6c4-07387fb0d32e" "httpbin.example.no:31380" "-"

(notice the requested hostname "httpbin.example.no:31380")

Edit:
Also, if I modify the Host header with Curl, it works without the extra host in VirtualService:
curl -v -k -H Host:httpbin.example.no http://httpbin.example.no:31380

@prune998
Contributor Author

@andraxylia (as you were tagged on this bug), I'm just testing the latest release, using images release-1.0-latest-daily.

What I see now is the exact same behaviour as the one described first in this issue.
Here are the istio-ingressgateway logs for 2 tests, first without the port, second with the port :

[2018-07-19T13:57:39.038Z] "GET / HTTP/1.1" 307 - 0 42 3 1 "10.246.0.17" "curl/7.47.0" "d570e950-4d58-9b2e-8de9-44bb57f0a91d" "authd.dev.cluster2.k8s.xx.ca" "10.20.25.27:1080"

[2018-07-19T13:57:44.131Z] "GET / HTTP/1.1" 404 NR 0 0 2 - "10.246.0.4" "curl/7.47.0" "1d6866c5-48a4-95ef-9ebb-165e036aad51" "authd.dev.cluster2.k8s.xx.ca:443" "-"

BUT

I can't add a host that contains a port number anymore (<host>:<port>)

So this does not work anymore :

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: authd-vs-ingress
spec:
  hosts:
  - "authd.dev.cluster2.k8s.xxx.ca"
  - "authd.dev.cluster2.k8s.xxx.ca:443"
  gateways:
  - ingress-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 1080
        host: authd-http

The error is now :

Error from server: error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"networking.istio.io/v1alpha3\",\"kind\":\"VirtualService\",\"metadata\":{\"annotations\":{},\"name\":\"authd-vs-ingress\",\"namespace\":\"dev\"},\"spec\":{\"gateways\":[\"ingress-gateway\"],\"hosts\":[\"authd.dev.cluster2.k8s.xxx.ca\",\"authd.dev.cluster2.k8s.xxx.ca:443\"],\"http\":[{\"match\":[{\"uri\":{\"prefix\":\"/\"}}],\"route\":[{\"destination\":{\"host\":\"authd-http\",\"port\":{\"number\":1080}}}]}]}}\n"}},"spec":{"hosts":["authd.dev.cluster2.k8s.xxx.ca","authd.dev.cluster2.k8s.xxx.ca:443"]}}
to:
Resource: "networking.istio.io/v1alpha3, Resource=virtualservices", GroupVersionKind: "networking.istio.io/v1alpha3, Kind=VirtualService"
Name: "authd-vs-ingress", Namespace: "dev"
Object: &{map["apiVersion":"networking.istio.io/v1alpha3" "kind":"VirtualService" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"networking.istio.io/v1alpha3\",\"kind\":\"VirtualService\",\"metadata\":{\"annotations\":{},\"name\":\"authd-vs-ingress\",\"namespace\":\"dev\"},\"spec\":{\"gateways\":[\"ingress-gateway\"],\"hosts\":[\"authd.dev.cluster2.k8s.xxx.ca\"],\"http\":[{\"match\":[{\"uri\":{\"prefix\":\"/\"}}],\"route\":[{\"destination\":{\"host\":\"authd-http\",\"port\":{\"number\":1080}}}]}]}}\n"] "resourceVersion":"89496739" "uid":"7995d76a-8b57-11e8-971f-42010a8e0011" "clusterName":"" "creationTimestamp":"2018-07-19T13:27:25Z" "generation":'\x01' "name":"authd-vs-ingress" "namespace":"dev" "selfLink":"/apis/networking.istio.io/v1alpha3/namespaces/dev/virtualservices/authd-vs-ingress"] "spec":map["http":[map["match":[map["uri":map["prefix":"/"]]] "route":[map["destination":map["host":"authd-http" "port":map["number":'\u0438']]]]]] "gateways":["ingress-gateway"] "hosts":["authd.dev.cluster2.k8s.xxx.ca"]]]}
for: "/tmp/kube_deploy/dev-authd-virtualservice.yml": admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: domain name "dev.cluster2.k8s.xx.ca:443" invalid (label "ca:443" invalid)

So :

  • the bug is not resolved
  • the way to circumvent the bug does not work anymore

The conclusion is that you can't use the istio-ingressgateway with gRPC microservices (at least with the Go library).

@prune998
Contributor Author

I tried removing the admission webhook, which allows me to define a host as <hostname>:<port>, but there may be some side effects...

We really need to investigate this issue

@denniseffing

I stumbled upon the same issue when playing around with Istio on my local machine. I am using minikube, so I access my ingress gateway via the node port 31380.

With this local setup, any client that doesn't support specifying the host header (e.g. a web browser without extensions like Chrome Header Hacker) cannot be used to access services in my service mesh. I am currently using the workaround mentioned by @prune998, which allows me to set an additional port in my gateway/virtualservice hosts configuration by using kubectl instead of istioctl. A fix would be much appreciated.

@prune998
Contributor Author

@denniseffing can you please provide your K8s and Istio versions?
Did you try any snapshot version?

@denniseffing

denniseffing commented Jul 25, 2018

@prune998 I am currently using Istio 0.8.0 and K8s 1.10

I didn't try any snapshot version because of your comment that the latest snapshot release uses a new admission hook that also disables the currently used workaround with kubectl.

Do you want me to try a current snapshot version regardless?

@prune998
Contributor Author

OK, thanks @denniseffing. So there's nothing more I can do to help... waiting for the Istio team (@sakshigoel12 or @andraxylia) to take over :)

@wansuiye

In istio 1.10, setting the host with a port also does not work with kubectl...

@ronakpandya7

Hello @prune998 and @wansuiye

We have tried using Istio 1.0, but host:port is not working with either kubectl or istioctl.

Any Help?

@denniseffing

@ronakpandya7 Nothing you can do except waiting for the Istio team to respond to this issue.

@ronakpandya7

ronakpandya7 commented Aug 2, 2018

Hello @denniseffing ,

So how can we get them to look at this issue?
Can we add a feature slug to this issue?

@denniseffing

Current milestone is v1.1, so I'd expect a fix in about a month if they keep up the monthly release schedule.

@ronakpandya7

Hello @denniseffing,

Thanks for your help, we have to wait.
And can you please look into issue #7325 if possible.

@prune998
Contributor Author

prune998 commented Aug 2, 2018

@ronakpandya7 I replied to your other issue, which from my point of view is not an issue but a misconfiguration.
I would really URGE the Istio devs to look into this, as this issue just breaks the gRPC workflow with Istio!
I installed the 1.0.0 release yesterday and will give it a try, but 1.0.0-snapshot-2 still had the same issue, and even worse, Galley was blocking the declaration of a host with a :port attached...

@prune998
Contributor Author

prune998 commented Aug 2, 2018

I think I got it working with the 1.0.0 release, with a little twist...

I added the gateway as :

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingressgateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP2
  - hosts:
    - '*'
    port:
      name: https-default
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt

and the Virtualservice as :

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-vs-ingress
spec:
  gateways:
  - ingressgateway
  hosts:
  - hello.test.xxx.ca
  - hello.test.xxx.ca:443
  - hello.test.xxx.ca:80
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: greeter-server
        port:
          number: 7788

I'm still not 100% sure it's OK, as my gRPC connection keeps closing... but I have to make sure it's not my code...

@prune998
Contributor Author

prune998 commented Aug 2, 2018

OK, it sounds like everything is working AS LONG AS you don't deploy the Galley service. Galley is the one enforcing the webhook that denies the <hostname>:<port> scheme.

I'll try adding the ports in the Gateway instead of using *...

@prune998
Contributor Author

prune998 commented Aug 2, 2018

I can confirm the <hostname>:<port> scheme works as long as Galley is not installed.

@ronakpandya7

ronakpandya7 commented Aug 3, 2018

Hello @prune998 ,
Nice to hear that you found a solution.
So did you create this VirtualService and Gateway using kubectl or istioctl?
And are there any side effects if we do not install Galley?

@ronakpandya7

ronakpandya7 commented Aug 3, 2018

Hello @prune998 ,
After removing Galley we are able to create Gateways and VirtualServices with <hostname>:<port>, but only using kubectl.

I am worried that we removed one component, Galley, from the Istio mesh; does it make any difference?

@wansuiye

It seems that the problem also exists in Istio 1.1. It can only be solved by disabling Galley, but Galley is increasingly important for features such as MCP.

@wansuiye

@prune998

  • While it may currently be normal for that error to be thrown, I think the purpose of the aforementioned patch is to allow us to access services within the mesh when specifying a port. This may be particularly useful for on-prem Kubernetes deployments and in development environments.
  • Port 80 on the istio-ingressgateway is mapped to node port 31380, to allow external traffic into the mesh. Port 80 is not opened on the node.

The curl command that you mentioned didn't work, and I believe that is because HHost:grafana.iddls.com:80 specifies a port, and that doesn't match up to the host key/value pair in the Istio gateway/virtualservice configuration.

I have met the same problem since Istio 0.7. I used NodePort 30080, which is mapped to port 80 on the istio-ingressgateway. The only fix I found is to disable Galley, which I tested on every version of Istio.
However, 1.1 also does not solve it.

@mgxian

mgxian commented Mar 21, 2019

The Istio developers think this behavior is correct; the gateway only supports DNS-format hosts. So I think putting a proxy in front of the ingressgateway in your cluster can solve this problem. This can also help make your ingressgateway more highly available. You can use keepalived + LVS/HAProxy/nginx/Envoy, and so on.

@wansuiye

> The Istio developers think this behavior is correct; the gateway only supports DNS-format hosts. So I think putting a proxy in front of the ingressgateway in your cluster can solve this problem. This can also help make your ingressgateway more highly available. You can use keepalived + LVS/HAProxy/nginx/Envoy, and so on.

However, the HTTP protocol's Host header can include the port, and adding a gateway in front of the ingressgateway just to rewrite the header's host is rather complicated. The proxy would also need to support multiple protocols.

@mgxian

mgxian commented Mar 21, 2019

> The Istio developers think this behavior is correct; the gateway only supports DNS-format hosts. So I think putting a proxy in front of the ingressgateway in your cluster can solve this problem. This can also help make your ingressgateway more highly available. You can use keepalived + LVS/HAProxy/nginx/Envoy, and so on.

> However, the HTTP protocol's Host header can include the port, and adding a gateway in front of the ingressgateway just to rewrite the header's host is rather complicated. The proxy would also need to support multiple protocols.

Your front proxy only needs to support TCP: you can proxy ports 80 and 443 to the ingressgateway's port 30080 (or another port). If your clients call the HTTP service through the proxy and don't carry the port number in the Host header, this will work fine.
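A minimal sketch of such a TCP front proxy, assuming Envoy with a v2-style static config (the upstream address node.example.internal and the port numbers are placeholders):

```yaml
# Minimal TCP front proxy: listen on :80, forward raw TCP to the
# ingressgateway NodePort, leaving the Host header untouched.
static_resources:
  listeners:
  - address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.tcp_proxy
        config:
          stat_prefix: ingress_tcp
          cluster: istio_ingressgateway
  clusters:
  - name: istio_ingressgateway
    connect_timeout: 1s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    hosts:
    - socket_address: { address: node.example.internal, port_value: 30080 }
```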

@wansuiye

@mgxian it's feasible.
However, port 80 is not open on our nodes, and it's only a very small change.

@mgxian

mgxian commented Mar 21, 2019

@wansuiye suppose your ingressgateway is exposed at NodePort 30080; there are two ways to deploy your proxy:

  1. Deploy it in the k8s cluster, and expose it on the k8s node's port 80 by specifying hostNetwork or hostPort in the k8s YAML file.
  2. Deploy it outside the k8s cluster, listening on port 80, and proxy TCP traffic to the node's port 30080.
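Option 1 could look like this pod spec fragment (image and names are hypothetical; any TCP-capable proxy works):

```yaml
# DaemonSet pod spec fragment: a TCP proxy bound to each node's port 80,
# forwarding to the ingressgateway NodePort 30080.
spec:
  containers:
  - name: tcp-front-proxy
    image: haproxy:1.9          # placeholder; any TCP proxy will do
    ports:
    - name: http
      containerPort: 80
      hostPort: 80              # exposes the proxy on the node's port 80
```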

@ghost

ghost commented Mar 21, 2019

@wansuiye @mgxian This is actually how we got around the issue. We have a DaemonSet proxy deployment whose only job is to accept port 80 and 443 traffic and pass it to the ingress gateway at the specific node port. While it would be nice not to have to deploy that extra piece, I guess it's just how it has to be done for on-prem deployments.

It's an interesting thing that the gateway/proxies only support DNS format. Have you ever tried to use Prometheus Operator in an Istio deployment? We had to create specific regex changes so that Prometheus would scrape using DNS instead of IP, because Istio could not route by IP. But I think this is more of an issue with Prometheus than Istio.

@prune998
Contributor Author

Hi there.
This issue is closed as it was solved in 1.0.3, and we're now at 1.1. You should upgrade instead of adding workarounds...

@denniseffing

@prune998 Can you confirm that you are able to create a VirtualService with hosts that include a port when using Istio 1.1? I am asking because wansuiye said that this is still not possible and mgxian said that this is indeed intended by the Istio team and therefore won't be fixed.

@prune998
Contributor Author

It's not possible, but it's no longer needed.
Hosts like www.your-domain.com and www.your-domain.com:443 are now treated the same, so you don't need to add the :port suffix.
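The fixed matching semantics can be illustrated with a toy shell check (not Istio's actual implementation): two hosts compare equal once any trailing :port is ignored:

```shell
# Toy equality check mirroring the fixed behavior: ignore an optional :port
# on either side before comparing (naive — no IPv6 literal handling).
hosts_match() {
  [ "${1%:*}" = "${2%:*}" ]
}

hosts_match "www.your-domain.com:443" "www.your-domain.com" && echo "match"
hosts_match "other-domain.com" "www.your-domain.com" || echo "no match"
```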

@denniseffing

Thanks for clearing that up! I guess that works fine too.

@mgxian

mgxian commented Mar 21, 2019

@prune998 Are ports other than 80 and 443 supported? Say I create a VirtualService with host www.your-domain.com referring to a Gateway with host www.your-domain.com listening on port 80, but the gateway is actually exposed on NodePort 30080. Can I access the service using curl www.your-domain.com:30080?

@prune998
Contributor Author

The Gateway filters on host/port and forwards to the right VirtualService.
The VirtualService does not care about the port part; the fact that it had to was a bug, which is now corrected.

@mgxian your question is more "can I access the gateway using a NodePort". I would say yes, it should work, but I never tried it, so you had better check for yourself. As long as the Gateway is matched and forwards the request to your VirtualService, you can define the VirtualService without a port in the host matching pattern.

@blaketastic2

I have confirmed that this is still an issue with non-standard ports (8443) on both 1.0.7 and 1.1.5. It's very easy for me to replicate with the following:

(In this example, we'll just terminate SSL at the ELB)

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: foo
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-80
      protocol: HTTP
    hosts:
    - api.xxxx.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtual-service
  namespace: foo
spec:
  hosts:
  - api.xxxx.com
  gateways:
  - gateway
  http:
  - route:
    - destination:
        host: my-service.foo.svc.cluster.local
        subset: release

curl -I -H "Host: api.xxxx.com" https://api.xxxx.com:8443/foo
Returns a 404
curl -I -H "Host: api.xxxx.com" https://api.xxxx.com/foo
Returns a 200

Next, change the "hosts" to "*" on the VirtualService:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtual-service
  namespace: foo
spec:
  hosts:
  - "*"
  gateways:
  - gateway
  http:
  - route:
    - destination:
        host: my-service.foo.svc.cluster.local
        subset: release

curl -I -H "Host: api.xxxx.com" https://api.xxxx.com:8443/foo
Returns a 200
curl -I -H "Host: api.xxxx.com" https://api.xxxx.com/foo
Returns a 404

@prune998
Contributor Author

@blaketastic2 I'm not sure I clearly understand your setup.
If your gateway only listens on port 80, you clearly can't connect to it on port 8443, whatever you define in your VirtualService. Maybe you should share your full Istio setup so we understand clearly.

@blaketastic2

blaketastic2 commented May 16, 2019

As I mentioned, for this example, we're doing TLS termination on the ELB, so 8443 and 443 map to 80.

@blaketastic2

Here's the config from the helm chart:

gateways:
  istio-ingressgateway:
    enabled: true
    ports:
    - port: 80
      name: http
      targetPort: 80
    - port: 443
      name: https
      targetPort: 80
    - port: 8443
      name: https-app
      targetPort: 80

@darkedges

darkedges commented May 25, 2019

I am experiencing the same with Envoy 1.10.0.

If I use the following

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "api.xxxx.com"
  gateways:
  - istio-system/istio-autogenerated-k8s-ingress
  - mesh
  http:
  - match:
    - uri:
        prefix: /hello
    route:
    - destination:
        host: helloworld

Then in my helloworld pod's istio-proxy instance config I get this entry:

{
     "version_info": "2019-05-25T23:13:09Z/72",
     "route_config": {
      "name": "80",
      "virtual_hosts": [
       {
        "name": "api.xxxx.com:80",
        "domains": [
         "api.xxxx.com",
         "api.xxxx.com:80"
        ],
        "routes": [
         {
          "match": {
           "prefix": "/hello"
          },
          "route": {
           "cluster": "outbound|80||helloworld.default.svc.cluster.local",
           "timeout": "0s",
           "retry_policy": {
            "retry_on": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
            "num_retries": 2,
            "retry_host_predicate": [
             {
              "name": "envoy.retry_host_predicates.previous_hosts"
             }
            ],
            "host_selection_retry_max_attempts": "3",
            "retriable_status_codes": [
             503
            ]
           },
           "max_grpc_timeout": "0s"
          },
          "metadata": {
           "filter_metadata": {
            "istio": {
             "config": "/apis/networking/v1alpha3/namespaces/default/virtual-service/myapp"
            }
           }
          },
          "decorator": {
           "operation": "helloworld.default.svc.cluster.local:80/hello*"
          },
          "per_filter_config": {
           "mixer": {
            "forward_attributes": {
             "attributes": {
              "destination.service.host": {
               "string_value": "helloworld.default.svc.cluster.local"
              },
              "destination.service.uid": {
               "string_value": "istio://default/services/helloworld"
              },
              "destination.service.namespace": {
               "string_value": "default"
              },
              "destination.service.name": {
               "string_value": "helloworld"
              }
             }
            },
            "mixer_attributes": {
             "attributes": {
              "destination.service.namespace": {
               "string_value": "default"
              },
              "destination.service.name": {
               "string_value": "helloworld"
              },
              "destination.service.uid": {
               "string_value": "istio://default/services/helloworld"
              },
              "destination.service.host": {
               "string_value": "helloworld.default.svc.cluster.local"
              }
             }
            },
            "disable_check_calls": true
           }
          }
         }
        ]
       },

But I cannot access the resource.

If I use the following:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "api.xxxx.com:8433"
  gateways:
  - istio-system/istio-autogenerated-k8s-ingress
  - mesh
  http:
  - match:
    - uri:
        prefix: /hello
    route:
    - destination:
        host: helloworld

then no entry is generated in the pod's config.

I am using CloudFlare to proxy requests to a development instance only accessible via port 8433.

@darkedges

If I create an entry with kubectl edit gateway -n istio-system

  - hosts:
    - '*'
    port:
      name: https-default
      number: 8443
      protocol: HTTPS
    tls:
      credentialName: ingress-cert
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds

and an entry via kubectl edit svc istio-ingressgateway -n istio-system

  - name: https
    nodePort: 31390
    port: 443
    protocol: TCP
    targetPort: 443
  - name: https2
    nodePort: 31391
    port: 8443
    protocol: TCP
    targetPort: 8443

and use

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "api.xxxx.com"
  gateways:
  - istio-system/istio-autogenerated-k8s-ingress
  http:
  - match:
    - uri:
        prefix: /hello
    route:
    - destination:
        host: helloworld

It still does not work if I use api.xxxx.com:8443.

It is a workaround for my configuration use case, i.e. port-forwarding router:8443 to istio-ingressgateway:443.
