Specifying multiple HTTP match uri in Istio Canary deployment via Flagger #434
Flagger doesn't have support for multiple matches; the Prometheus scraping is configured with pod annotations:

```yaml
prometheus.io/path: "/myservice/monitor"
prometheus.io/port: "9090"
prometheus.io/scrape: "true"
```

Here is an example: https://github.com/weaveworks/flagger/blob/master/kustomize/podinfo/deployment.yaml#L20-L22
@stefanprodan Thanks for the reply. Then how can I connect to Prometheus? Which URL would it be?

```yaml
match:
- uri:
    prefix: /myservice
- uri:
    prefix: /api/myservice
rewrite:
  uri: /myservice
route:
- destination:
    host: myservice
    port:
      number: 8080
  corsPolicy:
    allowOrigin:
    - "*"
    allowMethods:
    - POST
    - PATCH
    allowCredentials: false
    allowHeaders:
    - X-Tenant-Identifier
    - Content-Type
    - authorization
    maxAge: "24h"
```

Can I have two prefixes?
Prometheus is pull-based: it discovers the pods based on annotations and scrapes the app, not the other way around. You don't want to load-balance the Prometheus scraping, since you'll get inconsistent metrics. My advice is to add the Prometheus annotations inside the deployments and remove the matches for port 9090. By the way, your Canary YAML is invalid; there is no corsPolicy under match. Please see the docs for how a Canary definition looks.
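To make the annotation-based scraping concrete, here is a minimal sketch of where the annotations go in a deployment, assuming the service name and scrape path used earlier in this thread:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
spec:
  template:
    metadata:
      annotations:
        # Prometheus discovers and scrapes the pods directly via these
        # annotations, so no VirtualService match for port 9090 is needed.
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
        prometheus.io/path: "/myservice/monitor"
```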
Yeah, it has an extra space in corsPolicy, I'll fix that. Is it possible to use several prefixes in a Canary?
Yes, that should work.
It seems it is not working the way I tried; it just deploys the last prefix:

```yaml
match:
- uri:
    prefix: /api/myservice
    prefix: /myservice
```
That's not a valid Istio match. Use:

```yaml
match:
- uri:
    prefix: /api/myservice
- uri:
    prefix: /myservice
```
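In the Canary resource itself, the two prefixes would sit under the service section that Flagger turns into a virtual service. A sketch based on Flagger's Canary spec, with names taken from this thread:

```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: myservice
spec:
  service:
    port: 8080
    # two separate uri entries, one per prefix
    match:
    - uri:
        prefix: /myservice
    - uri:
        prefix: /api/myservice
    rewrite:
      uri: /myservice
```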
Thanks, that worked. But the service gives a 503 Service Unavailable error with external DNS, which means the routing is not working, although it works fine via port-forwarding. You mentioned that Istio uses port discovery and that it's not necessary to specify ports for the virtual service created by the Canary; how can I make sure it is doing the right routing?
You can look at the ClusterIPs, virtual service, and destination rules generated by Flagger to make sure it did the right thing. Are you getting the 503 at the gateway level?
@stefanprodan Yes, it is a gateway issue, as port-forwarding locally works fine. I have the ports in service.yaml:

```yaml
spec:
  clusterIP: 10.0.140.144
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: debug
    port: 8000
    protocol: TCP
    targetPort: 8000
  - name: monitoring
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app: myservice-primary
```

and the virtual service looks like this:

```yaml
spec:
  gateways:
  - mygateway.istio-system.svc.cluster.local
  hosts:
  - api.mydomain.come
  - myservice
  http:
  - corsPolicy:
      allowHeaders:
      - X-Tenant-Identifier
      - Content-Type
      - Authorization
      allowMethods:
      - GET
      - POST
      - PATCH
      allowOrigin:
      - '*'
      maxAge: 24h
    match:
    - uri:
        prefix: /api/myservice
    - uri:
        prefix: /myservice
    route:
    - destination:
        host: myservice-primary
      weight: 100
    - destination:
        host: myservice-canary
      weight: 0
```

and the destination rule for myservice-primary is like this:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  creationTimestamp: "2020-02-11T11:41:41Z"
  generation: 1
  name: myservice-primary
  namespace: default
  ownerReferences:
  - apiVersion: flagger.app/v1alpha3
    blockOwnerDeletion: true
    controller: true
    kind: Canary
    name: myservice
    uid: 69ca19aa-4cc3-11ea-979f-76b1ead32f2a
  resourceVersion: "1043445"
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/default/destinationrules/myservice-primary
  uid: 786a3162-4cc3-11ea-979f-76b1ead32f2a
spec:
  host: myservice-primary
  trafficPolicy:
    tls:
      mode: DISABLE
```

I cannot see any port under the virtual service, which I guess might be the cause, but you said Istio does port discovery itself. Do you have any idea what is missing?
The Istio port discovery works inside the mesh; you could test it by running a pod inside the mesh and curling ports 8000 and 9090. I think there is no need for these ports to be exposed; Prometheus connects directly to the pods based on the annotations. Set `portDiscovery: false`.
That worked by deactivating portDiscovery. But why was it a problem? Should I never activate portDiscovery?
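The toggle in question lives under the Canary's service section; a minimal sketch (field name per Flagger's Canary spec, other values from this thread):

```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: myservice
spec:
  service:
    port: 8080
    # disable Istio port discovery so only the declared port is routed
    portDiscovery: false
```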
If your app is exposed only inside the mesh, then you can use portDiscovery. The Istio gateway can't map port 80/443 to more than one port, but the internal gateway, called `mesh`, can.
No, the annotations are only for Prometheus. You should be able to connect to 8000 with
Thanks a lot for your help :)
I think Flagger could detect that the internal gateway is not used and set the port in the virtual service. Could you run some tests for me if I make a patch?
Sure, I can try, but what kind of patch is it?
The patch is here: ea4d9ba, but it takes 15 minutes for CI to build and push an image to Docker Hub. I'll give you a ping when it's ready. Thanks!
Ok, here it is:
Ok, sure, thank you :) I'm going to test it today. But here is the way I have installed Flagger; I'm not sure how I can install the one you sent me via helm upgrade:

```sh
helm repo add flagger https://flagger.app
kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml
helm upgrade -i flagger flagger/flagger \
  --namespace=istio-system \
  --set crd.create=false \
  --set meshProvider=istio \
  --set metricsServer=http://prometheus:9090
```
Here is the upgrade command:
I have tested it, but it has some bugs: it doesn't create the virtual service and it doesn't show any status info under the canary:
Please check the Flagger logs.
```sh
kubectl logs flagger-7c8ccfd59d-6zszr -c flagger -n istio-system
```
Ah yes, you need to use the Helm chart from the master branch, since the RBAC changed.
@stefanprodan Nice, that worked. I have one more question. When I update myservice with a new image, the canary starts to progress and then fails with this error:

```
Warning Synced 14m flagger Halt advancement myservice-primary.default waiting for rollout to finish: observed deployment generation less then desired generation
```

although I have flagger-loadtester installed (via `helm repo add flagger https://flagger.app`) with these webhooks:

```yaml
webhooks:
{{- if .Values.canary.loadtest.enabled }}
- name: load-test-get
  url: {{ .Values.canary.loadtest.url }} # http://flagger-loadtester.default/
  timeout: 5s
  metadata:
    cmd: "hey -z 1m -q 5 -c 2 http://myservice.default:8080"
- name: load-test-post
  url: {{ .Values.canary.loadtest.url }}
  timeout: 5s
  metadata:
    cmd: "hey -z 1m -q 5 -c 2 -m POST -d '{\"test\": true}' http://myservice.default:8080/echo"
{{- end }}
```

Is there something I am missing here? I know the image is working fine.
Exec into the load tester pod and run the hey commands to see if it can reach your app:

My guess is that those routes don't work for your app. Do you have an /echo endpoint? I think your routes should be:
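For an app that only serves GET on /myservice/monitor (the scrape path given earlier in the thread), a guess at a webhook that generates traffic against an existing route might look like this; the URL and path are taken from this thread, not verified:

```yaml
webhooks:
- name: load-test-get
  url: http://flagger-loadtester.default/
  timeout: 5s
  metadata:
    # hey must target a path the app actually serves; adjust to your own routes
    cmd: "hey -z 1m -q 5 -c 2 http://myservice.default:8080/myservice/monitor"
```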
No, I don't have /echo in myservice. Does the /echo API work like a healthcheck? In that case, do I need an API that returns 200 OK, or can its status be anything, like 401?
Yes, with the patch I made it should be possible to use any port in the load test.
I just tested the load tester once by exec'ing into it, which seems to work fine:
and once by deploying a new image; however, I still get the same error and the canary fails:

```
Warning Synced 65s (x10 over 3m20s) flagger Halt advancement no values found for istio metric request-success-rate probably myservice.default is not receiving traffic
```

I can only GET /myservice/monitor, so I deactivated load-test-post in the canary.yaml file; is that alright? Besides, when I port-forward to myservice I get a 200 OK response, but I get a 401 Unauthorized message when I send a GET request to my public DNS. Can you help with that?
As for the 401, can you post the gateway YAML and the generated virtual service here, please?
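The request-success-rate check from the error above is one of Flagger's builtin metrics; in the canary analysis it is configured roughly like this (a sketch based on Flagger's docs; the thresholds and weights are illustrative, not from this thread):

```yaml
canaryAnalysis:
  interval: 1m
  threshold: 5
  maxWeight: 50
  stepWeight: 10
  metrics:
  - name: request-success-rate
    # minimum percentage of non-5xx responses during the analysis interval
    threshold: 99
    interval: 1m
```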
@stefanprodan I have installed Istio like this:

```sh
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.4.3 sh -
```

If I run myservice the usual way, using a virtual service and port routing, it works fine; however, I get a 401 Unauthorized using the canary deployment with the public DNS. Here is my gateway:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"my-gateway","namespace":"istio-system"},"spec":{"selector":{"istio":"ingressgateway"},"servers":[{"hosts":["*mydomain.com"],"port":{"name":"http","number":80,"protocol":"HTTP"}},{"hosts":["*.mydomain.com"],"port":{"name":"https","number":443,"protocol":"HTTPS"},"tls":{"mode":"SIMPLE","privateKey":"/etc/istio/ingressgateway-certs/tls.key","serverCertificate":"/etc/istio/ingressgateway-certs/tls.crt"}}]}}
  creationTimestamp: "2020-01-24T19:02:54Z"
  generation: 2
  name: my-gateway
  namespace: istio-system
  resourceVersion: "169719"
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/istio-system/gateways/my-gateway
  uid: 205c6ce6-3edc-11ea-bd64-c28eb21a0594
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*.mydomain.come'
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - '*.mydomain.com'
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
```

And the virtual service after the canary deployment looks like this:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  creationTimestamp: "2020-02-12T11:32:40Z"
  generation: 3
  name: myservice
  namespace: default
  ownerReferences:
  - apiVersion: flagger.app/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Canary
    name: myservice
    uid: 5649e2e2-4d8b-11ea-8367-2eaee74ab378
  resourceVersion: "2907945"
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/default/virtualservices/myservice
  uid: 60b4c06e-4d8b-11ea-8367-2eaee74ab378
spec:
  gateways:
  - my-gateway.istio-system.svc.cluster.local
  hosts:
  - api.mydomain.com
  - myservice
  http:
  - corsPolicy:
      allowHeaders:
      - X-Tenant-Identifier
      - Content-Type
      - Authorization
      allowMethods:
      - GET
      - POST
      - PATCH
      allowOrigin:
      - '*'
      maxAge: 24h
    match:
    - uri:
        prefix: /myservice
    route:
    - destination:
        host: myservice-primary
        port:
          number: 8080
      weight: 100
    - destination:
        host: myservice-canary
        port:
          number: 8080
      weight: 0
```

I have added the Prometheus annotations in the deployment.yaml file as you mentioned before.
Can you please remove CORS from the canary? I suspect the 401 comes from there.
I just removed the corsPolicy to test it; it still gives me the 401 error. I do need the corsPolicy for traffic on port 8080.
Hmm ok, so if you set
It has the same issue, both the 401 Unauthorized error and the load-tester error when deploying a new image, and the canary deployment fails. It seems Flagger is not working for the Istio canary deployment.
I'm confused; yesterday you said that it worked by deactivating portDiscovery.
Yeah, that was my bad. I thought I needed authentication for /myservice/monitor, so when I got the 500 error message yesterday I assumed it was working as it was supposed to; but when I found out there is no need for authentication, I started getting the 401 Unauthorized error. So it has never actually worked.
Ok, can you post the canary YAML here? I'll try to reproduce this on my cluster.
Or you can join Flagger's Slack and give me a ping: https://slack.weave.works/
I am going to use automatic canary deployment, so I tried to follow the process via Flagger.
Here was my VirtualService file for routing:
The routing part looks like this:
However, as I found that Flagger overwrites the virtual service, I removed this file and modified the canary.yaml file based on my requirements, but I get a YAML error:
Can anyone help with this issue?