
Specifying multiple HTTP match uri in Istio Canary deployment via Flagger #434

Closed
mfarrokhnia opened this issue Feb 11, 2020 · 39 comments · Fixed by #436

Comments

@mfarrokhnia commented Feb 11, 2020

I want to use automated canary deployments, so I tried to follow the process via Flagger.
Here was my VirtualService file for routing:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ .Values.project }}
  namespace: {{ .Values.service.namespace }}
spec:
  hosts:
    - {{ .Values.subdomain }}
  gateways:
    - mygateway.istio-system.svc.cluster.local
  http:
    {{- range $key, $value := .Values.routing.http }}
    - name: {{ $key }}
{{ toYaml $value | indent 6 }}
    {{- end }}

The routing values look like this:

http:
    r1:
      match:
        - uri:
            prefix: /myservice/monitor
      route:
        - destination:
            host: myservice
            port:
              number: 9090
    r2:
      match:
        - uri:
            prefix: /myservice
      route:
        - destination:
            host: myservice
            port:
              number: 8080
      corsPolicy:
        allowCredentials: false
        allowHeaders:
        - X-Tenant-Identifier
        - Content-Type
        - Authorization
        allowMethods:
        - GET
        - POST
        - PATCH
        allowOrigin:
        - "*"
        maxAge: 24h

However, since I found that Flagger overwrites the VirtualService, I removed this file and modified the canary.yaml based on my requirements, but I get a YAML error:

{{- if .Values.canary.enabled }}
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: {{ .Values.project }}
  namespace: {{ .Values.service.namespace }}
  labels:
    app: {{ .Values.project }}
    chart: {{ template "myservice-chart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name:  {{ .Values.project }}
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name:  {{ .Values.project }}    
  service:
    port: 8080
    portDiscovery: true
    {{- if .Values.canary.istioIngress.enabled }}
    gateways:
    -  {{ .Values.canary.istioIngress.gateway }}
    hosts:
    - {{ .Values.canary.istioIngress.host }}
    {{- end }}
    trafficPolicy:
      tls:
        # use ISTIO_MUTUAL when mTLS is enabled
        mode: DISABLE
    # HTTP match conditions (optional)
    match:
      - uri:
          prefix: /myservice
    # cross-origin resource sharing policy (optional)
      corsPolicy:
        allowOrigin:
          - "*"
        allowMethods:
          - GET
          - POST
          - PATCH
        allowCredentials: false
        allowHeaders:
          - X-Tenant-Identifier
          - Content-Type
          - Authorization
        maxAge: 24h
      - uri:
          prefix: /myservice/monitor
  canaryAnalysis:
    interval: {{ .Values.canary.analysis.interval }}
    threshold: {{ .Values.canary.analysis.threshold }}
    maxWeight: {{ .Values.canary.analysis.maxWeight }}
    stepWeight: {{ .Values.canary.analysis.stepWeight }}
    metrics:
    - name: request-success-rate
      threshold: {{ .Values.canary.thresholds.successRate }}
      interval: 1m
    - name: request-duration
      threshold: {{ .Values.canary.thresholds.latency }}
      interval: 1m
    webhooks:
      {{- if .Values.canary.loadtest.enabled }}
      - name: load-test-get
        url: {{ .Values.canary.loadtest.url }}
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 5 -c 2 http://myservice.default:8080"
      - name: load-test-post
        url: {{ .Values.canary.loadtest.url }}
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 5 -c 2 -m POST -d '{\"test\": true}' http://myservice.default:8080/echo"
      {{- end }}  
{{- end }}

Can anyone help with this issue?

@stefanprodan commented Feb 11, 2020

Flagger doesn't support multiple match blocks; service.match only accepts URI matches that apply to all ports. Instead of using match rules for Prometheus scraping, why not add annotations on the deployment?

  prometheus.io/path: "/myservice/monitor"
  prometheus.io/port: "9090"
  prometheus.io/scrape: "true"

Here is an example: https://github.com/weaveworks/flagger/blob/master/kustomize/podinfo/deployment.yaml#L20-L22
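
For context, those annotations belong on the Deployment's pod template, roughly like this (a minimal sketch; the names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
spec:
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
      annotations:
        # Prometheus discovers and scrapes the pod directly via these annotations
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
        prometheus.io/path: "/myservice/monitor"
    spec:
      containers:
      - name: myservice
        image: myregistry/myservice:latest  # illustrative image
        ports:
        - containerPort: 8080
        - containerPort: 9090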

@mfarrokhnia commented Feb 11, 2020

@stefanprodan Thanks for the reply. Then how can I connect to Prometheus? Which URL would it be?
What about this case:

match:
  - uri:
      prefix: /myservice
  - uri:
      prefix: /api/myservice
rewrite:
  uri: /myservice
route:
  - destination:
      host: myservice
      port:
        number: 8080
corsPolicy:
  allowOrigin:
    - "*"
  allowMethods:
    - POST
    - PATCH
  allowCredentials: false
  allowHeaders:
    - X-Tenant-Identifier
    - Content-Type
    - authorization
  maxAge: "24h"

can I have two prefixes?

@stefanprodan

> Then how can I connect to Prometheus?

Prometheus is pull-based: it discovers the pods based on annotations and scrapes the app, not the other way around. You don't want to load balance the Prometheus scraping, since you'll get inconsistent metrics.

My advice is to add the Prometheus annotations inside the deployments and remove the matches for port 9090.

By the way, your Canary YAML is invalid; there is no corsPolicy under match. Please see the docs for how a Canary definition looks.
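
For reference, in a Canary definition match and corsPolicy are siblings under service, roughly like this (a minimal sketch based on the values already used in this thread):

  service:
    port: 8080
    portDiscovery: true
    # HTTP match conditions (optional)
    match:
    - uri:
        prefix: /myservice
    # CORS policy is a sibling of match, not nested under it
    corsPolicy:
      allowOrigin:
      - "*"
      allowMethods:
      - GET
      - POST
      - PATCH
      allowCredentials: false
      allowHeaders:
      - X-Tenant-Identifier
      - Content-Type
      - Authorization
      maxAge: 24h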

@mfarrokhnia

> Prometheus is pull-based: it discovers the pods based on annotations and scrapes the app, not the other way around. You don't want to load balance the Prometheus scraping, since you'll get inconsistent metrics.
>
> My advice is to add the Prometheus annotations inside the deployments and remove the matches for port 9090.
>
> By the way, your Canary YAML is invalid; there is no corsPolicy under match. Please see the docs for how a Canary definition looks.

Yeah, it had extra indentation under corsPolicy; I'll fix that. Is it possible to use several prefixes in a Canary?

@stefanprodan

> Is it possible to use several prefixes in a Canary?

Yes, that should work.

@mfarrokhnia commented Feb 11, 2020

> Is it possible to use several prefixes in a Canary?
>
> Yes, that should work.

It doesn't seem to work the way I tried it; it only applies the last prefix:

match:
      - uri:
          prefix: /api/myservice
          prefix: /myservice

@stefanprodan

That's not a valid Istio match. Use:

    match:
      - uri:
          prefix: /api/myservice
      - uri:
          prefix: /myservice

@mfarrokhnia commented Feb 11, 2020

> That's not a valid Istio match. Use:
>
>     match:
>       - uri:
>           prefix: /api/myservice
>       - uri:
>           prefix: /myservice

Thanks, that worked. But the service returns a 503 Service Unavailable error via the external DNS, which means the routing is not working, although it works fine with port-forwarding. You mentioned that Istio uses port discovery and that there is no need to specify ports in the VirtualService that the Canary creates. How can I make sure it is doing the right routing?

@stefanprodan

You can look at the ClusterIP services, virtual service, and destination rules generated by Flagger to make sure it did the right thing. Are you getting the 503 at the gateway level?
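
For example (assuming the canary is named myservice in the default namespace):

kubectl -n default get svc myservice myservice-primary myservice-canary
kubectl -n default get virtualservice myservice -o yaml
kubectl -n default get destinationrule myservice-primary myservice-canary -o yaml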

@mfarrokhnia commented Feb 11, 2020

@stefanprodan Yes, it is a gateway issue, since port-forwarding locally works fine. I have the ports in service.yaml:

spec:
  clusterIP: 10.0.140.144
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: debug
    port: 8000
    protocol: TCP
    targetPort: 8000
  - name: monitoring
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app: myservice-primary

and the VirtualService looks like this:

spec:
  gateways:
  - mygateway.istio-system.svc.cluster.local
  hosts:
  - api.mydomain.come
  - myservice
  http:
  - corsPolicy:
      allowHeaders:
      - X-Tenant-Identifier
      - Content-Type
      - Authorization
      allowMethods:
      - GET
      - POST
      - PATCH
      allowOrigin:
      - '*'
      maxAge: 24h
    match:
    - uri:
        prefix: /api/myservice
    - uri:
        prefix: /myservice
    route:
    - destination:
        host: myservice-primary
      weight: 100
    - destination:
        host: myservice-canary
      weight: 0

and destination rule for myservice-primary is like this:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  creationTimestamp: "2020-02-11T11:41:41Z"
  generation: 1
  name: myservice-primary
  namespace: default
  ownerReferences:
  - apiVersion: flagger.app/v1alpha3
    blockOwnerDeletion: true
    controller: true
    kind: Canary
    name: myservice
    uid: 69ca19aa-4cc3-11ea-979f-76b1ead32f2a
  resourceVersion: "1043445"
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/default/destinationrules/myservice-primary
  uid: 786a3162-4cc3-11ea-979f-76b1ead32f2a
spec:
  host: myservice-primary
  trafficPolicy:
    tls:
      mode: DISABLE

I can't see any port under the VirtualService route destinations, which I guess might be the cause, but you said Istio does port discovery itself. Do you have any idea what is missing?

@stefanprodan

Istio port discovery works inside the mesh; you could test it by running a pod inside the mesh and curling ports 8000 and 9090. I think there is no need for these ports to be exposed, since Prometheus connects directly to the pods based on the annotations. Set portDiscovery: false and your service should work with the public gateway.
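
One way to run that test from a temporary pod inside the mesh (a sketch; the curl image is just one option):

kubectl -n default run curl-test --rm -it --image=curlimages/curl --command -- sh
# then, from inside the pod:
curl -sv http://myservice:9090/myservice/monitor
curl -sv http://myservice:8000/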

@mfarrokhnia

> Istio port discovery works inside the mesh; you could test it by running a pod inside the mesh and curling ports 8000 and 9090. I think there is no need for these ports to be exposed, since Prometheus connects directly to the pods based on the annotations. Set portDiscovery: false and your service should work with the public gateway.

That worked after deactivating portDiscovery. But why was it a problem? Should I never activate portDiscovery?
And one more thing: do I need to use an annotation for the debug port 8000 as well?

@stefanprodan

> That worked after deactivating portDiscovery. But why was it a problem? Should I never activate portDiscovery?

If your app is exposed only inside the mesh then you can use portDiscovery. The Istio gateway can't map port 80/443 to more than one port, but the internal gateway called mesh can.

> Do I need to use an annotation for the debug port 8000 as well?

No, the annotations are only for Prometheus. You should be able to connect to 8000 with kubectl port-forward deploy/app 8000:8000.

@mfarrokhnia

> If your app is exposed only inside the mesh then you can use portDiscovery. The Istio gateway can't map port 80/443 to more than one port, but the internal gateway called mesh can.
>
> No, the annotations are only for Prometheus. You should be able to connect to 8000 with kubectl port-forward deploy/app 8000:8000.

Thanks a lot for your help :)

@stefanprodan

I think Flagger could detect that the internal gateway is not used and set the port in the virtual service. Could you run some tests for me if I make a patch?

@mfarrokhnia commented Feb 11, 2020

> I think Flagger could detect that the internal gateway is not used and set the port in the virtual service. Could you run some tests for me if I make a patch?

Sure, I can try, but what kind of patch is it?

@stefanprodan

The patch is here: ea4d9ba. It takes 15 minutes for CI to build and push an image to Docker Hub; I'll give you a ping when it's ready. Thanks!

@stefanprodan

Ok, here it is: weaveworks/flagger:istio-gateway-port-ea4d9ba. Deploy this Flagger image and enable portDiscovery; the gateway should work now. Thanks a lot for testing this.

@mfarrokhnia commented Feb 12, 2020

> Deploy this Flagger image and enable portDiscovery; the gateway should work now. Thanks a lot for testing this.

Ok, sure, thank you :) I'm going to test it today. But here is how I installed Flagger, and I'm not sure how to install the image you sent me via helm upgrade. Can you tell me how to do it?

helm repo add flagger https://flagger.app

kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml

helm upgrade -i flagger flagger/flagger --namespace=istio-system --set crd.create=false --set meshProvider=istio --set metricsServer=http://prometheus:9090

@stefanprodan

Here is the upgrade command:

kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml

helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set crd.create=false \
--set meshProvider=istio \
--set metricsServer=http://prometheus:9090 \
--set image.tag=istio-gateway-port-ea4d9ba

@mfarrokhnia

> kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml

I have tested it, but it has some bugs: it doesn't create the VirtualService, and it doesn't show any status info for the canary:

$ kubectl get canary
NAME        STATUS   WEIGHT   LASTTRANSITIONTIME
myservice

@stefanprodan

Please check Flagger logs

@mfarrokhnia

> Please check Flagger logs

$ kubectl logs flagger-7c8ccfd59d-6zszr -c flagger -n istio-system
{"level":"info","ts":"2020-02-12T09:05:52.420Z","caller":"flagger/main.go:108","msg":"Starting flagger version 0.23.0 revision ea4d9ba mesh provider istio"}
{"level":"fatal","ts":"2020-02-12T09:05:52.457Z","caller":"flagger/main.go:338","msg":"MetricTemplate CRD is not registered metrictemplates.flagger.app is forbidden: User "system:serviceaccount:istio-system:flagger" cannot list resource "metrictemplates" in API group "flagger.app" at the cluster scope","stacktrace":"main.verifyCRDs\n\t/home/circleci/build/cmd/flagger/main.go:338\nmain.main\n\t/home/circleci/build/cmd/flagger/main.go:130\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}

@stefanprodan

Ah yes, you need to use the Helm chart from the master branch since the RBAC changed:

git clone https://github.com/weaveworks/flagger
cd flagger

helm upgrade -i flagger ./charts/flagger \
--namespace=istio-system \
--set crd.create=false \
--set meshProvider=istio \
--set metricsServer=http://prometheus:9090 \
--set image.tag=istio-gateway-port-ea4d9ba

@mfarrokhnia commented Feb 12, 2020

@stefanprodan Nice, that worked. I have one more question. When I update myservice with a new image, the canary starts progressing and then fails with this error:

Warning Synced 14m flagger Halt advancement myservice-primary.default waiting for rollout to finish: observed deployment generation less then desired generation
Normal Synced 14m flagger Initialization done! myservice.default
Normal Synced 4m24s flagger New revision detected! Scaling up myservice.default
Warning Synced 4m9s flagger Halt advancement myservice.default waiting for rollout to finish: 0 of 1 updated replicas are available
Normal Synced 3m54s flagger Starting canary analysis for myservice.default
Normal Synced 3m54s flagger Advance myservice.default canary weight 5
Warning Synced 84s (x10 over 3m39s) flagger Halt advancement no values found for istio metric request-success-rate probably myservice.default is not receiving traffic
Warning Synced 69s flagger Rolling back myservice.default failed checks threshold reached 10
Warning Synced 69s flagger Canary failed! Scaling down myservice.default

Although I have flagger-loadtester installed:

helm repo add flagger https://flagger.app
helm upgrade -i flagger-loadtester flagger/loadtester

webhooks:
      {{- if .Values.canary.loadtest.enabled }}
      - name: load-test-get
        url: {{ .Values.canary.loadtest.url }} #http://flagger-loadtester.default/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 5 -c 2 http://myservice.default:8080"
      - name: load-test-post
        url: {{ .Values.canary.loadtest.url }}
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 5 -c 2 -m POST -d '{\"test\": true}' http://myservice.default:8080/echo"
      {{- end }}

Is there something that I am missing here? I know the image is working fine.

@stefanprodan

Exec into the load tester pod and run the hey commands to see if it can reach your app:

kubectl exec -it deploy/flagger-loadtester bash

> hey -z 10s -q 5 -c 2 http://myservice.default:8080
> hey -z 10s -q 5 -c 2 -m POST -d '{"test": true}' http://myservice.default:8080/echo

My guess is that those routes don't work for your app. Do you have an /echo route? Are you using podinfo or some other app?

I think your routes should be:

http://myservice.default:8080/myservice
http://myservice.default:8080/api/myservice/something
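
If that's the case, the GET webhook would need to point at a route the app actually serves, roughly like this (a sketch; adjust the path to whatever your app exposes):

      - name: load-test-get
        url: http://flagger-loadtester.default/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 5 -c 2 http://myservice.default:8080/myservice"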

@mfarrokhnia commented Feb 12, 2020

> hey -z 10s -q 5 -c 2 -m POST -d '{"test": true}' http://myservice.default:8080/echo

No, I don't have /echo in myservice. Does the /echo API work like a health-check API? In that case, do I need an API that returns 200 OK, or can its status be anything, like 401?
Can I use port 9090 for the load tester, like http://myservice.default:9090/api/myservice/monitor?

@stefanprodan

Yes, with the patch I made it should be possible to use any port in the load test.

@mfarrokhnia commented Feb 12, 2020

> Yes, with the patch I made it should be possible to use any port in the load test.

I tested the load tester once by exec'ing into the loadtester pod, which seems to work fine:
kubectl exec -it deploy/flagger-loadtester bash

hey -z 10s -q 5 -c 2 http://myservice.default:9090/myservice/monitor
....
Status code distribution:
[200] 100 responses

and once by deploying a new image; however, I still get the same error and the canary fails:

Warning Synced 65s (x10 over 3m20s) flagger Halt advancement no values found for istio metric request-success-rate probably myservice.default is not receiving traffic
Warning Synced 50s flagger Rolling back myservice.default failed checks threshold reached 10
Warning Synced 50s flagger Canary failed! Scaling down myservice.default

I can only do a GET to /myservice/monitor, so I deactivated load-test-post in the canary.yaml file. Is that all right?

Besides, when I port-forward to myservice, I get a 200 OK response:
$ kubectl port-forward mypod 9090:9090
and then send a GET request to localhost:9090/myservice/monitor.

However, I get a 401 Unauthorized response when I send a GET request to my public DNS:
api.mydomain.com/myservice/monitor
It seems the gateway is still not working correctly.

Can you help with that?

@stefanprodan

The "no values found for istio metric" message means that Istio telemetry is not collecting data from the Envoy sidecars. How did you install Istio? What version, and what components are running in istio-system?

As for the 401, can you please post the gateway YAML and the generated virtual service here?
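
For example, something like this would fetch them (using the resource names that appear elsewhere in this thread):

kubectl -n istio-system get gateway my-gateway -o yaml
kubectl -n default get virtualservice myservice -o yaml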

@mfarrokhnia commented Feb 12, 2020

@stefanprodan I have installed istio like this:

$ curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.4.3 sh -
$ helm template istio-1.4.3/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
$ kubectl get crds | grep 'istio.io' | wc -l
$ helm template istio-1.4.3/install/kubernetes/helm/istio --name istio --namespace istio-system --set grafana.enabled=true --set prometheus.enabled=true --set tracing.enabled=true --set kiali.enabled=true --set gateways.istio-ingressgateway.loadBalancerIP= ${AKS_LB_IP} | kubectl apply -f -
$ kubectl apply -f istio-1.4.3/install/kubernetes/istio-demo.yaml
##Activating Istio injection on default namespace
$ kubectl label namespace default istio-injection=enabled

If I deploy myservice the usual way, with my own VirtualService and port routing, it works fine; however, I get 401 Unauthorized using the canary deployment with the public DNS.

Here is my gateway:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"my-gateway","namespace":"istio-system"},"spec":{"selector":{"istio":"ingressgateway"},"servers":[{"hosts":["*mydomain.com"],"port":{"name":"http","number":80,"protocol":"HTTP"}},{"hosts":["*.mydomain.com"],"port":{"name":"https","number":443,"protocol":"HTTPS"},"tls":{"mode":"SIMPLE","privateKey":"/etc/istio/ingressgateway-certs/tls.key","serverCertificate":"/etc/istio/ingressgateway-certs/tls.crt"}}]}}
  creationTimestamp: "2020-01-24T19:02:54Z"
  generation: 2
  name: my-gateway
  namespace: istio-system
  resourceVersion: "169719"
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/istio-system/gateways/my-gateway
  uid: 205c6ce6-3edc-11ea-bd64-c28eb21a0594
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*.mydomain.come'
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - '*.mydomain.com'
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt

And the virtualservice after Canary deployment looks like here:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  creationTimestamp: "2020-02-12T11:32:40Z"
  generation: 3
  name: myservice
  namespace: default
  ownerReferences:
  - apiVersion: flagger.app/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Canary
    name: myservice
    uid: 5649e2e2-4d8b-11ea-8367-2eaee74ab378
  resourceVersion: "2907945"
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/default/virtualservices/myservice
  uid: 60b4c06e-4d8b-11ea-8367-2eaee74ab378
spec:
  gateways:
  - my-gateway.istio-system.svc.cluster.local
  hosts:
  - api.mydomain.com
  - myservice
  http:
  - corsPolicy:
      allowHeaders:
      - X-Tenant-Identifier
      - Content-Type
      - Authorization
      allowMethods:
      - GET
      - POST
      - PATCH
      allowOrigin:
      - '*'
      maxAge: 24h
    match:
    - uri:
        prefix: /myservice
    route:
    - destination:
        host: myservice-primary
        port:
          number: 8080
      weight: 100
    - destination:
        host: myservice-canary
        port:
          number: 8080
      weight: 0

I have added the Prometheus annotations in the deployment.yaml file as you mentioned before.

@stefanprodan

Can you please remove CORS from the canary? I suspect the 401 comes from there.

@mfarrokhnia

> Can you please remove CORS from the canary? I suspect the 401 comes from there.

I just removed the corsPolicy to test it; it still gives me a 401 error. I need the corsPolicy for traffic on port 8080.

@stefanprodan

Hmm ok, so if you set portDiscovery: false and add CORS then it works right?

@mfarrokhnia commented Feb 12, 2020

> Hmm ok, so if you set portDiscovery: false and add CORS then it works right?

It has the same issue, both the 401 Unauthorized error and the load-tester error when deploying a new image, and the canary deployment fails. It seems Flagger is not working for the Istio canary deployment.

@stefanprodan

I'm confused, yesterday you said that it worked by deactivating portDiscovery.

@mfarrokhnia commented Feb 12, 2020

> I'm confused, yesterday you said that it worked by deactivating portDiscovery.

Yeah, that was my bad. I thought I needed authentication for /myservice/monitor, so when I got a 500 error yesterday, I thought it was working as it was supposed to. But when I found out there is no need for authentication, I started to get the 401 Unauthorized error. So that means it has never worked.

@stefanprodan

Ok, can you post the canary YAML here? I'll try to reproduce this on my cluster.

@stefanprodan

Or you can join Flagger's Slack and give me a ping: https://slack.weave.works/
