Linkerd proxy randomly failing to start on a fresh linkerd install #5681

Closed
Patanouk opened this issue Feb 8, 2021 · 11 comments

Patanouk commented Feb 8, 2021

Bug Report

What is the issue?

Installation method: linkerd install, helm install, and a local helm install with the fetched chart all trigger the same behaviour

The linkerd-proxy containers are randomly failing their readiness check
The linkerd-proxy containers have two readiness-related endpoints: /live and /ready

  • /live always returns a 200 status code
  • /ready returns a 503 status code for some of the pods
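
For reference, these endpoints can be hit directly through a port-forward to the proxy admin port (4191 should be the default in 2.9; linkerd-web below is just an example of a failing pod):

# Check the proxy's admin endpoints directly (assumes the default proxy admin port 4191)
kubectl -n linkerd port-forward deploy/linkerd-web 4191:4191 &
sleep 2
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:4191/live    # always returns 200
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:4191/ready   # 503 on the failing pods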

See below for the pods in the linkerd namespace
Doing a rollout restart of the pods with a non-started proxy doesn't help
The pods with 2/2 containers running are not always the same ones, but the linkerd-identity pod always has a correctly started proxy

NAME                                      READY   STATUS    RESTARTS   AGE
linkerd-controller-6f678766f-dkpbq        2/2     Running   0          15m
linkerd-destination-84b4fff497-9mj2n      1/2     Running   0          15m
linkerd-grafana-85bd755cf9-bqx9q          1/2     Running   0          15m
linkerd-identity-596bc7448-rfm4q          2/2     Running   0          15m
linkerd-prometheus-54fdcb4b76-wmq8v       2/2     Running   0          15m
linkerd-proxy-injector-6556f4c98b-dvjrk   2/2     Running   0          15m
linkerd-sp-validator-6cb94444b8-fxm9h     2/2     Running   0          15m
linkerd-tap-778f7c4c5-9ggcc               1/2     Running   0          15m
linkerd-web-7b79ccc68b-2kw5q              1/2     Running   0          15m

How can it be reproduced?

Hard to say, considering the installation works fine locally with the same helm chart.
It seems to be related to the startup speed of the pods: the slowest pods end up with a non-functional linkerd-proxy
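
A rough way to compare per-pod startup timing with standard kubectl jsonpath (just to eyeball whether the slow pods are the ones with a failing proxy):

kubectl -n linkerd get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.creationTimestamp}{"\t"}{.status.containerStatuses[*].state.running.startedAt}{"\n"}{end}'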

Logs, error output, etc

  • Logs of the proxy for the controller: logs
  • Logs of the proxy for the 'web': logs
  • Logs of the proxy for the identity: logs
  • Logs of the proxy for Grafana: logs

linkerd check output

kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API

kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version

linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API

linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist

linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor

linkerd-webhooks-and-apisvc-tls
-------------------------------
√ tap API server has valid cert
√ tap API server cert is valid for at least 60 days
√ proxy-injector webhook has valid cert
√ proxy-injector cert is valid for at least 60 days
√ sp-validator webhook has valid cert
√ sp-validator cert is valid for at least 60 days

linkerd-api
-----------
- pod/linkerd-web-84d87bd994-ksswf container linkerd-proxy is not ready <-- Hanging there, which makes sense

Environment

  • Kubernetes Version: v1.16.9 (Same behaviour observed on a 1.18.8 cluster)
  • Cluster Environment: Alicloud container Kubernetes
  • Host OS:
  • Linkerd version: stable-2.9.2

Possible solution

The identity component fails to validate the identity for some of the linkerd-proxy containers
Here is a log line from the linkerd-proxy container of the controller pod

[11.801650s]  INFO ThreadId(02) daemon:identity: linkerd2_app: Certified identity: linkerd-controller.linkerd.serviceaccount.identity.linkerd.cluster.local

The result of grep -q "Certified identity" on each proxy's logs matches the status of the proxy (pods with this log line have a correctly started linkerd-proxy); see the sketch after the list below

Certified : linkerd-controller-6f678766f-dkpbq.txt
Certified : linkerd-identity-596bc7448-rfm4q.txt
Certified : linkerd-prometheus-54fdcb4b76-wmq8v.txt
Certified : linkerd-proxy-injector-6556f4c98b-dvjrk.txt
Certified : linkerd-sp-validator-6cb94444b8-fxm9h.txt
Non certified : linkerd-destination-84b4fff497-9mj2n.txt
Non certified : linkerd-grafana-85bd755cf9-bqx9q.txt
Non certified : linkerd-tap-778f7c4c5-9ggcc.txt
Non certified : linkerd-web-7b79ccc68b-2kw5q.txt
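
Roughly the check used to produce the list above (a sketch; the only assumptions are the linkerd-proxy container name and the grep pattern from the log line quoted earlier):

# Flag each pod in the linkerd namespace by whether its proxy logged "Certified identity"
for pod in $(kubectl -n linkerd get pods -o name); do
  if kubectl -n linkerd logs "$pod" -c linkerd-proxy 2>/dev/null | grep -q "Certified identity"; then
    echo "Certified     : $pod"
  else
    echo "Non certified : $pod"
  fi
done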

Additional context

The issue seems related to the startup speed of the pods.
According to my non-scientific tests, the linkerd-proxy starts correctly if the pod has a 'fast' startup (e.g. less than 10 seconds?)

I also tried fiddling with the initialDelaySeconds values of the livenessProbe checks in the helm chart, but that didn't seem to help

Patanouk commented Feb 9, 2021

Adding more logs from a fresh install
kubectl logs linkerd-identity -c identity

Alicloud install

time="2021-02-09T02:47:32Z" level=info msg="running version stable-2.9.2"
time="2021-02-09T02:47:32Z" level=debug msg="Loaded issuer cert: -----BEGIN CERTIFICATE-----\nMIIBszCCAVmgAwIBAgIRAPcMT3qjPFkrAikHGCMIL/swCgYIKoZIzj0EAwIwJTEj\nMCEGA1UEAxMacm9vdC5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMjEwMjA0MDUz\nMTA3WhcNMjIwMjA0MDUzMTA3WjApMScwJQYDVQQDEx5pZGVudGl0eS5saW5rZXJk\nLmNsdXN0ZXIubG9jYWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAS2AIk5aQPE\n0b+U4gX+R67y7/uksMnIQ5y4mMf8SxL6KVLdeGy4gIZEOXIMmkUe4rbPFH1WLTGV\nz4SH6xO70YHXo2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIB\nADAdBgNVHQ4EFgQUNbIe7Jlwm/NlrbqU6eEKqrFp9vcwHwYDVR0jBBgwFoAUyVCI\nGECkbYniOYLNXX7Lufo9zeQwCgYIKoZIzj0EAwIDSAAwRQIgYNlbYKdpPtSHgC9h\n+y7twW4ndk4FT51I7vHTjzqBZNMCIQDxIuczKv0lC2hudff0UNesmJpG+INUc+86\ncnTem0ygHQ==\n-----END CERTIFICATE-----\n"
time="2021-02-09T02:47:32Z" level=debug msg="Issuer has been updated"
time="2021-02-09T02:47:32Z" level=info msg="starting admin server on :9990"
time="2021-02-09T02:47:32Z" level=info msg="starting gRPC server on :8080"
time="2021-02-09T02:47:33Z" level=debug msg="Validating token for linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local"
time="2021-02-09T02:47:33Z" level=info msg="certifying linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local until 2021-02-10 02:47:53 +0000 UTC"

Local(Minikube) install

time="2021-02-09T03:08:45Z" level=info msg="running version stable-2.9.2"
time="2021-02-09T03:08:46Z" level=debug msg="Loaded issuer cert: -----BEGIN CERTIFICATE-----\nMIIBszCCAVmgAwIBAgIRAPcMT3qjPFkrAikHGCMIL/swCgYIKoZIzj0EAwIwJTEj\nMCEGA1UEAxMacm9vdC5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMjEwMjA0MDUz\nMTA3WhcNMjIwMjA0MDUzMTA3WjApMScwJQYDVQQDEx5pZGVudGl0eS5saW5rZXJk\nLmNsdXN0ZXIubG9jYWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAS2AIk5aQPE\n0b+U4gX+R67y7/uksMnIQ5y4mMf8SxL6KVLdeGy4gIZEOXIMmkUe4rbPFH1WLTGV\nz4SH6xO70YHXo2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIB\nADAdBgNVHQ4EFgQUNbIe7Jlwm/NlrbqU6eEKqrFp9vcwHwYDVR0jBBgwFoAUyVCI\nGECkbYniOYLNXX7Lufo9zeQwCgYIKoZIzj0EAwIDSAAwRQIgYNlbYKdpPtSHgC9h\n+y7twW4ndk4FT51I7vHTjzqBZNMCIQDxIuczKv0lC2hudff0UNesmJpG+INUc+86\ncnTem0ygHQ==\n-----END CERTIFICATE-----\n"
time="2021-02-09T03:08:46Z" level=debug msg="Issuer has been updated"
time="2021-02-09T03:08:46Z" level=info msg="starting admin server on :9990"
time="2021-02-09T03:08:46Z" level=info msg="starting gRPC server on :8080"
time="2021-02-09T03:08:46Z" level=debug msg="Validating token for linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local"
time="2021-02-09T03:08:46Z" level=info msg="certifying linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local until 2021-02-10 03:09:06 +0000 UTC"
time="2021-02-09T03:09:11Z" level=debug msg="Validating token for linkerd-controller.linkerd.serviceaccount.identity.linkerd.cluster.local"
time="2021-02-09T03:09:11Z" level=debug msg="Validating token for linkerd-tap.linkerd.serviceaccount.identity.linkerd.cluster.local"
time="2021-02-09T03:09:11Z" level=debug msg="Validating token for linkerd-web.linkerd.serviceaccount.identity.linkerd.cluster.local"
time="2021-02-09T03:09:11Z" level=debug msg="Validating token for linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local"
time="2021-02-09T03:09:11Z" level=info msg="certifying linkerd-tap.linkerd.serviceaccount.identity.linkerd.cluster.local until 2021-02-10 03:09:31 +0000 UTC"
time="2021-02-09T03:09:11Z" level=info msg="certifying linkerd-web.linkerd.serviceaccount.identity.linkerd.cluster.local until 2021-02-10 03:09:31 +0000 UTC"
time="2021-02-09T03:09:11Z" level=info msg="certifying linkerd-controller.linkerd.serviceaccount.identity.linkerd.cluster.local until 2021-02-10 03:09:31 +0000 UTC"
time="2021-02-09T03:09:11Z" level=info msg="certifying linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local until 2021-02-10 03:09:31 +0000 UTC"
time="2021-02-09T03:09:12Z" level=debug msg="Validating token for linkerd-proxy-injector.linkerd.serviceaccount.identity.linkerd.cluster.local"
time="2021-02-09T03:09:12Z" level=info msg="certifying linkerd-proxy-injector.linkerd.serviceaccount.identity.linkerd.cluster.local until 2021-02-10 03:09:32 +0000 UTC"
time="2021-02-09T03:09:13Z" level=debug msg="Validating token for linkerd-grafana.linkerd.serviceaccount.identity.linkerd.cluster.local"
time="2021-02-09T03:09:13Z" level=info msg="certifying linkerd-grafana.linkerd.serviceaccount.identity.linkerd.cluster.local until 2021-02-10 03:09:33 +0000 UTC"
time="2021-02-09T03:09:14Z" level=debug msg="Validating token for linkerd-sp-validator.linkerd.serviceaccount.identity.linkerd.cluster.local"
time="2021-02-09T03:09:14Z" level=info msg="certifying linkerd-sp-validator.linkerd.serviceaccount.identity.linkerd.cluster.local until 2021-02-10 03:09:34 +0000 UTC"
time="2021-02-09T03:09:15Z" level=debug msg="Validating token for linkerd-prometheus.linkerd.serviceaccount.identity.linkerd.cluster.local"
time="2021-02-09T03:09:15Z" level=info msg="certifying linkerd-prometheus.linkerd.serviceaccount.identity.linkerd.cluster.local until 2021-02-10 03:09:35 +0000 UTC"

So the proxies fail to get certified in Alicloud. I don't see any ERROR or WARN in the logs of any of the linkerd pods

cpretzer commented Feb 9, 2021

@Patanouk I'm not familiar with alicloud container kubernetes. If you have access to the kubernetes api-server logs, can you share those?

It might also help to see the output from kubernetes events in the linkerd and kube-system namespaces
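
For example:

kubectl get events -n linkerd --sort-by=.lastTimestamp
kubectl get events -n kube-system --sort-by=.lastTimestamp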

Patanouk commented Feb 9, 2021

I tried on an AWS cluster. The fresh install works out of the box

Here are my current linkerd pods in Alicloud

NAME                                    READY   STATUS    RESTARTS   AGE
linkerd-controller-7fd676b57c-jfbpd     1/2     Running   0          14m
linkerd-destination-5b987c797f-8944p    1/2     Running   0          14m
linkerd-grafana-595b8f95b-5mxk5         1/2     Running   0          14m
linkerd-identity-7698cc6b64-hstpl       2/2     Running   0          14m
linkerd-prometheus-674695458c-j7kch     1/2     Running   0          14m
linkerd-proxy-injector-d5c75475-p8fpb   1/2     Running   0          14m
linkerd-sp-validator-8f794b4fd-bt7fn    1/2     Running   0          14m
linkerd-tap-744784cf94-n2zvv            2/2     Running   0          14m
linkerd-web-7c86967466-mn8br            2/2     Running   0          14m

Here is the output of kubectl get events -n linkerd
Nothing unusual here. I also saw the 503s in the events of the linkerd namespace on AWS
I'm not sure how to get the apiserver logs in Alicloud. I opened a ticket, so I should have access to them soon

We also have Istio running in the Alicloud cluster -> is there any known case of conflict if both are deployed in the same cluster?
Istio injection is opt-in (via a namespace annotation) in our cluster, so it normally shouldn't affect anything in the linkerd namespace
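
One quick way to confirm nothing Istio-related is attached to the linkerd namespace (just a grep over the namespace manifest):

# Look for any Istio injection label or annotation on the linkerd namespace
kubectl get ns linkerd -o yaml | grep -i istio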

adleong commented Feb 10, 2021

Thanks for the report, @Patanouk! At a glance, this might be related to another issue we are investigating, #5599, which also reports problems connecting to pods that start slowly.

@hawkw does this look related to you? Do you think this might be reproducible by adding a sleep to the control plane pods?
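
A hypothetical way to simulate that would be to append a sleeping init container (this assumes the pod template already defines an initContainers list, which the injected linkerd-init container normally provides):

# Delay the main containers' startup by ~30s on one control plane deployment
kubectl -n linkerd patch deploy linkerd-controller --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/initContainers/-",
   "value": {"name": "slow-start", "image": "busybox", "command": ["sh", "-c", "sleep 30"]}}
]'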

hawkw commented Feb 10, 2021

@Patanouk do you happen to have logs from the proxy in the linkerd-identity pod?

Patanouk commented Feb 10, 2021

Yes, I do
Here is the result from a helm install in a new Alicloud cluster. Nothing other than linkerd was on the cluster
I did the fresh install a couple of times, but the same behavior keeps occurring

NAME                                      READY   STATUS    RESTARTS   AGE
linkerd-controller-5ff77c5995-7hmcz       1/2     Running   0          2m30s
linkerd-destination-7cdb897457-9rng9      1/2     Running   0          2m30s
linkerd-grafana-65b4fd7846-2px2b          1/2     Running   0          2m30s
linkerd-identity-78bbfc85d-d2hdv          2/2     Running   0          2m30s
linkerd-prometheus-84bcd9658f-qpczq       1/2     Running   0          2m30s
linkerd-proxy-injector-6c667dbdbc-fv9kb   1/2     Running   0          2m30s
linkerd-sp-validator-545c6dcf78-h9v7n     1/2     Running   0          2m30s
linkerd-tap-7cb59cfb4b-cd8cr              1/2     Running   0          2m30s
linkerd-web-5b47c97548-h6d8s              1/2     Running   0          2m30s

Logs from the identity container:

time="2021-02-10T03:19:30Z" level=info msg="running version stable-2.9.2"
time="2021-02-10T03:19:30Z" level=debug msg="Loaded issuer cert: -----BEGIN CERTIFICATE-----\nMIIBszCCAVmgAwIBAgIRAPcMT3qjPFkrAikHGCMIL/swCgYIKoZIzj0EAwIwJTEj\nMCEGA1UEAxMacm9vdC5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMjEwMjA0MDUz\nMTA3WhcNMjIwMjA0MDUzMTA3WjApMScwJQYDVQQDEx5pZGVudGl0eS5saW5rZXJk\nLmNsdXN0ZXIubG9jYWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAS2AIk5aQPE\n0b+U4gX+R67y7/uksMnIQ5y4mMf8SxL6KVLdeGy4gIZEOXIMmkUe4rbPFH1WLTGV\nz4SH6xO70YHXo2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIB\nADAdBgNVHQ4EFgQUNbIe7Jlwm/NlrbqU6eEKqrFp9vcwHwYDVR0jBBgwFoAUyVCI\nGECkbYniOYLNXX7Lufo9zeQwCgYIKoZIzj0EAwIDSAAwRQIgYNlbYKdpPtSHgC9h\n+y7twW4ndk4FT51I7vHTjzqBZNMCIQDxIuczKv0lC2hudff0UNesmJpG+INUc+86\ncnTem0ygHQ==\n-----END CERTIFICATE-----\n"
time="2021-02-10T03:19:30Z" level=debug msg="Issuer has been updated"
time="2021-02-10T03:19:30Z" level=info msg="starting admin server on :9990"
time="2021-02-10T03:19:30Z" level=info msg="starting gRPC server on :8080"
time="2021-02-10T03:19:32Z" level=debug msg="Validating token for linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local"
time="2021-02-10T03:19:32Z" level=info msg="certifying linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local until 2021-02-11 03:19:52 +0000 UTC"

Logs from the identity proxy:

Adding the logs from the controller proxy (non-ready proxy) in case that helps:

olix0r self-assigned this Feb 10, 2021

olix0r commented Feb 10, 2021

@Patanouk thanks for sharing detailed logs

It appears that the controller proxy is unable to resolve the SRV record via DNS. For instance:

[     0.441077s] DEBUG ThreadId(01) identity: trust_dns_proto::xfer: enqueueing message: [Query { name: Name { is_fqdn: false, labels: [linkerd-identity-headless, linkerd, svc, cluster, local] }, query_type: SRV, query_class: IN }]
[     0.441085s] TRACE ThreadId(01) identity: trust_dns_resolver::name_server::connection_provider: polling response inner
[     0.441107s] TRACE ThreadId(01) trust_dns_resolver::name_server::connection_provider: polling response inner
[     0.441121s] DEBUG ThreadId(01) trust_dns_proto::xfer::dns_multiplexer: sending message id: 4827
[     0.441131s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: sending message len: 80 to: 10.0.0.10:53
[     0.441180s] TRACE ThreadId(01) identity: trust_dns_resolver::name_server::connection_provider: polling response inner
[     0.441192s] TRACE ThreadId(01) trust_dns_resolver::name_server::connection_provider: polling response inner
[     0.480973s] TRACE ThreadId(01) trust_dns_resolver::name_server::connection_provider: polling response inner
[     0.480993s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: in ReadTcpState::LenBytes: 0
[     0.480997s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: got length: 173
[     0.481002s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: move ReadTcpState::Bytes: 173
[     0.481008s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: in ReadTcpState::Bytes: 173
[     0.481012s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: reset ReadTcpState::LenBytes: 0
[     0.481016s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: returning bytes
[     0.481019s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: returning buffer
[     0.481028s] DEBUG ThreadId(01) trust_dns_proto::rr::record_data: reading SOA
[     0.481051s] TRACE ThreadId(01) identity: trust_dns_resolver::name_server::connection_provider: polling response inner
[     0.481060s] DEBUG ThreadId(01) identity: trust_dns_resolver::name_server::name_server: Nameserver responded with NXDomain

I don't know enough about Alicloud's default DNS configuration to guess why we'd be getting NXDomain responses for these SRV record lookups, but this is definitely the problem.

olix0r commented Feb 11, 2021

Do you know if your cluster has a custom domain? I can reproduce this issue by creating a cluster with a custom domain (i.e. other than cluster.local) and doing a normal linkerd install.
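
One quick way to check the configured cluster domain and search domains from inside the cluster (busybox is just a convenient image):

kubectl run -it --rm resolv-check --image=busybox --restart=Never -- cat /etc/resolv.conf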

Otherwise, I'd suggest trying to run dig -t SRV linkerd-identity-headless.linkerd.svc.cluster.local from within the cluster... I suspect that it doesn't return the IP of the identity pod. In order for linkerd to work, we'll need to figure out how to get these DNS lookups to succeed.
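
For example, from a throwaway pod that has dig available (tutum/dnsutils is one convenient image; any dig-capable image works):

kubectl run -it --rm dns-test --image=tutum/dnsutils --restart=Never -- \
  dig -t SRV linkerd-identity-headless.linkerd.svc.cluster.local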

olix0r removed their assignment Feb 11, 2021

Patanouk commented:

Thanks everyone for the help. It's Chinese New Year here, so I will check back next Thursday

@olix0r I don't think the cluster has a custom domain name. I already checked that, since I saw other open tickets related to a custom domain name
I will double check again on Thursday

Patanouk commented Mar 3, 2021

Quick update here
I see some NXDomain responses in a successful local install as well

[     5.003435s] DEBUG ThreadId(01) trust_dns_proto::xfer::dns_multiplexer: sending message id: 33817
[     5.003449s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: sending message len: 106 to: 10.96.0.10:53
[     5.003531s] TRACE ThreadId(01) identity: trust_dns_resolver::name_server::connection_provider: polling response inner
[     5.003553s] TRACE ThreadId(01) trust_dns_resolver::name_server::connection_provider: polling response inner
[     5.003747s] TRACE ThreadId(01) trust_dns_resolver::name_server::connection_provider: polling response inner
[     5.003771s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: in ReadTcpState::LenBytes: 0
[     5.003777s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: got length: 199
[     5.003786s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: move ReadTcpState::Bytes: 199
[     5.003794s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: in ReadTcpState::Bytes: 199
[     5.003804s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: reset ReadTcpState::LenBytes: 0
[     5.003808s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: returning bytes
[     5.003813s] DEBUG ThreadId(01) trust_dns_proto::tcp::tcp_stream: returning buffer
[     5.003827s] DEBUG ThreadId(01) trust_dns_proto::rr::record_data: reading SOA
[     5.003864s] TRACE ThreadId(01) identity: trust_dns_resolver::name_server::connection_provider: polling response inner
[     5.003881s] DEBUG ThreadId(01) identity: trust_dns_resolver::name_server::name_server: Nameserver responded with NXDomain

A dig -t SRV linkerd-identity-headless.linkerd.svc.cluster.local returns the IP address of the identity pod

The issue is probably something related to Alicloud, but I'm not knowledgeable enough to debug it further :/

We ultimately went with Istio (sorry), so this ticket can most likely be closed
Still a bit frustrating, but I can't spend more time on this (and I have no idea where to start either, haha)

Thanks everyone for your help here

adleong commented Mar 13, 2021

Appreciate the update @Patanouk!

adleong closed this as completed Mar 13, 2021