Can't provision clusters in Rancher single node docker install with bring your own valid certs #28605
Main issue: #28279
Rancher version: I'm still experiencing this error even using
There is a new fix going out for the main issue that should also resolve this one. DNS wasn't working, which would explain why the SSH tunnel failed.
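A quick way to confirm whether in-cluster DNS is actually working is to check the CoreDNS pods and run a one-off lookup. This is only a sketch: it assumes `kubectl` access to the downstream cluster and that the `busybox` image can be pulled.

```shell
# Check that the CoreDNS pods are up in the downstream cluster
kubectl -n kube-system get pods -l k8s-app=kube-dns
# Run a throwaway pod and attempt an in-cluster DNS lookup
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.33 -- \
  nslookup kubernetes.default.svc.cluster.local
```

If the lookup fails while the pods report Running, the symptom matches the tunnel failure described above.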
Back in test, but we may want to wait for the main issue to be resolved first.
Rancher version: I see core-dns in the
But I'm still experiencing the issue mentioned here: #28605 (comment)
No combination of API and authorized cluster endpoint on/off provisions successfully via docker run commands; it's always stuck on the API not being ready. The kube-apiserver log always shows this entry:
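For reference, on an RKE-provisioned node the control-plane components run as plain Docker containers, so their logs can be read directly on the affected node. This is a diagnostic sketch; the container names below are the ones RKE creates, and the commands only make sense on such a node.

```shell
# On the affected node: tail the control-plane container logs directly
docker logs --tail 50 kube-apiserver
docker logs --tail 50 etcd
```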
Re-tested on Rancher version . Coverage: Rancher single node install, then an RKE DO-provisioned cluster. The only certificate install option that failed for me was Bring Your Own with Valid Certs. Symptoms in the BYO-Valid case:
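One thing worth ruling out in the BYO-Valid case is an incomplete chain in the mounted cert.pem: the leaf certificate itself is valid, but the intermediate CA is missing, so TLS verification still fails. A quick sanity check is to count the certificates concatenated in the PEM file (the file path and contents below are fabricated for illustration):

```shell
# Fabricated two-entry chain (leaf + intermediate) to demonstrate the check
cat > /tmp/sample-chain.pem <<'EOF'
-----BEGIN CERTIFICATE-----
...leaf...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
...intermediate...
-----END CERTIFICATE-----
EOF
# Count certificates in the PEM: a result of 1 usually means the
# intermediate CA is missing, which breaks "valid cert" installs
grep -c -- '-----BEGIN CERTIFICATE-----' /tmp/sample-chain.pem
```

Run against the real cert.pem mounted into the Rancher container, a count of 1 is a strong hint the intermediate was never appended.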
Kubernetes version used in downstream cluster:
Rancher version upgrade: I was not able to reproduce the issue reported here: #28605 (comment)
But that upgrade issue looks pre-existing: #29131
@superseb dug into this, and I was using the . After a fresh Rancher , Option C states this clearly in this section of the docs: https://rancher.com/docs/rancher/v2.x/en/installation/other-installation-methods/single-node-docker/#option-c-bring-your-own-certificate-signed-by-a-recognized-ca
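For anyone hitting the same thing: Option C boils down to mounting only the server certificate and key (no cacerts.pem) and passing `--no-cacerts` to the Rancher container. Roughly, with placeholder host paths and image tag:

```shell
# Single node install with a cert signed by a recognized CA (Option C):
# mount cert and key, skip cacerts.pem, and pass --no-cacerts to Rancher
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /host/certs/fullchain.pem:/etc/rancher/ssl/cert.pem \
  -v /host/certs/privkey.pem:/etc/rancher/ssl/key.pem \
  rancher/rancher:latest --no-cacerts
```

Mounting a cacerts.pem alongside a publicly-signed certificate (i.e. mixing Option C with Option B) is exactly the misconfiguration the docs section warns about.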
Closing; all four certificate installation methods worked in Rancher .
@izaac what's so hard to reproduce?
Not working!
@slash1387 this is a different error, related to custom clusters: #28836. If the cluster doesn't recover after some time, could you please open an issue and provide the logs? Thanks for reporting this issue.
I tried v2.5.1; this is what I get when trying to import a cluster (imported v1.14.9-eks):
I can inspect the cluster, but the message is still present there.
For me, it helped to upgrade Rancher from v2.4.5 to v2.5.9, then add a new node to the cluster and remove it afterwards.
Hi @hrvatskibogmars, did you find any solution for this TLS issue while importing a cluster?
No, I had to provision a new Rancher cluster from scratch.
Even if I provision a new Rancher, my Rancher runs on a secure domain using an AWS ACM certificate, without passing the certificate as a secret in the Ingress (this is the scenario), and I still get the same error. Thanks for your response.
What kind of request is this:
bug
Steps to reproduce:
Result:
The node gets created, but the Kubernetes cluster won't become active:
Cluster must have at least one etcd plane host: failed to connect to the following etcd host(s) [x.x.x.x]
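When that etcd error appears, a first check from the Rancher host is whether the etcd client port is reachable at all, and what certificate it presents. A diagnostic sketch, using the `x.x.x.x` placeholder from the error above and etcd's default client port 2379:

```shell
# TCP reachability of the etcd client port from the Rancher host
nc -zv x.x.x.x 2379
# Show the certificate chain etcd presents during the TLS handshake
openssl s_client -connect x.x.x.x:2379 -showcerts </dev/null
```

If the port is unreachable, look at firewalls/security groups between the hosts before suspecting certificates.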
Other details that may be helpful:
Environment information
master-head (08/27/2020)
5e1b21b
rancher/rancher#12756