Stuck on Pending and CrashLoopBackOff #12
No, you don't need any external dependency as the tool installs K3s with etcd as datastore. As for the issue, what do you see in the cloud controller manager pod's log? |
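For example, something along these lines should surface the relevant logs (the deployment name is an assumption based on the upstream hcloud manifests, so adjust it to whatever `kubectl get pods -n kube-system` actually shows):

```bash
# Find the cloud controller manager pod and tail its logs.
# "hcloud-cloud-controller-manager" is assumed from the upstream manifests.
kubectl -n kube-system get pods | grep cloud-controller
kubectl -n kube-system logs deployment/hcloud-cloud-controller-manager --tail=100
```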
Oooh, I see! I'm also a bit confused about the persistent volume part haha. Most of the logs have the events I posted above; some others have these:
When I run
And it's the same for the workers. |
I think you're having the same problem someone else reported in #7. Which OS are you using and how did you install Ruby? Can you please try with the Docker image instead of the gem directly? See instructions in the README. |
I get the same errors when I run it using Docker. I'm running Manjaro, which is based on Arch. Once that was done I ran the hetzner-k3s command and it all started without any errors (except the ones about the masters, which the README notes can be ignored when starting an HA cluster). |
Well this is weird, with Docker it should just work :p I'm not sure I understand though: did you get it working in the end, or is it still the same problem after you installed Ruby etc. with RVM? Did you try to update an existing cluster or did you try creating a new one? |
Nope, it's still not working. The logs I sent earlier (45 min ago) are from a newly created project; after that I haven't tried again, just trying to figure it out. |
I don't understand why you're having problems with Docker, but anyway can you try installing Ruby as described in #7 (comment)? The reason I've added the Docker image is exactly so that you wouldn't have to deal with Ruby 🤔 |
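Roughly, the RVM route looks something like the sketch below; the exact steps in #7 (comment) may differ, so treat this as an approximation rather than the authoritative instructions:

```bash
# Approximate RVM-based setup; versions and steps are assumptions.
curl -sSL https://get.rvm.io | bash -s stable
source "$HOME/.rvm/scripts/rvm"
rvm install 3.0.0
rvm use 3.0.0 --default
gem install hetzner-k3s
```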
Sure, I'll try that. Here's my config again in the meantime, just in case.
|
If this is the config you used with the Docker image, you should set the kubeconfig path to /cluster/kubeconfig as mentioned in the README. Or is this the config you are using with the gem directly? |
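For context, the /cluster path matters because the working directory is mounted into the container, roughly like this (the image name, tag, and subcommand here are assumptions; the README has the exact invocation):

```bash
# The host directory is mounted at /cluster inside the container, so any
# paths in the config (e.g. the kubeconfig path) must be container paths.
docker run --rm -it \
  -v "$(pwd)":/cluster \
  vitobotta/hetzner-k3s:v0.3.7 \
  create-cluster --config-file /cluster/config.yaml
```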
It's the config I've used with both. I missed the part about /cluster/kubeconfig, but I'll try that before working out the equivalents of the steps in #7 (comment), since they're different between Arch and Ubuntu. |
Hey I got the same problem with your config and I suspect it's got to do with the uppercase name for the cluster, since that's used to generate the names of the resources. I am now trying with a lowercase string to see if that's the problem. Can you try as well? Maybe with a new project so you start clean. |
Yes, with your config but a lowercase cluster name it works fine. I will add a validation to enforce lowercase letters. Please try that and let me know. |
Oh lol, alright I'll try that first then. |
So it seems that the Hetzner Cloud Controller Manager looks for servers with lowercase names, so it doesn't find the servers that have uppercase characters in their names. |
Eyy they're all running now! God dammit 🤣 Last night I even thought about the uppercase when I was staring at the configs trying to see if I had messed up somewhere, but I just thought "Nah, it can't be that easy" and left it. Thanks for the help 😄 About the persistent volumes, do I just follow the guide in the Hetzner CSI repo, excluding the installation since that was done automatically by this tool? |
Yeah, I am surprised too that the cloud controller doesn't like uppercase characters :D I just released 0.3.7 with more validation on the cluster name and am about to push the Docker image v0.3.7 as well. As for the CSI, you don't need to do anything; you're ready to start creating volumes. The single storage class provided by the CSI driver is ready to use. |
@Rinnray Do you mind giving the latest Docker image v0.3.7 a try? |
Oh so I just create a volume normally on Hetzner and that's it?
Sure I'll do it in a minute. |
You don't have to create it yourself if that's what you mean, you just create a normal Kubernetes persistent volume claim resource and the volume will be created automatically :) |
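For example, a minimal PersistentVolumeClaim like the sketch below should be enough to get a volume provisioned automatically; the storage class name hcloud-volumes is an assumption based on the upstream Hetzner CSI driver, so check `kubectl get storageclass` first:

```bash
# Create a PVC; the CSI driver should then provision the Hetzner volume.
# The storage class name is assumed; verify it with `kubectl get storageclass`.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hcloud-volumes
  resources:
    requests:
      storage: 10Gi
EOF
```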
Ooooh, even easier then 😄 I just tested the Docker image and I got the warning about using lowercase, so that works nicely! |
Perfect, thanks for your help with this! :) I guess I can close now if all looks good? |
Yep! All the issues I had are now resolved; now I just need to figure out why I can't access Rancher haha. |
Np. Have fun :) |
More issues haha.
I followed the guide and showed my YAML in the other issue I opened.
When I run the command to get all pods in all namespaces, this is the result:
It stays like that, and it's the same if I install cert-manager; it just stays pending. The output in the code block above is from a newly created cluster. The very first one I created after fixing the last issue is where I saw that it had been like this since it was created.
When I run the command to describe the pods, this is the message most of them show (give or take a few changes, like the ready numbers):
Not really sure how to fix this.
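(For reference, the commands referred to above are presumably along these lines; the pod name and namespace are placeholders:)

```bash
# List pods across all namespaces, then describe a stuck one to see its events.
kubectl get pods --all-namespaces
kubectl describe pod <pod-name> -n <namespace>
```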
Another question I've got: I've seen k3s used with an external database; is that something I still need to set up when deploying this way?
I'm still fairly new with all of this 😅