get microk8s set up #73
Good comic from Justin. I had read this before I knew anything about k8s (years ago), but it makes a lot more sense now, and it still serves as a good reference point: https://cloud.google.com/kubernetes-engine/kubernetes-comic
One of the other things I'm still struggling with is ingress ports and network sharing within k8s. This seems like a very good cheat sheet to use: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#port-forward
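A minimal sketch of the port-forward pattern from that cheat sheet, applied to this setup. The pod name and port here are assumptions (Home Assistant conventionally serves on 8123), not values taken from the chart:

```shell
# Forward local port 8123 to port 8123 on a specific pod.
# The pod name below is hypothetical; get the real one with `kubectl get pods`.
kubectl port-forward hassio-home-assistant-d89cb6fc8-5l9jh 8123:8123

# Forwarding to the service instead survives pod restarts:
kubectl port-forward svc/hassio-home-assistant 8123:8123
```

Once the forward is running, the app is reachable at http://localhost:8123 from the machine running kubectl.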
Got everything set up; it looks like I hadn't set up my microk8s config correctly. I was getting k8s not-found errors, but they seem to have been resolved after running this: `cat $HOME/.kube/config | grep microk8s`
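For reference, that grep only inspects the kubeconfig. A sketch of actually wiring microk8s credentials into it, which is usually the step that fixes "cluster not found" errors for plain kubectl and helm (backup path is my own addition):

```shell
# `microk8s config` prints a kubeconfig containing the microk8s cluster,
# user, and context. Writing it to ~/.kube/config is what lets plain
# kubectl (and helm) find the cluster.
mkdir -p "$HOME/.kube"
[ -f "$HOME/.kube/config" ] && cp "$HOME/.kube/config" "$HOME/.kube/config.bak"
microk8s config > "$HOME/.kube/config"

# Sanity check: the microk8s context should now appear.
grep microk8s "$HOME/.kube/config"
```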
Got the helm chart for hassio deployed, but ran into some problems I wanted to flesh out here. This is what I ran for the install: `root@vigeeking:/home/tim# helm install hassio billimek/home-assistant --version 1.1.0`
The `export POD_NAME` step failed, and I don't know why. I was able to get the pod name manually (`kubectl get pods`), but now the pod is listed as pending, which means I can't port-forward. I looked into the issue further. I am going to be disposing of this instance pretty soon, so I am not going to sanitize this output (though I would normally sanitize the name and anything that looks like a hash value):

Warning  FailedScheduling  3m1s (x1066 over 26h)  default-scheduler  running "VolumeBinding" filter plugin for pod "hassio-home-assistant-d89cb6fc8-5l9jh": pod has unbound immediate PersistentVolumeClaims

It looks like this may be a known issue for microk8s. Since it appears to be a persistent volume claim issue, I will next be trying this workaround: kubernetes/minikube#7828 (comment)
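For anyone hitting the same wall: "unbound immediate PersistentVolumeClaims" means no StorageClass/provisioner satisfied the chart's claim. A sketch of the commands I'd use to confirm that (claim name is a placeholder):

```shell
# A Pending PVC is the usual culprit behind FailedScheduling/VolumeBinding.
kubectl get pvc                        # look for STATUS "Pending"
kubectl describe pvc <claim-name>      # events at the bottom say why it is unbound
kubectl get storageclass               # empty until a provisioner exists
```

If `kubectl get storageclass` comes back empty on microk8s, that points straight at the storage addon not being enabled.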
If you didn't do it already, you need to enable storage: `microk8s enable storage`. Also, that export looks like it didn't work because the pod doesn't have the labels it is filtering on; seems like a bug in that chart's NOTES.txt file. I assume that is what this selector is doing: `-l "app=home-assistant,release=hassio"`. The pod labels you show are: Labels:
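A sketch of what the chart's NOTES.txt export is presumably doing (reconstructed from the selector above; the jsonpath expression is my assumption, not copied from the chart):

```shell
# Pick the first pod matching the chart's expected labels. If the pod's
# actual labels don't match this selector (as suspected above), the
# query returns nothing and POD_NAME ends up empty.
export POD_NAME=$(kubectl get pods \
  -l "app=home-assistant,release=hassio" \
  -o jsonpath="{.items[0].metadata.name}")
echo "$POD_NAME"
```

Comparing the selector against `kubectl get pods --show-labels` makes the mismatch obvious either way.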
Good call on enabling storage, ty for that. For the other, I believe it was failing because the pod kept crashing due to the volume issue, which ties back into enabling storage. I'm going to let it sit for a bit and see if it self-heals; if not, I'll get more info up here within the next hour or two.
I'm still having storage issues. For whatever reason, I just can't get storage in helm working the way I'd like. I've run out of ideas for this one, so I'm going to close it. If need be I can reopen it; otherwise I'll assume I've passed any batons on, so there is no reason to leave this open. I think this is a quantum issue.
I've been tinkering with this one long enough as part of the media replication story (issue #3) that it really should be its own story. I've worked with k8s in isolation before, but it's always been a very narrow "do this thing" approach. I had originally planned on doing k8s the hard way (https://github.com/kelseyhightower/kubernetes-the-hard-way), but after talking with Justin, I think I'm just gonna stop off briefly to make sure I understand each of the individual components. I wanted to keep a log of what I've done in case I run into any of these problems again, or in case I ever want to brush up. This task is done when I have launched my first application from a helm chart as part of my pipeline.