get microk8s set up #73

Closed
vigeeking opened this issue Aug 12, 2020 · 8 comments · Fixed by #71

@vigeeking
Owner

I've been tinkering with this one long enough as part of the media replication story (issue #3) that it really should be its own story. I've worked with k8s in isolation before, but it's always been a very narrow "do this thing" approach. I had originally planned on doing Kubernetes the hard way (https://github.com/kelseyhightower/kubernetes-the-hard-way), but after talking with Justin, I think I'm just going to stop off briefly to make sure I understand each of the individual components. I also wanted to keep a log of what I've done in case I run into any of these problems again, or ever want to brush up. This task is done when I have launched my first application from a helm chart as part of my pipeline.

@vigeeking
Owner Author

Good comic from Justin, I had read this before I knew anything about k8s (years ago) but it makes a lot more sense now, and still likely serves as a good reference point: https://cloud.google.com/kubernetes-engine/kubernetes-comic

@vigeeking
Owner Author

One of the other things that I'm still struggling some with is ingress ports and network sharing within k8s. This seems like a very good cheat sheet to use: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#port-forward
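For my own reference, the basic shape of the command is (a generic sketch; the pod and service names and ports here are placeholders, not from this cluster):

# forward local port 8080 to port 80 inside a pod
kubectl port-forward pod/my-pod 8080:80
# the same works against a service
kubectl port-forward svc/my-service 8080:80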

@vigeeking
Owner Author

Got everything set up; it looks like I hadn't set up my microk8s config correctly. I was getting "k8s not found" errors, which were resolved after following https://webcloudpower.com/use-kubernetics-locally-with-microk8s/ (I ran cat $HOME/.kube/config | grep microk8s to confirm the microk8s entries were in my kubeconfig). Now having some problems with helm, but at least progress is finally being made.
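For anyone hitting the same thing, the setup boils down to something like this (a sketch, assuming a snap install of microk8s; not necessarily the article's exact steps):

# export the microk8s kubeconfig so plain kubectl can talk to the cluster
microk8s config > $HOME/.kube/config
# confirm the microk8s context is now listed
kubectl config get-contexts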

@vigeeking
Owner Author

Got the helm chart for hassio deployed, but ran into some problems I wanted to flesh out here. When I installed the helm chart, I got this message:

root@vigeeking:/home/tim# helm install hassio billimek/home-assistant --version 1.1.0
NAME: hassio
LAST DEPLOYED: Tue Aug 18 11:42:29 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:

  1. Get the application URL by running these commands:
    export POD_NAME=$(kubectl get pods --namespace default -l "app=home-assistant,release=hassio" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:8080 to use your application"
    kubectl port-forward $POD_NAME 8080:80

The export POD_NAME failed, and I don't know why. I was able to get the pod name manually (kubectl get pods), but now the pod is listed as Pending, which means I can't port-forward. I looked into the issue further. I am going to be disposing of this instance pretty soon, so I am not going to sanitize this output (normally I would sanitize the name and anything that looks like a hash value):
root@vigeeking:/home/tim# kubectl describe pods hassio-home-assistant-d89cb6fc8-5l9jh
Name:           hassio-home-assistant-d89cb6fc8-5l9jh
Namespace:      default
Priority:       0
Node:
Labels:         app.kubernetes.io/instance=hassio
                app.kubernetes.io/name=home-assistant
                pod-template-hash=d89cb6fc8
Annotations:
Status:         Pending
IP:
IPs:
Controlled By:  ReplicaSet/hassio-home-assistant-d89cb6fc8
Containers:
  home-assistant:
    Image:      homeassistant/home-assistant:0.113.3
    Port:       8123/TCP
    Host Port:  0/TCP
    Liveness:   http-get http://:api/ delay=60s timeout=10s period=10s #success=1 #failure=5
    Readiness:  http-get http://:api/ delay=60s timeout=10s period=10s #success=1 #failure=5
    Environment:
    Mounts:
      /config from config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fd2g6 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  config:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hassio-home-assistant
    ReadOnly:   false
  default-token-fd2g6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fd2g6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  3m1s (x1066 over 26h)  default-scheduler  running "VolumeBinding" filter plugin for pod "hassio-home-assistant-d89cb6fc8-5l9jh": pod has unbound immediate PersistentVolumeClaims

It looks like this may be a known issue for microk8s, and I will next be trying this workaround, since it appears to be a persistent volume claim issue: kubernetes/minikube#7828 (comment)
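Before trying that, the usual way to narrow down an unbound claim looks roughly like this (a sketch; the claim name is the one from the describe output above):

# the PVC created by the chart; Pending here means nothing is provisioning it
kubectl get pvc hassio-home-assistant
# check whether any storage class exists to satisfy it
kubectl get storageclass
# on microk8s, enabling the built-in hostpath provisioner is the common fix
microk8s enable storage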

@jwhollingsworth
Collaborator

If you didn't do it, you need to enable storage: "microk8s enable storage"

Also, that export looks like it didn't work because the pod doesn't have the labels it is filtering on. Seems like a bug in that chart's NOTES.txt file.

I assume that is what this is doing: -l "app=home-assistant,release=hassio"

The Pod labels you show are:

Labels:
app.kubernetes.io/instance=hassio
app.kubernetes.io/name=home-assistant
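A quick way to confirm which selector actually matches (a sketch):

# show every label on the pods
kubectl get pods --show-labels
# this selector matches the labels above; the one from NOTES.txt would match nothing here
kubectl get pods -l "app.kubernetes.io/name=home-assistant,app.kubernetes.io/instance=hassio"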

@vigeeking
Owner Author

Good call on enabling storage, ty for that. For the other, I believe the export was failing because the pod kept crashing due to the volume issue, which ties back into enabling storage. I'm going to let it sit for a bit and see if it self-heals, but if not I'll get more info up here within the next hour or two.

@vigeeking
Owner Author

I'm still having storage issues. For whatever reason, I just can't get storage in helm working the way I'd like. I've kind of run out of ideas for this one, so I'm going to close it. If need be I can reopen this one, otherwise I'll assume I've passed any batons on so there is no reason to leave this open. I think this is a quantum issue.

@kamyar

kamyar commented Jul 5, 2021

export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=home-assistant,app.kubernetes.io/instance=home-assistant" -o jsonpath="{.items[0].metadata.name}")
worked for me; it looks like the selector has since been fixed in the post-install NOTES output I got from helm.
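For completeness, the follow-up from the NOTES would then be along these lines (a sketch; the describe output earlier in this thread shows the container listening on 8123, so the 8080:80 mapping from the original NOTES may need adjusting):

kubectl port-forward $POD_NAME 8080:8123
# then visit http://127.0.0.1:8080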
