After updating to v0.1.11 with `kubectl set image`, the new pods were crashing due to an error with the Kubernetes configuration.

It seems to be related to the removal of `IN_CLUSTER=true` from the Dockerfile.
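For context, the variable that was removed was presumably baked into the image with something like the following (a sketch; the actual Dockerfile isn't shown in this thread):

```dockerfile
# Previously set in the Kubeview image; removed in v0.1.11, so the pod
# no longer defaults to in-cluster configuration and instead looks for
# a kubeconfig file on disk.
ENV IN_CLUSTER=true
```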
I will perform additional diagnostics and add the info here.
Log:

```
2020/03/12 13:30:17 ### Kubeview v0.1.11 starting...
2020/03/12 13:30:17 ### Connecting to Kubernetes...
2020/03/12 13:30:17 ### Creating client with config file: /.kube/config
panic: stat /.kube/config: no such file or directory

goroutine 1 [running]:
main.main()
	/build/cmd/server/main.go:59 +0xc54
```
PS: Thanks for all the upgrades! Great stuff! 👏
Fixed by adding:

```yaml
- name: IN_CLUSTER
  value: "true"
```

Note: the quotes around "true" are required, as per kubernetes/kubernetes#73692.
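In context, this entry goes under the container's `env` section in the Deployment manifest (a sketch; the container name and image reference are assumptions for illustration):

```yaml
spec:
  containers:
    - name: kubeview            # container name is an assumption
      image: kubeview:0.1.11    # image reference is illustrative
      env:
        - name: IN_CLUSTER
          value: "true"         # quoted so YAML parses it as a string, not a boolean
```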
Hi, sorry, yes. I felt that having that hard-coded in the Dockerfile was a bit hidden and easy to miss; I myself forgot it was there!

I updated the Helm chart to include it, but made a typo. You're correct, the true should be in quotes.