Installation of Kubecost on Kubernetes 1.25 fails when installing without Helm #1910

Closed · Adam-Stack-PM opened this issue Jan 19, 2023 · 13 comments
Labels: bug (Something isn't working), v1.100

Comments

@Adam-Stack-PM

Adam-Stack-PM commented Jan 19, 2023

Installation of Kubecost on Kubernetes 1.25 still fails when installing without Helm:

Proposed solution from @AjayTripathy: Generate that file from a template where we assume we're on Kubernetes less than 1.25.

Linked Issues and context: #1773 (comment)

Adam-Stack-PM added the bug (Something isn't working) and v1.100 labels on Jan 19, 2023
@dwbrown2
Contributor

@AjayTripathy is this still relevant?

@AjayTripathy
Contributor

AjayTripathy commented Jan 31, 2023

Yes, this is still relevant.

We probably need to set the `-a, --api-versions strings` flag ("Kubernetes api versions used for Capabilities.APIVersions").

See https://helm.sh/docs/helm/helm_template/

We should pass it when we run `helm template` in our build scripts to generate the unbundled YAML file:
https://github.com/kubecost/release-scripts/blob/2674fbd636e8ddee627373ea2bdac55fdeb6097d/create_release_tags.py#L182
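A minimal sketch of what that could look like (the chart path, release name, and flag values here are assumptions, not the exact build-script invocation, and it assumes the chart gates PSP creation on `Capabilities`):

```sh
# Render the flat manifest as if targeting a 1.25+ cluster, so templates that
# check Capabilities for policy/v1beta1 PodSecurityPolicy skip those objects.
helm template kubecost ./cost-analyzer \
  --namespace kubecost \
  --kube-version 1.25 \
  --api-versions policy/v1 \
  > kubecost.yaml
```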

A small release fix like this would normally fall to the buildmaster; maybe you can help triage, @teevans?

@dwbrown2
Contributor

How common is it for users to install without Helm at this point?

cc @kwombach12

@AjayTripathy
Contributor

We have at least one from #1773 :)

I doubt it's super common, so perhaps we can mark it as P1.

@zioproto

> How common is it for users to install without Helm at this point?
>
> cc @kwombach12

Hello, I am the person who raised the issue about the installation without Helm still failing in #1773.

I discovered this problem while doing a content refresh of this Microsoft documentation page, which provides the option to install Kubecost on AKS without Helm:

https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/app-platform/aks/cost-governance-with-kubecost

Deprecating the installation without Helm is a possible solution. But if the Kubecost project wants to support installation via plain Kubernetes manifest files, then this issue should be fixed.

Please clarify whether, long term, you want to support only the Helm installation method, so I can refresh the Microsoft docs with the correct information.

Thanks

@teevans
Member

teevans commented Jan 31, 2023

@zioproto - We'll get this added to our tracking and try to get a fix out as quickly as we can!

@michaelmdresser michaelmdresser changed the title Installation of Kubecost on Kubernetes 1.25 still fails when installing without Helm Installation of Kubecost on Kubernetes 1.25 fails when installing without Helm Jan 31, 2023
@michaelmdresser
Contributor

michaelmdresser commented Jan 31, 2023

Repro with error output:

k3d cluster create --image rancher/k3s:v1.25.3-rc3-k3s1 1.25
...
kubectl create namespace kubecost
...
kubectl apply -f https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/master/kubecost.yaml --namespace kubecost
serviceaccount/kubecost-grafana created
serviceaccount/kubecost-kube-state-metrics created
serviceaccount/kubecost-prometheus-node-exporter created
serviceaccount/kubecost-prometheus-server created
serviceaccount/kubecost-cost-analyzer created
secret/kubecost-grafana created
configmap/kubecost-grafana-config-dashboards created
configmap/kubecost-grafana created
configmap/kubecost-prometheus-server created
configmap/kubecost-cost-analyzer created
configmap/nginx-conf created
configmap/attached-disk-metrics-dashboard created
configmap/cluster-metrics-dashboard created
configmap/cluster-utilization-dashboard created
configmap/deployment-utilization-dashboard created
configmap/label-cost-dashboard created
configmap/namespace-utilization-dashboard created
configmap/node-utilization-dashboard created
configmap/pod-utilization-dashboard created
configmap/prom-benchmark-dashboard created
persistentvolumeclaim/kubecost-prometheus-server created
persistentvolumeclaim/kubecost-cost-analyzer created
clusterrole.rbac.authorization.k8s.io/kubecost-grafana-clusterrole created
clusterrole.rbac.authorization.k8s.io/kubecost-kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kubecost-prometheus-server created
clusterrole.rbac.authorization.k8s.io/kubecost created
clusterrolebinding.rbac.authorization.k8s.io/kubecost-grafana-clusterrolebinding created
clusterrolebinding.rbac.authorization.k8s.io/kubecost-kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kubecost-prometheus-server created
clusterrolebinding.rbac.authorization.k8s.io/kubecost created
role.rbac.authorization.k8s.io/kubecost-grafana created
role.rbac.authorization.k8s.io/kubecost created
role.rbac.authorization.k8s.io/kubecost-cost-analyzer-psp created
rolebinding.rbac.authorization.k8s.io/kubecost-grafana created
rolebinding.rbac.authorization.k8s.io/kubecost created
rolebinding.rbac.authorization.k8s.io/kubecost-cost-analyzer-psp created
service/kubecost-grafana created
service/kubecost-kube-state-metrics created
service/kubecost-prometheus-node-exporter created
service/kubecost-prometheus-server created
service/kubecost-cost-analyzer created
daemonset.apps/kubecost-prometheus-node-exporter created
deployment.apps/kubecost-grafana created
deployment.apps/kubecost-kube-state-metrics created
deployment.apps/kubecost-prometheus-server created
deployment.apps/kubecost-cost-analyzer created
resource mapping not found for name: "kubecost-grafana" namespace: "" from "https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/master/kubecost.yaml": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "kubecost-cost-analyzer-psp" namespace: "" from "https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/master/kubecost.yaml": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first

First of all, there's a problem that no one has brought up yet: we should absolutely not be recommending a flat kubecost.yaml from the master branch, which is now defunct. I've created a PR to fix this (#1936), but we're still left with the PSP problem.

k3d cluster create --image rancher/k3s:v1.25.3-rc3-k3s1 1.25
...
kubectl create namespace kubecost
...
kubectl apply -f https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/develop/kubecost.yaml --namespace kubecost
...
resource mapping not found for name: "kubecost-grafana" namespace: "" from "https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/develop/kubecost.yaml": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "kubecost-cost-analyzer-psp" namespace: "" from "https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/develop/kubecost.yaml": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
ensure CRDs are installed first
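A quick way to check whether a published flat manifest still contains PSP objects (a sketch assuming `curl` and `grep` are available; the count should drop to zero once a regenerated manifest is published):

```sh
# Count PodSecurityPolicy documents in the published manifest; a non-zero
# count means `kubectl apply` will fail on Kubernetes 1.25+.
curl -s https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/develop/kubecost.yaml \
  | grep -c "kind: PodSecurityPolicy"
```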

The PSP problem seems to be fixed if I template from nightly:

k3d cluster create --image rancher/k3s:v1.25.3-rc3-k3s1 1.25     
...
kubectl create namespace kubecost
...
helm template kubecost kubecost-nightly/cost-analyzer -n kubecost | kubectl apply -f -
serviceaccount/kubecost-grafana created
serviceaccount/kubecost-kube-state-metrics created
serviceaccount/kubecost-prometheus-node-exporter created
serviceaccount/kubecost-prometheus-server created
serviceaccount/kubecost-cost-analyzer created
secret/kubecost-grafana created
configmap/kubecost-grafana-config-dashboards created
configmap/kubecost-grafana created
configmap/kubecost-prometheus-server created
configmap/kubecost-cost-analyzer created
configmap/nginx-conf created
configmap/attached-disk-metrics-dashboard created
configmap/cluster-metrics-dashboard created
configmap/cluster-utilization-dashboard created
configmap/deployment-utilization-dashboard created
configmap/label-cost-dashboard created
configmap/namespace-utilization-dashboard created
configmap/node-utilization-dashboard created
configmap/pod-utilization-dashboard created
configmap/prom-benchmark-dashboard created
persistentvolumeclaim/kubecost-prometheus-server created
persistentvolumeclaim/kubecost-cost-analyzer created
clusterrole.rbac.authorization.k8s.io/kubecost-grafana-clusterrole created
clusterrole.rbac.authorization.k8s.io/kubecost-kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kubecost-prometheus-server created
clusterrole.rbac.authorization.k8s.io/kubecost-cost-analyzer created
clusterrolebinding.rbac.authorization.k8s.io/kubecost-grafana-clusterrolebinding created
clusterrolebinding.rbac.authorization.k8s.io/kubecost-kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kubecost-prometheus-server created
clusterrolebinding.rbac.authorization.k8s.io/kubecost-cost-analyzer created
role.rbac.authorization.k8s.io/kubecost-grafana created
role.rbac.authorization.k8s.io/kubecost-cost-analyzer created
role.rbac.authorization.k8s.io/kubecost-cost-analyzer-psp created
rolebinding.rbac.authorization.k8s.io/kubecost-grafana created
rolebinding.rbac.authorization.k8s.io/kubecost-cost-analyzer created
rolebinding.rbac.authorization.k8s.io/kubecost-cost-analyzer-psp created
service/kubecost-grafana created
service/kubecost-kube-state-metrics created
service/kubecost-prometheus-node-exporter created
service/kubecost-prometheus-server created
service/kubecost-cost-analyzer created
daemonset.apps/kubecost-prometheus-node-exporter created
deployment.apps/kubecost-grafana created
deployment.apps/kubecost-kube-state-metrics created
deployment.apps/kubecost-prometheus-server created
deployment.apps/kubecost-cost-analyzer created

Now that our README instructions aren't messed up (#1936), the long-term fix is to update the develop-branch kubecost.yaml as part of the release process. Ideally we would link to release-specific kubecost.yaml files, such as https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/v1.99/kubecost.yaml, but unfortunately that makes links in external documentation, like the Azure docs, unstable.
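For illustration, a version-pinned install would look something like the following (hypothetical: it assumes a per-release kubecost.yaml is actually published at that tag, which is not the case today):

```sh
# Hypothetical version-pinned install; stable for docs that pin a release,
# but unhelpful for external docs that want to track the latest version.
kubectl apply --namespace kubecost \
  -f https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/v1.99/kubecost.yaml
```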

@teevans
Member

teevans commented Jan 31, 2023

@zioproto - We have a fix merged that should resolve this problem. The only item that needs updating is the manifest URL on the documentation page. The go-forward URL should be

https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/develop/kubecost.yaml

In the future, we'll add a versioned URL that will pull the latest release. Let me know if there is anything else we can do!
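For the docs page, that translates to something like the following (mirroring the repro commands above):

```sh
kubectl create namespace kubecost
kubectl apply --namespace kubecost \
  -f https://raw.githubusercontent.com/kubecost/cost-analyzer-helm-chart/develop/kubecost.yaml
```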

@teevans teevans closed this as completed Jan 31, 2023
@michaelmdresser
Contributor

michaelmdresser commented Jan 31, 2023

@teevans not master, that's a bad URL

@teevans
Member

teevans commented Jan 31, 2023

Fixed! Good catch!

@dwbrown2
Contributor

dwbrown2 commented Feb 1, 2023

Thanks for the quick fix, @teevans and @michaelmdresser!

@zioproto is there anything else we can get you at this point? In general, we recommend Helm 3 as the best install path, given how easily it lets you configure Kubecost in many ways. Would you be open to a contribution on this article?

@zioproto

@dwbrown2 the article I refreshed is https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/scenarios/app-platform/aks/cost-governance-with-kubecost

The changes I proposed have been merged already. If you want to further improve this page, you can propose a PR in this repository: https://github.com/MicrosoftDocs/cloud-adoption-framework/blob/main/docs/scenarios/app-platform/aks/cost-governance-with-kubecost.md

@teevans
Member

teevans commented Feb 24, 2023

Thanks @zioproto!!!
