Couldn't get resource list for metrics error #11772
I started getting that with kubectl commands, too, with Kubernetes 1.25.6. Do you get that with kubectl?
No - I get this from helm 3.11 in CI and locally (I did not with helm 3.7; haven't tested different versions).
Also, here's the output of:
EKS 1.24, kubectl 1.25.4, getting the same error.
Same error, EKS 1.24, kubectl 1.24.4 |
Same issue, GKE 1.22.15-gke.1000, kubectl 1.24.0 |
Same error, kubectl 1.26.1
Two notes:
I don't think (as @joejulian mentions) this is a helm issue. If I had to guess, a new version of a common metrics provider has removed its
I get the error when running
Yes, using the k8s.io/client-go library is the standard way of interfacing with the Kubernetes API. I was just asking to see if I could find any commonality between what you're seeing with helm and what I'm seeing with kubectl; they both use client-go.
Issue fixed. |
I am running into a similar issue: the helm version is v3.11.0, and the installation completed successfully after throwing these warnings.
Will this cause any problems in the future, or do I have to roll back to helm v3.10.3 and re-install?
We are running into the same issue: a lot of warnings, but no real impact in the end. Is there a context where these warnings are meaningful? Is there a way to keep using 3.11.0 without getting these warnings? Down-pinning will solve the issue, but is there anything less temporary we can do?
Those errors go to stderr; if you look closely, the helm chart did get installed successfully. As a workaround, you can configure your pipeline to ignore stderr.
Ignoring stderr doesn’t seem like a good fix. Presumably, either helm should downgrade the version of that lib, look for an upstream fix of that lib, or swallow the errors itself in a more limited way.
Seems like this is probably the change that introduced it; I posted an issue upstream at:
@wyardley I had the same concern ("it would prevent one from seeing important and/or fatal errors as well as this noise") when another engineer proposed this workaround. Apparently, in a DevOps pipeline a fatal error will still fail the step, since failOnStderr is false by default.
In almost any type of CI pipeline that's configured properly, the job will fail based on the process's exit code (success on exit code 0, failure on any other exit code), so suppressing stderr should not make it "succeed" on failure. However, suppressing all stderr would potentially make debugging a failure more challenging in some cases IMO.
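That distinction can be sketched in POSIX shell; both function names below are made up for illustration:

```shell
#!/bin/sh
# 'noisy_ok' mimics a run that warns on stderr but exits 0 (success);
# 'quiet_fail' mimics a run that prints nothing but exits 1 (failure).
noisy_ok()   { echo "E0302 cache warning" >&2; return 0; }
quiet_fail() { return 1; }

# A well-configured CI step keys off the exit code, not stderr:
noisy_ok   2>/dev/null && echo "step 1: pass" || echo "step 1: fail"
quiet_fail 2>/dev/null && echo "step 2: pass" || echo "step 2: fail"
```

This prints "step 1: pass" and "step 2: fail": the noisy-but-successful run still passes, and the silent failure still fails, even with stderr suppressed.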
I finally had some time to dig into this on my own cluster.
is client-go trying to fill its cache. When an APIService is defined that no longer has a controller servicing it, client-go prints those messages. In my case, I upgraded prometheus-adapter, which seems to have changed from
This is not a helm problem.
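That diagnosis can be checked by listing APIService entries whose backend is unavailable. In this sketch, `kubectl_out` simulates `kubectl get apiservice` tabular output (the entries shown are made up) so the filter can run without a cluster:

```shell
#!/bin/sh
# Diagnostic sketch: find APIService entries whose backing service is
# gone, the usual cause of client-go's "couldn't get resource list"
# warnings. 'kubectl_out' fakes 'kubectl get apiservice' output.
kubectl_out() {
  cat <<'EOF'
NAME                              SERVICE                       AVAILABLE
v1beta1.metrics.k8s.io            kube-system/metrics-server    True
v1beta1.external.metrics.k8s.io   keda/keda-metrics-apiserver   False (ServiceNotFound)
EOF
}

# Against a real cluster this would be:
#   kubectl get apiservice | awk 'NR > 1 && $3 != "True"'
kubectl_out | awk 'NR > 1 && $3 != "True"'
```

Only the unavailable entry is printed; any APIService that shows up here is a candidate for the stale registration causing the warnings.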
WRT the argument over stderr vs stdout: it's very common to reserve stdout for necessary output and send unrelated data to stderr. If you're using ArgoCD, for example, and all those client-go errors cluttered up the YAML output, it would be unusable. For this reason, the messages go to stderr. Messages on stderr are not necessarily errors, and "any type of CI pipeline that's configured properly" will take the application's exit code as the indication of failure, not output on any terminal channel. We only have two channels, and keeping stdout uncluttered satisfies the needs of the majority.
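A small sketch of that separation; `render` is a hypothetical stand-in for a command like `helm template`:

```shell
#!/bin/sh
# stdout carries the payload (YAML); stderr carries diagnostics.
# 'render' is a made-up stand-in for 'helm template'.
render() {
  printf 'apiVersion: v1\nkind: ConfigMap\n'   # payload -> stdout
  echo "E0302 client-go cache warning" >&2     # noise   -> stderr
}

# A consumer (e.g. ArgoCD) reads clean YAML; warnings go elsewhere.
render 2>warnings.log
```

The piped consumer sees only the two YAML lines; the warning lands in `warnings.log`, where it can still be inspected after a failure.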
Strangely enough, our AKS cluster still has the controller servicing the APIService in place. Can you shed some light on why we are getting the error on helm upgrade?
➜ k get apiservice v1beta1.external.metrics.k8s.io -o yaml
➜ k get svc -n keda
➜ k get ep -n keda
No. Please check with the metrics provider.
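For reference, a common cluster-side cleanup (an assumption based on the diagnosis above, not an official fix; only delete the APIService if its backing controller is truly gone):

```shell
# Inspect the suspect APIService first; v1beta1.external.metrics.k8s.io
# is the example group from this thread.
kubectl describe apiservice v1beta1.external.metrics.k8s.io

# If no controller serves it anymore, removing the stale registration
# stops client-go's discovery warnings.
kubectl delete apiservice v1beta1.external.metrics.k8s.io
```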
Hello, |
Reinstalling kubectl fixed the warning message for me. I first removed the kubectl originally installed via Homebrew in WSL, which threw this warning on every call, then reinstalled kubectl using apt-get, following the installation guide.
The problem is that tooling needs to be updated to a fixed version of the Kubernetes client. The brew source probably isn't using the latest versions... In parallel, a fix has been released as part of the custom-metrics-apiserver library (the one that projects like prometheus-adapter or KEDA use), so future releases of metrics servers should work properly even if you use affected client versions (for example, KEDA will solve it in the next release).
Does KEDA have an issue open to follow? @JorTurFer
hi @mmorejon |
With the latest version of helm, I'm getting warnings like this when running operations. I'm wondering if it has to do with deprecated API versions, or if it's something else.
(The command still succeeds, but it's messy / unsightly)
Output of helm version:
version.BuildInfo{Version:"v3.11.0", GitCommit:"472c5736ab01133de504a826bd9ee12cbe4e7904", GitTreeState:"clean", GoVersion:"go1.19.5"}
Output of kubectl version:

Cloud Provider/Platform (AKS, GKE, Minikube etc.): GKE