This repository has been archived by the owner on Nov 1, 2022. It is now read-only.

Flux produces warning logs about deprecated API versions for kubernetes objects that do not exist #3554

Closed
2 tasks done
jddarby opened this issue Sep 21, 2021 · 13 comments · Fixed by #3558

@jddarby

jddarby commented Sep 21, 2021

Describe the bug

Having upgraded from Flux 1.23.1 to 1.24.1, we have started seeing warning logs from Flux telling us that we are using deprecated API versions for a number of Kubernetes objects each time Flux syncs with the Git repository, e.g.

W0921 14:33:34.146163       7 warnings.go:70] v1 ComponentStatus is deprecated in v1.19+
W0921 14:33:34.241446       7 warnings.go:70] apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
W0921 14:33:34.327825       7 warnings.go:70] certificates.k8s.io/v1beta1 CertificateSigningRequest is deprecated in v1.19+, unavailable in v1.22+; use certificates.k8s.io/v1 CertificateSigningRequest
W0921 14:33:34.352626       7 warnings.go:70] networking.k8s.io/v1beta1 IngressClass is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 IngressClassList
W0921 14:33:34.358693       7 warnings.go:70] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0921 14:33:34.363110       7 warnings.go:70] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0921 14:33:34.403549       7 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0921 14:33:34.411424       7 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0921 14:33:34.416192       7 warnings.go:70] rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
W0921 14:33:34.420343       7 warnings.go:70] rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
W0921 14:33:34.439403       7 warnings.go:70] storage.k8s.io/v1beta1 CSIDriver is deprecated in v1.19+, unavailable in v1.22+; use storage.k8s.io/v1 CSIDriver
W0921 14:33:34.443146       7 warnings.go:70] storage.k8s.io/v1beta1 CSINode is deprecated in v1.17+, unavailable in v1.22+; use storage.k8s.io/v1 CSINode
W0921 14:33:34.447938       7 warnings.go:70] storage.k8s.io/v1beta1 StorageClass is deprecated in v1.19+, unavailable in v1.22+; use storage.k8s.io/v1 StorageClass
W0921 14:33:34.452411       7 warnings.go:70] storage.k8s.io/v1beta1 VolumeAttachment is deprecated in v1.19+, unavailable in v1.22+; use storage.k8s.io/v1 VolumeAttachment
W0921 14:33:34.466610       7 warnings.go:70] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration
W0921 14:33:34.470702       7 warnings.go:70] admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
W0921 14:33:34.605743       7 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0921 14:33:34.641443       7 warnings.go:70] scheduling.k8s.io/v1beta1 PriorityClass is deprecated in v1.14+, unavailable in v1.22+; use scheduling.k8s.io/v1 PriorityClass
W0921 14:33:34.650638       7 warnings.go:70] coordination.k8s.io/v1beta1 Lease is deprecated in v1.19+, unavailable in v1.22+; use coordination.k8s.io/v1 Lease

This would be useful output; however, our cluster does not contain many of the objects that these warnings allude to (e.g. this cluster has no Ingress objects).

Do you know why we are seeing these logs on the new version? Is this expected behaviour?

Steps to reproduce

  1. Install Flux into an AKS cluster using Helm.
  2. Configure it to point at a directory containing a simple ConfigMap, e.g.
apiVersion: v1
data:
  HELLO: "WORLD"
kind: ConfigMap
metadata:
  name: test
  namespace: default
  3. Observe the logs and see the warnings.

Expected behavior

To not get spammed by warning logs for objects that don't exist.

Kubernetes version / Distro / Cloud provider

AKS v1.20.9

Flux version

Flux v1.24.1

Git provider

Azure DevOps

Container Registry provider

Azure

Additional context

No response

Maintenance Acknowledgement

  • I am aware of Flux v1's maintenance status

Code of Conduct

  • I agree to follow this project's Code of Conduct
@jddarby jddarby added blocked-needs-validation Issue is waiting to be validated before we can proceed bug labels Sep 21, 2021
@kingdonb
Member

Thanks for bringing this to our attention! I am sorry that the Flux upgrade affected you this way. I wonder if it was really caused by a patch to Flux from this upgrade, or if this might be a coincidence from having upgraded Kubernetes versions to 1.20, and perhaps this is just the first Kubernetes version that warned about this deprecation.

I am not sure why you get those messages, but it is not intended behavior and that is noisy. I wouldn't expect Flux to warn you about deprecations for resources that you do not have.

Having not looked any deeper into the issue as of this moment, or attempted to reproduce it yet, I would suspect that Flux is enumerating all of the APIs it knows about (it knows about all of them, as it can list them via kubectl api-resources) and the warning comes from Kubernetes.

Flux is doing its discovery sweep, probably looking for resources that might be in need of automation or garbage collection. Because that is handled through an annotation, which is not indexed, I think all resources on the cluster must necessarily be scanned in an orderly way; and since some of the resource APIs represent deprecated endpoints, I would guess that the calls to those endpoints are emitting deprecation warnings simply because they are used (in spite of no objects being there).
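To make that concrete, here is a minimal sketch (not Flux's actual code, just an illustration assuming a reasonably recent client-go): listing any deprecated group/version makes the API server attach a Warning header to the response, and client-go's default warning handler logs it (those warnings.go:70 lines), even when the list comes back empty.

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

// listDeprecatedIngresses lists networking.k8s.io/v1beta1 Ingresses in all
// namespaces. On a 1.19+ cluster the request itself hits a deprecated
// endpoint, so the warning above is logged even if no Ingress exists.
func listDeprecatedIngresses(cfg *rest.Config) error {
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return err
	}
	gvr := schema.GroupVersionResource{
		Group:    "networking.k8s.io",
		Version:  "v1beta1",
		Resource: "ingresses",
	}
	_, err = client.Resource(gvr).Namespace(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	return err
}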

The most straightforward workaround that I can suggest without any code changes to Flux would be to disable those APIs if you really aren't using them. Usually there is a configuration flag that you can set which disables deprecated APIs ahead of schedule. That way they won't be listed in api-resources, Flux won't scan them, and you won't get warned anymore.

The second most straightforward fix (actually this might be even more straightforward) is to upgrade to a release of Kubernetes that has already disabled the deprecated APIs. That way you don't have to make any configuration changes.

In both of these scenarios, I think you should be prepared to revert your changes in case it turns out that you really were using some deprecated API that you didn't know about and things are now broken... I am not sure which of these is more straightforward with AKS. It looks like the 1.22.x Kubernetes release is not due out on AKS until October.

Here is a page with general information about upgrading related to these latest/upcoming Kubernetes release versions:

https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/

I am open to suggestions about what the root cause might be, or about whether we can somehow silence the warnings when these discovery APIs are triggered (while still leaving them intact in cases where API objects in deprecated groups or versions really are being sent to the cluster by Flux).

@dimbleby
Contributor

The logs definitely seem to be a result of a change in the flux container rather than elsewhere: we experimented and things were quiet at 1.23.1, but noisy at 1.24.0. It looks as though that upgrade pulled in a lot of dependency bumps: perhaps one of those introduced this?

If we upgrade to (say) kubernetes 1.22: that seems like it should get rid of the warning about things that are removed at 1.22. But there's a constant pipeline of things that start in beta and then graduate and the beta is deprecated: shouldn't we expect new warnings for things that are newly deprecated?

@dimbleby
Contributor

Looks like this is coming from client-go.

Here are some notes on how one could suppress these warnings
https://kubernetes.io/blog/2020/09/03/warnings/#customize-client-handling

The next example shows how to construct a client that ignores warnings. This is useful for clients that operate on metadata for all resource types (found dynamically at runtime using the discovery API) and do not benefit from warnings about a particular resource being deprecated.

sounds like it describes what flux is doing, and therefore suppressing these warnings would be sensible?
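For reference, the two knobs from that blog post look roughly like this in client-go (a sketch, assuming client-go v0.19 or later, where warning handlers were introduced):

package example

import "k8s.io/client-go/rest"

// silenceDeprecationWarnings shows the per-config handler and the
// process-wide default handler client-go exposes.
func silenceDeprecationWarnings(cfg *rest.Config) {
	// Only clients constructed from this config stop logging warnings.
	cfg.WarningHandler = rest.NoWarnings{}

	// Or: change the default handler used by every client that does not
	// set its own; rest.NoWarnings{} drops the warnings entirely.
	rest.SetDefaultWarningHandler(rest.NoWarnings{})
}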

@kingdonb
Member

kingdonb commented Sep 21, 2021

Ah, I get it now. The cluster is serving APIs which are deprecated, and the Flux daemon pulled in newer versions of the K8s API clients in order to remain compatible with future versions of Kubernetes.

Those clients are emitting the warnings. The discovery API enumeration is the activity that triggers it.

We decided not to upgrade all the way to the K8s 0.22.0 client (#3538 was merged in favor of #3537) because upgrading one more version would move past the point where these deprecation warnings become enforcement and those APIs are removed. That would have been a breaking change, which I think there is no intention of ever merging unless Flux v1 is still alive and maintained over a year from now (which would be unexpected, since Flux v1 is in maintenance mode).

The warnings are emitted by the client, as I see you have just now also mentioned. I would be open to a PR that suppresses them; they are not helpful when they are emitted during a discovery sweep. Do the warnings surface on every sync, or only at each daemon startup? (I guess it would happen on every GC sweep.)

@kingdonb kingdonb removed the blocked-needs-validation Issue is waiting to be validated before we can proceed label Sep 21, 2021
@dimbleby
Contributor

Yes, every sweep.

Can you give us any pointers to the relevant part of the code? Golang noob here: while I expect I can copy-paste from that blog post, any direction you can give as to where it should go would be appreciated.

@kingdonb
Member

I am in there looking now, and I may have to reproduce the issue and spend some time stepping through it in a debugger in order to narrow down exactly where those messages are emitted from.

I am honestly not totally familiar with that part of the code myself. Please bear with me as I narrow it down. It may be some time as I am working on some priority issues that are unrelated to this one, with looming deadlines.

Thanks for working with us on this!

@dimbleby
Contributor

Somewhere round about here I guess.

Presumably straightforward to change the restClientConfig? But maybe you think that we should only be suppressing the warnings on discovery, and so it's a little more complicated than that?

I would be completely happy just to remove the warning altogether. If for whatever reason I am stuck using a deprecated API, e.g. some third-party Helm chart that I can't easily update, I'm not so sure that Flux logging that every time round the loop is very useful anyway.

@kingdonb
Member

That's certainly a good place to start, and once we have a PR that does that, we can get it merged and put out a pre-release if it would help. I tend to agree, although I think it makes sense to keep the warning at apply time in a way that it doesn't make sense to show it at GC time. Flux users who are stuck on Flux v1 past Kubernetes 1.22 may not have another way to interact with their cluster, and in that case they would unfortunately have no other way to receive these deprecation warnings.

In any case we would like to have a way to suppress them all, in case the answer is "we know it's coming, we don't care, we'll deal with it whenever things break, just silence the noise" which is a perfectly reasonable position to take.

@dimbleby
Contributor

Haven't tested it yet, and it's getting late here, so this will have to wait at least until tomorrow; but I expect we can send you an MR like this if you'd be happy to take it:

diff --git a/cmd/fluxd/main.go b/cmd/fluxd/main.go
index 6e7cb88a..a6e07933 100644
--- a/cmd/fluxd/main.go
+++ b/cmd/fluxd/main.go
@@ -421,6 +421,7 @@ func main() {
                        logger.Log("err", err)
                        os.Exit(1)
                }
+               restClientConfig.WarningHandler = rest.NoWarnings{}
                restClientConfig.QPS = 50.0
                restClientConfig.Burst = 100
        }
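If you take something like this, one caveat (an assumption on my part, I haven't tried building it yet): k8s.io/client-go/rest needs to be imported as rest in that file if it isn't already, and because this sets the handler on the shared restClientConfig, every client built from it (discovery, dynamic, and the typed clientset) stops surfacing warnings, not just the GC sweep.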

@kingdonb
Member

kingdonb commented Sep 21, 2021

Well, I can definitely see all of these warnings on Kubernetes 1.21, and I will happily test the MR as you've proposed it to verify whether the warnings are suppressed.

I think we would want a flag, unless the warnings can be suppressed only at GC time, where they are emitted spuriously. I'd like to test whether adding a deprecated API resource emits the warning again.

There is a possibility that we can merge this change with the understanding that it needs further changes before it can be released. That way we will get an official "flux-prerelease" build from CI that you can point your servers at and use with trust, at least until an approach that is strong enough for a release is designed. It shouldn't be very hard to build the feature in the way that I want.

I'd like to reach agreement with the rest of the maintainers about how we handle this. I can bring it up at the Flux dev meeting, which is coming up in a day or two.

Meanwhile, thanks for the great detective work! We will get this resolved somehow, whether that requires a user-facing option or just a conditional in the right place that selects the rest.NoWarnings{} handler as you have done, but only for GC. 👍
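If we do end up wanting the user-facing option, I am picturing something along these lines (purely illustrative; the flag name and wiring are hypothetical and assume fluxd's existing pflag-based flag parsing):

// alongside the other fs.Bool(...) flag definitions in cmd/fluxd/main.go
k8sSuppressWarnings := fs.Bool("k8s-suppress-deprecation-warnings", false,
	"do not log Kubernetes API deprecation warnings emitted by client-go during discovery/GC sweeps")

// ...later, where restClientConfig is constructed
if *k8sSuppressWarnings {
	restClientConfig.WarningHandler = rest.NoWarnings{}
}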

@dimbleby
Contributor

dimbleby commented Sep 21, 2021

I experimented a bit with this after all.

  • the dynamicClientset turns out to be the one that generates these logs, if you wanted to be more surgical about which client should have the warnings suppressed.

    • I don't know whether that's useful though; perhaps it's just as well to set this configuration globally
  • none of this seems to have anything to do with the output that flux gives when applying objects that it finds in git

    • that is done by shelling out to kubectl; and if we wanted to show the deprecation warnings that kubectl produces, then the way to do that would be to log stderr from that command (a rough sketch follows after this list).
    • here I think
    • this risks being pretty noisy too; and adding a flag to control whether or not you want to do that sounds a bit like adding new features to a project that is in maintenance mode...
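Very roughly, something like this (hypothetical; the function name and plumbing are made up for illustration, not the actual code path):

package example

import (
	"bytes"
	"log"
	"os/exec"
)

// applyManifest shells out to kubectl and logs whatever kubectl writes to
// stderr, which is where its deprecation warnings ("Warning: ... is
// deprecated ...") end up, even when the apply itself succeeds.
func applyManifest(manifest []byte) error {
	cmd := exec.Command("kubectl", "apply", "-f", "-")
	cmd.Stdin = bytes.NewReader(manifest)

	var stderr bytes.Buffer
	cmd.Stderr = &stderr

	err := cmd.Run()
	if stderr.Len() > 0 {
		log.Printf("kubectl stderr: %s", stderr.String())
	}
	return err
}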

I reckon I'd lean towards just merging #3555.

If you later wanted to add an enhancement to show deprecation warnings from kubectl - and if the project maintenance state allows it - you could still do so; but hopefully we needn't block this fix on that.

@kingdonb
Member

kingdonb commented Oct 5, 2021

Sorry about the delay addressing this in a release. The timing around KubeCon is rough!

Please bear with me here, I will try to make sure this gets attention at this week's Flux dev meeting, but if there are issues of higher urgency and we run out of time, it may get deferred until after KubeCon is over next week.

Thanks again for your report!

@kingdonb kingdonb changed the title Flux produces warning logs aboout deprecated API versions for kubernetes objects that do not exist Flux produces warning logs about deprecated API versions for kubernetes objects that do not exist Oct 5, 2021
@kingdonb
Member

We are reviewing the proposed changes and I'll get back later this week with a timeline for a new release.

Thanks again for your patience on this!
