Wrongly flags resources #287

Closed
parag-warudkar-ck opened this issue Apr 7, 2022 · 3 comments
parag-warudkar-ck commented Apr 7, 2022

I am encountering cases where kubent incorrectly flags resources as using deprecated APIs.

For example, I have an Ingress that is reported as below in the JSON output:

{ "Name": "ingress-name", "Namespace": "ingress-dev", "Kind": "Ingress", "ApiVersion": "networking.k8s.io/v1beta1", "RuleSet": "Deprecated APIs removed in 1.22", "ReplaceWith": "networking.k8s.io/v1", "Since": "1.19.0" },

However, if I pull the YAML for this Ingress from the cluster (below), you can see that it is already using the recommended replacement apiVersion networking.k8s.io/v1, and the only references to the deprecated apiVersion are in managedFields and the last-applied-configuration annotation. Ideally kubent should not flag these kinds of deprecated API references in managedFields, which IIRC are server-managed.

Is there a way to get kubent to not flag these? Otherwise it is much less useful for us, because we have to parse the report, connect to the cluster, check whether each reported resource is only a managedFields reference, and filter those out (see the sketch after the YAML below).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{...}}
  creationTimestamp: "2021-06-24T00:32:48Z"
  generation: 1
  managedFields:
  - apiVersion: networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    manager: xxx
    operation: Update
    time: "2021-06-24T00:33:54Z"
  - apiVersion: networking.k8s.io/v1
    manager: kubectl-client-side-apply
    operation: Update
    time: "2021-09-02T20:49:25Z"
  - apiVersion: networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:kubectl.kubernetes.io/last-applied-configuration: {}
spec:
  defaultBackend:
    service:
      name: blah
      port:
        number: 443
status:
  loadBalancer:
    ingress:
    - ip: xx.xx.xx.xx
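
For completeness, this is roughly how we post-process the report today; a minimal sketch, assuming kubent's -o json output has the fields shown above (the resource name and namespace are just the example from this report):

# List flagged Ingresses from the kubent report as namespace/name pairs
kubent -o json | jq -r '.[] | select(.Kind == "Ingress") | "\(.Namespace)/\(.Name)"'

# For each flagged object, check which apiVersions appear in managedFields
kubectl get ingress ingress-name -n ingress-dev -o jsonpath='{.metadata.managedFields[*].apiVersion}'
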
stepanstipl (Contributor) commented:

Hi @paragw-ck,

I think there's a bit of a misunderstanding about how API resource versioning in K8s works and what kubent does.

The gist of it is that when you request a resource, for example using kubectl, it will be returned in the version that your client requested (i.e. the same resource can be returned in any of the versions supported by your K8S version).

So kubent is actually detecting when resources were created/updated using an old version of the API, as this is likely going to cause you an issue after the cluster upgrade (it uses the last-applied-configuration annotation for that, among other things).
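
For example, you can check which apiVersion an object was last applied with by reading that annotation directly (the NAME/NAMESPACE placeholders below follow the example above):

kubectl get ingress [NAME] -n [NAMESPACE] -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'

The apiVersion field inside the returned JSON is the one the client used at apply time, which is one of the signals kubent looks at.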

Please see the following comments for more details:

I hope this makes sense 😀

(I'm going to close this issue, as kubent behaves as expected, and I hope it's clear why it does that, but feel free to comment further if you have any questions or disagree.)

parag-warudkar-ck (Author) commented Apr 8, 2022

Thank you for the incredibly helpful comment. There is no documentation of any of this anywhere (unless I failed to find it in some obscure place), but I understand the finer points about versioning now.

However, I don't really understand what you mean by

So kubent is actually detecting when resources were created/updated using an old version of the API, as this is likely going to cause you an issue after the cluster upgrade

Specifically the "cause you an issue" part: in our case we have updated the source artifacts to use the newer API version, and that change has already been applied to the cluster. So the next time the cluster is upgraded there should be no issue, as the source artifacts already use the new API version.

Or am I mistaken, and unless I delete and recreate the object with the new API version, a new cluster version without support for the old apiVersion would somehow look at last-applied-configuration and say "this was created from an old apiVersion that I don't support, so it's invalid", even though it is currently using a supported apiVersion? If that is in fact the case, it would be valid for kubent to point that out. But I am not sure that's the case. I would appreciate it if you could clarify how this can cause issues with cluster upgrades and what would have to be done to avoid them; deleting and recreating would cause outages for things like Ingress.

If the cluster admin has a way of knowing that the objects in question will not cause an issue, would it make sense to add a flag that tells kubent not to report these cases?

stepanstipl (Contributor) commented:

Hi @paragw-ck,

So the existing resources in the cluster will be fine - these are auto-converted by the K8s API server.

Specifically the "cause you an issue" part: in our case we have updated the source artifacts to use the newer API version, and that change has already been applied to the cluster. So the next time the cluster is upgraded there should be no issue, as the source artifacts already use the new API version.

Or am I mistaken, and unless I delete and recreate the object with the new API version, a new cluster version without support for the old apiVersion would somehow look at last-applied-configuration and say "this was created from an old apiVersion that I don't support, so it's invalid", even though it is currently using a supported apiVersion?

So please note that how the resources are stored internally is not related to how the resources were created/are retrieved. There's always only one internal version, but the same resource can be created/retrieved using all the supported API versions.

You can try it with your ingress, and compare:

kubectl get ingress.v1.networking.k8s.io [NAME] -o yaml

vs

kubectl get ingress.v1beta1.networking.k8s.io [NAME] -o yaml

It's the same resource, but it will be returned in two different API versions.

As mentioned above, the issue is not with existing resources, as these are auto-converted as needed, and the internal storage version is independent of any of these API versions.

The issue would be if any of your tooling tried to create or retrieve the resource using the old version after the upgrade, as that would fail. So if you have updated the version in all your manifests and ensured that all your tools (such as custom controllers/operators) are not using the old API, you should be all good.
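
As a rough sketch (assuming client-side kubectl apply and an illustrative manifest path), once a manifest declares the new apiVersion, re-applying it refreshes the last-applied-configuration annotation, so anything that reads that annotation sees the new version:

# ingress.yaml already declares apiVersion: networking.k8s.io/v1
kubectl apply -f ingress.yaml

# confirm the annotation now records the new apiVersion
kubectl get ingress [NAME] -n [NAMESPACE] -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}' | jq -r .apiVersion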
