
Consider Kubernetes resource versions when doing unmarshal operations in the proxy #2896

Open
ifbyol opened this issue Jul 6, 2022 · 2 comments

Comments

@ifbyol
Member

ifbyol commented Jul 6, 2022

Is your feature request related to a problem? Please describe.
To include the dev.okteto.com/deployed-by label in all resources created with the okteto deploy command, we process the resources being modified in the request and unmarshal the spec field of each resource.

The problem is that the spec field is tied to a specific version of the resource, so if a version our code does not support is handled, we could fail the request because the unmarshal would fail (we always unmarshal into a static version of the spec).

For example, if some day in the future a v2 is released for deployments and the spec changes, and someone creates a v2 deployment, our proxy could fail to process that request.
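
To illustrate the failure mode, here is a minimal, hypothetical sketch (not the actual proxy code) of decoding a request body into a statically pinned apps/v1 type when the resource actually uses a newer apiVersion:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	// A hypothetical future "apps/v2" Deployment whose spec layout differs
	// from apps/v1.
	body := []byte(`{"apiVersion":"apps/v2","kind":"Deployment","spec":{"rollout":{}}}`)

	var d appsv1.Deployment // statically pinned to apps/v1
	if err := json.Unmarshal(body, &d); err != nil {
		// Depending on how the shape changed, the unmarshal may error out...
		fmt.Println("unmarshal failed:", err)
		return
	}
	// ...or silently succeed with the unknown fields dropped, so the label
	// would be injected into a spec the server may later reject.
	fmt.Printf("decoded as apps/v1, template labels: %v\n", d.Spec.Template.Labels)
}
```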

Describe the solution you'd like
We should only try to include the label in .Spec.Template.ObjectMetadata for supported versions of resources. We probably also need to review whether the common metadata field is versioned, to take that into account.
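
A minimal sketch of the proposed gate, assuming the proxy can peek at apiVersion/kind before committing to the typed unmarshal; supportedDeploymentVersions and injectDeployedBy are illustrative names, not the actual okteto code:

```go
package proxy

import (
	"encoding/json"

	appsv1 "k8s.io/api/apps/v1"
)

// supportedDeploymentVersions lists the apiVersions whose spec layout we know
// how to modify (illustrative; the real list would live in the CLI).
var supportedDeploymentVersions = map[string]bool{"apps/v1": true}

// injectDeployedBy sets the label only when the version gate passes; anything
// else is forwarded to the API server unmodified.
func injectDeployedBy(body []byte, value string) ([]byte, error) {
	var meta struct {
		APIVersion string `json:"apiVersion"`
		Kind       string `json:"kind"`
	}
	if err := json.Unmarshal(body, &meta); err != nil {
		return body, nil // not a JSON object we understand: forward untouched
	}
	if meta.Kind != "Deployment" || !supportedDeploymentVersions[meta.APIVersion] {
		return body, nil // unsupported version: forward unmodified
	}
	var d appsv1.Deployment // safe now: the layout is known to be apps/v1
	if err := json.Unmarshal(body, &d); err != nil {
		return nil, err
	}
	if d.Spec.Template.Labels == nil {
		d.Spec.Template.Labels = map[string]string{}
	}
	d.Spec.Template.Labels["dev.okteto.com/deployed-by"] = value
	return json.Marshal(&d)
}
```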

Describe alternatives you've considered
N/A

Additional context
N/A

@jmacelroy
Contributor

@ifbyol is this causing a bug today, or is it something that will become a bug when we add support for new versions of Kubernetes with new spec changes? If that does happen, how would we know there is an issue? Is there a test that we could write, or already have, that will let us know this has happened, or is it a bad silent failure?

I see this as possibly having two fixes/approaches.

  1. Have an error, loud and clear, that there is a mismatch in spec expectations. This helps when users use an unsupported kube version (see the sketch after this list).
  2. Have a test that will show these issues to us during release testing and when adding support for new cluster versions, so that we can address the spec change at that time and not let it get released.
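
A rough sketch of what option 1 could look like, assuming strict JSON decoding is acceptable in the proxy; decodeStrict is a hypothetical helper. DisallowUnknownFields makes a spec that doesn't match the pinned apps/v1 layout fail loudly instead of being silently dropped:

```go
package proxy

import (
	"bytes"
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// decodeStrict fails loudly when the body does not match the pinned apps/v1
// layout, instead of silently dropping unknown fields (hypothetical helper).
func decodeStrict(body []byte) (*appsv1.Deployment, error) {
	dec := json.NewDecoder(bytes.NewReader(body))
	dec.DisallowUnknownFields()
	var d appsv1.Deployment
	if err := dec.Decode(&d); err != nil {
		return nil, fmt.Errorf("resource spec does not match a supported version: %w", err)
	}
	return &d, nil
}
```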

@jmacelroy jmacelroy added this to the 2.6 milestone Jul 6, 2022
@ifbyol
Member Author

ifbyol commented Jul 7, 2022

@jmacelroy It is not an issue now; it will be in the future, with new versions of Kubernetes that introduce spec changes.

If that does happen, how would we know there is an issue?

Not sure how we could detect it. As this sits in the interaction between the CLI and the Kubernetes server (not the Okteto server), we would detect it during tests with new k8s versions. I guess that if a new v2 of, let's say, deployments is released, we will have to modify several places in the product, so we should also consider the CLI for that.

Is there a test that we could write, or already have, that will let us know this has happened, or is it a bad silent failure?

Not sure we could write a test to detect this automatically when it starts to happen. It would be something that would arise when someone tries to use a new manifest for some resource. I think it would be more like a silent failure. With the changes we are doing as part of ticket #2663, we wouldn't "fail" on our end; if the spec is not valid, we would forward the request and let the k8s server handle it. If the new resource is valid on the server, it wouldn't include the deployed-by label, so delete operations wouldn't work as expected.
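
For context on why the missing label breaks deletes: deletion is driven by a label selector, so a resource created without dev.okteto.com/deployed-by is simply never matched. A sketch, assuming a client-go clientset (the function and selector value are illustrative, not the actual okteto code):

```go
package cleanup

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDeployedBy removes the deployments labeled by a given deploy; any
// deployment created without the label is invisible to this selector.
func deleteDeployedBy(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return c.AppsV1().Deployments(ns).DeleteCollection(
		ctx,
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "dev.okteto.com/deployed-by=" + name},
	)
}
```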

I see this as possibly having two fixes/approaches.

  1. Have an error, loud and clear, that there is a mismatch in spec expectations. This helps when users use an unsupported kube version.
  2. Have a test that will show these issues to us during release testing and when adding support for new cluster versions, so that we can address the spec change at that time and not let it get released.

Regarding number 1, I would make two distinctions:

  • Malformed spec for supported k8s versions. In this case I think we should not fail, and instead let the k8s server fail the request so it displays the original message, as if the user had executed the command directly. We might have issues if the k8s server can migrate old versions to new versions on the fly.
  • Malformed specs for unsupported k8s versions. I'm OK with failing with a clear error indicating that the CLI does not yet support the new version of the resource. It would depend on the experience we want for the users.

Regarding number 2, I'm not sure how we can write a test that will cover this case. Maybe we can create a test that lists the versions of the resources in the cluster, gets the latest versions of the resources we handle in the proxy, and tries to create them through the proxy. The moment a cluster starts supporting a new version, we would get the error.
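
Something along these lines could work, assuming the release-test suite runs against a real cluster; the test and the supportedDeploymentVersions map are hypothetical sketches built on the client-go discovery API:

```go
package proxy_test

import (
	"testing"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

// Hypothetical mirror of the versions the proxy knows how to handle.
var supportedDeploymentVersions = map[string]bool{"apps/v1": true}

// TestNoUnknownDeploymentVersions fails as soon as the cluster under test
// starts serving a Deployment version the proxy does not know about.
func TestNoUnknownDeploymentVersions(t *testing.T) {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		t.Skipf("no kubeconfig available: %v", err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		t.Fatal(err)
	}
	_, resourceLists, err := dc.ServerGroupsAndResources()
	if err != nil {
		t.Fatal(err)
	}
	for _, list := range resourceLists {
		for _, r := range list.APIResources {
			if r.Kind == "Deployment" && !supportedDeploymentVersions[list.GroupVersion] {
				t.Errorf("cluster serves Deployment at %s, which the proxy does not support", list.GroupVersion)
			}
		}
	}
}
```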

@pchico83 pchico83 removed this from the 2.6 milestone Jul 27, 2022