kubectl edit machine fails validations #137

Closed

karan opened this issue May 4, 2018 · 23 comments
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@karan (Contributor) commented May 4, 2018

machines "karangoel-test1" was not valid:

  • : Invalid value: "The edited file failed validation": [
      ValidationError(Machine.spec.providerConfig.value): unknown field "apiVersion" in io.k8s.apimachinery.pkg.runtime.RawExtension,
      ValidationError(Machine.spec.providerConfig.value): unknown field "kind" in io.k8s.apimachinery.pkg.runtime.RawExtension,
      ValidationError(Machine.spec.providerConfig.value): unknown field "terraformMachine" in io.k8s.apimachinery.pkg.runtime.RawExtension,
      ValidationError(Machine.spec.providerConfig.value): unknown field "terraformVariables" in io.k8s.apimachinery.pkg.runtime.RawExtension,
      ValidationError(Machine.spec.providerConfig.value): missing required field "Raw" in io.k8s.apimachinery.pkg.runtime.RawExtension]

To repro:

  1. Deploy a cluster
  2. kubectl edit machine <machineName>
  3. Edit something other than providerConfig (e.g. the controlPlane version)

/cc sig-cluster-lifecycle

@krousey (Contributor) commented May 15, 2018

This is kubectl validation failing for some reason. I am able to successfully edit when I do kubectl edit machine <machine name> --validate=false and everything works as expected.

@karan (Contributor, Author) commented May 15, 2018

Yes, it works if I disable validation. We should still look into why the validation is failing, though.

@karan (Contributor, Author) commented May 15, 2018

This seems to be an issue with newer kubectl versions: v1.9 and above have this problem.

@rsdcastro

Does this impact only Terraform or also GCP?

@karan (Contributor, Author) commented May 29, 2018

It should impact GCE as well.

I think @k4leung4 did some testing for this bug as well.

@k4leung4 (Contributor)

This is not provider-specific. As I understand it, this is an issue with how kubectl handles custom types, so it affects apply/edit of Machines/MachineSets/MachineDeployments.

@rsdcastro rsdcastro added this to the cluster-api-alpha-implementation milestone May 30, 2018
@roberthbailey (Contributor)

@pwittrock - is there a tracking bug for kubectl 1.9 not working properly with custom types?

@nikhita (Member) commented Jun 1, 2018

I noticed something similar with kubectl validation. It seems to have grown stricter since 1.9. For example, if my Deployment YAML contains spec.replicAs: 3 (note the misspelling):

  • Earlier kubectl versions do not complain: the Deployment configuration is accepted and the JSON decoder unmarshals it as spec.replicas: 3 (unmarshalling is always case-insensitive).
  • Newer kubectl versions check whether the literal field replicAs exists on the Deployment type, see that it doesn't, and complain that no such field exists. If --validate=false is used, the Deployment configuration is accepted and the JSON decoder unmarshals it normally as spec.replicas: 3.

Of course, this is true for all resources, not just Deployments.
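
A minimal Go sketch of the decoder behaviour described above (deploymentSpec here is a made-up stand-in, not the real Deployment type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// deploymentSpec is a hypothetical stand-in for the relevant part of a
// Deployment spec; it only exists to show encoding/json behaviour.
type deploymentSpec struct {
	Replicas int32 `json:"replicas"`
}

func main() {
	// The key is misspelled as "replicAs", but encoding/json accepts a
	// case-insensitive match when there is no exact match, so the value
	// still lands in Replicas. Only kubectl's client-side schema check
	// (when enabled) rejects the misspelled key.
	var spec deploymentSpec
	if err := json.Unmarshal([]byte(`{"replicAs": 3}`), &spec); err != nil {
		panic(err)
	}
	fmt.Println(spec.Replicas) // prints 3
}
```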

In this case, Machine.spec.providerConfig.value is of type *runtime.RawExtension:

Value *runtime.RawExtension `json:"value,omitempty"`

RawExtension itself does not have fields called apiVersion, kind, etc.:

https://github.com/kubernetes/kubernetes/blob/0ea07c40305afa845bc34eb6a73da960552c39b1/staging/src/k8s.io/apimachinery/pkg/runtime/types.go#L92-L100

It looks like kubectl notices that RawExtension does not have a field called apiVersion and complains during validation. When --validate=false is used, this validation is skipped and the JSON decoder decodes the value correctly.
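
As a rough sketch of that split between validation and decoding (the terraformProviderConfig type and the apiVersion/kind values below are invented for illustration; only runtime.RawExtension and its Raw field come from apimachinery, so this needs k8s.io/apimachinery in go.mod):

```go
package main

import (
	"encoding/json"
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
)

// terraformProviderConfig is a hypothetical shape for what the Terraform
// provider stores in providerConfig.value; only apiVersion and kind are shown.
type terraformProviderConfig struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

func main() {
	// The submitted providerConfig.value ends up as raw bytes in
	// RawExtension.Raw.
	ext := runtime.RawExtension{
		Raw: []byte(`{"apiVersion": "terraformproviderconfig/v1alpha1", "kind": "TerraformProviderConfig"}`),
	}

	// The provider decodes those bytes itself at runtime, so the extra
	// fields are fine; only kubectl's client-side OpenAPI check rejects
	// them, because the published RawExtension schema does not list
	// apiVersion, kind, etc.
	var cfg terraformProviderConfig
	if err := json.Unmarshal(ext.Raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Kind) // prints TerraformProviderConfig
}
```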

AFAIK the only change to kubectl validation was the move to OpenAPI validation -- I'll look into that further to see what could be going wrong and comment here if I find anything.

@pwittrock

@seans3 Can you take a look?

@seans3 commented Jun 1, 2018

Talked to @apelisse. Work in kube-openapi is already beginning to address this.

@seans3 commented Jun 1, 2018

@roberthbailey (Contributor)

Since this is not a Cluster API issue, I'm going to bump it out of the alpha milestone.

@roberthbailey roberthbailey removed this from the cluster-api-alpha-implementation milestone Jul 18, 2018
@timothysc timothysc added this to the v1alpha1 milestone Jan 10, 2019
@timothysc timothysc added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jan 10, 2019
@timothysc (Member)

@detiber - please verify and/or document.

@detiber (Member) commented Jan 31, 2019

/help

@k8s-ci-robot (Contributor)

@detiber:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jan 31, 2019
@ncdc (Contributor) commented Feb 28, 2019

This is most likely an issue that will be resolved with a change in kubernetes/kubernetes. I propose we move this out of v1alpha1. cc @detiber

@detiber (Member) commented Feb 28, 2019

/milestone Next

@k8s-ci-robot k8s-ci-robot modified the milestones: v1alpha1, Next Feb 28, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 29, 2019
@ncdc (Contributor) commented May 30, 2019

The referenced kube-openapi PR has been merged. I'm wondering if this is possibly fixed. Would someone have some time to test this issue and report back?

@seans3 commented May 30, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 30, 2019
@timothysc timothysc modified the milestones: Next, v1alpha2 Jun 7, 2019
@timothysc timothysc added kind/bug Categorizes issue or PR as related to a bug. and removed help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Jun 7, 2019
@timothysc (Member)

/cc @michaelgugino

@detiber (Member) commented Jun 7, 2019

I just tested this with kubeadm version 1.13.1 and I am no longer able to replicate the issue. Closing.

/close

@k8s-ci-robot (Contributor)

@detiber: Closing this issue.

In response to this:

I just tested this with kubeadm version 1.13.1 and I am no longer able to replicate the issue. Closing.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

chuckha pushed a commit to chuckha/cluster-api that referenced this issue Oct 2, 2019
jayunit100 pushed a commit to jayunit100/cluster-api that referenced this issue Jan 31, 2020
This is the first layout of the e2e test; it has two stages:
1) deploy a bootstrap cluster
2) apply a secret to the bootstrap cluster and apply a job to the bootstrap cluster.

The job on the bootstrap cluster can be expanded with regard to:
1) target cluster topology.
2) target cluster verification.

Addressed review comments