Required arrays with null value #52

Open
mbohlool opened this Issue Jan 31, 2018 · 11 comments

@mbohlool
Member

mbohlool commented Jan 31, 2018

Looks like there is a problem with the 1.8 spec going forward [TODO: ref issue]: an array that is marked as required can be null. Assuming an empty array is acceptable in these situations (and it looks like it is, as Kubernetes returns an empty array in some of those cases, TODO: add ref issues), we should consider fixing it in the main repo (TODO: create an issue) and also here as a quick fix until the main repo issue has been fixed. This is my brain dump on the issue in case anyone wants to work on this:

  • we should first research the problem and see if that is the case for all required arrays or only a subset of them
  • as a quick fix, we can remove those arrays (or a subset of them, based on the previous findings) from the required list in a preprocessing step in this repo (see the sketch after this list).
  • A regeneration of the python (and other language) clients should then fix the issues (TODO: ref those issues)
  • Next, there should be a more permanent fix in the main repo.
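
For illustration only, here is a rough sketch of what that preprocessing step could look like. It assumes the spec is a swagger.json with a top-level "definitions" map, and it treats every array-typed field the same way, which the research in the first bullet may well narrow down; the function name is made up:

    import json

    # Hypothetical preprocessing sketch: drop array-typed fields from each
    # definition's "required" list before client generation. Applying this
    # to *all* array fields is an assumption, not an agreed design.
    def relax_required_arrays(spec_path, out_path):
        with open(spec_path) as f:
            spec = json.load(f)

        for definition in spec.get("definitions", {}).values():
            required = definition.get("required")
            properties = definition.get("properties", {})
            if not required:
                continue
            definition["required"] = [
                field for field in required
                if properties.get(field, {}).get("type") != "array"
            ]
            if not definition["required"]:
                del definition["required"]

        with open(out_path, "w") as f:
            json.dump(spec, f, indent=2)
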
@roycaihw

Member

roycaihw commented Mar 26, 2018

Gathering known required-API-field issues in the python client:

The issue started in release-4.0. Although we have had the required-field validation generated since release-1.0, the api_client would assign [] to null fields before deserializing the model (before release-3.0). Since release-4.0 the generated api_client no longer converts null to an empty array, so the ValueError gets raised.

Agreed that the problem is in our API convention (can a required array be null?), and there should be a permanent fix in the main repo.
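
For reference, the old behavior amounted to roughly the following cleanup before deserialization. This is only a sketch, assuming the generated models still expose the swagger_types and attribute_map class attributes; the helper name is made up:

    # Hypothetical helper approximating the pre-release-3.0 behavior:
    # replace null values with [] for attributes the model declares as
    # list types, before handing the raw dict to the deserializer.
    def nulls_to_empty_lists(data, model_class):
        for attr, attr_type in model_class.swagger_types.items():
            json_key = model_class.attribute_map[attr]
            if attr_type.startswith("list[") and data.get(json_key) is None:
                data[json_key] = []
        return data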

@ceridwen

Contributor

ceridwen commented Mar 27, 2018

There are a number of other API fields with this problem. From watching the issues in the two repositories in question, I have the following list. (These are just straight links; I don't have time right now to make the formatting pretty. I tried to avoid duplicates.)

Kubernetes

kubernetes-client/python#491
kubernetes-client/python#484
kubernetes-client/python#466
kubernetes-client/python#427
kubernetes-client/python#424

OpenShift

kubernetes-client/python#464
openshift/openshift-restclient-python#138

There are probably more that I haven't run into and that no one else has bothered to post an issue for yet.

@smiller171

smiller171 commented May 8, 2018

Looking at related issues (specifically #394), this has been an issue since at least November.

Is there any word on a fix or a workaround? As someone new to Kubernetes and even newer to the Python client, I have absolutely no idea how to move past this and make my application work.

@StephanX

StephanX commented May 17, 2018

@smiller171 For good or ill, I've managed to work around this in my kubernetes-python module by commenting out the subject enforcement check at lines 184 & 185 of my /usr/local/lib/python3.6/site-packages/kubernetes/client/models/v1beta1_cluster_role_binding.py file (the content being):

        if subjects is None:
            raise ValueError("Invalid value for `subjects`, must not be `None`")
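
If editing the installed file is undesirable, an untested alternative along the same lines would be to relax the setter at runtime. This is a sketch only, assuming the generated class exposes subjects as a plain property backed by a _subjects attribute (as the file above suggests), and I haven't verified it against the deserialization path:

    # Hypothetical runtime monkey-patch: let a missing `subjects` become an
    # empty list instead of raising, without editing site-packages.
    from kubernetes.client.models.v1beta1_cluster_role_binding import (
        V1beta1ClusterRoleBinding,
    )

    def _lenient_subjects(self, subjects):
        # The generated setter raises ValueError on None; store [] instead.
        self._subjects = [] if subjects is None else subjects

    V1beta1ClusterRoleBinding.subjects = property(
        V1beta1ClusterRoleBinding.subjects.fget, _lenient_subjects
    )
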
@djmckinley

djmckinley commented May 17, 2018

I just came across this issue with the "limits" array in the LimitRangeSpec resource. If I write a LimitRange object with an empty array, then when reading it back I get the exception: "Invalid value for limits, must not be None". I worked around it by catching those exceptions and dealing with them, but I'm wondering if removing the check that raises the exception, like StephanX did, would be better.

This seems to be a serious, pervasive issue with at least the 5.0.0 library. Is that accurate, or am I misunderstanding something?

Is there any fix in the works, or a suggested workaround other than dealing with each and every occurrence? So far, in my project, we've had to implement workarounds for this limits case and for the PodDisruptionBudget case (kubernetes-client/python#466). This issue (#52) makes it appear there could be many more problems lurking out there. Do we need to give up on 5.0.0 and go back to an earlier stable library release?
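
For what it's worth, the catch-and-handle workaround I mean looks roughly like this (the resource and namespace names are illustrative, and the whole object is lost when the error fires, so it's a band-aid rather than a fix):

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod
    api = client.CoreV1Api()

    try:
        limit_range = api.read_namespaced_limit_range("my-limits", "default")
        limits = limit_range.spec.limits
    except ValueError as exc:
        # The spurious error fires during deserialization, so the object is
        # lost entirely; fall back to an empty list and move on.
        if "must not be" not in str(exc):
            raise
        limits = []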

@roycaihw

Member

roycaihw commented May 21, 2018

Upstream PR fixing serverAddressByClientCIDRs: kubernetes/kubernetes#61963

@roycaihw

Member

roycaihw commented Jun 12, 2018

Fixes for:
  • Conditions in CustomResourceDefinitionStatus: kubernetes/kubernetes#64996
  • DisruptedPods in policy.v1beta1.PodDisruptionBudgetStatus: kubernetes/kubernetes#65041

@lavalamp

Member

lavalamp commented Jun 22, 2018

Given that apiserver ignores the rule today, everything just works better if we adjust the specs to mark these fields as optional, no?

@lavalamp

Member

lavalamp commented Jun 22, 2018

@mydockergit

mydockergit commented Sep 26, 2018

@roycaihw you can check "v1beta1.CustomResourceDefinitionStatus requires conditions" (kubernetes-client/python#447), because it is already closed.

@roycaihw

Member

roycaihw commented Sep 27, 2018

@mydockergit The field is still marked as required, as the openapi spec shows:

   "io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionStatus": {
    "description": "CustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition",
    "required": [
     "conditions",
     "acceptedNames",
     "storedVersions"
    ],

I think kubernetes-client/python#447 was closed because we want to fix the API upstream.
