Feature Request: Allow openapi references in CRD validation schema #54579
/area custom-resources
We were chatting about this idea before. Basically, we need a reference namespace for our API groups, something like apps.k8s.io/v1.StatefulSetSpec (whatever the actual syntax is). One note about this: the apiextensions-apiserver is not coupled to kube-apiserver right now and shouldn't be in the future. How does the apiextensions-apiserver know about the specs of the types of other groups? The aggregator is the only one who knows how to resolve such a reference correctly, or at least it knows which downstream server to ask. In other words, getting the information flow right is not so trivial.
It should at least be possible for CRDs that are registered with the
kube-apiserver, right? Or am I misunderstanding how CRD works?
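For concreteness, here is a sketch of what such a reference might look like inside a CRD validation schema, expressed as a Python dict. The `$ref` target format is entirely hypothetical: CRD schemas do not support references, which is exactly what this issue requests.

```python
# Hypothetical CRD openAPIV3Schema fragment, written as a Python dict.
# The "$ref" target syntax is illustrative only -- no such feature
# exists in apiextensions; this sketches the request in this issue.
crd_schema = {
    "openAPIV3Schema": {
        "type": "object",
        "properties": {
            "spec": {
                "type": "object",
                "properties": {
                    # Instead of inlining the full PodTemplateSpec schema,
                    # the CRD author would reference the built-in type:
                    "template": {"$ref": "k8s.io/api/core/v1.PodTemplateSpec"},
                    "replicas": {"type": "integer", "minimum": 0},
                },
            }
        },
    }
}
```

The open design question in the comments above is who resolves that reference: the apiextensions-apiserver does not know the schemas of other API groups, so something like the aggregator (or a downloaded OpenAPI document) would have to do the lookup.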
Could we have apiextensions-apiserver download OpenAPI from kube-apiserver, like kubectl does?
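That idea can be sketched as follows. This is a minimal illustration, not kubectl's actual implementation: the function simply looks up a named schema in an OpenAPI v2 document of the shape served at the kube-apiserver's /openapi/v2 endpoint, demonstrated on a tiny inline stand-in rather than a live cluster.

```python
def lookup_definition(openapi_spec: dict, name: str) -> dict:
    """Return the named schema from an OpenAPI v2 document's definitions.

    In a real resolver this document would be fetched from the
    kube-apiserver's /openapi/v2 endpoint, the way kubectl does.
    """
    try:
        return openapi_spec["definitions"][name]
    except KeyError:
        raise KeyError(f"no definition named {name!r} in spec")

# Tiny stand-in for the (very large) real swagger document.
minimal_spec = {
    "definitions": {
        "io.k8s.api.core.v1.PodTemplateSpec": {
            "type": "object",
            "properties": {
                "metadata": {"$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta"},
                "spec": {"$ref": "#/definitions/io.k8s.api.core.v1.PodSpec"},
            },
        }
    }
}

pod_template_schema = lookup_definition(
    minimal_spec, "io.k8s.api.core.v1.PodTemplateSpec")
```

Note that the result still contains `$ref` entries pointing at sibling definitions, so a resolver would also have to walk and fetch those transitively.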
Yes, something like that, which would loosely couple the two.
I am concerned about giving ourselves really hard problems when we do version updates that drop a previously deprecated field.
We cannot express in our CRD JSON Schema that certain fields do not exist (following the kube API conventions). So a CR cannot become invalid because a deprecated field was dropped. But as we store raw JSON for CRs, those removed fields would continue to exist in old CRs.
CRD embeds v1.Pod. (Time passes.) We deprecate v1.Pod in favor of v2.Pod. (Much later...) We remove v1.Pod. Suddenly the entire system loses the ability to interpret the CRD that embeds the v1.Pod object.
At least it won't validate it anymore. Deletion is not a problem.
Which also means that we need a migration early enough; more precisely, the CRD author has to take care of this. The same holds for native types: if you miss the right point in time to migrate, the objects are lost.
/cc @sdminonne
@lavalamp, personally I agree with @sttts: my understanding is that a coupling problem does exist, but it's entirely on the CRD creation/handling side. A CR/CRD is explicitly created with a reference to a native object; if that object no longer exists, it's a user error. @nikhita thanks
It's not about new objects, I agree those will be fine. It is about stored objects.
OK, I didn't get it. Thanks for the clarification.
/kind feature
We should not support this. There should not be references to schemas that cross two separately released components (the core and the software that has the CRD). This will result in unexpected addition or deletion (think rollback of server version) of fields in the CRD schema. It also makes it hard for the controller to determine if objects it previously created are what it intended to create, since they change unexpectedly. Most CRDs I've seen do not typically embed a pod spec. They only have select fields in them.
Apologies, but I'm working on a project right now that embeds a PodTemplateSpec. From my experience this is not that uncommon an occurrence. Sounds like something we might need more data on. Not saying that refs are the absolute best answer, but from my perspective there are definitely projects that will need this type of functionality.
Just to add another aspect to consider: so far the only proposal for supporting server-side apply (and/or strategic merge patch) on resources served through CRD is to put the necessary schema info in OpenAPI. Without the ability to directly pull in the schema for a PodTemplateSpec, you would have to recursively copy all those schemas into your own, if you want server-side apply to work correctly for your resource. Maybe we could consider some tooling to expand references inline, which would then be checked in with the CRD manifest. That wouldn't be my ideal user experience, although it would get a little better if we at least had the ability to break down a schema into definitions that can be referenced from within the same schema.
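The "tooling to expand references inline" idea can be sketched as a recursive `$ref` resolver over JSON-schema-like dicts. This is an illustrative sketch, not an existing tool; note that truly recursive (cyclic) references can never be fully inlined, a problem raised later in this thread.

```python
import copy

def inline_refs(schema, definitions, _stack=()):
    """Recursively replace local '$ref' entries with the referenced
    definition, producing a self-contained schema.

    Cyclic references are left as '$ref' entries, since a cycle cannot
    be fully inlined.
    """
    if isinstance(schema, dict):
        ref = schema.get("$ref")
        if ref is not None and ref.startswith("#/definitions/"):
            name = ref[len("#/definitions/"):]
            if name in _stack:  # cycle detected: cannot inline further
                return {"$ref": ref}
            target = copy.deepcopy(definitions[name])
            return inline_refs(target, definitions, _stack + (name,))
        return {k: inline_refs(v, definitions, _stack)
                for k, v in schema.items()}
    if isinstance(schema, list):
        return [inline_refs(v, definitions, _stack) for v in schema]
    return schema

# A non-cyclic example: an operator schema that references a Volume type.
defs = {"Volume": {"type": "object",
                   "properties": {"name": {"type": "string"}}}}
schema = {"type": "object",
          "properties": {"volumes": {"type": "array",
                                     "items": {"$ref": "#/definitions/Volume"}}}}
expanded = inline_refs(schema, defs)

# A cyclic example: a self-referential type stays a $ref.
recursive_defs = {"Node": {"type": "object",
                           "properties": {"next": {"$ref": "#/definitions/Node"}}}}
node = inline_refs({"$ref": "#/definitions/Node"}, recursive_defs)
```

The cycle case is why a pure inlining tool cannot replace real reference support for recursive types.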
In case it helps, I'm a maintainer of prometheus-operator; its CRD uses many of the Kubernetes API specifications, which we generate and inline from the upstream OpenAPI definitions.
It's not a terrible one
@ant31 That's awesome! +cc @pwittrock Have you considered this approach for kubebuilder?
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I am sorry for sending a notification to everyone involved in this thread, but I simply wanted to note another aspect: same-file referencing.
While migrating from CRD v1beta1 to v1, this issue comes back up. We used the same technique as described above by the maintainer of the Prometheus operator to generate and inline the schema for the built-in types that we reference; however, this makes for very large schema sections. This was acceptable when all versions had to share a single, backward-compatible schema, but since CRD v1 wants the schema for each version, even if they are the same, this quickly leads to a CRD that is too big to create. Has there been any momentum on this discussion? Or have I misunderstood, and CRD v1 can still allow a single definition of a shared schema? Note: x-kubernetes-embedded-resource isn't useful because few of the types we need to reference are standalone resources, e.g. we want to allow customers to specify additional volumes for the pods our operator will create.
I acknowledge this is an issue and will look into it. @rjeberhard
Sorry for the delay. Our operator isn't written in Go, but instead in Java using the standard client. Also, I discovered that Kubernetes will accept the large CRD if I use
Server-side apply doesn't use an annotation and wouldn't hit this limit.
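For context on why large CRDs hit this: client-side `kubectl apply` stores the full object in the `kubectl.kubernetes.io/last-applied-configuration` annotation, and the apiserver caps total annotation size. A rough pre-flight check might look like this sketch; the 262144-byte constant is taken from the error message the apiserver reports and is an assumption, not an official API.

```python
import json

# 262144-byte cap the apiserver enforces on total annotation size.
# Assumption based on the reported error ("metadata.annotations:
# Too long: must have at most 262144 bytes").
ANNOTATION_SIZE_LIMIT = 256 * 1024

def fits_in_last_applied(manifest: dict) -> bool:
    """Rough check: would this manifest fit in the last-applied-configuration
    annotation written by client-side `kubectl apply`? Server-side apply
    writes no such annotation, so it is not subject to this limit."""
    encoded = json.dumps(manifest, separators=(",", ":")).encode("utf-8")
    return len(encoded) <= ANNOTATION_SIZE_LIMIT

small_manifest = {"apiVersion": "v1", "kind": "ConfigMap",
                  "metadata": {"name": "tiny"}}
# ~300 KB of schema-like content, as an operator CRD with inlined
# Kubernetes type schemas easily reaches.
large_manifest = {"spec": {"schema": "x" * 300_000}}
```

This is why `kubectl create` (which writes no such annotation) can accept a CRD that `kubectl apply` rejects.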
How would this work for recursive references? We have a case where a property references a type that was previously defined in the same document.
It would be nice for a user to know that an EDS resource they're trying to create is invalid. There are already several validations in the CRD's definition, but it's still missing some, like in .spec.template.metadata. When a resource with invalid metadata (like annotations or labels where the value is an int instead of a string, which is common) is created, it will break the informer, since it's no longer able to de-serialize it into the Go structs, throwing the following error:

```
E0520 14:25:07.400582 1 reflector.go:123] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: Failed to list *v1alpha1.ExtendedDaemonSet: v1alpha1.ExtendedDaemonSetList.Items: []v1alpha1.ExtendedDaemonSet: v1alpha1.ExtendedDaemonSet.Spec: v1alpha1.ExtendedDaemonSetSpec.Template: v1.PodTemplateSpec.ObjectMeta: v1.ObjectMeta.CreationTimestamp: Annotations: ReadString: expects " or n, but found 1, error found in #10 byte of ...|rollout":1},"creatio|..., bigger context ...|e-eu1.staging.dog","wheelofmisery/force-rollout":1},"creationTimestamp":null,"labels":{"app":"datado|...
```

To avoid that, this commit expands the validation to also check the metadata more thoroughly. The actual validation inlined here comes from the K8s OpenAPI specs for a PodSpec. Currently there is no way to reference that, so unfortunately we have to inline it (see kubernetes/kubernetes#54579). Now, when a user tries to create an invalid resource, they'll be stopped with an error message:

```
$ kubectl apply -f invalid-foo-eds.yml
The ExtendedDaemonSet "foo" is invalid: spec.template.metadata.annotations.rollout: Invalid value: "integer": spec.template.metadata.annotations.rollout in body must be of type string: "integer"
```
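The class of validation described in that commit message can be approximated by a toy check that annotation and label values are strings. This is a simplified stand-in for the inlined OpenAPI schema; the function name is hypothetical.

```python
def validate_metadata(metadata: dict) -> list:
    """Return a list of error strings for non-string annotation/label
    values -- the class of mistake (e.g. an int annotation) that breaks
    client-side deserialization into the Go structs."""
    errors = []
    for field in ("annotations", "labels"):
        for key, value in (metadata.get(field) or {}).items():
            if not isinstance(value, str):
                errors.append(
                    f"metadata.{field}.{key}: must be of type string, "
                    f"got {type(value).__name__}")
    return errors

# The failure mode from the log above: an int-valued annotation.
bad_metadata = {"annotations": {"wheelofmisery/force-rollout": 1}}
errors = validate_metadata(bad_metadata)
```

Doing this server-side via CRD schema validation rejects the object at creation time, before any informer ever sees it.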
What's the recommended solution for this in 2024? I have a CRD that uses a
@james-mchugh, just responding to let you know that I've not found a better solution. We're using the Java client, so we generate schemas that pull in standard types: we walk the type definitions, build a large CRD whose schema matches them, and then account for special cases like IntOrString. It's a pain...
We trimmed descriptions to stay below annotation size limits, see zalando-incubator/stackset-controller#498 |
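The description-trimming approach can be sketched as a recursive pass that drops `description` fields from a schema. This is an illustrative sketch, not the stackset-controller code; the string-type guard avoids deleting a *property* that merely happens to be named `description`.

```python
def strip_descriptions(schema):
    """Recursively remove 'description' fields from a JSON-schema-like
    dict to shrink an inlined CRD schema (one way to stay under the
    annotation size limit, at the cost of losing `kubectl explain` docs).

    Only string-valued 'description' keys are stripped: under a
    'properties' map, a property named 'description' has a dict (schema)
    value and must be preserved.
    """
    if isinstance(schema, dict):
        return {k: strip_descriptions(v) for k, v in schema.items()
                if not (k == "description" and isinstance(v, str))}
    if isinstance(schema, list):
        return [strip_descriptions(v) for v in schema]
    return schema

schema = {"type": "object", "description": "a pod template",
          "properties": {"name": {"type": "string",
                                  "description": "the pod name"}}}
slim = strip_descriptions(schema)
```

Since descriptions often dominate the size of inlined Kubernetes type schemas, this alone can bring a CRD back under the annotation limit for client-side apply.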
Offline chat with @mbohlool: CRD validation schema doesn't currently support references. You can get around this by specifying all of the types inline, but it's extraordinarily verbose (and prone to typos).