Incorporate schema into CRDs #912
Comments
Related: kubernetes/kubernetes#59154
We might need kubernetes-sigs/kubebuilder#406 to be fixed before we can use the […]
Issues go stale after 90 days of inactivity. Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra. /lifecycle stale
Stale issues rot after 30 days of inactivity. Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra. /lifecycle rotten
/remove-lifecycle rotten I think this is still valuable.
The bug above seems fixed; is this worth another look?
/remove-area build
@evankanderson: Those labels are not set on the issue. (In response to the /remove-area build command above.)
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/good-first-issue It looks like #10003 just needs a little help across the finish line. /triage accepted
@evankanderson: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. (In response to the /good-first-issue command above.)
So, I've taken a whack at schema generation today, just to see where we stand. Opened #11243 so everybody can play with it easily. To copy my remarks from the PR, here are the blockers we have today.
1. we could patch (as I did) in the vendored version of the generator if need be, but 2. seems to be the elephant in the room for us, quite honestly. Maybe it is time to create our own PodSpec type and use that? That would probably also aid in API docs generation etc. What do others think?
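To make the "own PodSpec type" idea concrete, here is a minimal sketch, assuming a hypothetical Knative-owned type; the name and field selection are illustrative, not the actual proposal:

```go
package v1

import corev1 "k8s.io/api/core/v1"

// RestrictedPodSpec is a hypothetical Knative-owned mirror of corev1.PodSpec
// that only exposes the fields Knative actually supports, so that schema
// generation and API docs operate on a much smaller, explicitly allowed surface.
type RestrictedPodSpec struct {
	// Containers holds the user container and any sidecars.
	Containers []corev1.Container `json:"containers"`

	// Volumes lists volumes that the containers may mount.
	// +optional
	Volumes []corev1.Volume `json:"volumes,omitempty"`

	// ServiceAccountName is the Kubernetes service account to run as.
	// +optional
	ServiceAccountName string `json:"serviceAccountName,omitempty"`

	// ImagePullSecrets references secrets used to pull container images.
	// +optional
	ImagePullSecrets []corev1.LocalObjectReference `json:"imagePullSecrets,omitempty"`
}
```

The obvious cost is that nested types such as corev1.Container would either still drag in the full upstream schema or need restricted mirrors of their own, which is exactly the size/maintenance concern raised in the next comment.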
I actually think if we could make it so we use the schema to validate the podSpec fields we don't allow, we might actually end up with less code than before, because we could drop our custom validation, potentially. If so, then that starts to seem like a win, but I worry we'll run into size/maintenance issues as we support more and more of the podSpec. Do I have a memory that when we tried this before we ran into maximum size limits for the schema, or am I misremembering?
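As a rough illustration of the kind of code schema-based validation could replace, here is a hedged sketch of hand-written allow-list checking (hypothetical function and simplified error handling, not the actual serving webhook code). If the CRD schema simply omits fields like these, the API server can prune or reject them up front:

```go
package v1

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// validateDisallowedFields is a schematic stand-in for the kind of
// hand-written allow-list validation discussed above. With a CRD schema
// that leaves these fields out, the API server would handle them before
// the object reaches our webhook, and code like this could potentially go.
func validateDisallowedFields(ps *corev1.PodSpec) error {
	if ps.HostNetwork {
		return fmt.Errorf("hostNetwork is not allowed")
	}
	if len(ps.InitContainers) > 0 {
		return fmt.Errorf("initContainers are not allowed")
	}
	if ps.HostPID {
		return fmt.Errorf("hostPID is not allowed")
	}
	return nil
}
```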
The size limits were hit because we served 3 versions of the API; the schema is copied across those versions. The current Service schema clocks in at 340K, which is not small but also not breaking anything yet.
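For context on why multiple served versions multiply the size: in apiextensions.k8s.io/v1 each entry in spec.versions embeds its own schema, so serving three versions means three copies of it in the same CRD object. A hedged, illustrative sketch of assembling the versions programmatically:

```go
package crdschema

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// buildVersions attaches the same (large) generated schema to every served
// version of the CRD. Because each version embeds its own copy, the total
// CRD size grows roughly linearly with the number of served versions.
func buildVersions(schema *apiextensionsv1.JSONSchemaProps) []apiextensionsv1.CustomResourceDefinitionVersion {
	var versions []apiextensionsv1.CustomResourceDefinitionVersion
	for _, name := range []string{"v1alpha1", "v1beta1", "v1"} {
		versions = append(versions, apiextensionsv1.CustomResourceDefinitionVersion{
			Name:   name,
			Served: true,
			Schema: &apiextensionsv1.CustomResourceValidation{
				OpenAPIV3Schema: schema.DeepCopy(), // one full copy per version
			},
		})
	}
	// Exactly one version must be marked as the storage version.
	versions[len(versions)-1].Storage = true
	return versions
}
```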
Made some good progress with adjusting some of the K8s tooling for our needs, which ultimately led me to #11244, which has a full schema for all of our types, filtered down by the surface we allow in PodSpec (+ feature flags, but we could drop those fields from the schema too if that's contentious). If all agree, we could ship those schemas as a first step and then continue working on the tooling to be able to continuously generate them automatically. /assign
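To make "filtered down by the surface we allow in PodSpec" a bit more concrete, here is a hedged sketch of the pruning idea (a hypothetical helper; the actual tooling in #11244 may work differently), operating on the generated apiextensions schema:

```go
package crdschema

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// filterPodSpecSchema drops every property of a generated podSpec schema node
// that is not on the allow-list, e.g. removing hostNetwork or nodeName while
// keeping containers, volumes, and serviceAccountName.
func filterPodSpecSchema(podSpec *apiextensionsv1.JSONSchemaProps, allowed map[string]bool) {
	for name := range podSpec.Properties {
		if !allowed[name] {
			delete(podSpec.Properties, name)
		}
	}
}
```

A real generator would have to apply this recursively (e.g. inside containers) and account for the feature flags mentioned above; the sketch only shows the top-level mechanism.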
My $0.02 in terms of feature-flagged fields supported in PodSpec would be to be maximal in the OpenAPI spec, and make sure we have documentation that certain fields may not be supported on all clusters. (Which is true anyway, since you could also have OPA policy or other admission webhooks enforcing limits.)
+1 to what Evan said. It's already the case that certain pod fields may be in the API but forbidden by policy, so including the maximal set feels reasonable to me too. (Does make me wonder a little if we should start thinking about moving the remaining validation to an actual policy engine at some point, but that feels like an area that's not quite ready for us to adopt yet, so 🤷‍♂️. I guess some of it will happen via the new PSP replacement stuff anyway.)
Another update from my side: #11244 is now submittable as-is (modulo getting the pkg dependencies updated, which I'm waiting for the robot to do). I've excluded the script from update-codegen.sh for now, as there's a bit of trickery to be done with run_go_tool and I'd love to get our own fork of controller-tools going before we fix this all. A separate script might be good enough for now too. My plan going forward would be:
Correction to the above: switch 1 and 2, and do the networking change first so we don't have to undo stuff in vendor after generation.
When does this break again? I.e., if we add a v2, do we reach that limit?
Good question! Actually, since we're now able to filter the PodSpec down to only the things we support, the Service CRD clocks in at a mere 104K. Assuming a 1M size limit in K8s (which is what a quick search yielded), we should be a long way from hitting any YAML size limits here.
K8s added the ability to specify a schema for CRD definitions in 1.9. We should take advantage of that.
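For readers who have not used the feature: a CRD version can carry an openAPIV3Schema that the API server validates (and, for structural schemas, prunes) on every create and update. A minimal, hedged sketch using the apiextensions Go types; the field subset shown is illustrative, not the real serving schema:

```go
package crdschema

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// minimalSchema sketches a tiny openAPIV3Schema: the top level and spec are
// objects, and spec.template is an object whose unknown fields are preserved.
func minimalSchema() *apiextensionsv1.JSONSchemaProps {
	preserve := true
	return &apiextensionsv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"spec": {
				Type: "object",
				Properties: map[string]apiextensionsv1.JSONSchemaProps{
					"template": {
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			},
		},
	}
}
```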