WIP - introduce ratcheting validation mechanism #64907
Conversation
@kubernetes/sig-api-machinery-pr-reviews
Would like feedback on the mechanism first, then will tag various teams on the specific validation commits (those were mostly to illustrate how the mechanism could be used).
/retest
spec := data.(*core.PodSpec)
for i, c := range spec.InitContainers {
	names := sets.NewString()
	for j, e := range c.Env {
nit: seems like these loops could be a subroutine.
cleaned up the loops
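For illustration only, the duplicate-name scan those loops perform could be pulled into a helper along these lines; the function name and the use of plain strings in place of core.EnvVar entries are hypothetical simplifications, not the PR's actual code:

```go
package main

import "fmt"

// findDuplicateEnvNames reports each env var name that occurs more than once,
// in first-duplicate order. A sketch of the subroutine suggested in review;
// plain strings stand in for core.EnvVar values.
func findDuplicateEnvNames(envNames []string) []string {
	seen := map[string]bool{}
	reported := map[string]bool{}
	var dups []string
	for _, name := range envNames {
		if seen[name] && !reported[name] {
			dups = append(dups, name)
			reported[name] = true
		}
		seen[name] = true
	}
	return dups
}

func main() {
	fmt.Println(findDuplicateEnvNames([]string{"PATH", "HOME", "PATH"})) // [PATH]
}
```

Each container's env list (init or regular) could then call this once, keeping the per-field error construction at the call site.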
Looks good overall. I do wish there were less setup code: folks have to remember to call this everywhere there's a pod template type. But it is fairly straightforward; maybe that is better than a complicated automated system. We can always automate when we find people making errors that the automation would catch.
We should think about how to do this problem for CRD validation, too, when we add it. Perhaps there all validations should be ratcheting.
This is getting interesting. For native types we only have positive validation predicates A(old,new) and B(old,new) and C(old,new). So we can easily declare some as ratcheting (call them A', B', C'), making the complete validation predicate A'(old,new) and B'(old,new) and C'(old,new) weaker than the original. With CRDs and their OpenAPI-based validation, we don't have positivity, e.g. ((not A(old,new)) or B(old,new)) and C(old,new) is expressible. Making A ratcheting as A' gives a stronger predicate than before, not what we want. So ratcheting has to be restricted to positive predicates.
Another speciality for CRDs: for defaulting we have to apply the OpenAPI schema before we even have an old object to check against. We could get away by tagging certain OpenAPI properties as ratcheting, dropping them on (coercing) schema validation for defaulting, and only applying them on strategy validation. But even this way, it is not obvious what a ratcheting validation with an old and a new object will look like. The validation algorithm and the whole OpenAPI schema semantics do not know the concept of validating differences between two objects on update. If we come up with such semantics, though, it would also be very interesting for formulating "read-only field", which cannot be expressed in OpenAPI today at all.
@sttts I assumed we would be writing a validator that takes both old and
new objects, and only enforces a validation check on the new object if the
old one passes (or is missing the relevant field). I am talking about value
validation and not schema validation, of course.
And yes, we will definitely be implementing read-only/immutable fields, for
built-ins and CRs. Take a look at some of the stuff @apelisse and @seans3
have been doing in kube-openapi if you haven't seen it already.
…On Mon, Jun 11, 2018 at 1:35 AM Dr. Stefan Schimanski < ***@***.***> wrote:
We should think about how to do this problem for CRD validation, too, when
we add it. Perhaps there all validations should be ratcheting.
This is getting interesting. For native types we only have positive
validation predicates A(old,new) and B(old,new) and C(old,new). So we can
easily declare some as ratcheting (call it A', B', C'), making the
complete validation predicate A'(old,new) and B'(old,new) and C'(old,new)
weaker than the original. With CRDs and their OpenAPI based validation, we
don't have positivity, e.g. ((not A(old,new)) or B(old,new)) and
C(old,new) is expressible. Making A ratcheting as A' gives a stronger
predicate than before, not what we want. So ratcheting has to be restricted
to positive predicates.
Another speciality for CRDs: for defaulting we have to apply the OpenAPI
schema before we even have an old object to check against. We could get
away by tagging certain OpenAPI properties as ratcheting, drop them on
(coercing) schema validation for defaulting and only apply them on strategy
validation.
But even this way, it is not obvious how a ratcheting validation with an
old and a new object will look like. The validation algorithm and the
whole OpenAPI schema semantics does not know the concept of validation of
differences between two objects on update. If we come up with such
semantics though, this would also be very interesting to formulate
"read-only field", which cannot be expressed today in OpenAPI at all.
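The behavior described above, enforcing a check on the new object only when the old object also passes it, can be sketched roughly as follows; `ratchet` and the plain-string check signature are hypothetical simplifications, not the PR's actual helpers:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// ratchet validates newVal, but suppresses the failure when the persisted
// oldVal already failed the same check: the update does not make things
// worse, so it is allowed through. Simplified sketch; real helpers would
// operate on API objects and field.Path error lists, not strings.
func ratchet(check func(string) error, oldVal, newVal string) error {
	err := check(newVal)
	if err == nil {
		return nil
	}
	if oldVal != "" && check(oldVal) != nil {
		return nil // old object already violated this rule
	}
	return err
}

func noSpaces(v string) error {
	if strings.Contains(v, " ") {
		return errors.New("must not contain spaces")
	}
	return nil
}

func main() {
	fmt.Println(ratchet(noSpaces, "", "bad value"))          // create: enforced
	fmt.Println(ratchet(noSpaces, "bad value", "bad again")) // already-invalid old: allowed
	fmt.Println(ratchet(noSpaces, "ok", "bad value"))        // regression from valid old: rejected
}
```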
One of the reasons the approach in this PR works well is that the tightening validations are separable. Take the following scenario:
Any update would also be required to pass V, A, B. If V, A, B, and C could not be checked individually, the old object failing C means we'd have to skip all validation on the updated object, which would open us up to objects with much bigger validation problems. @sttts is the declarative validation able to be broken down to the level of individual checks?
/sub
No, in general it is not. But one knows which parts of a schema can be "broken apart" in the way you describe. So I think we can mark certain sub-schemata as ratcheting and the CRD type validation will check those markers. E.g. in
@lavalamp If you have a link, that would be awesome. This sounds interesting. I saw the extension PRs. Do you mean those to express new kube-specific properties? We can certainly make this work using the restriction to positive sub-schemata (see post above). I am not completely convinced yet, though, that this always gives the semantics we want and that users expect, especially because schema and value validation are not separable: some value sub-schemata are required to select the right branches of
/cc @yliaog
Sorry for not noticing this earlier. I commented on #64841
/hold
@liggitt Is the idea that every release the ratcheting validations would be moved to standard ones, and potentially new ratcheting validations would be introduced?
No, they couldn't be moved to be standard validations until we could guarantee the API servers would never encounter persisted etcd data that would fail those validations.
/lgtm cancel
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: liggitt. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@liggitt: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi, sorry to be late to the party, but here is some feedback on this approach:
What would the storage migration do with existing objects that would not pass the new stricter validation? As long as the previous API version is still usable (or data persisted via the previous API version is still able to be encountered), we have to handle round-tripping and updating that object via the new API version. Making an update of an unrelated field (like an annotation or a finalizer) fail for validation reasons is problematic.
Quick note: We don't have versioned validation yet: #8550
allErrs := validation.ValidateCronJob(cronJob)
allErrs = append(allErrs, apimachineryvalidation.ValidateRatchetingCreate(
	&cronJob.Spec.JobTemplate.Spec.Template.Spec,
	field.NewPath("spec", "template", "spec", "template", "spec"),
nit: field.NewPath("spec", "jobTemplate", "spec", "template", "spec")
@hoegaarden and I were also wondering what you thought about the following:
@liggitt should we shoot for v1.12?
This isn't a top priority for the next week, so no
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Fate of this?
@smarterclayton It's solvable, but with much more difficulty; read from this comment to the end of the issue.
If we limit ourselves to ratcheting in cases where the created objects were fatally flawed, I don't think we need to continue to accept invalid objects. I mostly closed this because I wasn't actively working on it.
What this PR does / why we need it:
Introduces a ratcheting validation mechanism, as described in #64841 (comment)
Tightening validation on existing data cannot be done in a way that prevents existing stored objects from being updated.
This PR introduces the following mechanism:
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #64841
Fixes #57510 - flex driver name
Fixes #58477 - duplicate envvar names
Fixes #54567, #64011 - invalid memory quantities (xref #63426)
xref #52936 - storage medium
xref #60934 - duplicate pvc claimName
Special notes for your reviewer:
/assign @lavalamp @deads2k
Release note: