

Refactor matchingPrecedence to tighten the range #85841



yue9944882 commented Dec 3, 2019

Release note: NONE

yue9944882 commented Dec 3, 2019

/kind feature
/sig api-machinery

@yue9944882 yue9944882 force-pushed the yue9944882:feat/refactor-flow-control-model-for-bootstrapping branch 3 times, most recently from 6bd28f3 to cf48800 Dec 3, 2019

fejta-bot commented Dec 3, 2019

This PR may require API review.

If so, when the changes are ready, complete the pre-review checklist and request an API review.

Status of requested reviews is tracked in the API Review project.

@yue9944882 yue9944882 force-pushed the yue9944882:feat/refactor-flow-control-model-for-bootstrapping branch from cf48800 to bfd6a44 Dec 3, 2019

fedebongio commented Dec 3, 2019

/assign @lavalamp

// MatchingPrecedence. Each MatchingPrecedence value must be non-negative.
// Note that if the precedence is not specified or zero, it will be set to 1000 as default.
// MatchingPrecedence. Each MatchingPrecedence value must be in the range [0,10000]. For two FlowSchemas
// with conflicting MatchingPrecedence, the one created earlier takes precedence.

lavalamp Dec 4, 2019

@deads2k had a good point that this is not stable if we add the same rules to multiple clusters.

Here's another idea: what if, if there are N matching and tied FSCs, we choose one randomly with probability 1/N? @MikeSpreitzer @yue9944882

My thinking is that alphabetical disambiguation doesn't really make any logical sense as a priority mechanism. The coin-flip as a tie-break has plenty of historical precedent and also behaves the same on all clusters (at least in aggregate). We need to help admins know that their configuration has this condition, but we'd want to do that anyway.

(Actually I can see folks using this on purpose to e.g. test a new F.S. before deleting a previous one)

lavalamp Dec 4, 2019

Everything else about this PR LGTM.

MikeSpreitzer Dec 4, 2019

The flow schemas should be in a clearly defined order, and that's what the MatchingPrecedence values are for. There should not be two FlowSchema objects with the same MatchingPrecedence value, but our implementation has to do something in this case. My position is that we should tell users to avoid this case and make no promise about what happens in this case, and I do not care what our implementation actually does. Frankly, the more user-hostile the better, because this is a case that the users should avoid.

yue9944882 Dec 5, 2019 (Author)

The flow schemas should be in a clearly defined order, and that's what the MatchingPrecedence values are for.

+1. The behavior of flow-schema matching should be stable, whether or not the ordering logic makes intuitive sense. What happens to a request as it goes through shuffle-sharding and fair-queuing is already hard to observe; it would be worse if we randomized the order.
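The deterministic alternative argued for here can be sketched as a stable sort: order by MatchingPrecedence, breaking ties lexicographically by object name so the same set of FlowSchemas is ordered identically on every cluster. This is a minimal illustration, not the apiserver code; the `flowSchema` struct and name tie-break are assumptions for the example:

```go
package main

import (
	"fmt"
	"sort"
)

// flowSchema is a stripped-down stand-in for the real API object.
type flowSchema struct {
	Name               string
	MatchingPrecedence int32
}

// sortFlowSchemas orders schemas by MatchingPrecedence (lower matches first),
// using the object name as a deterministic tie-break.
func sortFlowSchemas(fss []flowSchema) {
	sort.Slice(fss, func(i, j int) bool {
		if fss[i].MatchingPrecedence != fss[j].MatchingPrecedence {
			return fss[i].MatchingPrecedence < fss[j].MatchingPrecedence
		}
		return fss[i].Name < fss[j].Name
	})
}

func main() {
	fss := []flowSchema{{"zeta", 100}, {"alpha", 100}, {"beta", 50}}
	sortFlowSchemas(fss)
	fmt.Println(fss[0].Name, fss[1].Name, fss[2].Name) // beta alpha zeta
}
```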

@ncdc ncdc removed their request for review Dec 4, 2019
@yue9944882 yue9944882 force-pushed the yue9944882:feat/refactor-flow-control-model-for-bootstrapping branch from bfd6a44 to 8b33d44 Dec 9, 2019

k8s-ci-robot commented Dec 9, 2019

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: yue9944882
To complete the pull request process, please assign lavalamp
You can assign the PR to them by writing /assign @lavalamp in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@yue9944882 yue9944882 force-pushed the yue9944882:feat/refactor-flow-control-model-for-bootstrapping branch from 8b33d44 to 911c76b Dec 9, 2019
@k8s-ci-robot k8s-ci-robot added size/M and removed size/L labels Dec 9, 2019
@yue9944882 yue9944882 force-pushed the yue9944882:feat/refactor-flow-control-model-for-bootstrapping branch 3 times, most recently from 3ec8243 to 72d9377 Dec 9, 2019
@yue9944882 yue9944882 changed the title Refactor matchingPrecedence to pointer and tighten the range Refactor matchingPrecedence to tighten the range Dec 9, 2019
@yue9944882 yue9944882 force-pushed the yue9944882:feat/refactor-flow-control-model-for-bootstrapping branch from 72d9377 to eb46b18 Dec 19, 2019

MikeSpreitzer commented Jan 2, 2020

I am not a fan of restricting the range more than necessary. If we decide we want the max to be a power of 10 then I think it should be 10^9.

I am not a fan of restricting the range. It adds a detail to the API: in addition to saying "int32" the API also has to say "the max is really 10000".

I do not see the fact that the maximum allowed value equals the maximum int32 as a problem. This is not an obscure number, we have all seen it many times before. We have "int32" staring users in the face. We have all been living with integers stored in binary for many years, and will continue to do so for many years. There are other uses of this value in the Kubernetes API; two examples are in DeploymentSpec. Like in those cases, ours is also relatively easy on users: this is not a value that users will be entering, they will only be reading it.


yue9944882 commented Jan 2, 2020

We're not inventing the range here. The aggregator API already applies [0, 1000] or [0, 20000] to similar priority fields. 10^9 doesn't seem more user-friendly than 10^5 to me; people get tired of counting zeros as the number grows longer.

if apiService.Spec.GroupPriorityMinimum <= 0 || apiService.Spec.GroupPriorityMinimum > 20000 {
	allErrs = append(allErrs, field.Invalid(field.NewPath("spec", "groupPriorityMinimum"), apiService.Spec.GroupPriorityMinimum, "must be positive and less than 20000"))
}
if apiService.Spec.VersionPriority <= 0 || apiService.Spec.VersionPriority > 1000 {
	allErrs = append(allErrs, field.Invalid(field.NewPath("spec", "versionPriority"), apiService.Spec.VersionPriority, "must be positive and less than 1000"))
}
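By analogy with the aggregator checks quoted above, a range check for matchingPrecedence against the proposed [0, 10000] bound might look like the sketch below. `validateMatchingPrecedence` is a hypothetical helper written as a self-contained example, not the merged implementation:

```go
package main

import "fmt"

// validateMatchingPrecedence rejects values outside the proposed
// [0, 10000] range, mirroring the aggregator API's priority checks.
// (Hypothetical helper for illustration only.)
func validateMatchingPrecedence(v int32) error {
	if v < 0 || v > 10000 {
		return fmt.Errorf("spec.matchingPrecedence: must be between 0 and 10000, inclusive, got %d", v)
	}
	return nil
}

func main() {
	fmt.Println(validateMatchingPrecedence(1000))  // nil: in range
	fmt.Println(validateMatchingPrecedence(20001)) // error: out of range
}
```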


yue9944882 commented Jan 14, 2020

/close

@yue9944882 yue9944882 closed this Jan 14, 2020

k8s-ci-robot commented Jan 14, 2020

@yue9944882: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
