Support custom kube-scheduler configuration #7186

Closed
wants to merge 2 commits into from

Conversation

rralcala
Contributor

Mentioned in #6942

This change allows using the --config flag to set additional kube-scheduler options. These options are marked as deprecated when passed as flags
(https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/),
so I thought it would be better to have them in a config file.

It also makes it easy to add more.

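For illustration, a minimal sketch of what the change amounts to (the import paths, values, and the use of sigs.k8s.io/yaml here are assumptions for the sketch; the actual builder uses the upstream scheduler scheme and its encoder, as discussed in the review below). Options that are deprecated as flags are expressed on a typed KubeSchedulerConfiguration and serialized to a file that kube-scheduler reads via --config:

package main

import (
	"fmt"

	kubeschedulerconfigv1alpha1 "k8s.io/kube-scheduler/config/v1alpha1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Fields that used to be (deprecated) command-line flags are set on the
	// typed config object instead of being rendered as flags.
	conf := kubeschedulerconfigv1alpha1.KubeSchedulerConfiguration{}
	conf.ClientConnection.Kubeconfig = "/var/lib/kube-scheduler/kubeconfig"
	conf.ClientConnection.QPS = 50
	conf.ClientConnection.Burst = 100

	// This sketch marshals directly with sigs.k8s.io/yaml; the PR itself uses a
	// scheme-aware encoder so that apiVersion/kind and defaults are filled in.
	out, err := yaml.Marshal(&conf)
	if err != nil {
		panic(err)
	}

	// The resulting YAML would be written to the config path on the master and
	// the scheduler started with --config pointing at it.
	fmt.Println(string(out))
}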
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jun 24, 2019
@k8s-ci-robot
Contributor

Hi @rralcala. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: rralcala
To complete the pull request process, please assign kashifsaadat
You can assign the PR to them by writing /assign @kashifsaadat in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Jun 24, 2019
@k8s-ci-robot k8s-ci-robot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jun 25, 2019
@mikesplain
Contributor

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jun 27, 2019
@rralcala
Contributor Author

/retest

// Build is responsible for building the manifest for the kube-scheduler
func (b *KubeSchedulerBuilder) Build(c *fi.ModelBuilderContext) error {
if !b.IsMaster {
return nil
}

// We want to add the custom config file if supported and present in the Spec
buildConfigFile := b.IsKubernetesGTE("1.11") && b.Cluster.Spec.KubeScheduler != nil
Contributor

Since we've already released Kops 1.11 & 1.12, should we set this so that it only targets k8s 1.13+, so that we don't change the behavior for existing deployments that upgrade Kops but not K8s?

Another option is to put buildConfigFile behind a feature flag.

Member

Personally I'd add it "way in the future", to give us some time to figure this out ... it's a great change, but I believe we'll be really early adopters and will likely have some teething issues. How would you feel about 1.15 or 1.16?
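A sketch of the two options on the table (the feature flag below is hypothetical, assuming kops' pkg/featureflag helpers):

// Option 1: bump the version gate so that only clusters already on a newer
// Kubernetes release get the config file; existing deployments that upgrade
// kops but not Kubernetes keep the flag-only path.
buildConfigFile := b.IsKubernetesGTE("1.16") && b.Cluster.Spec.KubeScheduler != nil

// Option 2: additionally require an opt-in kops feature flag, e.g.
//   var UseKubeSchedulerConfigFile = featureflag.New("UseKubeSchedulerConfigFile", featureflag.Bool(false))
//   buildConfigFile = buildConfigFile && UseKubeSchedulerConfigFile.Enabled()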

@@ -40,14 +46,21 @@ type KubeSchedulerBuilder struct {

var _ fi.ModelBuilder = &KubeSchedulerBuilder{}

const (
KubeConfigPath = "/var/lib/kube-scheduler/kubeconfig"
KubeSchedulerConfigPath = "/var/lib/kube-scheduler/config"
Contributor

Do we want to add the file extension (.yaml) to the config path?

return nil
}

func (b *KubeSchedulerBuilder) buildConfigFile() (string, error) {
Contributor

This sets every setting, even those not explicitly configured by Kops, because it imports the kubeschedulerscheme from upstream.

Would modeling an internal schema, similar to what we do for other configurations inside Kops, be less heavy-handed? It would also let us avoid importing yet another upstream dependency.
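Roughly, that alternative might look like this (a sketch; the type and field names are hypothetical and sigs.k8s.io/yaml is assumed for marshalling). Only the fields kops actually manages are mirrored; everything left unset stays out of the file and kube-scheduler defaults it when loading:

package model

import "sigs.k8s.io/yaml"

// kubeSchedulerFileConfig is a hypothetical kops-internal mirror of the few
// KubeSchedulerConfiguration fields we set, avoiding the upstream scheme import.
type kubeSchedulerFileConfig struct {
	APIVersion       string               `json:"apiVersion"`
	Kind             string               `json:"kind"`
	ClientConnection clientConnectionFile `json:"clientConnection,omitempty"`
}

type clientConnectionFile struct {
	Kubeconfig string  `json:"kubeconfig,omitempty"`
	QPS        float32 `json:"qps,omitempty"`
	Burst      int32   `json:"burst,omitempty"`
}

func renderKubeSchedulerConfigFile(qps float32, burst int32) (string, error) {
	c := kubeSchedulerFileConfig{
		APIVersion: "kubescheduler.config.k8s.io/v1alpha1",
		Kind:       "KubeSchedulerConfiguration",
		ClientConnection: clientConnectionFile{
			Kubeconfig: "/var/lib/kube-scheduler/kubeconfig", // KubeConfigPath
			QPS:        qps,
			Burst:      burst,
		},
	}
	out, err := yaml.Marshal(&c)
	if err != nil {
		return "", err
	}
	return string(out), nil
}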

Member

This is an interesting one. We do hope that eventually upstream will define componentconfig for everything, and we can then map our kops equivalents to componentconfig (hopefully), and ideally replace our structures with the official ones. One way we could do this is to introduce a new kops API version, and automatically map to the official structures.

So I think what is being done here is a step in the right direction, though we do need to be sure that the componentconfig we set is equivalent to how we were setting the flags (i.e. that we aren't changing the meaning of the kops configuration fields).

@@ -553,6 +553,10 @@ type KubeSchedulerConfig struct {
UsePolicyConfigMap *bool `json:"usePolicyConfigMap,omitempty"`
// FeatureGates is set of key=value pairs that describe feature gates for alpha/experimental features.
FeatureGates map[string]string `json:"featureGates,omitempty" flag:"feature-gates"`
// The maximum qps to send to the api server
Contributor

Maybe we should create a different struct for the config flags? This would draw a clear line of where the functionality is used.
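One possible shape for that split in the kops API (a sketch; the names are hypothetical): settings that are only expressible via the config file get their own struct, so it is obvious which rendering path consumes them.

// KubeSchedulerConfig keeps the existing flag-backed fields (Image, LogLevel,
// FeatureGates, ...) and adds a pointer to the file-only settings.
type KubeSchedulerConfig struct {
	// ... existing flag-backed fields ...

	// ConfigFileOptions holds settings that are only expressible via the
	// kube-scheduler --config file (hypothetical field).
	ConfigFileOptions *KubeSchedulerConfigFileOptions `json:"configFileOptions,omitempty"`
}

// KubeSchedulerConfigFileOptions are rendered into the config file, never as flags.
type KubeSchedulerConfigFileOptions struct {
	// Qps is the maximum QPS to send to the apiserver.
	Qps *float64 `json:"qps,omitempty"`
	// Burst allows extra queries to accumulate when the client is idle.
	Burst *int32 `json:"burst,omitempty"`
}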

var encoder runtime.Encoder
conf := kubeschedulerconfigv1alpha1.KubeSchedulerConfiguration{}
// Apply defaults first
kubeschedulerscheme.Scheme.Default(&conf)
Member

I guess this is what you're thinking about, @rdrgmnzs, when you said "sets every setting"? The semantics are a little odd, in that some defaults change for fields not specified via componentconfig. It would be simpler to set only the fields that are explicitly configured, but I'm not certain we can do that...

I'm going to ask team componentconfig; it feels like a very different approach from the one we have proven works in kops.
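If it does turn out to be possible, the builder might skip the upstream defaulting entirely and only write the fields kops manages, leaving kube-scheduler to default the rest when it loads the file (a sketch; the Qps field type and the TypeMeta wiring are assumptions, and whether the resulting semantics match the flag-based path is exactly the open question):

// Start from the zero value instead of calling kubeschedulerscheme.Scheme.Default(&conf),
// so only explicitly configured fields end up in the file.
conf := kubeschedulerconfigv1alpha1.KubeSchedulerConfiguration{
	TypeMeta: metav1.TypeMeta{
		APIVersion: "kubescheduler.config.k8s.io/v1alpha1",
		Kind:       "KubeSchedulerConfiguration",
	},
}
if b.Cluster.Spec.KubeScheduler.Qps != nil {
	conf.ClientConnection.QPS = float32(*b.Cluster.Spec.KubeScheduler.Qps)
}
if b.Cluster.Spec.KubeScheduler.Burst != nil {
	conf.ClientConnection.Burst = *b.Cluster.Spec.KubeScheduler.Burst
}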

if b.Cluster.Spec.KubeScheduler.Burst != nil {
conf.ClientConnection.Burst = *b.Cluster.Spec.KubeScheduler.Burst
}
sb := new(strings.Builder)
Member

Can we just use var b bytes.Buffer and then pass &b to Encoder? (Or ... is this much more efficient somehow?)
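That would look roughly like this (a sketch; runtime.Encoder.Encode writes to any io.Writer, which *bytes.Buffer satisfies):

// bytes.Buffer implements io.Writer, so it can be passed straight to the encoder;
// the difference versus strings.Builder is stylistic rather than one of efficiency.
var buf bytes.Buffer
if err := encoder.Encode(&conf, &buf); err != nil {
	return "", fmt.Errorf("error encoding kube-scheduler configuration: %v", err)
}
return buf.String(), nil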

@rralcala
Contributor Author

Thank you @justinsb and @rdrgmnzs. I'll incorporate the feedback and modify my proposal accordingly; regardless of whether it gets merged, I'm happy I brought up the conversation. 1.16 sounds good to me.

I'll probably submit another PR to add a couple of flags using the existing method.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 10, 2019
@k8s-ci-robot
Contributor

@rralcala: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot
Contributor

@rralcala: The following tests failed, say /retest to rerun them all:

Test name / Commit / Rerun command:
pull-kops-e2e-kubernetes-aws (28dc8ff): /test pull-kops-e2e-kubernetes-aws
pull-kops-verify-gomod (28dc8ff): /test pull-kops-verify-gomod
pull-kops-verify-generated (28dc8ff): /test pull-kops-verify-generated
pull-kops-verify-staticcheck (28dc8ff): /test pull-kops-verify-staticcheck

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 21, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 20, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mayukuse24

Any updates on this? Will this PR be included in a release soon?

@rralcala
Contributor Author

rralcala commented Jul 1, 2020

Check out #8407; I see that's in v1.18.0-beta.1.

Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files.
7 participants