Support custom kube-scheduler configuration #7186
Conversation
Mentioned in #6942. This change allows using the --config flag to set additional options; these are marked as deprecated as flags (https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/), so I thought that it might be better to have them in a config file. It also makes it easy to add more.
Hi @rralcala. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: rralcala. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/ok-to-test
/retest
// Build is responsible for building the manifest for the kube-scheduler
func (b *KubeSchedulerBuilder) Build(c *fi.ModelBuilderContext) error {
	if !b.IsMaster {
		return nil
	}

	// We want to add the custom config file if supported and present in the Spec
	buildConfigFile := b.IsKubernetesGTE("1.11") && b.Cluster.Spec.KubeScheduler != nil
Since we've already released Kops 1.11 & 1.12, should we set this so that it only targets k8s 1.13+, so that we don't change the behavior for previous deployments that upgrade Kops but not K8s?
Another option is to put buildConfigFile behind a feature flag.
Personally I'd add it "way in the future", to give us some time to figure this out ... it's a great change, but I believe we'll be really early adopters and will likely have some teething issues. How would you feel about 1.15 or 1.16?
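The version-gating discussed above could be sketched roughly as follows. This is a minimal, self-contained illustration, not the PR's code: `shouldBuildConfigFile` and the integer minor-version argument are hypothetical stand-ins for kops' `IsKubernetesGTE` helper, and the 1.16 cutoff is just the reviewer's suggestion.

```go
package main

import "fmt"

// shouldBuildConfigFile gates the new config-file path behind a future
// Kubernetes version (1.16 in this sketch) or an explicit feature flag,
// so clusters that upgrade kops but not Kubernetes keep the existing
// flag-based behavior.
func shouldBuildConfigFile(kubernetesMinor int, specHasScheduler bool, featureFlag bool) bool {
	return specHasScheduler && (kubernetesMinor >= 16 || featureFlag)
}

func main() {
	fmt.Println(shouldBuildConfigFile(13, true, false)) // older k8s, no opt-in: false
	fmt.Println(shouldBuildConfigFile(13, true, true))  // opted in via feature flag: true
	fmt.Println(shouldBuildConfigFile(16, true, false)) // new enough k8s: true
}
```

Either gate preserves backwards compatibility; the feature flag additionally lets early adopters opt in on older clusters.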
@@ -40,14 +46,21 @@ type KubeSchedulerBuilder struct {

var _ fi.ModelBuilder = &KubeSchedulerBuilder{}

const (
	KubeConfigPath          = "/var/lib/kube-scheduler/kubeconfig"
	KubeSchedulerConfigPath = "/var/lib/kube-scheduler/config"
Do we want to add the file format (.yaml) to the config?
	return nil
}

func (b *KubeSchedulerBuilder) buildConfigFile() (string, error) {
This sets every setting, even ones not explicitly configured by Kops, because it imports the kubeschedulerscheme from upstream.
Would modeling an internal schema closely on what we do for other configurations inside Kops be less heavy-handed? It would also let us avoid importing yet another upstream dependency.
This is an interesting one. We do hope that eventually upstream will define componentconfig for everything, and we can then map our kops equivalents to componentconfig (hopefully), and ideally replace our structures with the official ones. One way we could do this is to introduce a new kops API version, and automatically map to the official structures.
So I think what is being done here is a step in the right direction - though we do need to be sure that the componentconfig we are setting is equivalent to how we were setting the flags (i.e. that we aren't changing the meaning of the kops configuration fields).
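The flag-equivalence concern above amounts to: copy a kops field onto componentconfig only when the operator actually set it, and leave upstream defaults untouched otherwise. A minimal sketch, with hypothetical local stand-ins for the kops spec and the upstream KubeSchedulerConfiguration types (all names here are illustrative, not the real API):

```go
package main

import "fmt"

// Hypothetical stand-in for the kops KubeSchedulerConfig spec:
// pointer fields distinguish "unset" (nil) from "set to zero".
type KubeSchedulerSpec struct {
	Qps   *float64
	Burst *int32
}

// Hypothetical stand-ins for the upstream componentconfig types.
type ClientConnection struct {
	QPS   float32
	Burst int32
}

type SchedulerConfig struct {
	ClientConnection ClientConnection
}

// mapToComponentConfig copies only explicitly-set kops fields onto the
// componentconfig struct, so unset fields keep the upstream defaults —
// preserving the meaning the flags had.
func mapToComponentConfig(spec *KubeSchedulerSpec, conf *SchedulerConfig) {
	if spec.Qps != nil {
		conf.ClientConnection.QPS = float32(*spec.Qps)
	}
	if spec.Burst != nil {
		conf.ClientConnection.Burst = *spec.Burst
	}
}

func main() {
	burst := int32(200)
	spec := &KubeSchedulerSpec{Burst: &burst} // Qps left unset

	// Pretend upstream defaulting already filled these in.
	conf := &SchedulerConfig{ClientConnection: ClientConnection{QPS: 50, Burst: 100}}

	mapToComponentConfig(spec, conf)
	fmt.Println(conf.ClientConnection.QPS, conf.ClientConnection.Burst) // QPS keeps its default
}
```

The nil-pointer convention is what lets the mapping stay faithful: only Burst changes, QPS stays at its (here simulated) upstream default.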
@@ -553,6 +553,10 @@ type KubeSchedulerConfig struct {
	UsePolicyConfigMap *bool `json:"usePolicyConfigMap,omitempty"` 
	// FeatureGates is set of key=value pairs that describe feature gates for alpha/experimental features.
	FeatureGates map[string]string `json:"featureGates,omitempty" flag:"feature-gates"`
	// The maximum qps to send to the api server
Maybe we should create a different struct for the config flags? This would draw a clear line of where the functionality is used.
	var encoder runtime.Encoder
	conf := kubeschedulerconfigv1alpha1.KubeSchedulerConfiguration{}
	// Apply defaults first
	kubeschedulerscheme.Scheme.Default(&conf)
I guess this is what you're thinking about @rdrgmnzs when you said "sets every setting"? The semantics are a little odd in that some defaults have changed if not specified with componentconfig. It would be simpler only to set the fields that are explicitly set, but I'm not certain if we can do that...
I'm going to ask team componentconfig; it feels like a very different approach to the one that I feel I can say we have proven works in kops.
	if b.Cluster.Spec.KubeScheduler.Burst != nil {
		conf.ClientConnection.Burst = *b.Cluster.Spec.KubeScheduler.Burst
	}
	sb := new(strings.Builder)
Can we just use var b bytes.Buffer and then pass &b to the Encoder? (Or ... is this much more efficient somehow?)
@rralcala: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@rralcala: The following tests failed; say /retest to rerun all failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closed this PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Any updates on this? Will this PR be included in a release soon?
Check this one out: #8407, I see that's in v1.18.0-beta.1.