
pools: add default label for master/worker pool #1937

Merged

Conversation

yuqi-zhang
Contributor

Today the master/worker MCPs do not have any distinguishing labels. This
means that if, e.g., a user wants to add a KubeletConfig, they must first
create a label for the pool and target the KubeletConfig against that
label. This PR creates a default label for the master/worker pools to
skip that step.
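
As a rough sketch of what this enables (a hedged example, not taken verbatim from the PR: the object name and maxPods value are illustrative, and the label follows the pools.operator.machineconfiguration.openshift.io/<pool> convention discussed later in the thread), a KubeletConfig could then target the built-in worker pool directly:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-max-pods                  # illustrative name
spec:
  machineConfigPoolSelector:
    matchLabels:
      # default label added to the built-in worker pool by this PR
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    maxPods: 500                         # illustrative value
```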

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 20, 2020
@cgwalters
Member

Hmm, the kubeletconfig can't target a name directly? It seems weird to add labels that just match our names.

@yuqi-zhang
Contributor Author

yuqi-zhang commented Jul 20, 2020

Hmm, the kubeletconfig can't target a name directly? It seems weird to add labels that just match our names.

By design, it searches the MCPs for a label it wants to match in order to create a corresponding MachineConfig.

3 notes:

  1. it seems that our labels are all quoted ("machineconfiguration.openshift.io/mco-built-in": "" instead of machineconfiguration.openshift.io/mco-built-in: ""), whereas many other repos don't quote theirs. Was there a reason for this?

  2. this only considers the master/worker MCPs and does not cover custom pools, which aren't synced like master/worker and would probably need a minor rework if we want to support them too

  3. I will also update the docs, but I'm leaving this up for consideration right now

@yuqi-zhang
Contributor Author

To be more specific on my previous point, I mean that we use labels to select the pool here: https://github.com/openshift/machine-config-operator/blob/master/pkg/controller/kubelet-config/kubelet_config_controller.go#L607 . It's not a direct kubeletconfig-pool matching. Would you say we should redesign that?
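
For context, a minimal sketch of the pre-existing workflow that this label selection implies (the custom-kubelet key, object names, and maxPods value are illustrative, not part of this PR): the user first labels the pool by hand, then points the KubeletConfig at that label.

```yaml
# Step 1: manually label the pool (the step this PR makes unnecessary for master/worker)
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
  labels:
    custom-kubelet: small-pods           # user-created label, illustrative
---
# Step 2: target the KubeletConfig at that label
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: small-pods                       # illustrative name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: small-pods
  kubeletConfig:
    maxPods: 250                         # illustrative value
```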

@yuqi-zhang
Contributor Author

/test e2e-aws-proxy

3 similar comments
@yuqi-zhang
Contributor Author

/test e2e-aws-proxy

@yuqi-zhang
Contributor Author

/test e2e-aws-proxy

@yuqi-zhang
Contributor Author

/test e2e-aws-proxy

@yuqi-zhang
Contributor Author

/test e2e-aws-proxy

Just doing this for an environment, please ignore

@yuqi-zhang
Contributor Author

/test e2e-aws-proxy

@yuqi-zhang
Contributor Author

/retest

@yuqi-zhang
Contributor Author

/test e2e-aws-proxy

@runcom
Member

runcom commented Jul 27, 2020

3 notes:

  1. it seems that our labels are all quoted ("machineconfiguration.openshift.io/mco-built-in": "" instead of machineconfiguration.openshift.io/mco-built-in: ""), whereas many other repos are not. Was there a reason for this?

uhm, I can't remember anything there 🤔 is it causing any specific issue?

2. this only considers master/worker mcp and does not consider custom pools, which isn't synced like master/worker and probably needs a minor rework if we want to do that too

I think we can target custom pools as well, since we're growing further support for them as first-class citizens (stub ignition creation and the like)

@yuqi-zhang
Contributor Author

yuqi-zhang commented Jul 27, 2020

I'm adding it as a template instead, so it should get applied to all pools. Tested with master/worker; will test with custom pools to make sure.

Also, it seems that quoted or not, it doesn't make a difference as long as the string inside is the same:

```yaml
template:
  metadata:
    labels:
      pools.operator.machineconfiguration.openshift.io/{{ status.configuration.name }}: ""
```
Member

omg niceee

@deads2k
Contributor

deads2k commented Jul 29, 2020

omg niceee

hmm..... What do you think this does? To my knowledge, this field doesn't appear in the CRD type, so it will be pruned out.
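
For readers unfamiliar with pruning: apiextensions.k8s.io/v1 CRDs use structural schemas, and any field not declared in the schema is silently dropped on write unless the schema opts out with x-kubernetes-preserve-unknown-fields. A generic, hypothetical CRD sketch of the mechanism (not the MCO's actual CRD):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com              # hypothetical resource
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                knownField:
                  type: string
              # Any field not listed above (e.g. an undeclared "template" stanza)
              # is pruned on write unless this is uncommented:
              # x-kubernetes-preserve-unknown-fields: true
```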

@yuqi-zhang
Contributor Author

The goal is to have a default label applied to each MCP object as it's created. Do you know what the correct way to do that is?

@yuqi-zhang
Contributor Author

I can confirm that this works for installs/upgrades of the master/worker pools; however, the template doesn't seem to apply to custom pools by default. Maybe I'm misunderstanding the templating mechanism?

@runcom
Member

runcom commented Jul 30, 2020

Alrighty, the master/worker MCPs are still managed by the MCO, so it makes sense to have default labels applied. Custom pools, on the other hand, are user-provided, and it makes sense for them to follow the pattern in the default pool manifests (maybe we'll find a way to do that). This is an optimization for the default pools anyway; the old behavior is untouched, as per the acceptance criteria in the Jira card.
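
For a user-provided pool, following the same pattern would simply mean adding the label in the custom pool manifest. A rough sketch assuming an "infra" pool (the pool name, selectors, and label are illustrative and user-maintained, not something this PR creates):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
  labels:
    # user-added label mirroring the built-in pools' default label
    pools.operator.machineconfiguration.openshift.io/infra: ""
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker, infra]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""
```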

@runcom
Member

runcom commented Jul 30, 2020

@openshift/openshift-team-mco-maintainers ptal

/approve

@yuqi-zhang
Contributor Author

I've changed it back to just adding a label for master/worker based on their manifests for now. In the long term we may want to rework the controller to manage the label.
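
A minimal sketch of what the built-in master pool manifest might look like with the default label added (the selector details are hedged and may not match the repo exactly; the mco-built-in and pools.operator... labels are the ones discussed in this thread):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: master
  labels:
    "machineconfiguration.openshift.io/mco-built-in": ""
    "pools.operator.machineconfiguration.openshift.io/master": ""
spec:
  machineConfigSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: master   # sketch; the actual manifest may use matchExpressions
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/master: ""
```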

Today the MCPs do not have any distinguishing labels by default. This
means that if, e.g., a user wants to add a kubeletconfig, they must
first create a label for the pool and target the kubeletconfig
against the label. This PR creates a default label for master/worker
pools to skip that step.

Signed-off-by: Yu Qi Zhang <jerzhang@redhat.com>
Update docs with the default pools, as well as minor updates to
config names/spec version.

Signed-off-by: Yu Qi Zhang <jerzhang@redhat.com>
@yuqi-zhang
Contributor Author

Added quotes to the labels. Although I feel like we should remove them to match other OpenShift repos, for now let's keep them consistent.
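
For the record, the quoting is purely cosmetic in YAML; both forms parse to the same key/value pair:

```yaml
# Quoted form:
metadata:
  labels:
    "machineconfiguration.openshift.io/mco-built-in": ""
---
# Unquoted form (equivalent):
metadata:
  labels:
    machineconfiguration.openshift.io/mco-built-in: ""
```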

@ericavonb
Contributor

To be more specific on my previous point, I mean that we use labels to select the pool here: https://github.com/openshift/machine-config-operator/blob/master/pkg/controller/kubelet-config/kubelet_config_controller.go#L607 . It's not a direct kubeletconfig-pool matching. Would you say we should redesign that?

Wait, KubeletConfigs specify MachineConfigPools via label selectors to create MachineConfigs selected by that pool via label selectors? 😱 Isn't this backwards?

Except that it sorta assumes this'll work based on the pool name. A lot of indirection going on. KubeletConfigs should not be aware of MachineConfigPools at all, nor should they rely on any sort of naming or labeling, and they definitely shouldn't be aware of templates or rendering logic. Gah, this deserves an issue at least
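
To make the indirection concrete, a rough sketch (object names, the generated MachineConfig name, and field values are illustrative): the KubeletConfig selects a pool by label, and the pool in turn selects MachineConfigs, including the one the controller generates, by another label.

```yaml
# KubeletConfig -> MachineConfigPool, via machineConfigPoolSelector
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-kubelet                    # illustrative
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    maxPods: 500                          # illustrative
---
# MachineConfigPool -> MachineConfig, via machineConfigSelector
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
spec:
  machineConfigSelector:
    matchLabels:
      machineconfiguration.openshift.io/role: worker
---
# The generated MachineConfig must carry the label the pool selects on
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-generated-kubelet       # illustrative name
  labels:
    machineconfiguration.openshift.io/role: worker
spec: {}
```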

@yuqi-zhang
Contributor Author

We can add that as part of the KCC/CRCC rework items. What would your recommendation be in terms of hooking up KC/CRC to MCs?

@yuqi-zhang
Contributor Author

This PR just aims to add a default. We can always remove it if we rework the behaviour

@ericavonb
Contributor

This PR just aims to add a default. We can always remove it if we rework the behaviour

👍 sorry, yes this PR seems fine by me.

Shouldn't have polluted the PR like that! Sorry again.

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Jul 31, 2020
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ericavonb, runcom, yuqi-zhang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [ericavonb,runcom,yuqi-zhang]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@yuqi-zhang
Contributor Author

No worries at all! Thanks for the review :D

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

3 similar comments
@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Aug 1, 2020

@yuqi-zhang: The following test failed, say /retest to rerun all failed tests:

Test name: ci/prow/e2e-aws-proxy
Commit: c4e1fc74597b6bcb601dca1e79b48a5591c7e0cf
Rerun command: /test e2e-aws-proxy

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

1 similar comment
@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-merge-robot openshift-merge-robot merged commit 057d852 into openshift:master Aug 1, 2020