
Drop unused machine labels #1474

Merged

Conversation

@vikaschoudhary16 (Contributor) commented Mar 27, 2019

@openshift-ci-robot added the size/S label (Denotes a PR that changes 10-29 lines, ignoring generated files) on Mar 27, 2019
@wking (Member) commented Mar 27, 2019

Is there a machine-API PR or something you can link for this change?

@vikaschoudhary16 (Contributor, Author):

@wking updated the links above. Thanks!

@abhinavdahiya (Contributor):

The node role label has meaning in Kubernetes scheduling. Will these end up on the nodes? If yes, I want to punt on adding these, since node labels are added by the kubelet through machine-config.

Please coordinate with the MCO team to define who owns these specific labels...
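For context, a minimal hypothetical Go sketch (not taken from this PR) of the distinction being drawn here: the machine.openshift.io/* labels live on machine-API objects and are read by machine-API controllers, while the node-role label that Kubernetes scheduling cares about is applied to the Node by the kubelet via machine-config.

    package main

    import "fmt"

    func main() {
        // Labels on the Machine/MachineSet objects themselves; consumed by
        // machine-API controllers, not by the scheduler.
        machineLabels := map[string]string{
            "machine.openshift.io/cluster-api-machine-role": "worker",
            "machine.openshift.io/cluster-api-machine-type": "worker",
        }

        // The role label that matters for scheduling is set on the Node by the
        // kubelet (configured through machine-config), not by the installer.
        nodeLabels := map[string]string{
            "node-role.kubernetes.io/worker": "",
        }

        fmt.Println("machine labels:", machineLabels)
        fmt.Println("node labels:", nodeLabels)
    }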

Review comment on the diff:

            },
        },
        Spec: machineapi.MachineSpec{
            Labels: map[string]string{

Member: we don't really need this one

Review comment on the diff:

            },
        },
        Spec: machineapi.MachineSpec{
            Labels: map[string]string{

Member: we don't really need this one

"machine.openshift.io/cluster-api-cluster": clusterID,
"machine.openshift.io/cluster-api-machine-role": role,
"machine.openshift.io/cluster-api-machine-type": role,
"machine.openshift.io/cluster-api-machineset": name,
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

this will never pass the tests https://github.com/openshift/origin/blob/master/test/extended/machines/workers.go#L23
we'd need to update the test to fetch from the node
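As a rough illustration of what "fetch from the node" could look like, a hedged client-go sketch (assuming k8s.io/client-go v0.18 or later; the function name and kubeconfig handling are illustrative and not part of the origin test):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // workerNodeNames reads the worker role from Node metadata (set by the
    // kubelet) rather than from Machine objects.
    func workerNodeNames(kubeconfig string) ([]string, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return nil, err
        }
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
            LabelSelector: "node-role.kubernetes.io/worker",
        })
        if err != nil {
            return nil, err
        }
        names := make([]string, 0, len(nodes.Items))
        for _, n := range nodes.Items {
            names = append(names, n.Name)
        }
        return names, nil
    }

    func main() {
        names, err := workerNodeNames("/path/to/kubeconfig") // hypothetical kubeconfig path
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("worker nodes:", names)
    }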

@trown commented Apr 17, 2019

/test e2e-openstack

@ingvagabund (Member):

/retest

@ingvagabund (Member) commented Apr 18, 2019

/lgtm

The PR is changing only manifests generated for AWS.

@openshift-ci-robot added the lgtm label (Indicates that a PR is ready to be merged) on Apr 18, 2019
@frobware:

/approve

@abhinavdahiya (Contributor):

/lgtm

@openshift-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: abhinavdahiya, frobware, ingvagabund, vikaschoudhary16

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files) on Apr 18, 2019
@openshift-bot (Contributor):

/retest

Please review the full test history for this PR and help us cut down flakes.

1 similar comment

@wking (Member) commented Apr 18, 2019

e2e-aws:

fail [github.com/openshift/origin/test/extended/operators/cluster.go:109]: Expected
    <[]string | len:1, cap:1>: [
        "Pod openshift-kube-apiserver/installer-1-ip-10-0-174-121.ec2.internal is not healthy: ces/kube-apiserver-pod-1/configmaps/kube-apiserver-pod/version\" ...\nI0418 20:16:00.193741       1 cmd.go:229] Creating directory \"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-1/configmaps/kubelet-serving-ca\" ...\nI0418 20:16:00.193819       1 cmd.go:234] Writing config file \"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-1/configmaps/kubelet-serving-ca/ca-bundle.crt\" ...\nI0418 20:16:00.193903       1 cmd.go:229] Creating directory \"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-1/configmaps/sa-token-signing-certs\" ...\nI0418 20:16:00.193980       1 cmd.go:234] Writing config file \"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-1/configmaps/sa-token-signing-certs/service-account-001.pub\" ...\nI0418 20:16:00.194072       1 cmd.go:229] Creating directory \"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-1/configmaps/kube-apiserver-server-ca\" ...\nI0418 20:16:00.194149       1 cmd.go:234] Writing config file \"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-1/configmaps/kube-apiserver-server-ca/ca-bundle.crt\" ...\nI0418 20:16:00.194238       1 cmd.go:171] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\" ...\nI0418 20:16:00.194327       1 cmd.go:179] Getting secrets ...\nI0418 20:16:00.390280       1 copy.go:32] Got secret openshift-kube-apiserver/aggregator-client\nI0418 20:16:00.590146       1 copy.go:32] Got secret openshift-kube-apiserver/external-loadbalancer-serving-certkey\nI0418 20:16:00.789572       1 copy.go:32] Got secret openshift-kube-apiserver/internal-loadbalancer-serving-certkey\nI0418 20:16:00.993985       1 copy.go:32] Got secret openshift-kube-apiserver/localhost-serving-cert-certkey\nI0418 20:16:01.189560       1 copy.go:32] Got secret openshift-kube-apiserver/service-network-serving-certkey\nI0418 20:16:01.389749       1 copy.go:24] Failed to get secret openshift-kube-apiserver/serving-cert: secrets \"serving-cert\" not found\nF0418 20:16:01.592729       1 cmd.go:89] failed to copy: secrets \"serving-cert\" not found\n",
    ]
to be empty
...
Failing tests:

[Feature:Platform] Managed cluster should have no crashlooping pods in core namespaces over two minutes [Suite:openshift/conformance/parallel]

/retest

@openshift-merge-robot merged commit db5913f into openshift:master on Apr 19, 2019
enxebre added a commit to enxebre/installer that referenced this pull request on Apr 29, 2019:

We dropped the AWS role labels in openshift#1474 to reduce the surface to what's actually needed by our core controllers. However, having these labels provides better UX and searchability for end users and for consumers like the machineAutoscaler resource, so we are putting them back.
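On the searchability point, a small hypothetical sketch (using k8s.io/apimachinery; the MachineSet name is made up and this is not code from the commit) of how the role label makes machines selectable:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/labels"
    )

    func main() {
        // A selector a consumer (for example a MachineAutoscaler, or an
        // `oc get machines -l ...` query) could use to find worker machines.
        sel := labels.SelectorFromSet(labels.Set{
            "machine.openshift.io/cluster-api-machine-role": "worker",
        })

        // Labels as they would appear on a worker MachineSet once the role
        // labels are restored.
        machineSetLabels := labels.Set{
            "machine.openshift.io/cluster-api-machine-role": "worker",
            "machine.openshift.io/cluster-api-machineset":   "mycluster-worker-us-east-1a", // hypothetical name
        }

        fmt.Println("matches:", sel.Matches(machineSetLabels)) // prints: matches: true
    }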
openshift-merge-robot added a commit that referenced this pull request on May 1, 2019:

BUG 1702050: Add aws role labels back (revert #1474)
Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
lgtm: Indicates that a PR is ready to be merged.
size/S: Denotes a PR that changes 10-29 lines, ignoring generated files.
10 participants