BUG 1806438: Drop run-level #496
Conversation
The MAO (machine-api-operator) should follow normal admission control rules for a cluster.
/hold
/retest
@enxebre: This pull request references Bugzilla bug 1806438, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/cherrypick release-4.4
@enxebre: once the present PR merges, I will cherry-pick it on top of release-4.4 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/hold cancel
/lgtm
Doesn't seem to have broken the e2e, so I guess this doesn't make a difference? (I'm assuming this change is covered by the e2e tests 🤔)
/test e2e-azure
/approve
I wonder why it was like this originally.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: bison. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest Please review the full test history for this PR and help us cut down flakes.
@enxebre: All pull requests linked via external trackers have merged. Bugzilla bug 1806438 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@enxebre: new pull request created: #499 In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Without the run-level label (openshift#496), the pods run as a high-range UID by default, so there is no need to explicitly request running as non-root. Otherwise, when the run-level is removed entirely for the openshift-machine-api namespace (openshift/cluster-autoscaler-operator#133), the kube-controller-manager complains with: 'Error creating: pods "machine-api-operator-75c887884f-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000340000, 1000349999] spec.containers[1].securityContext.securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000340000, 1000349999]]' https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_cluster-autoscaler-operator/133/pull-ci-openshift-cluster-autoscaler-operator-master-e2e-aws/496/artifacts/e2e-aws/pods/openshift-kube-controller-manager_kube-controller-manager-ip-10-0-133-251.us-east-2.compute.internal_kube-controller-manager.log
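For context, the run-level in question is the `openshift.io/run-level` namespace label, which exempts a namespace from SCC admission; once it is gone, pods must validate against a security context constraint, and the UID ranges in the error above come from the namespace's SCC annotations. A minimal sketch of the relevant namespace metadata (the specific values shown here are illustrative of the mechanism, not copied from this repo's manifests):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-machine-api
  labels:
    # While present, this label causes SCC admission to be skipped for the
    # namespace. Dropping it (this PR) means pods follow normal admission.
    openshift.io/run-level: "1"
  annotations:
    # With SCC admission active, the restricted SCC validates runAsUser
    # against a per-namespace UID range like this one, which is why a
    # hard-coded runAsUser of 65534 is rejected.
    openshift.io/sa.scc.uid-range: 1000340000/10000
```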