Bug 1806892: no need to run CCO as privileged pod #159
Conversation
/test e2e-aws

Can you elaborate on the history and how you reached this conclusion? (preferably in the git commit message) IIRC this developed as a result of the work to move to the bootstrap node, but I can't remember the specifics.

/test e2e-aws

The history of this privileged label seems unclear. It first appears at fbf7b74 and has since just been carried around. Removing the label means that the file copy operation on pod startup fails at https://github.com/openshift/cloud-credential-operator/blob/master/manifests/05_deployment.yaml#L202-L205, so the change in this PR to the Dockerfile addresses that. I'll see what I can do to fill out the commit message.
Remove the label that was running CCO as a privileged pod. It is unnecessary for the kinds of operations that CCO needs to perform in-cluster.

Removing the label makes the file copy of any provided certificate bundles fail (used for things like global proxy settings). Fix up the permissions at container build time so that Pod startup can continue to copy over any certificate bundle mounted into the pod.

The label appears to have no effect when CCO is running on the bootstrap node, so there is no need to maintain a Namespace definition for use only during bootstrap (like we must do for the bootstrap-only Pod definition).
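The startup copy in question can be mimicked with scratch paths; this is an illustrative sketch only, with stand-in paths and file contents, not the actual commands or mount points from 05_deployment.yaml:

```shell
# Hypothetical re-creation of the startup step that broke: a CA bundle mounted
# into the pod is copied over a bundle already baked into the image.
# All paths here are scratch stand-ins, not the real manifest paths.
image_pem=$(mktemp -d)    # stands in for /etc/pki/ca-trust/extracted/pem/
mounted=$(mktemp -d)      # stands in for the configmap mount point
echo "baked-in bundle" > "$image_pem/tls-ca-bundle.pem"
echo "cluster proxy CA" > "$mounted/tls-ca-bundle.pem"

# Only copy when a non-empty bundle was actually mounted.
if [ -s "$mounted/tls-ca-bundle.pem" ]; then
  cp "$mounted/tls-ca-bundle.pem" "$image_pem/tls-ca-bundle.pem"
fi
cat "$image_pem/tls-ca-bundle.pem"
```

Without the privileged label, the real copy runs as a non-root user, so the destination file's own permission bits decide whether this `cp` succeeds.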
/retitle BZ 1806892: no need to run CCO as privileged pod

/retitle Bug 1806892: no need to run CCO as privileged pod
@joelddiaz: This pull request references Bugzilla bug 1806892, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/lgtm
@joelddiaz: All pull requests linked via external trackers have merged. Bugzilla bug 1806892 has been moved to the MODIFIED state. In response to this:
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: dgoodwin, joelddiaz The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@@ -20,6 +20,8 @@ FROM registry.svc.ci.openshift.org/openshift/origin-v4.0:base
WORKDIR /root/
COPY --from=builder /go/src/github.com/openshift/cloud-credential-operator/manager .
ADD manifests/ /manifests
# Update perms so we can copy updated CA if needed
RUN chmod -R g+w /etc/pki/ca-trust/extracted/pem/
@joelddiaz Do we need to give write permission to the files under the /etc/pki/ca-trust/extracted/pem/ directory? If we just need to copy an updated CA, is it enough to give write permission to the directory only?
This is just a doubt I had; if it makes no difference, please ignore my question.
If there is an existing file with RO permissions, then only changing the directory will still cause a failure.
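The distinction can be checked with plain `chmod` semantics; this is a quick sketch using a scratch directory as a stand-in for the real /etc/pki/ca-trust/extracted/pem/ layout (file names are illustrative):

```shell
# Sketch of why the recursive flag matters: a pre-existing read-only file
# stays read-only if only the directory's permissions are changed.
dir=$(mktemp -d)
mkdir -p "$dir/pem"
touch "$dir/pem/tls-ca-bundle.pem"
chmod 444 "$dir/pem/tls-ca-bundle.pem"      # existing bundle is read-only

chmod g+w "$dir/pem"                        # directory only: file untouched
stat -c '%a' "$dir/pem/tls-ca-bundle.pem"   # prints 444

chmod -R g+w "$dir/pem"                     # recursive: file gains group write
stat -c '%a' "$dir/pem/tls-ca-bundle.pem"   # prints 464
```

Overwriting a file requires write permission on the file itself, not just on its parent directory, which is why the Dockerfile change uses `chmod -R`.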
/cherrypick

/cherrypick release-4.4
@sdodson: new pull request created: #189 In response to this:
We dropped the run-level label from this namespace back in 8120317 (no need to run CCO as privileged pod, 2020-03-03, openshift#159, 4.5 [1]). But because of how the cluster-version operator reconciles manifest labels, dropping a label from the manifest does not remove it from the in-cluster resource when old clusters are updated into the new manifest [2].

This commit uses the approach the cluster-version operator used to drop its run-level [3]: setting the value to an empty string, which the run-level-consuming code treats identically to an unset label. This avoids errors like:

...container has runAsNonRoot and image will run as root...

when updating to 4.11 [4]. Because the namespace manifest sorts before the deployment manifest, there is probably no need to get this change back into 4.10 manifests, although it wouldn't hurt.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1806892
[2]: https://issues.redhat.com/browse/OTA-330
[3]: openshift/cluster-version-operator@539e944#diff-39019af7716ea5d1b86c1eb34e131de533e2701b1bb3014f9a2d71be603ff345R12
[4]: https://bugzilla.redhat.com/show_bug.cgi?id=2101880#c2
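The empty-value approach amounts to keeping the label key in the manifest rather than deleting it. A hypothetical sketch of the resulting Namespace manifest (field values assumed, not copied from the repository):

```yaml
# Sketch only: names and layout are assumptions, not the actual manifest.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cloud-credential-operator
  labels:
    # Empty string: the run-level-consuming code treats this the same as an
    # unset label, but because the key is still present, the cluster-version
    # operator reconciles the value on updated clusters instead of leaving
    # the stale label in place.
    openshift.io/run-level: ""
```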