OCPBUGS-17701: daemon: ignore mounting MCD pod content when target is "/" #3860
Conversation
With ReexecuteForTargetRoot, we have already chrooted into the rootfs, so the necessary MCD pod content should already be mounted inside the host. Skipping the mount in this case also avoids overriding previously mounted content.
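The guard described above can be sketched roughly as follows. This is a minimal illustration, not the actual MCO code: `bindMountDaemonContent` and its signature are assumptions made for the example.

```go
// Hypothetical sketch of the fix; names and signature are illustrative,
// not taken from the machine-config-operator source.
package main

import "fmt"

// bindMountDaemonContent reports whether the MCD pod content would be
// bind-mounted for the given target root. When the target is "/", the
// daemon has already re-exec'd (ReexecuteForTargetRoot) and chrooted
// into the host rootfs, so the content is already in place and mounting
// again would override it.
func bindMountDaemonContent(targetRoot string) bool {
	if targetRoot == "/" {
		// Already chrooted into the rootfs; skip the mount.
		return false
	}
	// ... perform the actual bind mount into targetRoot here ...
	return true
}

func main() {
	fmt.Println(bindMountDaemonContent("/"))       // false: skip, already chrooted
	fmt.Println(bindMountDaemonContent("/rootfs")) // true: mount into the target
}
```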
@sinnykumari: This pull request references Jira Issue OCPBUGS-17701, which is invalid:
Comment The bug has been updated to refer to the pull request using the external bug tracker. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/jira refresh
@sinnykumari: This pull request references Jira Issue OCPBUGS-17701, which is valid. The bug has been moved to the POST state. 3 validations were run on this bug.
Requesting review from QA contact. In response to this:
This should be good to get reviewed and lgtm'd.
@sinnykumari: GitHub didn't allow me to request PR reviews from the following users: for, pre-merge, testing. Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
installed aws cluster with version 4.12.29, paused the worker pool

```console
~ oc patch mcp worker -p '[{ "op": "add", "path": "/spec/paused", "value": true }]' --type json
machineconfigpool.machineconfiguration.openshift.io/worker patched
~ oc get mcp/worker -o yaml | yq -y '.spec.paused'
true
```

upgrade cluster to 4.13.9

```console
~ oc adm upgrade --to-image quay.io/openshift-release-dev/ocp-release:4.13.9-x86_64 --force --allow-explicit-upgrade
Requested update to release image quay.io/openshift-release-dev/ocp-release:4.13.9-x86_64
```

upgrade cluster to 4.14.0-0.ci.test-2023-08-16-002136-ci-ln-tr87ly2-latest

```console
~ oc adm upgrade --to-image registry.build03.ci.openshift.org/ci-ln-tr87ly2/release:latest --force --allow-explicit-upgrade --allow-upgrade-with-warnings
Requested update to release image registry.build03.ci.openshift.org/ci-ln-tr87ly2/release:latest
```

unpause worker pool, wait for workers to be updated

```console
~ oc patch mcp worker -p '[{ "op": "add", "path": "/spec/paused", "value": false }]' --type json
machineconfigpool.machineconfiguration.openshift.io/worker patched
```

all workers are updated to 4.14

```console
~ cv
NAME      VERSION                                                   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.14.0-0.ci.test-2023-08-16-002136-ci-ln-tr87ly2-latest   True        False         43m     Cluster version is 4.14.0-0.ci.test-2023-08-16-002136-ci-ln-tr87ly2-latest
~ mcp worker
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-b5624eea6f384e4a7f6b8b8137eb840a   True      False      False      3              3                   3                     0                      3h46m
~ co machine-config
NAME             VERSION                                                   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
machine-config   4.14.0-0.ci.test-2023-08-16-002136-ci-ln-tr87ly2-latest   True        False         False      3h46m
~ oc get node -o wide -l node-role.kubernetes.io/worker
NAME                                         STATUS   ROLES    AGE     VERSION           INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                 CONTAINER-RUNTIME
ip-10-0-142-62.us-east-2.compute.internal    Ready    worker   3h37m   v1.27.4+4e87926   10.0.142.62    <none>        Red Hat Enterprise Linux CoreOS 414.92.202308151250-0 (Plow)   5.14.0-284.25.1.el9_2.x86_64   cri-o://1.27.1-6.rhaos4.14.gitc2c9f36.el9
ip-10-0-160-128.us-east-2.compute.internal   Ready    worker   3h35m   v1.27.4+4e87926   10.0.160.128   <none>        Red Hat Enterprise Linux CoreOS 414.92.202308151250-0 (Plow)   5.14.0-284.25.1.el9_2.x86_64   cri-o://1.27.1-6.rhaos4.14.gitc2c9f36.el9
ip-10-0-218-182.us-east-2.compute.internal   Ready    worker   3h38m   v1.27.4+4e87926   10.0.218.182   <none>        Red Hat Enterprise Linux CoreOS 414.92.202308151250-0 (Plow)   5.14.0-284.25.1.el9_2.x86_64   cri-o://1.27.1-6.rhaos4.14.gitc2c9f36.el9
```
/label qe-approved
/lgtm
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: cgwalters, djoshy, sinnykumari. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@sinnykumari: all tests passed! Full PR test history. Your PR dashboard.
@sinnykumari: Jira Issue OCPBUGS-17701: all pull requests linked via external trackers have merged. Jira Issue OCPBUGS-17701 has been moved to the MODIFIED state. In response to this:
We saw this issue in 4.13: #3812 (comment). We don't see it in the usual upgrade case in CI because there we upgrade from 4.13 to 4.14, where both have a RHEL 9 base OS. But I forgot that in an EUS upgrade with a paused pool, the OS upgrade will likely happen from 4.12 (RHEL 8) to 4.14.
I had meant to cherry-pick this to 4.14 anyway, but it somehow went off the radar.