OCPBUGS-25753,OCPBUGS-22721: Run resolv-prepender entirely async #4102
Conversation
Skipping CI for Draft Pull Request. |
@cybertron: This pull request references Jira Issue OCPBUGS-25753, which is invalid. The bug has been updated to refer to the pull request using the external bug tracker. |
I'm still testing this to confirm it actually fixes the problem, but this version at least seems to work as expected so I thought I'd get it up for initial review. |
Currently the resolv-prepender dispatcher script starts the systemd service and then waits for it to complete. This can cause the dispatcher script to time out if the runtimecfg image pull is slow or if resolv.conf does not get populated in a timely fashion (it's not entirely clear to me why the latter happens, but it does). This can in turn cause configure-ovs to time out when a large number of interfaces on the system trigger the dispatcher script, such as when many VLANs are configured.

To avoid this, we can stop waiting for the systemd service in the dispatcher script. In fact, there's an argument that we shouldn't wait, since we need to be able to handle asynchronous execution anyway for the slow image pull case (which was the entire reason the script was split into a service the way it is).

I have found a few possible issues with async execution, however:

* If we start the service with an empty $DHCP6_FQDN_FQDN value and then later get a new value, we may not correctly apply it if the service is still running, because we only ever `systemctl start` the service, which is a no-op if the service is already running.
* Similarly, if new IP4/6_DOMAINS values come in on a later connection, they may not be reflected in the service either.

Even though these may sound like the same problem, I mention them separately on purpose because the solutions are different:

* For the DHCP6 case, we can move that logic back into the dispatcher script so we will always set the hostname no matter what happens with the prepender code. One could argue that this should be in its own script anyway, since it's largely unrelated to resolv.conf.
* For the domains case, we do need to restart the service, since the domains are involved in resolv.conf generation. However, we do not want to restart the service every time: that may be unnecessary, and if we restart in the middle of the image pull it could result in a corrupt image (the very thing we were trying to avoid by running this as a service in the first place).

To avoid restarting the service when we don't want to, I've added logic that only restarts it if there are changed env values AND the runtimecfg image has already been pulled (see the sketch below). The worst-case scenario should then be that we don't properly set the domains and resolv.conf is temporarily generated with an incorrect search line, which will be corrected the next time any event triggers the dispatcher script.
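For illustration, here is a minimal sketch of that guard in dispatcher-script style. Everything here is hypothetical: the service name, the env state file, and the variable names are stand-ins, not the exact code in this PR.

```bash
#!/bin/bash
# Hypothetical sketch: restart the prepender service only when the env values
# changed AND the runtimecfg image is already pulled, per the description above.
SERVICE="resolv-prepender.service"      # stand-in service name
ENV_FILE="/run/resolv-prepender-env"    # stand-in state file
NEW_ENV="DHCP6_FQDN_FQDN=${DHCP6_FQDN_FQDN}
IP4_DOMAINS=${IP4_DOMAINS}
IP6_DOMAINS=${IP6_DOMAINS}"

if [[ -f "$ENV_FILE" && "$(cat "$ENV_FILE")" != "$NEW_ENV" ]]; then
    # Env values changed: restart, but only if the image is already present,
    # so we never interrupt an in-flight pull (which could corrupt the image).
    if podman image exists "$RUNTIMECFG_IMAGE"; then
        echo "$NEW_ENV" > "$ENV_FILE"
        systemctl restart --no-block "$SERVICE"
    fi
    # Otherwise skip the restart; the stale search line gets corrected the
    # next time a dispatcher event fires with the image present.
else
    echo "$NEW_ENV" > "$ENV_FILE"
    # "start" is a no-op if the service is already running, and --no-block
    # means the dispatcher script no longer waits for the service to complete.
    systemctl start --no-block "$SERVICE"
fi
```

The hostname half of the change can be sketched the same way: logic that lives directly in the dispatcher script, so the hostname is applied regardless of what the prepender service is doing (again, hypothetical names and guard condition):

```bash
# Hypothetical sketch: apply the DHCPv6 FQDN as the transient hostname from
# the dispatcher script itself, independent of the prepender service.
if [[ -n "$DHCP6_FQDN_FQDN" && "$DHCP6_FQDN_FQDN" == *.* ]]; then
    hostnamectl set-hostname --transient "$DHCP6_FQDN_FQDN"
fi
```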
Force-pushed from 29f1fe7 to 10a4774.
/jira refresh |
@cybertron: This pull request references Jira Issue OCPBUGS-25753, which is invalid. |
@cybertron: This pull request references Jira Issue OCPBUGS-25753, which is valid. The bug has been moved to the POST state and updated to refer to the pull request using the external bug tracker. This pull request references Jira Issue OCPBUGS-22721, which is valid. The bug has been moved to the POST state and updated to refer to the pull request using the external bug tracker. Requesting review from the QA contact. |
We have confirmation that this fixes the problem in at least one environment, so I think we can move forward with it.
/test e2e-metal-ipi-ovn-ipv6
These two jobs are affected by the hostname logic change, so we should confirm that they still pass. |
/cc @mkowalski
It looks like this should fix the issues we've had with timeouts when a large number of VLANs are configured. |
/lgtm
/test e2e-metal-ipi-ovn-ipv6
No changes in the logic, just added quotes around the image references. As long as that didn't somehow break the script completely, this should still be good to go. |
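For illustration, the kind of quoting change described here looks like the following (the variable name is a hypothetical stand-in, not the actual code):

```bash
# Before: an unquoted image reference is subject to word splitting and globbing.
podman pull $RUNTIMECFG_IMAGE

# After: quoting passes the reference through intact.
podman pull "$RUNTIMECFG_IMAGE"
```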
/retest-required
Unrelated failures. The platform jobs affected by this change have all passed. |
Pre-merge testing of this feature on ipi-vsphere with 40 VLANs on the worker passed. |
@cybertron: This pull request references Jira Issue OCPBUGS-25753, which is valid. This pull request references Jira Issue OCPBUGS-22721, which is valid. Requesting review from the QA contact. |
/assign @cdoern |
/retest-required
I think the SNO job should be passing now that we merged the fix. |
@cybertron would you say this is a critical fix? |
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: cybertron, mkowalski, yuqi-zhang. |
We discussed this elsewhere, but for posterity I think this can wait until the critical fixes label is no longer needed. It doesn't sound like it should be too long until that goes away. |
@cybertron: The following tests failed. |
/retest-required |
Merged commit 2da0539 into openshift:master.
@cybertron: Jira Issue OCPBUGS-25753: All pull requests linked via external trackers have merged. Jira Issue OCPBUGS-25753 has been moved to the MODIFIED state. Jira Issue OCPBUGS-22721: All pull requests linked via external trackers have merged. Jira Issue OCPBUGS-22721 has been moved to the MODIFIED state. |
[ART PR BUILD NOTIFIER] This PR has been included in build openshift-proxy-pull-test-container-v4.16.0-202402011241.p0.g2da0539.assembly.stream for distgit openshift-proxy-pull-test. |
Fix included in accepted release 4.16.0-0.nightly-2024-02-02-002725 |
/cherry-pick release-4.15 |
@cybertron: new pull request created: #4161 |