Bug 2103283: Add timeout to oc cp command to fix must-gather delays when routers are terminating #317
Conversation
@gcs278: This pull request references Bugzilla bug 2103283, which is invalid.
/lgtm
@Miciah: This pull request references Bugzilla bug 2103283, which is valid. The bug has been moved to the POST state and has been updated to refer to this pull request in the external bug tracker. 3 validations were run on this bug.

Requesting review from QA contact.
```diff
@@ -11,7 +11,7 @@ function gather_haproxy_config {
   for IC in ${INGRESS_CONTROLLERS}; do
     PODS=$(oc get pods -n openshift-ingress --no-headers -o custom-columns=":metadata.name" --selector ingresscontroller.operator.openshift.io/deployment-ingresscontroller="${IC}")
     for POD in ${PODS}; do
-      oc cp openshift-ingress/"${POD}":/var/lib/haproxy/conf/haproxy.config "${HAPROXY_CONFIG_PATH}"/"${IC}"/"${POD}"/haproxy.config &
+      timeout -v 3m oc cp openshift-ingress/"${POD}":/var/lib/haproxy/conf/haproxy.config "${HAPROXY_CONFIG_PATH}"/"${IC}"/"${POD}"/haproxy.config &
```
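For context, a minimal sketch of how coreutils `timeout` behaves when the wrapped command outlives its limit; the `sleep 300` stands in for an `oc cp` hung on a terminating pod, and the 124 exit-code check is an illustrative addition, not part of the PR:

```bash
#!/usr/bin/env bash
# "sleep 300" simulates an oc cp that hangs on a terminating pod.
# timeout sends SIGTERM after 5 seconds; -v logs which command was
# killed, which is useful in must-gather output.
timeout -v 5s sleep 300
status=$?
if [ "${status}" -eq 124 ]; then
    # coreutils timeout exits 124 when the time limit is reached
    echo "copy timed out; continuing the gather without this file" >&2
fi
```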
Is a 3-minute timeout really how long we want to wait here? This copy operation should take seconds; even a 10x margin over that is roughly a 1 to 1.5 minute wait. Can we halve this?
What is `terminationGracePeriodSeconds`?

`terminationGracePeriodSeconds` as specified in the router deployment, which is 1 hour.
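To verify that value on a live cluster, a hypothetical one-liner (the `router-default` deployment name is the usual default ingress controller but is an assumption here):

```bash
# Assumption: the default ingress controller's router deployment is
# named "router-default"; adjust for other ingress controllers.
oc get deployment router-default -n openshift-ingress \
  -o jsonpath='{.spec.template.spec.terminationGracePeriodSeconds}'
# Expected output on an affected cluster: 3600 (one hour)
```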
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: gcs278, Miciah, sferich888. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
@gcs278: all tests passed! Full PR test history. Your PR dashboard.
@gcs278: All pull requests linked via external trackers have merged: Bugzilla bug 2103283 has been moved to the MODIFIED state.
/cherry-pick 4.11
@gcs278: cannot checkout 4.11.
/cherry-pick release-4.11
@gcs278: new pull request created: #318
This fixes a bug where, if router pods are stuck in terminating while a must-gather runs, the must-gather waits for the router pods to terminate; with this change it times out instead. More specifically, we have a bug in 4.10 CI in which the e2e-aws-operator job consistently fails on the e2e-aws-operator-gather-must-gather container test due to pods stuck in the `terminating` state.

Two reasons why this happens in CI:

1. When the `oc cp` command is run on `terminating` pods, it hangs until the pod is deleted.
2. The 4.10 router pods are not shutting down gracefully, so they linger until `terminationGracePeriodSeconds` expires and a SIGKILL is sent. This results in a number of old router pods just hanging out in the terminating state. A backport is pending for 4.10: https://bugzilla.redhat.com/show_bug.cgi?id=2098230.

Either way, it seems like we should have a reasonable timeout on the `oc cp` command to protect against situations like this.
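For readers skimming the diff above, here is a self-contained sketch of the resulting gather loop; the trailing `wait` and the comments are illustrative additions under the assumption that the surrounding script sets `INGRESS_CONTROLLERS` and `HAPROXY_CONFIG_PATH`, as the diff suggests:

```bash
#!/usr/bin/env bash
# Sketch of the pattern this PR applies. INGRESS_CONTROLLERS and
# HAPROXY_CONFIG_PATH are assumed to be set by the surrounding script.
for IC in ${INGRESS_CONTROLLERS}; do
  PODS=$(oc get pods -n openshift-ingress --no-headers \
    -o custom-columns=":metadata.name" \
    --selector ingresscontroller.operator.openshift.io/deployment-ingresscontroller="${IC}")
  for POD in ${PODS}; do
    mkdir -p "${HAPROXY_CONFIG_PATH}/${IC}/${POD}"
    # Bound the copy: on a pod stuck in terminating, oc cp would
    # otherwise hang until the pod object is deleted, which can take
    # up to the 1-hour terminationGracePeriodSeconds.
    timeout -v 3m oc cp \
      openshift-ingress/"${POD}":/var/lib/haproxy/conf/haproxy.config \
      "${HAPROXY_CONFIG_PATH}/${IC}/${POD}/haproxy.config" &
  done
done
wait  # let the backgrounded, time-bounded copies finish
```

Because each copy is backgrounded with `&`, the 3-minute limit bounds the slowest single copy rather than accumulating across pods, which keeps the worst-case delay well under the grace period the reviewers discussed.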