Mark restart_test as flaky #106359
Conversation
(force-pushed from d242da2 to 60c857d)
/priority backlog
(force-pushed from 60c857d to 990e950)
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: ehashman, mmiranda96
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
The problem is that the pods are still running and don't seem to be killed correctly.
Is this a known issue? Could it be hiding something important?
Actually, the test is passing 🤔
@ehashman @mmiranda96 do you mind if we hold this while I try to see if I can solve the flake?
We can try to fix this, SG.
@aojea we need to mark this as flaky to get it out of our serial lane and get that green for the 1.23 release; separately, we should fix the test.
/hold cancel
(let me double-check that there is an issue for this; I will assign you, @aojea)
I think it is only an issue with dockershim + Ubuntu, which is why we haven't previously prioritized it!
Docker + Ubuntu is getting skipped (source). |
The test fails here; it takes more than 10 minutes to delete the pods.
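For reference, a wait of that shape is usually a timed poll. This is a minimal sketch, assuming a hypothetical podsDeleted check standing in for the test's actual query against the kubelet:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// podsDeleted is a hypothetical stand-in for the test's real check that
// all restart-test pods are gone.
func podsDeleted() (bool, error) {
	// ... query the kubelet / API server for the remaining test pods ...
	return false, nil
}

func main() {
	// Poll every 10s; give up after 10 minutes, roughly the window the
	// flaking runs exceed before the pods actually disappear.
	if err := wait.Poll(10*time.Second, 10*time.Minute, podsDeleted); err != nil {
		fmt.Println("pods were not deleted in time:", err)
	}
}
```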
Checking this occurrence https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/1458684226500038656, it seems that the kubelet restarts when the test finishes; the kubelet logs start after the test failure. Is there some way to get the previous kubelet logs?
Are they not written to the same file? We might need to change the test to append rather than overwrite the file if not.
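A minimal sketch of the append-vs-overwrite difference, assuming the harness writes the kubelet log to a plain file (path hypothetical):

```go
package main

import (
	"log"
	"os"
)

func main() {
	// O_APPEND keeps log lines written before a kubelet restart;
	// overwriting (O_TRUNC) would discard them, which matches the
	// "logs start after the test failure" symptom above.
	f, err := os.OpenFile("/tmp/kubelet.log",
		os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if _, err := f.WriteString("kubelet restarted\n"); err != nil {
		log.Fatal(err)
	}
}
```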
What type of PR is this?
/kind cleanup
/kind flake
What this PR does / why we need it:
The restart test has been flaking for a while now (see testgrid). This PR marks it as flaky, which removes it from the node-kubelet-serial suite, as sketched below.
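For reference, a minimal sketch of the marking mechanism, assuming the usual Ginkgo tag convention (the real spec presumably lives in test/e2e_node/restart_test.go; the names below are illustrative):

```go
package e2enode

import "github.com/onsi/ginkgo"

// Adding [Flaky] to the spec name is what removes the test from the
// serial lane: those CI jobs typically run with --ginkgo.skip=\[Flaky\],
// so tagged specs are filtered out by name.
var _ = ginkgo.Describe("Restart [Serial] [Slow] [Disruptive] [Flaky]", func() {
	ginkgo.It("should recover from kubelet restarts", func() {
		// ... existing test body, unchanged ...
	})
})
```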
Which issue(s) this PR fixes:
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: