[Flaky test] When kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns #75328

Open · mariantalla opened this issue Mar 13, 2019 · 9 comments

Comments

mariantalla (Contributor) commented Mar 13, 2019

Which jobs are flaking:

  • ci-kubernetes-e2e-gce-new-master-upgrade-master
  • ci-kubernetes-e2e-gce-new-master-upgrade-cluster
  • ci-kubernetes-e2e-gce-master-new-downgrade-cluster
  • ci-kubernetes-e2e-gce-new-master-upgrade-cluster-new

Which test(s) are failing:
When kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.
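
For context, the scenario this test exercises is roughly: run a pod with a dynamically provisioned volume, stop the kubelet on the pod's node, delete the pod while the kubelet is down, bring the kubelet back, and then expect the pod object to disappear and the volume to be unmounted from the node. A minimal sketch of that flow, assuming a recent client-go (the helper name and the systemctl-over-SSH mechanism below are hypothetical stand-ins, not the e2e framework's actual utilities):

    package storagesketch // hypothetical package name; sketch only

    import (
        "context"
        "os/exec"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // kubeletCmd is a hypothetical stand-in: the real e2e test drives the kubelet
    // on the node through the framework's SSH utilities, not a direct shell-out.
    func kubeletCmd(node, action string) error {
        return exec.Command("ssh", node, "sudo", "systemctl", action, "kubelet").Run()
    }

    // restartKubeletScenario outlines the disruptive flow exercised by this test.
    func restartKubeletScenario(ctx context.Context, c kubernetes.Interface, ns, pod, node string) error {
        // Precondition: a pod in ns is running with a dynamically provisioned volume mounted.
        if err := kubeletCmd(node, "stop"); err != nil { // take the kubelet down
            return err
        }
        // Delete the pod while the kubelet is down; the volume stays mounted on the node.
        if err := c.CoreV1().Pods(ns).Delete(ctx, pod, metav1.DeleteOptions{}); err != nil {
            return err
        }
        if err := kubeletCmd(node, "start"); err != nil { // bring the kubelet back
            return err
        }
        // The test then waits for the pod object to disappear and checks that the
        // volume has been unmounted from the node.
        return nil
    }

The final wait is where this flake surfaces; see the timeout under "Reason for failure" below.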

Testgrid link:

Reason for failure:
Timeout:

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/generic_persistent_volume-disruptive.go:73
Expected pod to be not found.
Expected error:
    <*errors.errorString | 0xc00009b860>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:242
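
The timeout above comes from a wait in test/e2e/storage/utils/utils.go that expects the deleted pod to be gone once the kubelet has returned. A minimal sketch of that kind of poll, assuming a recent client-go and apimachinery's wait package (the framework's actual helper, interval, and timeout differ):

    package storagesketch // hypothetical package name; sketch only

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodNotFound polls until the pod is gone or the timeout expires.
    // On expiry, wait.PollImmediate returns the "timed out waiting for the
    // condition" error seen in the failure above.
    func waitForPodNotFound(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            _, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return true, nil // pod is fully deleted; condition met
            }
            if err != nil {
                return false, err // unexpected API error; stop polling
            }
            return false, nil // pod still exists; keep waiting
        })
    }

If the returning kubelet never finishes tearing down the deleted pod, a poll like this expires with exactly the error string reported above.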

Anything else we need to know:

  • This test flakes on average ~10% of the time in the sig-release master-upgrade dashboards

/sig storage
/priority important-soon
/kind flake
/remove-kind failing test

k8s-ci-robot (Contributor) commented Mar 13, 2019

@mariantalla: Those labels are not set on the issue: kind/failing, kind/test

In response to this:

[the issue description above, quoted verbatim]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

mariantalla (Contributor, Author) commented Mar 13, 2019

@msau42 - sending this your way too for triage.
#75326, #75275, and #75196 are higher priority in my view (they flake more frequently).

Adding it to v1.14 for now.

/milestone v1.14

k8s-ci-robot added this to the v1.14 milestone Mar 13, 2019

mariantalla added this to Flakes in 1.14 CI Signal Mar 13, 2019

mariantalla (Contributor, Author) commented Mar 13, 2019

Yep, fair enough robot friend.

/remove-kind failing-test

msau42 (Member) commented Mar 13, 2019

This test is different from the CSI reconstruction issues because it tests the in-tree default StorageClass.
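
For illustration, on these GCE jobs the default class is typically backed by the in-tree kubernetes.io/gce-pd provisioner, and the usual way a test picks it up is by leaving storageClassName unset on its claim. A minimal sketch of such a claim using k8s.io/api types (names are illustrative; resource requests omitted for brevity, though a real claim needs spec.resources.requests.storage):

    package storagesketch // hypothetical package name; sketch only

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // defaultClassPVC builds a claim with StorageClassName left nil, so the
    // DefaultStorageClass admission plugin assigns the cluster's default class
    // (typically the in-tree kubernetes.io/gce-pd class on these GCE jobs).
    func defaultClassPVC(ns string) *corev1.PersistentVolumeClaim {
        return &corev1.PersistentVolumeClaim{
            ObjectMeta: metav1.ObjectMeta{
                GenerateName: "disruptive-", // hypothetical name prefix
                Namespace:    ns,
            },
            Spec: corev1.PersistentVolumeClaimSpec{
                // StorageClassName deliberately left nil: use the cluster default.
                AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
                // Resource requests omitted for brevity; a real claim needs
                // spec.resources.requests.storage (e.g. 1Gi).
            },
        }
    }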

msau42 (Member) commented Mar 13, 2019

/assign @jingxu97

jingxu97 (Contributor) commented Mar 13, 2019

I think the root cause is issue #75345. I will try to work on a fix soon.

athenabot commented Mar 16, 2019

/sig node

These SIGs are my best guesses for this issue. Please comment /remove-sig <name> if I am incorrect about one.
🤖 I am an (alpha) bot run by @vllry. 👩‍🔬

mariantalla (Contributor, Author) commented Mar 18, 2019

/remove-sig node

mariantalla (Contributor, Author) commented Mar 19, 2019

Expected fix: #75458
