
kubelet-gce-e2e-ci: broken test run #32430

Closed
k8s-github-robot opened this issue Sep 10, 2016 · 129 comments
Labels
- kind/flake: Categorizes issue or PR as related to a flaky test.
- priority/critical-urgent: Highest priority. Must be actively worked on as someone's top priority right now.
- sig/node: Categorizes an issue or PR as relevant to SIG Node.

Comments

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/9180/

Run so broken it didn't make JUnit output!

@k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Sep 10, 2016
@spxtr assigned @pwittrock and unassigned @spxtr on Sep 10, 2016
@pwittrock

@spxtr Do I have state on this, or were you looking for me to find someone to reassign to?

@spxtr commented Sep 10, 2016

I just assigned because you're the listed owner of the job. Feel free to find a better owner.

@pwittrock

@spxtr Got it - need to fix that

@pwittrock added the sig/node label and removed the area/test-infra label on Sep 10, 2016
@pwittrock removed their assignment on Sep 10, 2016
@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/9191/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/9193/

Run so broken it didn't make JUnit output!

@k8s-github-robot added the priority/important-soon label and removed the priority/backlog label on Sep 10, 2016
@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/9194/

Run so broken it didn't make JUnit output!

@k8s-github-robot added the priority/critical-urgent label and removed the priority/important-soon label on Sep 10, 2016
@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/9195/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/9196/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/9197/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/9230/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/9248/

Run so broken it didn't make JUnit output!

@k8s-github-robot

[FLAKE-PING] @dchen1107

This flaky-test issue would love to have more attention.

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/9252/

Multiple broken tests:

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:160
Expected error:
    <*errors.errorString | 0xc8200bf660>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Failed: [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull image from docker hub {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:290
Timed out after 300.000s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
    <*errors.errorString>: &errors.errorString{s:"unexpected container statuses []"}
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:281

Issues about this test specifically: #31811

Failed: [k8s.io] Secrets should be consumable from pods in volume with Mode set in the item [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:215
Expected error:
    <*errors.errorString | 0xc820b46dd0>: {
        s: "expected container secret-volume-test success: gave up waiting for pod 'pod-secrets-d452c828-788a-11e6-897d-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container secret-volume-test success: gave up waiting for pod 'pod-secrets-d452c828-788a-11e6-897d-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:628
Expected error:
    <*errors.errorString | 0xc8200bf670>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:43
Expected error:
    <*errors.errorString | 0xc820b0b510>: {
        s: "expected container test-container success: gave up waiting for pod 'client-containers-0fae2c2a-788d-11e6-a255-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'client-containers-0fae2c2a-788d-11e6-a255-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Issues about this test specifically: #31774

Failed: [k8s.io] AppArmor [Feature:AppArmor] when running with AppArmor should enforce a permissive profile {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:78
Expected error:
    <*errors.errorString | 0xc8200c9660>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/apparmor_test.go:151

Issues about this test specifically: #30750

Failed: [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull image from gcr.io {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:290
Timed out after 300.000s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
    <*errors.errorString>: &errors.errorString{s:"unexpected container statuses []"}
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:281

Issues about this test specifically: #28047

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:100
Expected
    <*errors.errorString | 0xc8200c9660>: {
        s: "timed out waiting for the condition",
    }
to be nil
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:85

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:56
Expected error:
    <*errors.errorString | 0xc820c11560>: {
        s: "expected container test-container-1 success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected container test-container-1 success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:59
Expected error:
    <*errors.errorString | 0xc820cc64a0>: {
        s: "expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-26ba52c8-788d-11e6-af55-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-26ba52c8-788d-11e6-af55-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] PrivilegedPod should test privileged pod {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:65
Expected error:
    <*errors.errorString | 0xc8200c1660>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Issues about this test specifically: #30607

Failed: [k8s.io] Pods should get a host IP [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:144
Expected error:
    <*errors.errorString | 0xc8200c1660>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:344
Expected error:
    <*errors.errorString | 0xc820b59a30>: {
        s: "expected container secret-env-test success: gave up waiting for pod 'pod-secrets-575ae641-788d-11e6-a2a1-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container secret-env-test success: gave up waiting for pod 'pod-secrets-575ae641-788d-11e6-a2a1-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:204
starting pod liveness-http in namespace e2e-tests-container-probe-oqv1z
Expected error:
    <*errors.errorString | 0xc8200c1660>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:334

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:54
Expected error:
    <*errors.errorString | 0xc8204be2d0>: {
        s: "expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-88c6b5a5-788c-11e6-86e0-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-88c6b5a5-788c-11e6-86e0-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] Downward API volume should provide container's memory request {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:184
Expected error:
    <*errors.errorString | 0xc820a493b0>: {
        s: "expected container client-container success: gave up waiting for pod 'downwardapi-volume-a2438dbe-788b-11e6-a255-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container client-container success: gave up waiting for pod 'downwardapi-volume-a2438dbe-788b-11e6-a255-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:117
Expected error:
    <*errors.errorString | 0xc820c42db0>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-51c1d9a4-788e-11e6-8d9d-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-51c1d9a4-788e-11e6-8d9d-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:63
Expected error:
    <*errors.errorString | 0xc820a82410>: {
        s: "expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-e9ab7f77-788d-11e6-a255-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-e9ab7f77-788d-11e6-a255-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root with FSGroup [Feature:FSGroup] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:50
Expected error:
    <*errors.errorString | 0xc820b76aa0>: {
        s: "expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-ea047118-7889-11e6-af55-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container configmap-volume-test success: gave up waiting for pod 'pod-configmaps-ea047118-7889-11e6-af55-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Issues about this test specifically: #28868

Failed: [k8s.io] Kubelet Container Manager oom score adjusting when scheduling a busybox command that always fails in a pod should be possible to delete {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 12 01:52:24.507: Couldn't delete ns: "e2e-tests-kubelet-container-manager-qusp3": namespace e2e-tests-kubelet-container-manager-qusp3 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-kubelet-container-manager-qusp3 was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338

Failed: [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when starting a container that exits should report termination message if TerminationMessagePath is set [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:170
Timed out after 300.000s.
Expected
    <api.PodPhase>: Pending
to equal
    <api.PodPhase>: Succeeded
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:156

Failed: [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with secret {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:290
Timed out after 300.000s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
    <*errors.errorString>: &errors.errorString{s:"unexpected container statuses []"}
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:281

Issues about this test specifically: #28142 #29259

Failed: [k8s.io] Pods should be updated [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:319
Expected error:
    <*errors.errorString | 0xc8200c1660>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Failed: [k8s.io] Downward API should provide pod IP as an env var {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:83
Expected error:
    <*errors.errorString | 0xc820b29690>: {
        s: "expected container dapi-container success: gave up waiting for pod 'downward-api-7fd34e2b-788c-11e6-8d9d-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container dapi-container success: gave up waiting for pod 'downward-api-7fd34e2b-788c-11e6-8d9d-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Issues about this test specifically: #30794

Failed: [k8s.io] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs) {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:52
Expected error:
    <*errors.errorString | 0xc820b464c0>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-ea037efc-7889-11e6-897d-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-ea037efc-7889-11e6-897d-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:93
Expected error:
    <*errors.errorString | 0xc820b28fd0>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-4b19fdbd-788f-11e6-8d9d-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-4b19fdbd-788f-11e6-8d9d-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:73
Expected error:
    <*errors.errorString | 0xc820b08470>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-e6b06aa8-788d-11e6-818a-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-e6b06aa8-788d-11e6-818a-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Issues about this test specifically: #31579

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:34
Expected error:
    <*errors.errorString | 0xc820b91670>: {
        s: "expected container test-container success: gave up waiting for pod 'client-containers-7cd6024d-788c-11e6-a2a1-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'client-containers-7cd6024d-788c-11e6-a2a1-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:109
Expected error:
    <*errors.errorString | 0xc8204d2b80>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-09b7aa4c-788d-11e6-bcb9-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-09b7aa4c-788d-11e6-bcb9-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:88
Expected error:
    <*errors.errorString | 0xc820b582c0>: {
        s: "expected container secret-volume-test success: gave up waiting for pod 'pod-secrets-cb6082bf-788a-11e6-a2a1-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container secret-volume-test success: gave up waiting for pod 'pod-secrets-cb6082bf-788a-11e6-a2a1-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should not be able to pull image from invalid registry {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:290
Timed out after 300.000s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
    <*errors.errorString>: &errors.errorString{s:"unexpected container statuses []"}
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:281

Issues about this test specifically: #28255

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:274
Expected
    <*errors.errorString | 0xc8200c7660>: {
        s: "timed out waiting for the condition",
    }
to be nil
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:263

Failed: [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:77
Expected error:
    <*errors.errorString | 0xc820a5be20>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-ea023238-7889-11e6-86e0-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-ea023238-7889-11e6-86e0-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] Downward API volume should provide podname as non-root with fsgroup [Feature:FSGroup] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:82
Expected error:
    <*errors.errorString | 0xc820c3ac70>: {
        s: "expected container client-container success: gave up waiting for pod 'metadata-volume-45ed06bc-788e-11e6-86e0-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container client-container success: gave up waiting for pod 'metadata-volume-45ed06bc-788e-11e6-86e0-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] MirrorPod when create a mirror pod should be updated when static pod updated {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/mirror_pod_test.go:54
Timed out after 120.000s.
Expected
    <*errors.errorString | 0xc820a83440>: {
        s: "expected the mirror pod \"static-pod-85cbf30c-788c-11e6-a255-42010a800006-tmp-node-e2e-48eba449-gci-dev-54-8743-3-0\" to appear: pods \"static-pod-85cbf30c-788c-11e6-a255-42010a800006-tmp-node-e2e-48eba449-gci-dev-54-8743-3-0\" not found",
    }
to be nil
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/mirror_pod_test.go:53

Issues about this test specifically: #30185

Failed: [k8s.io] EmptyDir volumes when FSGroup is specified [Feature:FSGroup] volume on default medium should have the correct mode using FSGroup {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:56
Expected error:
    <*errors.errorString | 0xc820b2d130>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-bb36d773-788e-11e6-818a-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-bb36d773-788e-11e6-818a-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:81
Expected error:
    <*errors.errorString | 0xc820a49740>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-e34e1ea1-788a-11e6-a255-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-e34e1ea1-788a-11e6-a255-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:147
starting pod liveness-exec in namespace e2e-tests-container-probe-rge9u
Expected error:
    <*errors.errorString | 0xc8200c1660>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:334

Issues about this test specifically: #32267

Failed: [k8s.io] HostPath should support subPath [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:113
Expected error:
    <*errors.errorString | 0xc820c744e0>: {
        s: "expected container test-container-1 success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected container test-container-1 success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:364
Expected error:
    <*errors.errorString | 0xc8200c9660>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:100
Expected error:
    <*errors.errorString | 0xc820c42db0>: {
        s: "expected container dapi-container success: gave up waiting for pod 'var-expansion-54f9482b-788e-11e6-a2a1-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container dapi-container success: gave up waiting for pod 'var-expansion-54f9482b-788e-11e6-a2a1-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] HostPath should support r/w {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Expected error:
    <*errors.errorString | 0xc820753d60>: {
        s: "expected container test-container-1 success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected container test-container-1 success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] Pods should support remote command execution over websockets {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:526
Expected error:
    <*errors.errorString | 0xc8200c7660>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Failed: [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should not be able to pull non-existing image from gcr.io {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:290
Timed out after 300.000s.
Error: Unexpected non-nil/non-zero extra argument at index 1:
    <*errors.errorString>: &errors.errorString{s:"unexpected container statuses []"}
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:281

Issues about this test specifically: #28250

Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:209
Expected error:
    <*errors.errorString | 0xc820b46f90>: {
        s: "expected container env-test success: gave up waiting for pod 'pod-configmaps-ae3361f4-788b-11e6-897d-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container env-test success: gave up waiting for pod 'pod-configmaps-ae3361f4-788b-11e6-897d-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:105
Expected error:
    <*errors.errorString | 0xc820a407a0>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-d1587961-788a-11e6-86e0-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-d1587961-788a-11e6-86e0-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
Expected error:
    <*errors.errorString | 0xc820d01990>: {
        s: "expected container test-container success: gave up waiting for pod 'pod-1d69df9e-788f-11e6-897d-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container test-container success: gave up waiting for pod 'pod-1d69df9e-788f-11e6-897d-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:583
Expected error:
    <*errors.errorString | 0xc8200bf660>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57

Issues about this test specifically: #30567 #31920

Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:129
Expected error:
    <*errors.errorString | 0xc8204d3070>: {
        s: "expected container dapi-container success: gave up waiting for pod 'downward-api-ed159de4-7889-11e6-a2a1-42010a800006' to be 'success or failure' after 5m0s",
    }
    expected container dapi-container success: gave up waiting for pod 'downward-api-ed159de4-7889-11e6-a2a1-42010a800006' to be 'success or failure' after 5m0s
not to have occurred
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2257

Issues about this test specifically: #27668

Failed: [k8s.io] Kubelet Container Manager oom score adjusting when scheduling a busybox command that always fails in a pod should have an error terminated reason {E2eNode Suite}

/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:81
Timed out after 60.000s.
Expected
    <*errors.errorString | 0xc820a5bfb0>: {
        s: "expected only one container in the pod \"bin-false51624fa2-788d-11e6-86e0-42010a800006\"",
    }
to be nil
/var/lib/jenkins/workspace/kubelet-gce-e2e-ci/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_manager_test.go:80

@dchen1107

The above failures are on GCI nodes only. This shouldn't be a blocker for 1.4.

@dchen1107

cc/ @vishh

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10761/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10763/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10776/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10784/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10790/

Run so broken it didn't make JUnit output!

@k8s-github-robot

[FLAKE-PING] @vishh

This flaky-test issue would love to have more attention.

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10796/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10799/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10801/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10802/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10810/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10819/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10822/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10824/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10842/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10845/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10849/

Run so broken it didn't make JUnit output!

@k8s-github-robot

[FLAKE-PING] @vishh

This flaky-test issue would love to have more attention.

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10874/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10906/

Run so broken it didn't make JUnit output!

@k8s-github-robot

[FLAKE-PING] @vishh

This flaky-test issue would love to have more attention.

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10936/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10941/

Run so broken it didn't make JUnit output!

@k8s-github-robot

[FLAKE-PING] @vishh

This flaky-test issue would love to have more attention.

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10968/

Run so broken it didn't make JUnit output!

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubelet-gce-e2e-ci/10987/

Run so broken it didn't make JUnit output!

@k8s-github-robot

[FLAKE-PING] @vishh

This flaky-test issue would love to have more attention.

@yujuhong

@dchen1107 the error messages in #32430 (comment) are annoying but harmless. We should reduce the spam from cadvisor, though.

The most recent failures were caused by timeouts: the number of tests in the node e2e suite has grown since the timeout was originally set. @Random-Liu's PR #36413 already bumped the timeout. Let's close this issue and reopen if any more failures occur.
