
kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master: broken test run #37767

Closed · k8s-github-robot opened this issue Dec 1, 2016 · 5 comments

Labels: area/test-infra, kind/flake (Categorizes issue or PR as related to a flaky test), priority/backlog (Higher priority than priority/awaiting-more-evidence)

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/147/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:214
Expected error:
    <*errors.errorString | 0xc8228225e0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-9pst1] []  0xc82259b3e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82259ba60 exit status 1 <nil> true [0xc8214fe320 0xc8214fe1a8 0xc8214fe1c0] [0xc8214fe320 0xc8214fe1a8 0xc8214fe1c0] [0xc8214fe328 0xc8214fe1a0 0xc8214fe1b0] [0xa97470 0xa975d0 0xa975d0] 0xc821753980}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-9pst1] []  0xc82259b3e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82259ba60 exit status 1 <nil> true [0xc8214fe320 0xc8214fe1a8 0xc8214fe1c0] [0xc8214fe320 0xc8214fe1a8 0xc8214fe1c0] [0xc8214fe328 0xc8214fe1a0 0xc8214fe1b0] [0xa97470 0xa975d0 0xa975d0] 0xc821753980}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941
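
The stderr above comes from the skewed kubectl checked out under /workspace/kubernetes_skew: with that client, deleting with --grace-period=0 also requires --force, exactly as the error message says. A minimal sketch of the adjusted delete call, reusing the server, kubeconfig, and namespace shown in the log (only the added --force differs from the command in the error text):

    # Hedged reproduction of the failing call with the extra flag the client asks for.
    /workspace/kubernetes_skew/cluster/kubectl.sh \
      --server=https://146.148.49.46 \
      --kubeconfig=/workspace/.kube/config \
      delete --grace-period=0 --force -f - \
      --namespace=e2e-tests-kubectl-9pst1

The same --force requirement explains every other "Immediate deletion does not wait for confirmation" failure in this run.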

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820a700d0>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #33883
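
The scheduler-predicate and node-resize failures below all share this symptom: the harness waited five minutes for every pod in the kube-system namespace to be running and ready, and at least one never was. A hedged debugging sketch (assuming direct access to the same cluster via the harness kubeconfig) that lists system pods not in the Running phase; pods that are Running but not ready would still need a look at their READY column:

    # List kube-system pods whose STATUS column is not "Running"; this is a debugging
    # suggestion, not part of the e2e test itself.
    kubectl --kubeconfig=/workspace/.kube/config get pods -n kube-system --no-headers \
      | grep -v ' Running '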

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820799ac0>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82147c5e0>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8230f9d70>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc822f3d550>: {
        s: "error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26509 #26834 #29780 #35355
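
This rollover failure is a plain timeout waiting for the deployment's status to match what the test expects. A hedged manual check of the same deployment, using only standard kubectl subcommands (the per-test namespace is not preserved in the log excerpt, so it appears as a placeholder):

    # Debugging sketch: inspect the deployment named in the error and its replica sets.
    kubectl get deployment test-rollover-deployment -n <test-namespace>
    kubectl describe deployment test-rollover-deployment -n <test-namespace>
    kubectl get rs -n <test-namespace>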

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821b90530>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #29516

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc82086d960>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-11-29 20:24:29 -0800 PST} FinishedAt:{Time:2016-11-29 20:24:39 -0800 PST} ContainerID:docker://66fd9d79fb2e405e65cf8d0a8bb107013d86faf4c81c6b601180bab155daaa4d}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-11-29 20:24:29 -0800 PST} FinishedAt:{Time:2016-11-29 20:24:39 -0800 PST} ContainerID:docker://66fd9d79fb2e405e65cf8d0a8bb107013d86faf4c81c6b601180bab155daaa4d}
not to have occurred

Issues about this test specifically: #30131 #31402
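
Here the probe pod 'different-node-wget' exited with code 1, i.e. the wget it runs against a pod on another node got no response. A simplified, hedged equivalent of that probe (the target IP and port are placeholders, not values from this run):

    # Illustrative stand-in for the cross-node connectivity check the test performs from
    # inside the wget pod; failure of this command is what the test reports above.
    wget -q -O - http://<peer-pod-ip>:<port> || echo "cross-node connectivity check failed"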

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:228
Expected error:
    <*errors.errorString | 0xc82053f980>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-7x7c1] []  0xc82096d780  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82096df20 exit status 1 <nil> true [0xc82026cc60 0xc82026cc98 0xc82026ccb0] [0xc82026cc60 0xc82026cc98 0xc82026ccb0] [0xc82026cc68 0xc82026cc88 0xc82026cca0] [0xa97470 0xa975d0 0xa975d0] 0xc8207c8c00}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-7x7c1] []  0xc82096d780  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82096df20 exit status 1 <nil> true [0xc82026cc60 0xc82026cc98 0xc82026ccb0] [0xc82026cc60 0xc82026cc98 0xc82026ccb0] [0xc82026cc68 0xc82026cc88 0xc82026cca0] [0xa97470 0xa975d0 0xa975d0] 0xc8207c8c00}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070 #34383

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821dfa0d0>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #28019

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc8225a0330>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-btpvr -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"4f889757-b6d4-11e6-84f3-42010af00015\", \"resourceVersion\":\"35475\", \"creationTimestamp\":\"2016-11-30T08:09:26Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-btpvr\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-btpvr/services/redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.248.121\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820a908a0 exit status 1 <nil> true [0xc821bba000 0xc821bba018 0xc821bba038] [0xc821bba000 0xc821bba018 0xc821bba038] [0xc821bba010 0xc821bba028] [0xa975d0 0xa975d0] 0xc821058180}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"4f889757-b6d4-11e6-84f3-42010af00015\", \"resourceVersion\":\"35475\", \"creationTimestamp\":\"2016-11-30T08:09:26Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-btpvr\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-btpvr/services/redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.248.121\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-btpvr -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"4f889757-b6d4-11e6-84f3-42010af00015", "resourceVersion":"35475", "creationTimestamp":"2016-11-30T08:09:26Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-btpvr", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-btpvr/services/redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.127.248.121", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820a908a0 exit status 1 <nil> true [0xc821bba000 0xc821bba018 0xc821bba038] [0xc821bba000 0xc821bba018 0xc821bba038] [0xc821bba010 0xc821bba028] [0xa975d0 0xa975d0] 0xc821058180}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"4f889757-b6d4-11e6-84f3-42010af00015", "resourceVersion":"35475", "creationTimestamp":"2016-11-30T08:09:26Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-btpvr", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-btpvr/services/redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.127.248.121", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741
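
The jsonpath failure here is consistent with the object dump in the error: the redis-master service shown is type ClusterIP and its ports entry carries no nodePort field, so {.spec.ports[0].nodePort} has nothing to resolve. A hedged sketch separating the two queries (the type check is an added illustration, not part of the original test):

    # Type check showing why nodePort is absent from the dumped object.
    kubectl --kubeconfig=/workspace/.kube/config get service redis-master \
      --namespace=e2e-tests-kubectl-btpvr -o 'jsonpath={.spec.type}'
    # -> ClusterIP in the dump above, so the query from the log has no field to print:
    kubectl --kubeconfig=/workspace/.kube/config get service redis-master \
      --namespace=e2e-tests-kubectl-btpvr -o 'jsonpath={.spec.ports[0].nodePort}'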

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:270
Expected error:
    <*errors.errorString | 0xc82132d050>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ej0uc] []  0xc821516700  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821516d80 exit status 1 <nil> true [0xc820036d50 0xc820036db8 0xc820036458] [0xc820036d50 0xc820036db8 0xc820036458] [0xc820036d58 0xc820036d98 0xc820036448] [0xa97470 0xa975d0 0xa975d0] 0xc82229a060}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ej0uc] []  0xc821516700  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821516d80 exit status 1 <nil> true [0xc820036d50 0xc820036db8 0xc820036458] [0xc820036d50 0xc820036db8 0xc820036458] [0xc820036d58 0xc820036d98 0xc820036448] [0xa97470 0xa975d0 0xa975d0] 0xc82229a060}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc821fa60b0>: {
        s: "service verification failed for: 10.127.251.137\nexpected [service3-60u9t service3-8ylsh service3-zlbbk]\nreceived [service3-60u9t service3-8ylsh]",
    }
    service verification failed for: 10.127.251.137
    expected [service3-60u9t service3-8ylsh service3-zlbbk]
    received [service3-60u9t service3-8ylsh]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298
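
"service verification failed" means the harness kept hitting the service's cluster IP (10.127.251.137) and never saw responses from one of the three expected backends, service3-zlbbk in this run. A hedged way to compare what the service has registered against the expected pod set (the per-test namespace is a placeholder, as it is not preserved in the excerpt):

    # Debugging sketch: the endpoints object should list all three backing pods; any pod
    # missing here will never answer through the service VIP.
    kubectl get endpoints service3 -n <test-namespace> -o wide
    kubectl get pods -n <test-namespace> -o wide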

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:736
Expected error:
    <*errors.errorString | 0xc8222908c0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-kmdbr] []  0xc82103ae40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82103b500 exit status 1 <nil> true [0xc8200b8bd0 0xc8200b8bf8 0xc8200b8c08] [0xc8200b8bd0 0xc8200b8bf8 0xc8200b8c08] [0xc8200b8bd8 0xc8200b8bf0 0xc8200b8c00] [0xa97470 0xa975d0 0xa975d0] 0xc8211f9560}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-kmdbr] []  0xc82103ae40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82103b500 exit status 1 <nil> true [0xc8200b8bd0 0xc8200b8bf8 0xc8200b8c08] [0xc8200b8bd0 0xc8200b8bf8 0xc8200b8c08] [0xc8200b8bd8 0xc8200b8bf0 0xc8200b8c00] [0xa97470 0xa975d0 0xa975d0] 0xc8211f9560}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821ad7e40>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc821b8dca0>: {
        s: "service verification failed for: 10.127.254.126\nexpected [service1-418zj service1-6ey13 service1-opynj]\nreceived [service1-418zj service1-6ey13]",
    }
    service verification failed for: 10.127.254.126
    expected [service1-418zj service1-6ey13 service1-opynj]
    received [service1-418zj service1-6ey13]
not to have occurred

Issues about this test specifically: #29514

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:431
Expected error:
    <*errors.errorString | 0xc8214ada50>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8209c4820>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #35279

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:284
Expected error:
    <*errors.errorString | 0xc821857680>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-jdz5b] []  0xc822a51280  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc822a519c0 exit status 1 <nil> true [0xc8200b8e08 0xc8200b8e30 0xc8200b8e40] [0xc8200b8e08 0xc8200b8e30 0xc8200b8e40] [0xc8200b8e10 0xc8200b8e28 0xc8200b8e38] [0xa97470 0xa975d0 0xa975d0] 0xc8209836e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-jdz5b] []  0xc822a51280  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc822a519c0 exit status 1 <nil> true [0xc8200b8e08 0xc8200b8e30 0xc8200b8e40] [0xc8200b8e08 0xc8200b8e30 0xc8200b8e40] [0xc8200b8e10 0xc8200b8e28 0xc8200b8e38] [0xa97470 0xa975d0 0xa975d0] 0xc8209836e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc822a1cf90>: {
        s: "service verification failed for: 10.127.255.249\nexpected [service1-6vg7n service1-dmzi1 service1-wpw14]\nreceived [service1-6vg7n service1-wpw14]",
    }
    service verification failed for: 10.127.255.249
    expected [service1-6vg7n service1-dmzi1 service1-wpw14]
    received [service1-6vg7n service1-wpw14]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8214a0e90>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #28091

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:700
Expected error:
    <*errors.errorString | 0xc82151bb00>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-gm9kn] []  0xc821dcc2c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821dcc980 exit status 1 <nil> true [0xc820036318 0xc820036388 0xc8200363e0] [0xc820036318 0xc820036388 0xc8200363e0] [0xc820036320 0xc820036368 0xc8200363d8] [0xa97470 0xa975d0 0xa975d0] 0xc8215bd260}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-gm9kn] []  0xc821dcc2c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821dcc980 exit status 1 <nil> true [0xc820036318 0xc820036388 0xc8200363e0] [0xc820036318 0xc820036388 0xc8200363e0] [0xc820036320 0xc820036368 0xc8200363d8] [0xa97470 0xa975d0 0xa975d0] 0xc8215bd260}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8227d2c90>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:284
Expected error:
    <*errors.errorString | 0xc822291d10>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-3j1d8] []  0xc821a8b900  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821022060 exit status 1 <nil> true [0xc8214fe288 0xc8214fe2b0 0xc8214fe2c0] [0xc8214fe288 0xc8214fe2b0 0xc8214fe2c0] [0xc8214fe290 0xc8214fe2a8 0xc8214fe2b8] [0xa97470 0xa975d0 0xa975d0] 0xc82228fd40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-3j1d8] []  0xc821a8b900  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821022060 exit status 1 <nil> true [0xc8214fe288 0xc8214fe2b0 0xc8214fe2c0] [0xc8214fe288 0xc8214fe2b0 0xc8214fe2c0] [0xc8214fe290 0xc8214fe2a8 0xc8214fe2b8] [0xa97470 0xa975d0 0xa975d0] 0xc82228fd40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Nov 30 00:56:07.598: Memory usage exceeding limits:
 node gke-jenkins-e2e-default-pool-7015c8c2-obby:
 container "runtime": expected RSS memory (MB) < 314572800; got 535527424
node gke-jenkins-e2e-default-pool-7015c8c2-bvqq:
 container "runtime": expected RSS memory (MB) < 314572800; got 544419840
node gke-jenkins-e2e-default-pool-7015c8c2-il0w:
 container "runtime": expected RSS memory (MB) < 314572800; got 521555968

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399
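
One reading note on the kubelet numbers: despite the "(MB)" label, the comparison is over raw byte counts; 314572800 is exactly 300 MiB, and the observed RSS figures work out to roughly 500 MiB per node (an assumption based on the magnitudes, since values in the hundreds of millions of megabytes would not make sense). A quick arithmetic check:

    # Convert the logged byte counts to MiB (integer division).
    echo $(( 314572800 / 1024 / 1024 ))   # 300  (the limit)
    echo $(( 535527424 / 1024 / 1024 ))   # 510  (first node in this run)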

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:284
Expected error:
    <*errors.errorString | 0xc8229e7da0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-815cx] []  0xc821cd6c80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821cd7300 exit status 1 <nil> true [0xc820037928 0xc820037950 0xc820037960] [0xc820037928 0xc820037950 0xc820037960] [0xc820037930 0xc820037948 0xc820037958] [0xa97470 0xa975d0 0xa975d0] 0xc821930ba0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-815cx] []  0xc821cd6c80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821cd7300 exit status 1 <nil> true [0xc820037928 0xc820037950 0xc820037960] [0xc820037928 0xc820037950 0xc820037960] [0xc820037930 0xc820037948 0xc820037958] [0xa97470 0xa975d0 0xa975d0] 0xc821930ba0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:431
Expected error:
    <*errors.errorString | 0xc821fa6750>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:284
Expected error:
    <*errors.errorString | 0xc8223ba1c0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2xdwb] []  0xc82056b4a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82056bcc0 exit status 1 <nil> true [0xc8214fe1c0 0xc8214fe1e8 0xc8214fe258] [0xc8214fe1c0 0xc8214fe1e8 0xc8214fe258] [0xc8214fe1c8 0xc8214fe1e0 0xc8214fe250] [0xa97470 0xa975d0 0xa975d0] 0xc822c20ea0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2xdwb] []  0xc82056b4a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82056bcc0 exit status 1 <nil> true [0xc8214fe1c0 0xc8214fe1e8 0xc8214fe258] [0xc8214fe1c0 0xc8214fe1e8 0xc8214fe258] [0xc8214fe1c8 0xc8214fe1e0 0xc8214fe250] [0xa97470 0xa975d0 0xa975d0] 0xc822c20ea0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #27156 #28979 #30489 #33649

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/148/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:270
Expected error:
    <*errors.errorString | 0xc82175cc40>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xld21] []  0xc8217e5ba0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc827f441e0 exit status 1 <nil> true [0xc820956338 0xc820956360 0xc8209563d0] [0xc820956338 0xc820956360 0xc8209563d0] [0xc820956340 0xc820956358 0xc820956368] [0xa97470 0xa975d0 0xa975d0] 0xc827e7efc0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xld21] []  0xc8217e5ba0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc827f441e0 exit status 1 <nil> true [0xc820956338 0xc820956360 0xc8209563d0] [0xc820956338 0xc820956360 0xc8209563d0] [0xc820956340 0xc820956358 0xc820956368] [0xa97470 0xa975d0 0xa975d0] 0xc827e7efc0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc820407680>: {
        s: "error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:736
Expected error:
    <*errors.errorString | 0xc8223bce20>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bdzd7] []  0xc821463e20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820164580 exit status 1 <nil> true [0xc8203bae68 0xc8203bae90 0xc8203baea0] [0xc8203bae68 0xc8203bae90 0xc8203baea0] [0xc8203bae70 0xc8203bae88 0xc8203bae98] [0xa97470 0xa975d0 0xa975d0] 0xc8220b0e40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bdzd7] []  0xc821463e20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820164580 exit status 1 <nil> true [0xc8203bae68 0xc8203bae90 0xc8203baea0] [0xc8203bae68 0xc8203bae90 0xc8203baea0] [0xc8203bae70 0xc8203bae88 0xc8203bae98] [0xa97470 0xa975d0 0xa975d0] 0xc8220b0e40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:284
Expected error:
    <*errors.errorString | 0xc8225237e0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-l9sj6] []  0xc82146eb60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82146f1a0 exit status 1 <nil> true [0xc820cd6618 0xc820cd6640 0xc820cd6650] [0xc820cd6618 0xc820cd6640 0xc820cd6650] [0xc820cd6620 0xc820cd6638 0xc820cd6648] [0xa97470 0xa975d0 0xa975d0] 0xc82232d500}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-l9sj6] []  0xc82146eb60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82146f1a0 exit status 1 <nil> true [0xc820cd6618 0xc820cd6640 0xc820cd6650] [0xc820cd6618 0xc820cd6640 0xc820cd6650] [0xc820cd6620 0xc820cd6638 0xc820cd6648] [0xa97470 0xa975d0 0xa975d0] 0xc82232d500}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Nov 30 05:41:55.747: Memory usage exceeding limits:
 node gke-jenkins-e2e-default-pool-c3a9374a-c775:
 container "runtime": expected RSS memory (MB) < 314572800; got 535904256
node gke-jenkins-e2e-default-pool-c3a9374a-rrko:
 container "runtime": expected RSS memory (MB) < 314572800; got 528195584
node gke-jenkins-e2e-default-pool-c3a9374a-xr8t:
 container "runtime": expected RSS memory (MB) < 314572800; got 540254208

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc8224e15b0>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-11-30 07:11:13 -0800 PST} FinishedAt:{Time:2016-11-30 07:11:23 -0800 PST} ContainerID:docker://a87123aa66836c0e96caed6c5655bb5904b6b353b4975f02575b87a42393c22d}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-11-30 07:11:13 -0800 PST} FinishedAt:{Time:2016-11-30 07:11:23 -0800 PST} ContainerID:docker://a87123aa66836c0e96caed6c5655bb5904b6b353b4975f02575b87a42393c22d}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070 #34383

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc821b50a40>: {
        s: "service verification failed for: 10.127.244.193\nexpected [service1-9rf7h service1-mpwfp service1-rcfsp]\nreceived [service1-mpwfp service1-rcfsp]",
    }
    service verification failed for: 10.127.244.193
    expected [service1-9rf7h service1-mpwfp service1-rcfsp]
    received [service1-mpwfp service1-rcfsp]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:284
Expected error:
    <*errors.errorString | 0xc820a52dd0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-sm2q0] []  0xc820ddd500  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820dddb80 exit status 1 <nil> true [0xc820d76258 0xc820d762f0 0xc820d76300] [0xc820d76258 0xc820d762f0 0xc820d76300] [0xc820d76260 0xc820d76278 0xc820d762f8] [0xa97470 0xa975d0 0xa975d0] 0xc821530ba0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-sm2q0] []  0xc820ddd500  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820dddb80 exit status 1 <nil> true [0xc820d76258 0xc820d762f0 0xc820d76300] [0xc820d76258 0xc820d762f0 0xc820d76300] [0xc820d76260 0xc820d76278 0xc820d762f8] [0xa97470 0xa975d0 0xa975d0] 0xc821530ba0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:284
Expected error:
    <*errors.errorString | 0xc820dcbf50>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2sqfc] []  0xc820998540  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820998d00 exit status 1 <nil> true [0xc820d76318 0xc820d76350 0xc820d76360] [0xc820d76318 0xc820d76350 0xc820d76360] [0xc820d76320 0xc820d76348 0xc820d76358] [0xa97470 0xa975d0 0xa975d0] 0xc821ebc0c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2sqfc] []  0xc820998540  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820998d00 exit status 1 <nil> true [0xc820d76318 0xc820d76350 0xc820d76360] [0xc820d76318 0xc820d76350 0xc820d76360] [0xc820d76320 0xc820d76348 0xc820d76358] [0xa97470 0xa975d0 0xa975d0] 0xc821ebc0c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:700
Expected error:
    <*errors.errorString | 0xc822d89b50>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ff7wp] []  0xc827db8720  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc827db8d60 exit status 1 <nil> true [0xc820d760d0 0xc820d761d0 0xc820d761e0] [0xc820d760d0 0xc820d761d0 0xc820d761e0] [0xc820d760e0 0xc820d761c8 0xc820d761d8] [0xa97470 0xa975d0 0xa975d0] 0xc827db2360}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ff7wp] []  0xc827db8720  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc827db8d60 exit status 1 <nil> true [0xc820d760d0 0xc820d761d0 0xc820d761e0] [0xc820d760d0 0xc820d761d0 0xc820d761e0] [0xc820d760e0 0xc820d761c8 0xc820d761d8] [0xa97470 0xa975d0 0xa975d0] 0xc827db2360}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:214
Expected error:
    <*errors.errorString | 0xc8220afbc0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ht7j3] []  0xc8216f9000  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8216f9640 exit status 1 <nil> true [0xc8217c0208 0xc8217c0230 0xc8217c02b0] [0xc8217c0208 0xc8217c0230 0xc8217c02b0] [0xc8217c0210 0xc8217c0228 0xc8217c0238] [0xa97470 0xa975d0 0xa975d0] 0xc820bba900}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ht7j3] []  0xc8216f9000  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8216f9640 exit status 1 <nil> true [0xc8217c0208 0xc8217c0230 0xc8217c02b0] [0xc8217c0208 0xc8217c0230 0xc8217c02b0] [0xc8217c0210 0xc8217c0228 0xc8217c0238] [0xa97470 0xa975d0 0xa975d0] 0xc820bba900}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:284
Expected error:
    <*errors.errorString | 0xc8221f3650>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-mq4qv] []  0xc8214f4e60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8214f54a0 exit status 1 <nil> true [0xc820d76228 0xc820d76250 0xc820d76260] [0xc820d76228 0xc820d76250 0xc820d76260] [0xc820d76230 0xc820d76248 0xc820d76258] [0xa97470 0xa975d0 0xa975d0] 0xc821389200}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-mq4qv] []  0xc8214f4e60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8214f54a0 exit status 1 <nil> true [0xc820d76228 0xc820d76250 0xc820d76260] [0xc820d76228 0xc820d76250 0xc820d76260] [0xc820d76230 0xc820d76248 0xc820d76258] [0xa97470 0xa975d0 0xa975d0] 0xc821389200}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820cfe190>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-m735w -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-m735w/services/redis-master\", \"uid\":\"ffa0720f-b704-11e6-b0a5-42010af00023\", \"resourceVersion\":\"27458\", \"creationTimestamp\":\"2016-11-30T13:57:57Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-m735w\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.249.98\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821c42840 exit status 1 <nil> true [0xc8213ea330 0xc8213ea348 0xc8213ea360] [0xc8213ea330 0xc8213ea348 0xc8213ea360] [0xc8213ea340 0xc8213ea358] [0xa975d0 0xa975d0] 0xc820f0a0c0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-m735w/services/redis-master\", \"uid\":\"ffa0720f-b704-11e6-b0a5-42010af00023\", \"resourceVersion\":\"27458\", \"creationTimestamp\":\"2016-11-30T13:57:57Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-m735w\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.249.98\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-m735w -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-m735w/services/redis-master", "uid":"ffa0720f-b704-11e6-b0a5-42010af00023", "resourceVersion":"27458", "creationTimestamp":"2016-11-30T13:57:57Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-m735w"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.249.98", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821c42840 exit status 1 <nil> true [0xc8213ea330 0xc8213ea348 0xc8213ea360] [0xc8213ea330 0xc8213ea348 0xc8213ea360] [0xc8213ea340 0xc8213ea358] [0xa975d0 0xa975d0] 0xc820f0a0c0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-m735w/services/redis-master", "uid":"ffa0720f-b704-11e6-b0a5-42010af00023", "resourceVersion":"27458", "creationTimestamp":"2016-11-30T13:57:57Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-m735w"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.249.98", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741
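
The jsonpath error above is not a template problem: the Service object in the dump came back as type ClusterIP, and a ClusterIP port carries no nodePort field, so {.spec.ports[0].nodePort} has nothing to resolve. A sketch of the two queries against the same service (names taken from the dump; the second only succeeds once the service is actually of type NodePort):

    kubectl --namespace=e2e-tests-kubectl-m735w get service redis-master -o jsonpath='{.spec.type}'
    kubectl --namespace=e2e-tests-kubectl-m735w get service redis-master -o jsonpath='{.spec.ports[0].nodePort}'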

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:228
Expected error:
    <*errors.errorString | 0xc822af2bc0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-x9zwx] []  0xc8218853a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8218859e0 exit status 1 <nil> true [0xc8213ea0e0 0xc8213ea130 0xc8213ea148] [0xc8213ea0e0 0xc8213ea130 0xc8213ea148] [0xc8213ea0f8 0xc8213ea120 0xc8213ea140] [0xa97470 0xa975d0 0xa975d0] 0xc8220d3080}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.49.46 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-x9zwx] []  0xc8218853a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8218859e0 exit status 1 <nil> true [0xc8213ea0e0 0xc8213ea130 0xc8213ea148] [0xc8213ea0e0 0xc8213ea130 0xc8213ea148] [0xc8213ea0f8 0xc8213ea120 0xc8213ea140] [0xa97470 0xa975d0 0xa975d0] 0xc8220d3080}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

@k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 1, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/149/

Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:214
Expected error:
    <*errors.errorString | 0xc8213dbe40>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-4bb69] []  0xc820d57500  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820d57b20 exit status 1 <nil> true [0xc8200c44e8 0xc8200c4548 0xc8200c4558] [0xc8200c44e8 0xc8200c4548 0xc8200c4558] [0xc8200c44f8 0xc8200c4530 0xc8200c4550] [0xa97470 0xa975d0 0xa975d0] 0xc820562480}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-4bb69] []  0xc820d57500  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820d57b20 exit status 1 <nil> true [0xc8200c44e8 0xc8200c4548 0xc8200c4558] [0xc8200c44e8 0xc8200c4548 0xc8200c4558] [0xc8200c44f8 0xc8200c4530 0xc8200c4550] [0xa97470 0xa975d0 0xa975d0] 0xc820562480}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:284
Expected error:
    <*errors.errorString | 0xc821a3caf0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-shbzf] []  0xc822c3c0c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc822c3c740 exit status 1 <nil> true [0xc8200ce890 0xc8200ce8c0 0xc8200ce8d8] [0xc8200ce890 0xc8200ce8c0 0xc8200ce8d8] [0xc8200ce898 0xc8200ce8b8 0xc8200ce8d0] [0xa97470 0xa975d0 0xa975d0] 0xc8214e09c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-shbzf] []  0xc822c3c0c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc822c3c740 exit status 1 <nil> true [0xc8200ce890 0xc8200ce8c0 0xc8200ce8d8] [0xc8200ce890 0xc8200ce8c0 0xc8200ce8d8] [0xc8200ce898 0xc8200ce8b8 0xc8200ce8d0] [0xa97470 0xa975d0 0xa975d0] 0xc8214e09c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070 #34383

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:360
Expected
    <string>: 
to equal
    <string>: 2615384943295043816

Issues about this test specifically: #28010 #28427 #33997

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc8215f5570>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-dj71x -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"resourceVersion\":\"34452\", \"creationTimestamp\":\"2016-11-30T21:51:24Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-dj71x\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-dj71x/services/redis-master\", \"uid\":\"23a0954a-b747-11e6-9bec-42010af0001c\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.243.164\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82165f4a0 exit status 1 <nil> true [0xc8200361d8 0xc820036290 0xc8200362d8] [0xc8200361d8 0xc820036290 0xc8200362d8] [0xc8200361f0 0xc8200362c8] [0xa975d0 0xa975d0] 0xc821510ba0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"resourceVersion\":\"34452\", \"creationTimestamp\":\"2016-11-30T21:51:24Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-dj71x\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-dj71x/services/redis-master\", \"uid\":\"23a0954a-b747-11e6-9bec-42010af0001c\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.243.164\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-dj71x -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"resourceVersion":"34452", "creationTimestamp":"2016-11-30T21:51:24Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-dj71x", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-dj71x/services/redis-master", "uid":"23a0954a-b747-11e6-9bec-42010af0001c"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.243.164", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82165f4a0 exit status 1 <nil> true [0xc8200361d8 0xc820036290 0xc8200362d8] [0xc8200361d8 0xc820036290 0xc8200362d8] [0xc8200361f0 0xc8200362c8] [0xa975d0 0xa975d0] 0xc821510ba0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"resourceVersion":"34452", "creationTimestamp":"2016-11-30T21:51:24Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-dj71x", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-dj71x/services/redis-master", "uid":"23a0954a-b747-11e6-9bec-42010af0001c"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.243.164", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:736
Expected error:
    <*errors.errorString | 0xc8216d8fd0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-49svs] []  0xc820a3d040  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820a3d820 exit status 1 <nil> true [0xc8220302e8 0xc8220303e8 0xc8220303f8] [0xc8220302e8 0xc8220303e8 0xc8220303f8] [0xc8220302f0 0xc8220303e0 0xc8220303f0] [0xa97470 0xa975d0 0xa975d0] 0xc820e07ec0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-49svs] []  0xc820a3d040  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820a3d820 exit status 1 <nil> true [0xc8220302e8 0xc8220303e8 0xc8220303f8] [0xc8220302e8 0xc8220303e8 0xc8220303f8] [0xc8220302f0 0xc8220303e0 0xc8220303f0] [0xa97470 0xa975d0 0xa975d0] 0xc820e07ec0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:284
Expected error:
    <*errors.errorString | 0xc821af0530>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-t845f] []  0xc823390980  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc823391120 exit status 1 <nil> true [0xc822074160 0xc822074188 0xc822074198] [0xc822074160 0xc822074188 0xc822074198] [0xc822074168 0xc822074180 0xc822074190] [0xa97470 0xa975d0 0xa975d0] 0xc8230141e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-t845f] []  0xc823390980  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc823391120 exit status 1 <nil> true [0xc822074160 0xc822074188 0xc822074198] [0xc822074160 0xc822074188 0xc822074198] [0xc822074168 0xc822074180 0xc822074190] [0xa97470 0xa975d0 0xa975d0] 0xc8230141e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:270
Expected error:
    <*errors.errorString | 0xc82192e2e0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-d8h73] []  0xc821913840  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821913e60 exit status 1 <nil> true [0xc820d74170 0xc820d74198 0xc820d741a8] [0xc820d74170 0xc820d74198 0xc820d741a8] [0xc820d74178 0xc820d74190 0xc820d741a0] [0xa97470 0xa975d0 0xa975d0] 0xc821828780}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-d8h73] []  0xc821913840  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821913e60 exit status 1 <nil> true [0xc820d74170 0xc820d74198 0xc820d741a8] [0xc820d74170 0xc820d74198 0xc820d741a8] [0xc820d74178 0xc820d74190 0xc820d741a0] [0xa97470 0xa975d0 0xa975d0] 0xc821828780}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:700
Expected error:
    <*errors.errorString | 0xc8213da8f0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2r246] []  0xc823330de0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc823331780 exit status 1 <nil> true [0xc820d1a030 0xc820d1a058 0xc820d1a068] [0xc820d1a030 0xc820d1a058 0xc820d1a068] [0xc820d1a038 0xc820d1a050 0xc820d1a060] [0xa97470 0xa975d0 0xa975d0] 0xc82142e540}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-2r246] []  0xc823330de0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc823331780 exit status 1 <nil> true [0xc820d1a030 0xc820d1a058 0xc820d1a068] [0xc820d1a030 0xc820d1a058 0xc820d1a068] [0xc820d1a038 0xc820d1a050 0xc820d1a060] [0xa97470 0xa975d0 0xa975d0] 0xc82142e540}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:284
Expected error:
    <*errors.errorString | 0xc8217822d0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-t2rfg] []  0xc8233c0580  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8233c0c00 exit status 1 <nil> true [0xc82140c1a8 0xc82140c1d0 0xc82140c1e0] [0xc82140c1a8 0xc82140c1d0 0xc82140c1e0] [0xc82140c1b0 0xc82140c1c8 0xc82140c1d8] [0xa97470 0xa975d0 0xa975d0] 0xc821510420}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-t2rfg] []  0xc8233c0580  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8233c0c00 exit status 1 <nil> true [0xc82140c1a8 0xc82140c1d0 0xc82140c1e0] [0xc82140c1a8 0xc82140c1d0 0xc82140c1e0] [0xc82140c1b0 0xc82140c1c8 0xc82140c1d8] [0xa97470 0xa975d0 0xa975d0] 0xc821510420}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:284
Expected error:
    <*errors.errorString | 0xc821800830>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bvc3s] []  0xc8233b8f00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8233b9580 exit status 1 <nil> true [0xc820d1a670 0xc820d1a698 0xc820d1a6a8] [0xc820d1a670 0xc820d1a698 0xc820d1a6a8] [0xc820d1a678 0xc820d1a690 0xc820d1a6a0] [0xa97470 0xa975d0 0xa975d0] 0xc821381b00}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bvc3s] []  0xc8233b8f00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8233b9580 exit status 1 <nil> true [0xc820d1a670 0xc820d1a698 0xc820d1a6a8] [0xc820d1a670 0xc820d1a698 0xc820d1a6a8] [0xc820d1a678 0xc820d1a690 0xc820d1a6a0] [0xa97470 0xa975d0 0xa975d0] 0xc821381b00}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:228
Expected error:
    <*errors.errorString | 0xc820f745f0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-p9xr4] []  0xc820524be0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820525200 exit status 1 <nil> true [0xc820aee000 0xc820aee030 0xc820aee050] [0xc820aee000 0xc820aee030 0xc820aee050] [0xc820aee008 0xc820aee028 0xc820aee048] [0xa97470 0xa975d0 0xa975d0] 0xc82101ad20}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.116 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-p9xr4] []  0xc820524be0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820525200 exit status 1 <nil> true [0xc820aee000 0xc820aee030 0xc820aee050] [0xc820aee000 0xc820aee030 0xc820aee050] [0xc820aee008 0xc820aee028 0xc820aee048] [0xa97470 0xa975d0 0xa975d0] 0xc82101ad20}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc822b7c990>: {
        s: "error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Nov 30 09:26:20.236: Memory usage exceeding limits:
 node gke-jenkins-e2e-default-pool-73759a30-z5tr:
 container "runtime": expected RSS memory (MB) < 314572800; got 530051072
node gke-jenkins-e2e-default-pool-73759a30-5pqf:
 container "runtime": expected RSS memory (MB) < 314572800; got 528961536
node gke-jenkins-e2e-default-pool-73759a30-jl6z:
 container "runtime": expected RSS memory (MB) < 314572800; got 519925760

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399
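
For reading the figures above: despite the "(MB)" label, both the limit and the measured values appear to be raw bytes, so the threshold 314572800 works out to 300 MiB (314572800 / 1024 / 1024 = 300), while the reported "runtime" container RSS of roughly 520-530 million bytes is about 500 MiB per node.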

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/150/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:281
Expected
    <bool>: false
to be true

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Nov 30 17:04:19.195: Memory usage exceeding limits:
 node gke-jenkins-e2e-default-pool-9de04e4b-lohh:
 container "runtime": expected RSS memory (MB) < 314572800; got 521113600
node gke-jenkins-e2e-default-pool-9de04e4b-soay:
 container "runtime": expected RSS memory (MB) < 314572800; got 517054464
node gke-jenkins-e2e-default-pool-9de04e4b-svvi:
 container "runtime": expected RSS memory (MB) < 314572800; got 523649024

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070 #34383

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc820a96bf0>: {
        s: "error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc823474440>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.137.156 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-rtsb9 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-rtsb9/services/redis-master\", \"uid\":\"1184db75-b77b-11e6-a87b-42010af0002d\", \"resourceVersion\":\"30304\", \"creationTimestamp\":\"2016-12-01T04:03:08Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-rtsb9\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.243.170\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8217dd7e0 exit status 1 <nil> true [0xc820aae1b0 0xc820aae1d0 0xc820aae1e8] [0xc820aae1b0 0xc820aae1d0 0xc820aae1e8] [0xc820aae1c8 0xc820aae1e0] [0xa975d0 0xa975d0] 0xc82351f740}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-rtsb9/services/redis-master\", \"uid\":\"1184db75-b77b-11e6-a87b-42010af0002d\", \"resourceVersion\":\"30304\", \"creationTimestamp\":\"2016-12-01T04:03:08Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-rtsb9\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.243.170\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.137.156 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-rtsb9 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-rtsb9/services/redis-master", "uid":"1184db75-b77b-11e6-a87b-42010af0002d", "resourceVersion":"30304", "creationTimestamp":"2016-12-01T04:03:08Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-rtsb9"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.243.170", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8217dd7e0 exit status 1 <nil> true [0xc820aae1b0 0xc820aae1d0 0xc820aae1e8] [0xc820aae1b0 0xc820aae1d0 0xc820aae1e8] [0xc820aae1c8 0xc820aae1e0] [0xa975d0 0xa975d0] 0xc82351f740}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-rtsb9/services/redis-master", "uid":"1184db75-b77b-11e6-a87b-42010af0002d", "resourceVersion":"30304", "creationTimestamp":"2016-12-01T04:03:08Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-rtsb9"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.243.170", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741
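
Context on the jsonpath failure above: the dumped service is type ClusterIP, so its single ports entry carries port/targetPort/protocol but no nodePort key at all, and `{.spec.ports[0].nodePort}` has nothing to resolve. A minimal, self-contained sketch of that lookup (trimmed JSON copied from the dump; not the e2e test's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed-down copy of the service object printed by the failing test.
// The single ports entry is exactly what a ClusterIP service carries:
// port/targetPort/protocol, with no nodePort field.
const serviceJSON = `{
  "kind": "Service",
  "spec": {
    "type": "ClusterIP",
    "ports": [
      {"protocol": "TCP", "port": 6379, "targetPort": "redis-server"}
    ]
  }
}`

func main() {
	var svc struct {
		Spec struct {
			Type  string                   `json:"type"`
			Ports []map[string]interface{} `json:"ports"`
		} `json:"spec"`
	}
	if err := json.Unmarshal([]byte(serviceJSON), &svc); err != nil {
		panic(err)
	}
	// Mirrors the jsonpath expression {.spec.ports[0].nodePort}: the key
	// simply is not present on a ClusterIP service, so the lookup fails.
	if v, ok := svc.Spec.Ports[0]["nodePort"]; ok {
		fmt.Println("nodePort:", v)
	} else {
		fmt.Printf("nodePort is not found (service type is %q)\n", svc.Spec.Type)
	}
}
```

Presumably the field only appears once the service is switched to type NodePort (or LoadBalancer), which is what the apply/nodePort test expects to have happened by this point.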

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954
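
"timed out waiting for the condition" in the init-container failures above is the generic message a poll-until-true helper returns once its deadline expires, which is why it carries no detail about which pod state was never reached. A rough sketch of that pattern using only the standard library (the real suite uses its own wait helpers):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errTimedOut mirrors the wording seen in the failures above.
var errTimedOut = errors.New("timed out waiting for the condition")

// pollUntil keeps calling cond every interval until it returns true or the
// timeout elapses; on timeout only the generic error comes back, so the
// caller never learns which condition was still unmet.
func pollUntil(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		time.Sleep(interval)
	}
	return errTimedOut
}

func main() {
	// A condition that never becomes true, as when the pod under test
	// never reaches the expected init-container state.
	err := pollUntil(10*time.Millisecond, 100*time.Millisecond, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err) // timed out waiting for the condition
}
```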

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc8218f3030>: {
        s: "failed to wait for pods responding: pod with UID ea4e78d3-b773-11e6-a87b-42010af0002d is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-pb6qf/pods 24801} [{{ } {my-hostname-delete-node-kbl3c my-hostname-delete-node- e2e-tests-resize-nodes-pb6qf /api/v1/namespaces/e2e-tests-resize-nodes-pb6qf/pods/my-hostname-delete-node-kbl3c 1dfd057f-b774-11e6-a87b-42010af0002d 24661 0 {2016-11-30 19:13:22 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-pb6qf\",\"name\":\"my-hostname-delete-node\",\"uid\":\"ea4cf335-b773-11e6-a87b-42010af0002d\",\"apiVersion\":\"v1\",\"resourceVersion\":\"24559\"}}\n] [{v1 ReplicationController my-hostname-delete-node ea4cf335-b773-11e6-a87b-42010af0002d 0xc8212ff487}] []} {[{default-token-d8x64 {<nil> <nil> <nil> <nil> <nil> 0xc821cb7e60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-d8x64 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212ff580 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-9de04e4b-soay 0xc824477200 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:13:22 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:13:34 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:13:22 -0800 PST}  }]   10.240.0.4 10.124.1.7 2016-11-30T19:13:22-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c34120 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://4099249c69b39e356dd0efddf4695a3847835002092446501585604c5b1d924a}]}} {{ } {my-hostname-delete-node-s2nf2 my-hostname-delete-node- e2e-tests-resize-nodes-pb6qf /api/v1/namespaces/e2e-tests-resize-nodes-pb6qf/pods/my-hostname-delete-node-s2nf2 ea500cdf-b773-11e6-a87b-42010af0002d 24490 0 {2016-11-30 19:11:56 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-pb6qf\",\"name\":\"my-hostname-delete-node\",\"uid\":\"ea4cf335-b773-11e6-a87b-42010af0002d\",\"apiVersion\":\"v1\",\"resourceVersion\":\"24476\"}}\n] [{v1 ReplicationController my-hostname-delete-node ea4cf335-b773-11e6-a87b-42010af0002d 0xc8212ff817}] []} {[{default-token-d8x64 {<nil> <nil> <nil> <nil> <nil> 0xc821cb7ec0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-d8x64 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212ff910 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-9de04e4b-svvi 0xc8244772c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:11:56 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 
19:11:57 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:11:56 -0800 PST}  }]   10.240.0.3 10.124.2.3 2016-11-30T19:11:56-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c34140 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://516d0b9c7f050d4f1b086785f4cbf71c8ce32d46c6eb49329d7963031020a76f}]}} {{ } {my-hostname-delete-node-wbbdr my-hostname-delete-node- e2e-tests-resize-nodes-pb6qf /api/v1/namespaces/e2e-tests-resize-nodes-pb6qf/pods/my-hostname-delete-node-wbbdr ea4e8a5d-b773-11e6-a87b-42010af0002d 24492 0 {2016-11-30 19:11:56 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-pb6qf\",\"name\":\"my-hostname-delete-node\",\"uid\":\"ea4cf335-b773-11e6-a87b-42010af0002d\",\"apiVersion\":\"v1\",\"resourceVersion\":\"24476\"}}\n] [{v1 ReplicationController my-hostname-delete-node ea4cf335-b773-11e6-a87b-42010af0002d 0xc8212ffba7}] []} {[{default-token-d8x64 {<nil> <nil> <nil> <nil> <nil> 0xc821cb7f20 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-d8x64 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212ffca0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-9de04e4b-soay 0xc824477380 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:11:56 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:11:57 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:11:56 -0800 PST}  }]   10.240.0.4 10.124.1.3 2016-11-30T19:11:56-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c34160 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3751d76e3174a470d8c305891d4da11125a6c7270cc66457ddd046c16831e219}]}}]}",
    }
    failed to wait for pods responding: pod with UID ea4e78d3-b773-11e6-a87b-42010af0002d is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-pb6qf/pods 24801} [{{ } {my-hostname-delete-node-kbl3c my-hostname-delete-node- e2e-tests-resize-nodes-pb6qf /api/v1/namespaces/e2e-tests-resize-nodes-pb6qf/pods/my-hostname-delete-node-kbl3c 1dfd057f-b774-11e6-a87b-42010af0002d 24661 0 {2016-11-30 19:13:22 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-pb6qf","name":"my-hostname-delete-node","uid":"ea4cf335-b773-11e6-a87b-42010af0002d","apiVersion":"v1","resourceVersion":"24559"}}
    ] [{v1 ReplicationController my-hostname-delete-node ea4cf335-b773-11e6-a87b-42010af0002d 0xc8212ff487}] []} {[{default-token-d8x64 {<nil> <nil> <nil> <nil> <nil> 0xc821cb7e60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-d8x64 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212ff580 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-9de04e4b-soay 0xc824477200 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:13:22 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:13:34 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:13:22 -0800 PST}  }]   10.240.0.4 10.124.1.7 2016-11-30T19:13:22-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c34120 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://4099249c69b39e356dd0efddf4695a3847835002092446501585604c5b1d924a}]}} {{ } {my-hostname-delete-node-s2nf2 my-hostname-delete-node- e2e-tests-resize-nodes-pb6qf /api/v1/namespaces/e2e-tests-resize-nodes-pb6qf/pods/my-hostname-delete-node-s2nf2 ea500cdf-b773-11e6-a87b-42010af0002d 24490 0 {2016-11-30 19:11:56 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-pb6qf","name":"my-hostname-delete-node","uid":"ea4cf335-b773-11e6-a87b-42010af0002d","apiVersion":"v1","resourceVersion":"24476"}}
    ] [{v1 ReplicationController my-hostname-delete-node ea4cf335-b773-11e6-a87b-42010af0002d 0xc8212ff817}] []} {[{default-token-d8x64 {<nil> <nil> <nil> <nil> <nil> 0xc821cb7ec0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-d8x64 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212ff910 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-9de04e4b-svvi 0xc8244772c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:11:56 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:11:57 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:11:56 -0800 PST}  }]   10.240.0.3 10.124.2.3 2016-11-30T19:11:56-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c34140 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://516d0b9c7f050d4f1b086785f4cbf71c8ce32d46c6eb49329d7963031020a76f}]}} {{ } {my-hostname-delete-node-wbbdr my-hostname-delete-node- e2e-tests-resize-nodes-pb6qf /api/v1/namespaces/e2e-tests-resize-nodes-pb6qf/pods/my-hostname-delete-node-wbbdr ea4e8a5d-b773-11e6-a87b-42010af0002d 24492 0 {2016-11-30 19:11:56 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-pb6qf","name":"my-hostname-delete-node","uid":"ea4cf335-b773-11e6-a87b-42010af0002d","apiVersion":"v1","resourceVersion":"24476"}}
    ] [{v1 ReplicationController my-hostname-delete-node ea4cf335-b773-11e6-a87b-42010af0002d 0xc8212ffba7}] []} {[{default-token-d8x64 {<nil> <nil> <nil> <nil> <nil> 0xc821cb7f20 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-d8x64 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212ffca0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-9de04e4b-soay 0xc824477380 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:11:56 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:11:57 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-11-30 19:11:56 -0800 PST}  }]   10.240.0.4 10.124.1.3 2016-11-30T19:11:56-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c34160 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3751d76e3174a470d8c305891d4da11125a6c7270cc66457ddd046c16831e219}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
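
The resize-nodes complaint above boils down to a set comparison: every pod UID recorded before the node deletion must still be present afterwards, and ea4e78d3-... is the one that vanished (replaced by the freshly created my-hostname-delete-node-kbl3c). An illustrative sketch of that check, with UIDs truncated from the dump above (not the suite's actual helper):

```go
package main

import "fmt"

// missingUIDs reports which of the originally observed pod UIDs are absent
// from the pod list fetched after the resize; any hit produces the
// "no longer a member of the replica set" style failure above.
func missingUIDs(before, after []string) []string {
	current := make(map[string]bool, len(after))
	for _, uid := range after {
		current[uid] = true
	}
	var missing []string
	for _, uid := range before {
		if !current[uid] {
			missing = append(missing, uid)
		}
	}
	return missing
}

func main() {
	// UIDs truncated from the dump above; the second "before" entry is the
	// pod that disappeared and was replaced by a pod with a new UID.
	before := []string{"ea4e8a5d-...", "ea4e78d3-...", "ea500cdf-..."}
	after := []string{"ea4e8a5d-...", "1dfd057f-...", "ea500cdf-..."}
	fmt.Println(missingUIDs(before, after)) // [ea4e78d3-...]
}
```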

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc821152740>: {
        s: "service verification failed for: 10.127.255.46\nexpected [service1-20js7 service1-9ftr5 service1-l36r2]\nreceived [service1-9ftr5 service1-l36r2]",
    }
    service verification failed for: 10.127.255.46
    expected [service1-20js7 service1-9ftr5 service1-l36r2]
    received [service1-9ftr5 service1-l36r2]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
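
The service verification failures in this run all have the same shape: the checker queries the service VIP repeatedly, collects the distinct serve_hostname replies, and compares them against the expected pod names, so a single endpoint that was never (re)programmed after the apiserver restart shows up as one missing name. A rough sketch of that idea, with a hypothetical URL and port (the VIP in this failure was 10.127.255.46; the service port is not shown in the log):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sort"
	"time"
)

// collectHostnames hits the service VIP a number of times and returns the
// distinct response bodies (serve_hostname replies with the pod name).
// Endpoints that are unreachable simply never appear in the result, which
// yields the "expected [...] received [...]" mismatch seen above.
func collectHostnames(url string, attempts int) []string {
	seen := map[string]bool{}
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err != nil {
			continue // missing/broken endpoint: no reply recorded
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		seen[string(body)] = true
	}
	var hostnames []string
	for h := range seen {
		hostnames = append(hostnames, h)
	}
	sort.Strings(hostnames)
	return hostnames
}

func main() {
	// Hypothetical service URL; outside the test cluster this will simply
	// time out on every attempt and print an empty list.
	fmt.Println(collectHostnames("http://10.127.255.46:80", 20))
}
```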

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/151/

Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc8217c6060>: {
        s: "service verification failed for: 10.127.244.13\nexpected [service3-270h9 service3-5g81c service3-8c6c6]\nreceived [service3-270h9 service3-8c6c6]",
    }
    service verification failed for: 10.127.244.13
    expected [service3-270h9 service3-5g81c service3-8c6c6]
    received [service3-270h9 service3-8c6c6]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc821b4b290>: {
        s: "error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200ec0c0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ec0c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070 #34383

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200ec0c0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Nov 30 22:53:10.009: Memory usage exceeding limits:
 node gke-jenkins-e2e-default-pool-bd25961a-5mrb:
 container "runtime": expected RSS memory (MB) < 314572800; got 521412608
node gke-jenkins-e2e-default-pool-bd25961a-bjx1:
 container "runtime": expected RSS memory (MB) < 314572800; got 522887168
node gke-jenkins-e2e-default-pool-bd25961a-1725:
 container "runtime": expected RSS memory (MB) < 314572800; got 521977856

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399
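
A unit note on the kubelet_perf numbers above: despite the "(MB)" label, both sides of the comparison appear to be raw byte counts. The limit 314572800 is exactly 300 MiB, and the observed values work out to roughly 497 to 499 MiB, i.e. about 65% over the limit. A trivial conversion sketch using the values from this run:

```go
package main

import "fmt"

func main() {
	const limit = 314572800 // bytes: 300 * 1024 * 1024, i.e. 300 MiB
	observed := []int64{521412608, 522887168, 521977856} // "runtime" RSS from the three nodes
	for _, rss := range observed {
		mib := float64(rss) / (1024 * 1024)
		fmt.Printf("%d bytes = %.1f MiB (%.0f%% of the 300 MiB limit)\n",
			rss, mib, 100*float64(rss)/limit)
	}
}
```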

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc8203a67d0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.158.246 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-v0hpk -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-v0hpk\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-v0hpk/services/redis-master\", \"uid\":\"a5763b15-b78e-11e6-a6ee-42010af0001b\", \"resourceVersion\":\"1341\", \"creationTimestamp\":\"2016-12-01T06:23:17Z\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.127.245.104\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8205ca7e0 exit status 1 <nil> true [0xc820f6a330 0xc820f6a348 0xc820f6a360] [0xc820f6a330 0xc820f6a348 0xc820f6a360] [0xc820f6a340 0xc820f6a358] [0xa975d0 0xa975d0] 0xc820d42240}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-v0hpk\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-v0hpk/services/redis-master\", \"uid\":\"a5763b15-b78e-11e6-a6ee-42010af0001b\", \"resourceVersion\":\"1341\", \"creationTimestamp\":\"2016-12-01T06:23:17Z\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.127.245.104\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.158.246 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-v0hpk -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-v0hpk", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-v0hpk/services/redis-master", "uid":"a5763b15-b78e-11e6-a6ee-42010af0001b", "resourceVersion":"1341", "creationTimestamp":"2016-12-01T06:23:17Z"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.127.245.104", "type":"ClusterIP", "sessionAffinity":"None"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8205ca7e0 exit status 1 <nil> true [0xc820f6a330 0xc820f6a348 0xc820f6a360] [0xc820f6a330 0xc820f6a348 0xc820f6a360] [0xc820f6a340 0xc820f6a358] [0xa975d0 0xa975d0] 0xc820d42240}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-v0hpk", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-v0hpk/services/redis-master", "uid":"a5763b15-b78e-11e6-a6ee-42010af0001b", "resourceVersion":"1341", "creationTimestamp":"2016-12-01T06:23:17Z"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.127.245.104", "type":"ClusterIP", "sessionAffinity":"None"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/152/

Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200bd060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc8217a3550>: {
        s: "service verification failed for: 10.127.253.199\nexpected [service3-1hslh service3-j7bqc service3-xz3rx]\nreceived [service3-1hslh service3-xz3rx]",
    }
    service verification failed for: 10.127.253.199
    expected [service3-1hslh service3-j7bqc service3-xz3rx]
    received [service3-1hslh service3-xz3rx]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  1 09:23:35.165: Memory usage exceeding limits:
 node gke-jenkins-e2e-default-pool-b0cbed87-1onm:
 container "runtime": expected RSS memory (MB) < 314572800; got 527663104
node gke-jenkins-e2e-default-pool-b0cbed87-65he:
 container "runtime": expected RSS memory (MB) < 314572800; got 527720448
node gke-jenkins-e2e-default-pool-b0cbed87-l1bk:
 container "runtime": expected RSS memory (MB) < 314572800; got 533147648

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/privileged.go:67
Expected error:
    <*errors.errorString | 0xc821c76170>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.158.246 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-e2e-privilegedpod-197cq hostexec -- /bin/sh -c curl -q 'http://10.124.2.3:8080/shell?shellCommand=ip+link+add+dummy1+type+dummy'] []  <nil>    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:08 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:10 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:11 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:12 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:13 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:14 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:15 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:16 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:17 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:18 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:19 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:20 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:21 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:22 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:23 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:24 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:25 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:26 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:27 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:28 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:29 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:30 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:31 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:32 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:33 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:34 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:35 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:36 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:37 --:--:--  
   0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:38 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:39 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:40 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:41 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:42 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:43 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:44 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:45 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:46 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:47 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:48 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:49 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:50 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:51 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:52 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:53 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:54 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:55 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:56 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:57 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:58 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:59 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:00 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:01 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:02 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:03 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:04 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:05 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:06 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:07 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:08 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:09 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:10 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:11 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:12 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:13 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:14 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:15 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:16 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:17 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:18 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:19 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:20 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:21 --:--:--     0\r  0     0    0     0    0    
 0      0      0 --:--:--  0:01:22 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:23 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:24 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:25 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:26 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:27 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:28 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:29 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:30 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:31 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:32 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:33 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:34 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:35 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:36 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:37 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:38 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:39 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:40 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:41 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:42 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:43 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:44 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:45 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:46 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:47 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:48 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:49 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:50 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:51 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:52 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:53 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:54 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:55 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:56 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:57 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:58 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:59 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:00 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:01 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:02 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:03 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:04 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:05 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:06 
--:--:--     0curl: (7) Failed to connect to 10.124.2.3 port 8080: Operation timed out\nerror: error executing remote command: error executing command in container: Error executing in Docker Container: 7\n [] <nil> 0xc820a3d520 exit status 1 <nil> true [0xc8200c4fa8 0xc8200c4fc0 0xc8200c4fd8] [0xc8200c4fa8 0xc8200c4fc0 0xc8200c4fd8] [0xc8200c4fb8 0xc8200c4fd0] [0xa975d0 0xa975d0] 0xc821473b00}:\nCommand stdout:\n\nstderr:\n  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:08 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:10 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:11 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:12 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:13 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:14 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:15 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:16 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:17 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:18 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:19 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:20 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:21 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:22 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:23 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:24 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:25 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:26 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:27 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:28 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:29 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:30 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:31 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:32 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:33 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:34 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:35 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:36 --:--:--     0\r  0   
  0    0     0    0     0      0      0 --:--:--  0:00:37 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:38 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:39 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:40 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:41 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:42 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:43 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:44 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:45 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:46 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:47 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:48 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:49 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:50 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:51 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:52 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:53 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:54 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:55 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:56 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:57 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:58 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:59 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:00 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:01 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:02 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:03 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:04 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:05 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:06 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:07 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:08 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:09 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:10 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:11 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:12 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:13 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:14 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:15 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:16 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:17 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:18 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:19 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:20 --:--:--     0\r  0     0    0     0    0     0      0   
   0 --:--:--  0:01:21 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:22 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:23 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:24 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:25 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:26 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:27 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:28 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:29 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:30 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:31 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:32 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:33 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:34 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:35 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:36 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:37 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:38 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:39 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:40 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:41 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:42 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:43 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:44 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:45 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:46 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:47 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:48 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:49 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:50 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:51 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:52 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:53 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:54 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:55 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:56 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:57 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:58 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:59 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:00 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:01 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:02 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:03 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:04 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:05 --:--:--    
 0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:06 --:--:--     0curl: (7) Failed to connect to 10.124.2.3 port 8080: Operation timed out\nerror: error executing remote command: error executing command in container: Error executing in Docker Container: 7\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.158.246 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-e2e-privilegedpod-197cq hostexec -- /bin/sh -c curl -q 'http://10.124.2.3:8080/shell?shellCommand=ip+link+add+dummy1+type+dummy'] []  <nil>    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  [curl progress-meter lines for 0:00:01 through 0:02:05 elided: one line per second, each reporting 0 bytes transferred at 0 speed]
  0     0    0     0    0     0      0      0 --:--:--  0:02:06 --:--:--     0curl: (7) Failed to connect to 10.124.2.3 port 8080: Operation timed out
    error: error executing remote command: error executing command in container: Error executing in Docker Container: 7
     [] <nil> 0xc820a3d520 exit status 1 <nil> true [0xc8200c4fa8 0xc8200c4fc0 0xc8200c4fd8] [0xc8200c4fa8 0xc8200c4fc0 0xc8200c4fd8] [0xc8200c4fb8 0xc8200c4fd0] [0xa975d0 0xa975d0] 0xc821473b00}:
    Command stdout:
    
    stderr:
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  [curl progress meter repeats once per second with 0 bytes transferred until the elapsed time reaches 0:02:06]
  0     0    0     0    0     0      0      0 --:--:--  0:02:06 --:--:--     0curl: (7) Failed to connect to 10.124.2.3 port 8080: Operation timed out
    error: error executing remote command: error executing command in container: Error executing in Docker Container: 7
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc82123d980>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-01 11:00:34 -0800 PST} FinishedAt:{Time:2016-12-01 11:00:44 -0800 PST} ContainerID:docker://4db20b9d259d7a79031d4c4fcad1b17f53d95486eeea307e4d133bf52c21e413}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-01 11:00:34 -0800 PST} FinishedAt:{Time:2016-12-01 11:00:44 -0800 PST} ContainerID:docker://4db20b9d259d7a79031d4c4fcad1b17f53d95486eeea307e4d133bf52c21e413}
not to have occurred

Issues about this test specifically: #30131 #31402
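
The connectivity failures above (the curl exit code 7 and the failed different-node-wget pod) share one symptom: an HTTP request from one pod to another pod's IP on port 8080 never connects. A minimal Go sketch of that kind of probe, using the target address from the log — the function name and timeout are illustrative, not the e2e suite's code:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probePod issues a bounded HTTP GET against a pod IP:port. curl's exit
    // code 7 ("Failed to connect ... Operation timed out") in the log above
    // corresponds to the error branch here: the connection never completes.
    func probePod(ip string, port int, timeout time.Duration) error {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(fmt.Sprintf("http://%s:%d/", ip, port))
        if err != nil {
            return fmt.Errorf("pod endpoint unreachable: %v", err)
        }
        defer resp.Body.Close()
        fmt.Println("reachable, status", resp.Status)
        return nil
    }

    func main() {
        // Target taken from the failure log; must be run from inside the cluster network.
        if err := probePod("10.124.2.3", 8080, 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }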

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc82079cf90>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.158.246 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-s16sr -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-01T19:22:02Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-s16sr\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-s16sr/services/redis-master\", \"uid\":\"6fd1f924-b7fb-11e6-afa1-42010af00028\", \"resourceVersion\":\"46589\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.247.26\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8239d5e60 exit status 1 <nil> true [0xc821feede8 0xc821feee00 0xc821feee18] [0xc821feede8 0xc821feee00 0xc821feee18] [0xc821feedf8 0xc821feee10] [0xa975d0 0xa975d0] 0xc820db0960}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-01T19:22:02Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-s16sr\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-s16sr/services/redis-master\", \"uid\":\"6fd1f924-b7fb-11e6-afa1-42010af00028\", \"resourceVersion\":\"46589\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.127.247.26\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.158.246 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-s16sr -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-01T19:22:02Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-s16sr", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-s16sr/services/redis-master", "uid":"6fd1f924-b7fb-11e6-afa1-42010af00028", "resourceVersion":"46589"}, "spec":map[string]interface {}{"clusterIP":"10.127.247.26", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8239d5e60 exit status 1 <nil> true [0xc821feede8 0xc821feee00 0xc821feee18] [0xc821feede8 0xc821feee00 0xc821feee18] [0xc821feedf8 0xc821feee10] [0xa975d0 0xa975d0] 0xc820db0960}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-01T19:22:02Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-s16sr", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-s16sr/services/redis-master", "uid":"6fd1f924-b7fb-11e6-afa1-42010af00028", "resourceVersion":"46589"}, "spec":map[string]interface {}{"clusterIP":"10.127.247.26", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
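
The jsonpath error above is not a templating bug: the Service in the dump is of type ClusterIP, so .spec.ports[0].nodePort genuinely has no value. A minimal client-go sketch of the equivalent check — the service name and namespace come from the log; the kubeconfig path and the use of a current client-go API are assumptions:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from a local kubeconfig (path is an assumption).
        config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Fetch the Service the test queries (names taken from the log above).
        svc, err := clientset.CoreV1().Services("e2e-tests-kubectl-s16sr").
            Get(context.TODO(), "redis-master", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // nodePort is only populated for NodePort (and LoadBalancer) services;
        // for a ClusterIP service, as in the dump above, it is absent.
        if svc.Spec.Type != corev1.ServiceTypeNodePort {
            fmt.Printf("service is %s, no nodePort to reuse\n", svc.Spec.Type)
            return
        }
        fmt.Printf("nodePort: %d\n", svc.Spec.Ports[0].NodePort)
    }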

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200bd060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070 #34383

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200bd060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954
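
The init-container failures above all exercise the same mechanism: app containers must not start until every init container has succeeded, and with restartPolicy Always a failing init container is retried indefinitely. A small sketch of the kind of pod spec involved, expressed with the Go API types — names and images are illustrative, not the pods the suite creates:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // A pod whose init container always fails: the kubelet keeps retrying
        // the init container and the "app" container is never started.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways,
                InitContainers: []corev1.Container{
                    {Name: "init", Image: "busybox", Command: []string{"/bin/false"}},
                },
                Containers: []corev1.Container{
                    {Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
                },
            },
        }
        fmt.Printf("pod %s has %d init container(s)\n", pod.Name, len(pod.Spec.InitContainers))
    }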

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200bd060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177
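
Both "should fail a job" failures (the v1 and batch variants) are timeouts while waiting for the job to be marked failed. A hedged sketch of that kind of wait with client-go and wait.PollImmediate — the exact condition the e2e suite polls for may differ:

    package main

    import (
        "context"
        "fmt"
        "time"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForJobFailure polls until the Job carries a JobFailed condition or
    // the timeout expires; on expiry, wait returns the literal error
    // "timed out waiting for the condition" seen in the failures above.
    func waitForJobFailure(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            job, err := cs.BatchV1().Jobs(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range job.Status.Conditions {
                if c.Type == batchv1.JobFailed && c.Status == corev1.ConditionTrue {
                    return true, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Job name and namespace are illustrative.
        fmt.Println(waitForJobFailure(cs, "default", "example-job", 2*time.Minute))
    }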

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc82107a010>: {
        s: "service verification failed for: 10.127.243.202\nexpected [service1-csst7 service1-q5sdn service1-rq1fd]\nreceived [service1-q5sdn service1-rq1fd]",
    }
    service verification failed for: 10.127.243.202
    expected [service1-csst7 service1-q5sdn service1-rq1fd]
    received [service1-q5sdn service1-rq1fd]
not to have occurred

Issues about this test specifically: #29514
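
The kube-proxy restart failure above means requests to the service VIP only ever reached two of the three expected backends. One way to see whether all three pods are still registered behind the service is to inspect its Endpoints object; a minimal client-go sketch, with namespace and service name as illustrative placeholders:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // List the ready addresses behind the service; a missing backend here
        // points at the endpoint controller, a present one at kube-proxy rules.
        ep, err := cs.CoreV1().Endpoints("default").Get(context.TODO(), "service1", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, subset := range ep.Subsets {
            for _, addr := range subset.Addresses {
                name := ""
                if addr.TargetRef != nil {
                    name = addr.TargetRef.Name
                }
                fmt.Printf("ready backend: %s (%s)\n", addr.IP, name)
            }
        }
    }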

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc821a1ea30>: {
        s: "error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26509 #26834 #29780 #35355
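
The rollover failure is another "timed out waiting for the condition" while polling deployment status. A small sketch of the sort of status check such a wait repeats — the suite's exact expectation may differ, so treat this as an assumption about what "status to match expectation" means:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
    )

    // deploymentRolledOver reports whether a Deployment's status has caught up
    // with its spec: the controller has observed the latest generation and all
    // replicas are updated and available.
    func deploymentRolledOver(d *appsv1.Deployment) bool {
        if d.Spec.Replicas == nil {
            return false
        }
        want := *d.Spec.Replicas
        return d.Status.ObservedGeneration >= d.Generation &&
            d.Status.UpdatedReplicas == want &&
            d.Status.AvailableReplicas == want
    }

    func main() {
        replicas := int32(3)
        d := &appsv1.Deployment{}
        d.Spec.Replicas = &replicas
        d.Generation = 2
        d.Status.ObservedGeneration = 2
        d.Status.UpdatedReplicas = 3
        d.Status.AvailableReplicas = 3
        fmt.Println("rolled over:", deploymentRolledOver(d)) // prints: rolled over: true
    }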
