
kubernetes-e2e-gke-container_vm-1.4-container_vm-1.5-upgrade-cluster: broken test run #37742

Closed
k8s-github-robot opened this issue Dec 1, 2016 · 2 comments
Labels: area/test-infra · kind/flake · priority/backlog

Comments

@k8s-github-robot

Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-container_vm-1.5-upgrade-cluster/478/

Run so broken it didn't make JUnit output!

k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 1, 2016
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-container_vm-1.5-upgrade-cluster/480/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-t55vr] []  0xc8215a6680  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8215a6ca0 exit status 1 <nil> true [0xc8200c2ea0 0xc8200c2ed0 0xc8200c2ee0] [0xc8200c2ea0 0xc8200c2ed0 0xc8200c2ee0] [0xc8200c2ea8 0xc8200c2ec8 0xc8200c2ed8] [0xafa5c0 0xafa720 0xafa720] 0xc820b967e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-t55vr] []  0xc8215a6680  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8215a6ca0 exit status 1 <nil> true [0xc8200c2ea0 0xc8200c2ed0 0xc8200c2ee0] [0xc8200c2ea0 0xc8200c2ed0 0xc8200c2ee0] [0xc8200c2ea8 0xc8200c2ec8 0xc8200c2ed8] [0xafa5c0 0xafa720 0xafa720] 0xc820b967e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797
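Most of the kubectl failures in this run share one root cause: the 1.4-era harness invokes the skewed 1.5 kubectl with `--grace-period=0` but without `--force`, which the newer kubectl rejects (the repeated stderr above states the rule). A minimal sketch of the argv-building logic, using a hypothetical `build_delete_args` helper rather than actual test-infra code:

```python
# Illustrative sketch only: build_delete_args is a hypothetical helper,
# not code from the e2e framework. It models the rule stated in the
# error message: kubectl >= 1.5 rejects --grace-period=0 unless
# --force is also passed, while the 1.4-era harness omits --force.

def build_delete_args(namespace, grace_period=None, force=False):
    args = ["delete", "-f", "-", "--namespace=" + namespace]
    if grace_period is not None:
        args.append("--grace-period=%d" % grace_period)
        if grace_period == 0:
            # Required by the skewed kubectl, per the stderr above.
            force = True
    if force:
        args.append("--force")
    return args
```

With `grace_period=0` the helper appends `--force`, matching the requirement in the error message; without that flag, kubectl exits 1 exactly as logged.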

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-092gp -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.127.253.243\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-01T01:32:23Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-092gp\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-092gp/services/redis-master\", \"uid\":\"02281829-b766-11e6-80a9-42010af00026\", \"resourceVersion\":\"40387\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82160a5e0 exit status 1 <nil> true [0xc8200c3568 0xc8200c3580 0xc8200c35b0] [0xc8200c3568 0xc8200c3580 0xc8200c35b0] [0xc8200c3578 0xc8200c35a0] [0xafa720 0xafa720] 0xc821f9d080}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.127.253.243\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-01T01:32:23Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-092gp\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-092gp/services/redis-master\", \"uid\":\"02281829-b766-11e6-80a9-42010af00026\", \"resourceVersion\":\"40387\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-092gp -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.127.253.243", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-01T01:32:23Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-092gp", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-092gp/services/redis-master", "uid":"02281829-b766-11e6-80a9-42010af00026", "resourceVersion":"40387"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82160a5e0 exit status 1 <nil> true [0xc8200c3568 0xc8200c3580 0xc8200c35b0] [0xc8200c3568 0xc8200c3580 0xc8200c35b0] [0xc8200c3578 0xc8200c35a0] [0xafa720 0xafa720] 0xc821f9d080}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.127.253.243", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-01T01:32:23Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-092gp", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-092gp/services/redis-master", "uid":"02281829-b766-11e6-80a9-42010af00026", "resourceVersion":"40387"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
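The jsonpath failure above is not a template-parsing bug: the Service object dumped in the log is type ClusterIP, so its port entries carry no nodePort field for `{.spec.ports[0].nodePort}` to resolve. A minimal reproduction using the object from the log, trimmed to the relevant keys:

```python
# The Service from the log above, reduced to the fields that matter.
# A ClusterIP service allocates no nodePort, so the key is simply absent.
service = {
    "spec": {
        "type": "ClusterIP",
        "ports": [
            {"protocol": "TCP", "port": 6379, "targetPort": "redis-server"},
        ],
    },
}

# What {.spec.ports[0].nodePort} tries to resolve:
node_port = service["spec"]["ports"][0].get("nodePort")  # None: key absent
```

So the interesting question for the flake is why the service was still ClusterIP when the apply step expected it to already be NodePort.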

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820170af0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821111260>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28019

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-fm2tf] []  0xc8210e7000  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8210e7b40 exit status 1 <nil> true [0xc820036308 0xc820036340 0xc820036358] [0xc820036308 0xc820036340 0xc820036358] [0xc820036310 0xc820036330 0xc820036348] [0xafa5c0 0xafa720 0xafa720] 0xc820d5da40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-fm2tf] []  0xc8210e7000  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8210e7b40 exit status 1 <nil> true [0xc820036308 0xc820036340 0xc820036358] [0xc820036308 0xc820036340 0xc820036358] [0xc820036310 0xc820036330 0xc820036348] [0xafa5c0 0xafa720 0xafa720] 0xc820d5da40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820170af0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82112de90>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #33883

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zv9dm] []  0xc821575a60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821d3e2a0 exit status 1 <nil> true [0xc821850058 0xc821850080 0xc821850090] [0xc821850058 0xc821850080 0xc821850090] [0xc821850060 0xc821850078 0xc821850088] [0xafa5c0 0xafa720 0xafa720] 0xc820b12ae0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zv9dm] []  0xc821575a60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821d3e2a0 exit status 1 <nil> true [0xc821850058 0xc821850080 0xc821850090] [0xc821850058 0xc821850080 0xc821850090] [0xc821850060 0xc821850078 0xc821850088] [0xafa5c0 0xafa720 0xafa720] 0xc820b12ae0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-321k2] []  0xc821931fe0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821b5c6c0 exit status 1 <nil> true [0xc82017a5e8 0xc82017a650 0xc82017a680] [0xc82017a5e8 0xc82017a650 0xc82017a680] [0xc82017a608 0xc82017a628 0xc82017a658] [0xafa5c0 0xafa720 0xafa720] 0xc821bf0e40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-321k2] []  0xc821931fe0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821b5c6c0 exit status 1 <nil> true [0xc82017a5e8 0xc82017a650 0xc82017a680] [0xc82017a5e8 0xc82017a650 0xc82017a680] [0xc82017a608 0xc82017a628 0xc82017a658] [0xafa5c0 0xafa720 0xafa720] 0xc821bf0e40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820894fa0>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #31918

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820273d50>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82156fda0>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8211aaf80>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821638fe0>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8210199b0>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28071

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-068hw] []  0xc821b5d360  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821b5d9e0 exit status 1 <nil> true [0xc8208902a0 0xc8208902c8 0xc8208902d8] [0xc8208902a0 0xc8208902c8 0xc8208902d8] [0xc8208902a8 0xc8208902c0 0xc8208902d0] [0xafa5c0 0xafa720 0xafa720] 0xc821cb1620}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-068hw] []  0xc821b5d360  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821b5d9e0 exit status 1 <nil> true [0xc8208902a0 0xc8208902c8 0xc8208902d8] [0xc8208902a0 0xc8208902c8 0xc8208902d8] [0xc8208902a8 0xc8208902c0 0xc8208902d0] [0xafa5c0 0xafa720 0xafa720] 0xc821cb1620}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821040590>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #34223

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428
Expected error:
    <*net.OpError | 0xc82125df40>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffhƆ,",
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 104.198.134.44:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:394

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
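The connection-refused error is expected while the apiserver restarts; the test has to poll until the endpoint accepts connections again. A hedged sketch of such a wait loop (illustrative only, not the framework's actual helper):

```python
import socket
import time

def wait_for_endpoint(host, port, timeout=60.0, interval=1.0):
    """Poll a TCP endpoint until it accepts a connection or the timeout
    expires. Returns True on success, False on timeout. Connection-refused
    (as in the dial error above) is treated as "not up yet"."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)  # apiserver still restarting; retry
    return False
```

The e2e framework's own restart handling is in Go; this is only a sketch of the retry-until-dial-succeeds idea the test relies on.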

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-d986h] []  0xc820b462c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820b46a60 exit status 1 <nil> true [0xc820036e78 0xc820036ea0 0xc820036eb0] [0xc820036e78 0xc820036ea0 0xc820036eb0] [0xc820036e80 0xc820036e98 0xc820036ea8] [0xafa5c0 0xafa720 0xafa720] 0xc820c2a9c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-d986h] []  0xc820b462c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820b46a60 exit status 1 <nil> true [0xc820036e78 0xc820036ea0 0xc820036eb0] [0xc820036e78 0xc820036ea0 0xc820036eb0] [0xc820036e80 0xc820036e98 0xc820036ea8] [0xafa5c0 0xafa720 0xafa720] 0xc820c2a9c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820170af0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821191a90>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8209601f0>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28091

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820d22360>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc821976e90>: {
        s: "failed to wait for pods responding: pod with UID 14a4f189-b763-11e6-80a9-42010af00026 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-hs3kg/pods 36528} [{{ } {my-hostname-delete-node-1l84k my-hostname-delete-node- e2e-tests-resize-nodes-hs3kg /api/v1/namespaces/e2e-tests-resize-nodes-hs3kg/pods/my-hostname-delete-node-1l84k 14a4a215-b763-11e6-80a9-42010af00026 36239 0 2016-11-30 17:11:25 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-hs3kg\",\"name\":\"my-hostname-delete-node\",\"uid\":\"14a2b8b3-b763-11e6-80a9-42010af00026\",\"apiVersion\":\"v1\",\"resourceVersion\":\"36222\"}}\n] [{v1 ReplicationController my-hostname-delete-node 14a2b8b3-b763-11e6-80a9-42010af00026 0xc821b30f87}] [] } {[{default-token-2k868 {<nil> <nil> <nil> <nil> <nil> 0xc821a1d260 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2k868 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b31080 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-e25f3e7c-7mkj 0xc82225bfc0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:11:25 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:11:27 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:11:25 -0800 PST  }]   10.240.0.5 10.124.4.35 2016-11-30 17:11:25 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821d14620 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 
docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://452c6769f955106364acc21d4b623c15f7610f01b5693638879506f3c0fe214e}]}} {{ } {my-hostname-delete-node-567rf my-hostname-delete-node- e2e-tests-resize-nodes-hs3kg /api/v1/namespaces/e2e-tests-resize-nodes-hs3kg/pods/my-hostname-delete-node-567rf 14a4dbcd-b763-11e6-80a9-42010af00026 36235 0 2016-11-30 17:11:25 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-hs3kg\",\"name\":\"my-hostname-delete-node\",\"uid\":\"14a2b8b3-b763-11e6-80a9-42010af00026\",\"apiVersion\":\"v1\",\"resourceVersion\":\"36222\"}}\n] [{v1 ReplicationController my-hostname-delete-node 14a2b8b3-b763-11e6-80a9-42010af00026 0xc821b31317}] [] } {[{default-token-2k868 {<nil> <nil> <nil> <nil> <nil> 0xc821a1d2c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2k868 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b31410 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-e25f3e7c-epy1 0xc821cd0100 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:11:25 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:11:26 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:11:25 -0800 PST  }]   10.240.0.8 10.124.0.8 2016-11-30 17:11:25 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821d14640 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 
docker://f998252306a13b143da00b6b0d23e37cd75f7be3a9c34875671d4aeb2c56b82d}]}} {{ } {my-hostname-delete-node-svjzv my-hostname-delete-node- e2e-tests-resize-nodes-hs3kg /api/v1/namespaces/e2e-tests-resize-nodes-hs3kg/pods/my-hostname-delete-node-svjzv 495d65ee-b763-11e6-80a9-42010af00026 36381 0 2016-11-30 17:12:54 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-hs3kg\",\"name\":\"my-hostname-delete-node\",\"uid\":\"14a2b8b3-b763-11e6-80a9-42010af00026\",\"apiVersion\":\"v1\",\"resourceVersion\":\"36319\"}}\n] [{v1 ReplicationController my-hostname-delete-node 14a2b8b3-b763-11e6-80a9-42010af00026 0xc821b316a7}] [] } {[{default-token-2k868 {<nil> <nil> <nil> <nil> <nil> 0xc821a1d320 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2k868 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b317a0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-e25f3e7c-7mkj 0xc821cd0240 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:12:54 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:12:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:12:54 -0800 PST  }]   10.240.0.5 10.124.4.36 2016-11-30 17:12:54 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821d14660 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://2b4642bb427c6e77eab6dc0b3266a760919d333c3b389434e26b1ef97cfb8b0d}]}}]}",
    }
    failed to wait for pods responding: pod with UID 14a4f189-b763-11e6-80a9-42010af00026 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-hs3kg/pods 36528} [{{ } {my-hostname-delete-node-1l84k my-hostname-delete-node- e2e-tests-resize-nodes-hs3kg /api/v1/namespaces/e2e-tests-resize-nodes-hs3kg/pods/my-hostname-delete-node-1l84k 14a4a215-b763-11e6-80a9-42010af00026 36239 0 2016-11-30 17:11:25 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-hs3kg","name":"my-hostname-delete-node","uid":"14a2b8b3-b763-11e6-80a9-42010af00026","apiVersion":"v1","resourceVersion":"36222"}}
    ] [{v1 ReplicationController my-hostname-delete-node 14a2b8b3-b763-11e6-80a9-42010af00026 0xc821b30f87}] [] } {[{default-token-2k868 {<nil> <nil> <nil> <nil> <nil> 0xc821a1d260 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2k868 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b31080 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-e25f3e7c-7mkj 0xc82225bfc0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:11:25 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:11:27 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:11:25 -0800 PST  }]   10.240.0.5 10.124.4.35 2016-11-30 17:11:25 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821d14620 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://452c6769f955106364acc21d4b623c15f7610f01b5693638879506f3c0fe214e}]}} {{ } {my-hostname-delete-node-567rf my-hostname-delete-node- e2e-tests-resize-nodes-hs3kg /api/v1/namespaces/e2e-tests-resize-nodes-hs3kg/pods/my-hostname-delete-node-567rf 14a4dbcd-b763-11e6-80a9-42010af00026 36235 0 2016-11-30 17:11:25 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-hs3kg","name":"my-hostname-delete-node","uid":"14a2b8b3-b763-11e6-80a9-42010af00026","apiVersion":"v1","resourceVersion":"36222"}}
    ] [{v1 ReplicationController my-hostname-delete-node 14a2b8b3-b763-11e6-80a9-42010af00026 0xc821b31317}] [] } {[{default-token-2k868 {<nil> <nil> <nil> <nil> <nil> 0xc821a1d2c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2k868 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b31410 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-e25f3e7c-epy1 0xc821cd0100 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:11:25 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:11:26 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:11:25 -0800 PST  }]   10.240.0.8 10.124.0.8 2016-11-30 17:11:25 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821d14640 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f998252306a13b143da00b6b0d23e37cd75f7be3a9c34875671d4aeb2c56b82d}]}} {{ } {my-hostname-delete-node-svjzv my-hostname-delete-node- e2e-tests-resize-nodes-hs3kg /api/v1/namespaces/e2e-tests-resize-nodes-hs3kg/pods/my-hostname-delete-node-svjzv 495d65ee-b763-11e6-80a9-42010af00026 36381 0 2016-11-30 17:12:54 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-hs3kg","name":"my-hostname-delete-node","uid":"14a2b8b3-b763-11e6-80a9-42010af00026","apiVersion":"v1","resourceVersion":"36319"}}
    ] [{v1 ReplicationController my-hostname-delete-node 14a2b8b3-b763-11e6-80a9-42010af00026 0xc821b316a7}] [] } {[{default-token-2k868 {<nil> <nil> <nil> <nil> <nil> 0xc821a1d320 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2k868 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b317a0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-e25f3e7c-7mkj 0xc821cd0240 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:12:54 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:12:55 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-30 17:12:54 -0800 PST  }]   10.240.0.5 10.124.4.36 2016-11-30 17:12:54 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821d14660 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://2b4642bb427c6e77eab6dc0b3266a760919d333c3b389434e26b1ef97cfb8b0d}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820983330>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-7205g] []  0xc8207bc040  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8207bc720 exit status 1 <nil> true [0xc8208d86b8 0xc8208d8748 0xc8208d8758] [0xc8208d86b8 0xc8208d8748 0xc8208d8758] [0xc8208d86c8 0xc8208d8708 0xc8208d8750] [0xafa5c0 0xafa720 0xafa720] 0xc8219f7320}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-7205g] []  0xc8207bc040  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8207bc720 exit status 1 <nil> true [0xc8208d86b8 0xc8208d8748 0xc8208d8758] [0xc8208d86b8 0xc8208d8748 0xc8208d8758] [0xc8208d86c8 0xc8208d8708 0xc8208d8750] [0xafa5c0 0xafa720 0xafa720] 0xc8219f7320}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821019560>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8210675c0>: {
        s: "Namespace e2e-tests-services-5cc3p is active",
    }
    Namespace e2e-tests-services-5cc3p is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #29516

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-tkwzr] []  0xc8210c5f20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8210e6760 exit status 1 <nil> true [0xc8200c3720 0xc8200c3748 0xc8200c3758] [0xc8200c3720 0xc8200c3748 0xc8200c3758] [0xc8200c3728 0xc8200c3740 0xc8200c3750] [0xafa5c0 0xafa720 0xafa720] 0xc820bc29c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-tkwzr] []  0xc8210c5f20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8210e6760 exit status 1 <nil> true [0xc8200c3720 0xc8200c3748 0xc8200c3758] [0xc8200c3720 0xc8200c3748 0xc8200c3758] [0xc8200c3728 0xc8200c3740 0xc8200c3750] [0xafa5c0 0xafa720 0xafa720] 0xc820bc29c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-q7mm4] []  0xc821649780  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821974060 exit status 1 <nil> true [0xc821516258 0xc8215162b0 0xc8215162c8] [0xc821516258 0xc8215162b0 0xc8215162c8] [0xc821516280 0xc8215162a0 0xc8215162c0] [0xafa5c0 0xafa720 0xafa720] 0xc822094a80}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-q7mm4] []  0xc821649780  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821974060 exit status 1 <nil> true [0xc821516258 0xc8215162b0 0xc8215162c8] [0xc821516258 0xc8215162b0 0xc8215162c8] [0xc821516280 0xc8215162a0 0xc8215162c0] [0xafa5c0 0xafa720 0xafa720] 0xc822094a80}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-vhjz1] []  0xc821611500  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821611e00 exit status 1 <nil> true [0xc821516330 0xc8215164d0 0xc8215164f0] [0xc821516330 0xc8215164d0 0xc8215164f0] [0xc821516370 0xc8215164c0 0xc8215164e0] [0xafa5c0 0xafa720 0xafa720] 0xc820dbc780}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.134.44 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-vhjz1] []  0xc821611500  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821611e00 exit status 1 <nil> true [0xc821516330 0xc8215164d0 0xc8215164f0] [0xc821516330 0xc8215164d0 0xc8215164f0] [0xc821516370 0xc8215164c0 0xc8215164e0] [0xafa5c0 0xafa720 0xafa720] 0xc820dbc780}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941
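Nearly all of the kubectl failures above share one stderr line: the skewed (1.5) kubectl binary refuses `--grace-period=0` unless `--force` is also passed, while the 1.4-era test code still issues `delete --grace-period=0 -f -` alone. A minimal sketch of that flag check, using a hypothetical `kubectl_delete` stub rather than the real client (no cluster needed), reproduces the behavior seen in the logs:

```shell
# Hypothetical stub mimicking the newer kubectl's flag validation.
# The real fix in the e2e code is to add --force alongside --grace-period=0.
kubectl_delete() {
  local grace="" force=0
  for arg in "$@"; do
    case "$arg" in
      --grace-period=0) grace=0 ;;
      --force)          force=1 ;;
    esac
  done
  if [ "$grace" = "0" ] && [ "$force" -ne 1 ]; then
    echo "error: You must pass --force to delete with grace period 0." >&2
    return 1
  fi
  echo "deleting with grace period ${grace:-default}"
}

# Old-style invocation: rejected, matching the stderr in the failures above.
kubectl_delete --grace-period=0 || echo "rejected as in the e2e logs"

# Invocation with --force: accepted.
kubectl_delete --grace-period=0 --force
```

Under this reading, the flakes are a version-skew artifact of the upgrade job (new client, old test expectations) rather than a cluster problem.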

@fejta fejta closed this as completed Dec 2, 2016