kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-master: broken test run #37905

Closed
k8s-github-robot opened this issue Dec 2, 2016 · 2 comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-master/120/

Multiple broken tests:

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc821664e10>: {
        s: "error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954
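
Most of the failures in this run (the Deployment rollover above, the init-container Pods tests, and the Job/V1Job tests below) bottom out in the same message, `timed out waiting for the condition`: the suite's generic polling helpers re-check a condition at a fixed interval and return that fixed error string when it never succeeds before the timeout. A minimal, self-contained Go sketch of that pattern, with illustrative names rather than the actual e2e helpers:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errWaitTimeout mirrors the error text seen in the failures above.
var errWaitTimeout = errors.New("timed out waiting for the condition")

// pollCondition is an illustrative stand-in for the e2e wait helpers:
// it re-checks condition every interval until it succeeds, returns an
// error, or the overall timeout elapses.
func pollCondition(interval, timeout time.Duration, condition func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		done, err := condition()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errWaitTimeout
		}
		time.Sleep(interval)
	}
}

func main() {
	// A condition that never becomes true, e.g. a deployment whose new
	// replica set never reaches the expected state.
	err := pollCondition(10*time.Millisecond, 50*time.Millisecond, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err) // timed out waiting for the condition
}
```

In the rollover case the timeout is wrapped with extra context, which is where the longer `error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition` message comes from.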

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070 #34383

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc822abce40>: {
        s: "failed to wait for pods responding: pod with UID b4af7a10-b84f-11e6-b47c-42010af0001b is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-12bw2/pods 44748} [{{ } {my-hostname-delete-node-2kxhc my-hostname-delete-node- e2e-tests-resize-nodes-12bw2 /api/v1/namespaces/e2e-tests-resize-nodes-12bw2/pods/my-hostname-delete-node-2kxhc b4af265e-b84f-11e6-b47c-42010af0001b 44468 0 {2016-12-01 21:25:15 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-12bw2\",\"name\":\"my-hostname-delete-node\",\"uid\":\"b4ad62e4-b84f-11e6-b47c-42010af0001b\",\"apiVersion\":\"v1\",\"resourceVersion\":\"44455\"}}\n] [{v1 ReplicationController my-hostname-delete-node b4ad62e4-b84f-11e6-b47c-42010af0001b 0xc820d832e7}] []} {[{default-token-th95k {<nil> <nil> <nil> <nil> <nil> 0xc8214f9860 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-th95k true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d833e0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-c104f775-m01a 0xc8228fee80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:25:15 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:25:16 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:25:15 -0800 PST}  }]   10.240.0.5 10.124.3.3 2016-12-01T21:25:15-08:00 [] [{my-hostname-delete-node {<nil> 0xc821eb5b00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://296bd206acb015336b7980969552e83f341fcbb675b69c2e3551daa1eefd1d28}]}} {{ } {my-hostname-delete-node-5pt8b my-hostname-delete-node- e2e-tests-resize-nodes-12bw2 /api/v1/namespaces/e2e-tests-resize-nodes-12bw2/pods/my-hostname-delete-node-5pt8b e544b5bd-b84f-11e6-b47c-42010af0001b 44600 0 {2016-12-01 21:26:36 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-12bw2\",\"name\":\"my-hostname-delete-node\",\"uid\":\"b4ad62e4-b84f-11e6-b47c-42010af0001b\",\"apiVersion\":\"v1\",\"resourceVersion\":\"44545\"}}\n] [{v1 ReplicationController my-hostname-delete-node b4ad62e4-b84f-11e6-b47c-42010af0001b 0xc820d83677}] []} {[{default-token-th95k {<nil> <nil> <nil> <nil> <nil> 0xc8214f98c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-th95k true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d83770 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-c104f775-pkbx 0xc8228fef80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:26:37 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 
21:26:38 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:26:37 -0800 PST}  }]   10.240.0.2 10.124.2.4 2016-12-01T21:26:37-08:00 [] [{my-hostname-delete-node {<nil> 0xc821eb5b20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3a23c16266f244189ab18cfee9d33603c6edc66d85a2ff2ac8f03c158e9be64f}]}} {{ } {my-hostname-delete-node-7xjpr my-hostname-delete-node- e2e-tests-resize-nodes-12bw2 /api/v1/namespaces/e2e-tests-resize-nodes-12bw2/pods/my-hostname-delete-node-7xjpr e54bbe84-b84f-11e6-b47c-42010af0001b 44602 0 {2016-12-01 21:26:37 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-12bw2\",\"name\":\"my-hostname-delete-node\",\"uid\":\"b4ad62e4-b84f-11e6-b47c-42010af0001b\",\"apiVersion\":\"v1\",\"resourceVersion\":\"44545\"}}\n] [{v1 ReplicationController my-hostname-delete-node b4ad62e4-b84f-11e6-b47c-42010af0001b 0xc820d83a07}] []} {[{default-token-th95k {<nil> <nil> <nil> <nil> <nil> 0xc8214f9920 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-th95k true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d83b00 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-c104f775-m01a 0xc8228ff080 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:26:37 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:26:39 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:26:37 -0800 PST}  }]   10.240.0.5 10.124.3.4 2016-12-01T21:26:37-08:00 [] [{my-hostname-delete-node {<nil> 0xc821eb5b40 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b33fcde6b015de27927d68a748497b5165bdfd56ec49497be5cf1923e92d807b}]}}]}",
    }
    failed to wait for pods responding: pod with UID b4af7a10-b84f-11e6-b47c-42010af0001b is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-12bw2/pods 44748} [{{ } {my-hostname-delete-node-2kxhc my-hostname-delete-node- e2e-tests-resize-nodes-12bw2 /api/v1/namespaces/e2e-tests-resize-nodes-12bw2/pods/my-hostname-delete-node-2kxhc b4af265e-b84f-11e6-b47c-42010af0001b 44468 0 {2016-12-01 21:25:15 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-12bw2","name":"my-hostname-delete-node","uid":"b4ad62e4-b84f-11e6-b47c-42010af0001b","apiVersion":"v1","resourceVersion":"44455"}}
    ] [{v1 ReplicationController my-hostname-delete-node b4ad62e4-b84f-11e6-b47c-42010af0001b 0xc820d832e7}] []} {[{default-token-th95k {<nil> <nil> <nil> <nil> <nil> 0xc8214f9860 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-th95k true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d833e0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-c104f775-m01a 0xc8228fee80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:25:15 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:25:16 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:25:15 -0800 PST}  }]   10.240.0.5 10.124.3.3 2016-12-01T21:25:15-08:00 [] [{my-hostname-delete-node {<nil> 0xc821eb5b00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://296bd206acb015336b7980969552e83f341fcbb675b69c2e3551daa1eefd1d28}]}} {{ } {my-hostname-delete-node-5pt8b my-hostname-delete-node- e2e-tests-resize-nodes-12bw2 /api/v1/namespaces/e2e-tests-resize-nodes-12bw2/pods/my-hostname-delete-node-5pt8b e544b5bd-b84f-11e6-b47c-42010af0001b 44600 0 {2016-12-01 21:26:36 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-12bw2","name":"my-hostname-delete-node","uid":"b4ad62e4-b84f-11e6-b47c-42010af0001b","apiVersion":"v1","resourceVersion":"44545"}}
    ] [{v1 ReplicationController my-hostname-delete-node b4ad62e4-b84f-11e6-b47c-42010af0001b 0xc820d83677}] []} {[{default-token-th95k {<nil> <nil> <nil> <nil> <nil> 0xc8214f98c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-th95k true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d83770 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-c104f775-pkbx 0xc8228fef80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:26:37 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:26:38 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:26:37 -0800 PST}  }]   10.240.0.2 10.124.2.4 2016-12-01T21:26:37-08:00 [] [{my-hostname-delete-node {<nil> 0xc821eb5b20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3a23c16266f244189ab18cfee9d33603c6edc66d85a2ff2ac8f03c158e9be64f}]}} {{ } {my-hostname-delete-node-7xjpr my-hostname-delete-node- e2e-tests-resize-nodes-12bw2 /api/v1/namespaces/e2e-tests-resize-nodes-12bw2/pods/my-hostname-delete-node-7xjpr e54bbe84-b84f-11e6-b47c-42010af0001b 44602 0 {2016-12-01 21:26:37 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-12bw2","name":"my-hostname-delete-node","uid":"b4ad62e4-b84f-11e6-b47c-42010af0001b","apiVersion":"v1","resourceVersion":"44545"}}
    ] [{v1 ReplicationController my-hostname-delete-node b4ad62e4-b84f-11e6-b47c-42010af0001b 0xc820d83a07}] []} {[{default-token-th95k {<nil> <nil> <nil> <nil> <nil> 0xc8214f9920 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-th95k true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d83b00 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-c104f775-m01a 0xc8228ff080 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:26:37 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:26:39 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 21:26:37 -0800 PST}  }]   10.240.0.5 10.124.3.4 2016-12-01T21:26:37-08:00 [] [{my-hostname-delete-node {<nil> 0xc821eb5b40 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b33fcde6b015de27927d68a748497b5165bdfd56ec49497be5cf1923e92d807b}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
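
The Resize failure is the suite noticing that a pod it had recorded by UID before deleting the node is absent from the replica set's current pod list, so it must have been recreated elsewhere. A self-contained sketch of that membership check, using UIDs taken from the dump above (the types and helper names are illustrative, not the e2e code itself):

```go
package main

import "fmt"

// podInfo is a trimmed-down view of the pods dumped in the error above:
// just a name and a UID.
type podInfo struct {
	Name string
	UID  string
}

// missingUIDs reports which of the originally observed UIDs no longer
// appear in the current replica set's pod list, i.e. pods that must have
// been restarted or replaced while the test waited.
func missingUIDs(expected []string, current []podInfo) []string {
	present := make(map[string]bool, len(current))
	for _, p := range current {
		present[p.UID] = true
	}
	var missing []string
	for _, uid := range expected {
		if !present[uid] {
			missing = append(missing, uid)
		}
	}
	return missing
}

func main() {
	// UID recorded before the node deletion, no longer present afterwards.
	expected := []string{"b4af7a10-b84f-11e6-b47c-42010af0001b"}
	// The three pods listed in the current replica set dump.
	current := []podInfo{
		{"my-hostname-delete-node-2kxhc", "b4af265e-b84f-11e6-b47c-42010af0001b"},
		{"my-hostname-delete-node-5pt8b", "e544b5bd-b84f-11e6-b47c-42010af0001b"},
		{"my-hostname-delete-node-7xjpr", "e54bbe84-b84f-11e6-b47c-42010af0001b"},
	}
	for _, uid := range missingUIDs(expected, current) {
		fmt.Printf("pod with UID %s is no longer a member of the replica set\n", uid)
	}
}
```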

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820e8ee20>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.194.9 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-q97fv -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-q97fv\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-q97fv/services/redis-master\", \"uid\":\"e176bfbc-b821-11e6-aa40-42010af00032\", \"resourceVersion\":\"4942\", \"creationTimestamp\":\"2016-12-01T23:57:13Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.254.39\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8212411c0 exit status 1 <nil> true [0xc821200300 0xc821200318 0xc821200330] [0xc821200300 0xc821200318 0xc821200330] [0xc821200310 0xc821200328] [0xa975d0 0xa975d0] 0xc820f23320}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-q97fv\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-q97fv/services/redis-master\", \"uid\":\"e176bfbc-b821-11e6-aa40-42010af00032\", \"resourceVersion\":\"4942\", \"creationTimestamp\":\"2016-12-01T23:57:13Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.254.39\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.194.9 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-q97fv -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-q97fv", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-q97fv/services/redis-master", "uid":"e176bfbc-b821-11e6-aa40-42010af00032", "resourceVersion":"4942", "creationTimestamp":"2016-12-01T23:57:13Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.254.39", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8212411c0 exit status 1 <nil> true [0xc821200300 0xc821200318 0xc821200330] [0xc821200300 0xc821200318 0xc821200330] [0xc821200310 0xc821200328] [0xa975d0 0xa975d0] 0xc820f23320}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-q97fv", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-q97fv/services/redis-master", "uid":"e176bfbc-b821-11e6-aa40-42010af00032", "resourceVersion":"4942", "creationTimestamp":"2016-12-01T23:57:13Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.254.39", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
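
The kubectl failure is a jsonpath lookup on a field the returned object does not have: the Service dumped above is `"type":"ClusterIP"` and its single port carries no `nodePort`, so `{.spec.ports[0].nodePort}` has nothing to resolve (nodePort is only assigned for NodePort and LoadBalancer services). A small sketch decoding a trimmed copy of that dump (the structs are a hand-written subset of the Service schema, not client-go types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// servicePort is a hand-written subset of the Service port schema, enough
// to show which fields the dumped ClusterIP service actually carries.
type servicePort struct {
	Protocol   string `json:"protocol"`
	Port       int    `json:"port"`
	TargetPort string `json:"targetPort"`
	NodePort   int    `json:"nodePort,omitempty"`
}

type serviceSpec struct {
	Type  string        `json:"type"`
	Ports []servicePort `json:"ports"`
}

func main() {
	// Trimmed from the object dumped in the error above: a ClusterIP
	// service whose only port has no nodePort assigned.
	raw := `{"type":"ClusterIP","ports":[{"protocol":"TCP","port":6379,"targetPort":"redis-server"}]}`

	var spec serviceSpec
	if err := json.Unmarshal([]byte(raw), &spec); err != nil {
		panic(err)
	}

	p := spec.Ports[0]
	if spec.Type == "ClusterIP" || p.NodePort == 0 {
		// This is the situation the jsonpath query trips over: there is
		// simply no nodePort to print.
		fmt.Println("nodePort is not found")
		return
	}
	fmt.Println("nodePort:", p.NodePort)
}
```

In other words, the object read back was still (or again) a plain ClusterIP service, so there was no nodePort for the test to reuse.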

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Previous issues for this suite: #37734

@k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 2, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-master/121/

Multiple broken tests:

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200bd060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820ab4fe0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.194.9 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-7w278 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.255.92\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-02T06:50:26Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-7w278\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-7w278/services/redis-master\", \"uid\":\"9b4f8b68-b85b-11e6-82ee-42010af0002f\", \"resourceVersion\":\"5737\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8203127a0 exit status 1 <nil> true [0xc8200c4080 0xc8200c4250 0xc8200c4288] [0xc8200c4080 0xc8200c4250 0xc8200c4288] [0xc8200c4228 0xc8200c4278] [0xa975d0 0xa975d0] 0xc820ed2240}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.255.92\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-02T06:50:26Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-7w278\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-7w278/services/redis-master\", \"uid\":\"9b4f8b68-b85b-11e6-82ee-42010af0002f\", \"resourceVersion\":\"5737\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.194.9 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-7w278 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.255.92", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-02T06:50:26Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-7w278", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-7w278/services/redis-master", "uid":"9b4f8b68-b85b-11e6-82ee-42010af0002f", "resourceVersion":"5737"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8203127a0 exit status 1 <nil> true [0xc8200c4080 0xc8200c4250 0xc8200c4288] [0xc8200c4080 0xc8200c4250 0xc8200c4288] [0xc8200c4228 0xc8200c4278] [0xa975d0 0xa975d0] 0xc820ed2240}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.255.92", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-02T06:50:26Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-7w278", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-7w278/services/redis-master", "uid":"9b4f8b68-b85b-11e6-82ee-42010af0002f", "resourceVersion":"5737"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200bd060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200bd060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc821aa57d0>: {
        s: "error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200bd060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070 #34383

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.3-gci-1.5-upgrade-master/122/

Multiple broken tests:

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc82202d460>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://107.178.209.176 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-w81lp -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-02T16:17:00Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-w81lp\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-w81lp/services/redis-master\", \"uid\":\"c12600b6-b8aa-11e6-ba9d-42010af00038\", \"resourceVersion\":\"26130\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.252.166\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820b20f40 exit status 1 <nil> true [0xc821b70270 0xc821b70288 0xc821b702b0] [0xc821b70270 0xc821b70288 0xc821b702b0] [0xc821b70280 0xc821b702a0] [0xa975d0 0xa975d0] 0xc8225d2960}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-02T16:17:00Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-w81lp\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-w81lp/services/redis-master\", \"uid\":\"c12600b6-b8aa-11e6-ba9d-42010af00038\", \"resourceVersion\":\"26130\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.252.166\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://107.178.209.176 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-w81lp -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-02T16:17:00Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-w81lp", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-w81lp/services/redis-master", "uid":"c12600b6-b8aa-11e6-ba9d-42010af00038", "resourceVersion":"26130"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.252.166", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820b20f40 exit status 1 <nil> true [0xc821b70270 0xc821b70288 0xc821b702b0] [0xc821b70270 0xc821b70288 0xc821b702b0] [0xc821b70280 0xc821b702a0] [0xa975d0 0xa975d0] 0xc8225d2960}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-02T16:17:00Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-w81lp", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-w81lp/services/redis-master", "uid":"c12600b6-b8aa-11e6-ba9d-42010af00038", "resourceVersion":"26130"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.252.166", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc82220e060>: {
        s: "error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d5060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070 #34383

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc820dab550>: {
        s: "failed to wait for pods responding: pod with UID f3691484-b895-11e6-ba9d-42010af00038 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-rn255/pods 10510} [{{ } {my-hostname-delete-node-0qvc0 my-hostname-delete-node- e2e-tests-resize-nodes-rn255 /api/v1/namespaces/e2e-tests-resize-nodes-rn255/pods/my-hostname-delete-node-0qvc0 2420c15c-b896-11e6-ba9d-42010af00038 10359 0 {2016-12-02 05:49:27 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-rn255\",\"name\":\"my-hostname-delete-node\",\"uid\":\"f367876a-b895-11e6-ba9d-42010af00038\",\"apiVersion\":\"v1\",\"resourceVersion\":\"10277\"}}\n] [{v1 ReplicationController my-hostname-delete-node f367876a-b895-11e6-ba9d-42010af00038 0xc82089f6b7}] []} {[{default-token-3l635 {<nil> <nil> <nil> <nil> <nil> 0xc821a0b710 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3l635 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82089f7c0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-ed679c1f-x4lk 0xc821c89200 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:49:27 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:49:29 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:49:27 -0800 PST}  }]   10.240.0.4 10.124.1.5 2016-12-02T05:49:27-08:00 [] [{my-hostname-delete-node {<nil> 0xc8210bc7c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3486f8a4ddae6c05c10f2f7093c8f8582b7c51b84bf53522d054731ba58b6e0f}]}} {{ } {my-hostname-delete-node-l99n0 my-hostname-delete-node- e2e-tests-resize-nodes-rn255 /api/v1/namespaces/e2e-tests-resize-nodes-rn255/pods/my-hostname-delete-node-l99n0 f36951f1-b895-11e6-ba9d-42010af00038 10214 0 {2016-12-02 05:48:05 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-rn255\",\"name\":\"my-hostname-delete-node\",\"uid\":\"f367876a-b895-11e6-ba9d-42010af00038\",\"apiVersion\":\"v1\",\"resourceVersion\":\"10196\"}}\n] [{v1 ReplicationController my-hostname-delete-node f367876a-b895-11e6-ba9d-42010af00038 0xc82089fb77}] []} {[{default-token-3l635 {<nil> <nil> <nil> <nil> <nil> 0xc821a0b770 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3l635 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82089fd00 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-ed679c1f-r1im 0xc821c892c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:48:05 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 
05:48:07 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:48:05 -0800 PST}  }]   10.240.0.3 10.124.2.3 2016-12-02T05:48:05-08:00 [] [{my-hostname-delete-node {<nil> 0xc8210bc7e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ccb904affe3a0715e38ef5ed59640b5b5c609fec5eaea8635199fe66efbbef30}]}} {{ } {my-hostname-delete-node-qdt9m my-hostname-delete-node- e2e-tests-resize-nodes-rn255 /api/v1/namespaces/e2e-tests-resize-nodes-rn255/pods/my-hostname-delete-node-qdt9m f3692f0e-b895-11e6-ba9d-42010af00038 10212 0 {2016-12-02 05:48:05 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-rn255\",\"name\":\"my-hostname-delete-node\",\"uid\":\"f367876a-b895-11e6-ba9d-42010af00038\",\"apiVersion\":\"v1\",\"resourceVersion\":\"10196\"}}\n] [{v1 ReplicationController my-hostname-delete-node f367876a-b895-11e6-ba9d-42010af00038 0xc82089ffa7}] []} {[{default-token-3l635 {<nil> <nil> <nil> <nil> <nil> 0xc821a0b7d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3l635 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821660140 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-ed679c1f-x4lk 0xc821c89380 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:48:05 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:48:07 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:48:05 -0800 PST}  }]   10.240.0.4 10.124.1.4 2016-12-02T05:48:05-08:00 [] [{my-hostname-delete-node {<nil> 0xc8210bc800 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://73f5e4e495785425820fff3ea78a00658153f59f0342745d7d990fe0bb18c659}]}}]}",
    }
    failed to wait for pods responding: pod with UID f3691484-b895-11e6-ba9d-42010af00038 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-rn255/pods 10510} [{{ } {my-hostname-delete-node-0qvc0 my-hostname-delete-node- e2e-tests-resize-nodes-rn255 /api/v1/namespaces/e2e-tests-resize-nodes-rn255/pods/my-hostname-delete-node-0qvc0 2420c15c-b896-11e6-ba9d-42010af00038 10359 0 {2016-12-02 05:49:27 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-rn255","name":"my-hostname-delete-node","uid":"f367876a-b895-11e6-ba9d-42010af00038","apiVersion":"v1","resourceVersion":"10277"}}
    ] [{v1 ReplicationController my-hostname-delete-node f367876a-b895-11e6-ba9d-42010af00038 0xc82089f6b7}] []} {[{default-token-3l635 {<nil> <nil> <nil> <nil> <nil> 0xc821a0b710 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3l635 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82089f7c0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-ed679c1f-x4lk 0xc821c89200 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:49:27 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:49:29 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:49:27 -0800 PST}  }]   10.240.0.4 10.124.1.5 2016-12-02T05:49:27-08:00 [] [{my-hostname-delete-node {<nil> 0xc8210bc7c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3486f8a4ddae6c05c10f2f7093c8f8582b7c51b84bf53522d054731ba58b6e0f}]}} {{ } {my-hostname-delete-node-l99n0 my-hostname-delete-node- e2e-tests-resize-nodes-rn255 /api/v1/namespaces/e2e-tests-resize-nodes-rn255/pods/my-hostname-delete-node-l99n0 f36951f1-b895-11e6-ba9d-42010af00038 10214 0 {2016-12-02 05:48:05 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-rn255","name":"my-hostname-delete-node","uid":"f367876a-b895-11e6-ba9d-42010af00038","apiVersion":"v1","resourceVersion":"10196"}}
    ] [{v1 ReplicationController my-hostname-delete-node f367876a-b895-11e6-ba9d-42010af00038 0xc82089fb77}] []} {[{default-token-3l635 {<nil> <nil> <nil> <nil> <nil> 0xc821a0b770 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3l635 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82089fd00 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-ed679c1f-r1im 0xc821c892c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:48:05 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:48:07 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:48:05 -0800 PST}  }]   10.240.0.3 10.124.2.3 2016-12-02T05:48:05-08:00 [] [{my-hostname-delete-node {<nil> 0xc8210bc7e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ccb904affe3a0715e38ef5ed59640b5b5c609fec5eaea8635199fe66efbbef30}]}} {{ } {my-hostname-delete-node-qdt9m my-hostname-delete-node- e2e-tests-resize-nodes-rn255 /api/v1/namespaces/e2e-tests-resize-nodes-rn255/pods/my-hostname-delete-node-qdt9m f3692f0e-b895-11e6-ba9d-42010af00038 10212 0 {2016-12-02 05:48:05 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-rn255","name":"my-hostname-delete-node","uid":"f367876a-b895-11e6-ba9d-42010af00038","apiVersion":"v1","resourceVersion":"10196"}}
    ] [{v1 ReplicationController my-hostname-delete-node f367876a-b895-11e6-ba9d-42010af00038 0xc82089ffa7}] []} {[{default-token-3l635 {<nil> <nil> <nil> <nil> <nil> 0xc821a0b7d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3l635 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821660140 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-ed679c1f-x4lk 0xc821c89380 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:48:05 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:48:07 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:48:05 -0800 PST}  }]   10.240.0.4 10.124.1.4 2016-12-02T05:48:05-08:00 [] [{my-hostname-delete-node {<nil> 0xc8210bc800 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://73f5e4e495785425820fff3ea78a00658153f59f0342745d7d990fe0bb18c659}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d5060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

@fejta closed this as completed on Dec 7, 2016