kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster: broken test run #37906

Closed
k8s-github-robot opened this issue Dec 2, 2016 · 5 comments
Labels: area/test-infra, kind/flake, priority/backlog


@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster/403/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820c91ad0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-zxkf2 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-zxkf2\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-zxkf2/services/redis-master\", \"uid\":\"79091b42-b830-11e6-bfe3-42010af0002e\", \"resourceVersion\":\"28941\", \"creationTimestamp\":\"2016-12-02T01:41:40Z\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.243.29\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8216c6bc0 exit status 1 <nil> true [0xc8203fe770 0xc8203fe788 0xc8203ff2b0] [0xc8203fe770 0xc8203fe788 0xc8203ff2b0] [0xc8203fe780 0xc8203fe800] [0xa975d0 0xa975d0] 0xc8215ce2a0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-zxkf2\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-zxkf2/services/redis-master\", \"uid\":\"79091b42-b830-11e6-bfe3-42010af0002e\", \"resourceVersion\":\"28941\", \"creationTimestamp\":\"2016-12-02T01:41:40Z\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.243.29\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.232.209 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-zxkf2 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-zxkf2", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-zxkf2/services/redis-master", "uid":"79091b42-b830-11e6-bfe3-42010af0002e", "resourceVersion":"28941", "creationTimestamp":"2016-12-02T01:41:40Z"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.243.29", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8216c6bc0 exit status 1 <nil> true [0xc8203fe770 0xc8203fe788 0xc8203ff2b0] [0xc8203fe770 0xc8203fe788 0xc8203ff2b0] [0xc8203fe780 0xc8203fe800] [0xa975d0 0xa975d0] 0xc8215ce2a0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-zxkf2", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-zxkf2/services/redis-master", "uid":"79091b42-b830-11e6-bfe3-42010af0002e", "resourceVersion":"28941", "creationTimestamp":"2016-12-02T01:41:40Z"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.243.29", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
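
For context on the "Kubectl apply should reuse nodePort" failure above: the jsonpath query fails because the Service that kubectl returns is of type ClusterIP, so .spec.ports[0].nodePort is simply absent from the object handed to the jsonpath engine. A minimal sketch of the same check done by hand, assuming a comparable redis-master Service in the current namespace (the service name here is illustrative, not taken from the failing run):

    # Check the Service type first; only NodePort/LoadBalancer Services carry a nodePort.
    kubectl get service redis-master -o jsonpath='{.spec.type}'
    # The query the e2e test issues; kubectl exits with status 1 when the field is missing.
    kubectl get service redis-master -o jsonpath='{.spec.ports[0].nodePort}'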

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Expected error:
    <errors.aggregate | len:3, cap:4>: [
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-2516ec33-r207\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-2516ec33-wg25\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-2516ec33-xqbq\" is not ready yet",
        },
    ]
    [Resource usage on node "gke-jenkins-e2e-default-pool-2516ec33-r207" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-2516ec33-wg25" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-2516ec33-xqbq" is not ready yet]
not to have occurred

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Expected error:
    <errors.aggregate | len:3, cap:4>: [
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-2516ec33-r207\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-2516ec33-wg25\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-2516ec33-xqbq\" is not ready yet",
        },
    ]
    [Resource usage on node "gke-jenkins-e2e-default-pool-2516ec33-r207" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-2516ec33-wg25" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-2516ec33-xqbq" is not ready yet]
not to have occurred

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070 #34383

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc82096cd40>: {
        s: "error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Previous issues for this suite: #37766

k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 2, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster/404/

Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070 #34383

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Expected error:
    <errors.aggregate | len:3, cap:4>: [
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-a5574945-m8fr\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-a5574945-e3ip\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-a5574945-lrau\" is not ready yet",
        },
    ]
    [Resource usage on node "gke-jenkins-e2e-default-pool-a5574945-m8fr" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-a5574945-e3ip" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-a5574945-lrau" is not ready yet]
not to have occurred

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820db6b00>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.148.32 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-vhwtm -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"uid\":\"7d54ed10-b84b-11e6-9207-42010af0001d\", \"resourceVersion\":\"12131\", \"creationTimestamp\":\"2016-12-02T04:55:04Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-vhwtm\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-vhwtm/services/redis-master\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.243.215\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82120cbe0 exit status 1 <nil> true [0xc8213b20f0 0xc8213b2108 0xc8213b2120] [0xc8213b20f0 0xc8213b2108 0xc8213b2120] [0xc8213b2100 0xc8213b2118] [0xa975d0 0xa975d0] 0xc8205acba0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"uid\":\"7d54ed10-b84b-11e6-9207-42010af0001d\", \"resourceVersion\":\"12131\", \"creationTimestamp\":\"2016-12-02T04:55:04Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-vhwtm\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-vhwtm/services/redis-master\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.243.215\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.148.32 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-vhwtm -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"uid":"7d54ed10-b84b-11e6-9207-42010af0001d", "resourceVersion":"12131", "creationTimestamp":"2016-12-02T04:55:04Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-vhwtm", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-vhwtm/services/redis-master"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.243.215", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82120cbe0 exit status 1 <nil> true [0xc8213b20f0 0xc8213b2108 0xc8213b2120] [0xc8213b20f0 0xc8213b2108 0xc8213b2120] [0xc8213b2100 0xc8213b2118] [0xa975d0 0xa975d0] 0xc8205acba0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"uid":"7d54ed10-b84b-11e6-9207-42010af0001d", "resourceVersion":"12131", "creationTimestamp":"2016-12-02T04:55:04Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-vhwtm", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-vhwtm/services/redis-master"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.243.215", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:56
Dec  1 22:18:47.900: Failed to read from kubectl exec stdout: EOF

Issues about this test specifically: #27023 #34604

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc8209c63c0>: {
        s: "error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc82135d0b0>: {
        s: "failed to wait for pods responding: pod with UID f45d6703-b85e-11e6-9207-42010af0001d is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-s4jvq/pods 28218} [{{ } {my-hostname-delete-node-43xtt my-hostname-delete-node- e2e-tests-resize-nodes-s4jvq /api/v1/namespaces/e2e-tests-resize-nodes-s4jvq/pods/my-hostname-delete-node-43xtt 288c4795-b85f-11e6-9207-42010af0001d 28060 0 {2016-12-01 23:15:52 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-s4jvq\",\"name\":\"my-hostname-delete-node\",\"uid\":\"f45a1f91-b85e-11e6-9207-42010af0001d\",\"apiVersion\":\"v1\",\"resourceVersion\":\"27981\"}}\n] [{v1 ReplicationController my-hostname-delete-node f45a1f91-b85e-11e6-9207-42010af0001d 0xc821909637}] []} {[{default-token-sq9c7 {<nil> <nil> <nil> <nil> <nil> 0xc821010c90 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-sq9c7 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821909730 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-a5574945-lrau 0xc821c3d440 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:15:52 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:15:53 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:15:52 -0800 PST}  }]   10.240.0.2 10.124.2.46 2016-12-01T23:15:52-08:00 [] [{my-hostname-delete-node {<nil> 0xc821840880 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://9dd93928ad586a361ee37e0a78527c296fac75ec57db9f18e41f7894d2262328}]}} {{ } {my-hostname-delete-node-jc13l my-hostname-delete-node- e2e-tests-resize-nodes-s4jvq /api/v1/namespaces/e2e-tests-resize-nodes-s4jvq/pods/my-hostname-delete-node-jc13l f45c9e76-b85e-11e6-9207-42010af0001d 27903 0 {2016-12-01 23:14:24 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-s4jvq\",\"name\":\"my-hostname-delete-node\",\"uid\":\"f45a1f91-b85e-11e6-9207-42010af0001d\",\"apiVersion\":\"v1\",\"resourceVersion\":\"27887\"}}\n] [{v1 ReplicationController my-hostname-delete-node f45a1f91-b85e-11e6-9207-42010af0001d 0xc8219099c7}] []} {[{default-token-sq9c7 {<nil> <nil> <nil> <nil> <nil> 0xc821010cf0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-sq9c7 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821909ac0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-a5574945-lrau 0xc821c3d540 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:14:24 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 
23:14:26 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:14:24 -0800 PST}  }]   10.240.0.2 10.124.2.44 2016-12-01T23:14:24-08:00 [] [{my-hostname-delete-node {<nil> 0xc8218408a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://06f1e274e646a0346cf141e796c45549bf82090c1f6eab2eb4592e4d9e32f575}]}} {{ } {my-hostname-delete-node-rqtvj my-hostname-delete-node- e2e-tests-resize-nodes-s4jvq /api/v1/namespaces/e2e-tests-resize-nodes-s4jvq/pods/my-hostname-delete-node-rqtvj f45d30d1-b85e-11e6-9207-42010af0001d 27905 0 {2016-12-01 23:14:24 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-s4jvq\",\"name\":\"my-hostname-delete-node\",\"uid\":\"f45a1f91-b85e-11e6-9207-42010af0001d\",\"apiVersion\":\"v1\",\"resourceVersion\":\"27887\"}}\n] [{v1 ReplicationController my-hostname-delete-node f45a1f91-b85e-11e6-9207-42010af0001d 0xc821909d57}] []} {[{default-token-sq9c7 {<nil> <nil> <nil> <nil> <nil> 0xc821010d50 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-sq9c7 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821909e70 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-a5574945-m8fr 0xc821c3d600 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:14:24 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:14:26 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:14:24 -0800 PST}  }]   10.240.0.4 10.124.3.114 2016-12-01T23:14:24-08:00 [] [{my-hostname-delete-node {<nil> 0xc8218408c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://33ad9fc8175ed2370a405f7e11dc8f06c56f1e470dc8f174b5d08f2af1c35828}]}}]}",
    }
    failed to wait for pods responding: pod with UID f45d6703-b85e-11e6-9207-42010af0001d is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-s4jvq/pods 28218} [{{ } {my-hostname-delete-node-43xtt my-hostname-delete-node- e2e-tests-resize-nodes-s4jvq /api/v1/namespaces/e2e-tests-resize-nodes-s4jvq/pods/my-hostname-delete-node-43xtt 288c4795-b85f-11e6-9207-42010af0001d 28060 0 {2016-12-01 23:15:52 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-s4jvq","name":"my-hostname-delete-node","uid":"f45a1f91-b85e-11e6-9207-42010af0001d","apiVersion":"v1","resourceVersion":"27981"}}
    ] [{v1 ReplicationController my-hostname-delete-node f45a1f91-b85e-11e6-9207-42010af0001d 0xc821909637}] []} {[{default-token-sq9c7 {<nil> <nil> <nil> <nil> <nil> 0xc821010c90 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-sq9c7 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821909730 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-a5574945-lrau 0xc821c3d440 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:15:52 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:15:53 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:15:52 -0800 PST}  }]   10.240.0.2 10.124.2.46 2016-12-01T23:15:52-08:00 [] [{my-hostname-delete-node {<nil> 0xc821840880 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://9dd93928ad586a361ee37e0a78527c296fac75ec57db9f18e41f7894d2262328}]}} {{ } {my-hostname-delete-node-jc13l my-hostname-delete-node- e2e-tests-resize-nodes-s4jvq /api/v1/namespaces/e2e-tests-resize-nodes-s4jvq/pods/my-hostname-delete-node-jc13l f45c9e76-b85e-11e6-9207-42010af0001d 27903 0 {2016-12-01 23:14:24 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-s4jvq","name":"my-hostname-delete-node","uid":"f45a1f91-b85e-11e6-9207-42010af0001d","apiVersion":"v1","resourceVersion":"27887"}}
    ] [{v1 ReplicationController my-hostname-delete-node f45a1f91-b85e-11e6-9207-42010af0001d 0xc8219099c7}] []} {[{default-token-sq9c7 {<nil> <nil> <nil> <nil> <nil> 0xc821010cf0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-sq9c7 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821909ac0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-a5574945-lrau 0xc821c3d540 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:14:24 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:14:26 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:14:24 -0800 PST}  }]   10.240.0.2 10.124.2.44 2016-12-01T23:14:24-08:00 [] [{my-hostname-delete-node {<nil> 0xc8218408a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://06f1e274e646a0346cf141e796c45549bf82090c1f6eab2eb4592e4d9e32f575}]}} {{ } {my-hostname-delete-node-rqtvj my-hostname-delete-node- e2e-tests-resize-nodes-s4jvq /api/v1/namespaces/e2e-tests-resize-nodes-s4jvq/pods/my-hostname-delete-node-rqtvj f45d30d1-b85e-11e6-9207-42010af0001d 27905 0 {2016-12-01 23:14:24 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-s4jvq","name":"my-hostname-delete-node","uid":"f45a1f91-b85e-11e6-9207-42010af0001d","apiVersion":"v1","resourceVersion":"27887"}}
    ] [{v1 ReplicationController my-hostname-delete-node f45a1f91-b85e-11e6-9207-42010af0001d 0xc821909d57}] []} {[{default-token-sq9c7 {<nil> <nil> <nil> <nil> <nil> 0xc821010d50 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-sq9c7 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821909e70 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-a5574945-m8fr 0xc821c3d600 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:14:24 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:14:26 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-01 23:14:24 -0800 PST}  }]   10.240.0.4 10.124.3.114 2016-12-01T23:14:24-08:00 [] [{my-hostname-delete-node {<nil> 0xc8218408c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://33ad9fc8175ed2370a405f7e11dc8f06c56f1e470dc8f174b5d08f2af1c35828}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Expected error:
    <errors.aggregate | len:3, cap:4>: [
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-a5574945-h62y\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-a5574945-lrau\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-a5574945-m8fr\" is not ready yet",
        },
    ]
    [Resource usage on node "gke-jenkins-e2e-default-pool-a5574945-h62y" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-a5574945-lrau" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-a5574945-m8fr" is not ready yet]
not to have occurred

Issues about this test specifically: #26784 #28384 #31935 #33023

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster/405/

Multiple broken tests:

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Expected error:
    <errors.aggregate | len:3, cap:4>: [
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-b5b1d84e-73bh\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-b5b1d84e-k825\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-b5b1d84e-zsi0\" is not ready yet",
        },
    ]
    [Resource usage on node "gke-jenkins-e2e-default-pool-b5b1d84e-73bh" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-b5b1d84e-k825" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-b5b1d84e-zsi0" is not ready yet]
not to have occurred

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc82087cb90>: {
        s: "error while stopping RC: service2: Get https://104.198.158.230/api/v1/namespaces/e2e-tests-services-nknlb/replicationcontrollers/service2: dial tcp 104.198.158.230:443: getsockopt: connection refused",
    }
    error while stopping RC: service2: Get https://104.198.158.230/api/v1/namespaces/e2e-tests-services-nknlb/replicationcontrollers/service2: dial tcp 104.198.158.230:443: getsockopt: connection refused
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821c3c070>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.158.230 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-jhqhz -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"resourceVersion\":\"7550\", \"creationTimestamp\":\"2016-12-02T11:25:59Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-jhqhz\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-jhqhz/services/redis-master\", \"uid\":\"19d0678f-b882-11e6-ad9e-42010af00015\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.245.240\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8208586a0 exit status 1 <nil> true [0xc820094358 0xc820094388 0xc8200943b8] [0xc820094358 0xc820094388 0xc8200943b8] [0xc820094380 0xc8200943a8] [0xa975d0 0xa975d0] 0xc820589f80}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"resourceVersion\":\"7550\", \"creationTimestamp\":\"2016-12-02T11:25:59Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-jhqhz\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-jhqhz/services/redis-master\", \"uid\":\"19d0678f-b882-11e6-ad9e-42010af00015\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.245.240\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.158.230 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-jhqhz -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"resourceVersion":"7550", "creationTimestamp":"2016-12-02T11:25:59Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-jhqhz", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-jhqhz/services/redis-master", "uid":"19d0678f-b882-11e6-ad9e-42010af00015"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.245.240", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8208586a0 exit status 1 <nil> true [0xc820094358 0xc820094388 0xc8200943b8] [0xc820094358 0xc820094388 0xc8200943b8] [0xc820094380 0xc8200943a8] [0xa975d0 0xa975d0] 0xc820589f80}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"resourceVersion":"7550", "creationTimestamp":"2016-12-02T11:25:59Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-jhqhz", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-jhqhz/services/redis-master", "uid":"19d0678f-b882-11e6-ad9e-42010af00015"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.245.240", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc821762340>: {
        s: "error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26509 #26834 #29780 #35355

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Expected error:
    <errors.aggregate | len:3, cap:4>: [
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-b5b1d84e-ib6v\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-b5b1d84e-k825\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-b5b1d84e-zsi0\" is not ready yet",
        },
    ]
    [Resource usage on node "gke-jenkins-e2e-default-pool-b5b1d84e-ib6v" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-b5b1d84e-k825" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-b5b1d84e-zsi0" is not ready yet]
not to have occurred

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc8227b15d0>: {
        s: "failed to wait for pods responding: pod with UID 037551a5-b893-11e6-ad9e-42010af00015 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-kgd1t/pods 20381} [{{ } {my-hostname-delete-node-66sv5 my-hostname-delete-node- e2e-tests-resize-nodes-kgd1t /api/v1/namespaces/e2e-tests-resize-nodes-kgd1t/pods/my-hostname-delete-node-66sv5 0374f047-b893-11e6-ad9e-42010af00015 20093 0 {2016-12-02 05:27:03 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-kgd1t\",\"name\":\"my-hostname-delete-node\",\"uid\":\"037331a3-b893-11e6-ad9e-42010af00015\",\"apiVersion\":\"v1\",\"resourceVersion\":\"20079\"}}\n] [{v1 ReplicationController my-hostname-delete-node 037331a3-b893-11e6-ad9e-42010af00015 0xc8226c33f7}] []} {[{default-token-w4jt5 {<nil> <nil> <nil> <nil> <nil> 0xc8226c5e60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-w4jt5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8226c3500 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-b5b1d84e-zsi0 0xc82175ce40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:27:03 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:27:04 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:27:03 -0800 PST}  }]   10.240.0.4 10.124.3.110 2016-12-02T05:27:03-08:00 [] [{my-hostname-delete-node {<nil> 0xc820fbcce0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ecf2fe6dad0145102d7fef7a38f9892302992135201c6033e5255d3e56e0e857}]}} {{ } {my-hostname-delete-node-fx038 my-hostname-delete-node- e2e-tests-resize-nodes-kgd1t /api/v1/namespaces/e2e-tests-resize-nodes-kgd1t/pods/my-hostname-delete-node-fx038 36055274-b893-11e6-ad9e-42010af00015 20229 0 {2016-12-02 05:28:28 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-kgd1t\",\"name\":\"my-hostname-delete-node\",\"uid\":\"037331a3-b893-11e6-ad9e-42010af00015\",\"apiVersion\":\"v1\",\"resourceVersion\":\"20173\"}}\n] [{v1 ReplicationController my-hostname-delete-node 037331a3-b893-11e6-ad9e-42010af00015 0xc8226c37a7}] []} {[{default-token-w4jt5 {<nil> <nil> <nil> <nil> <nil> 0xc8226c5ec0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-w4jt5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8226c38a0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-b5b1d84e-zsi0 0xc82175cf80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:28:28 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-02 05:28:29 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:28:28 -0800 PST}  }]   10.240.0.4 10.124.3.111 2016-12-02T05:28:28-08:00 [] [{my-hostname-delete-node {<nil> 0xc820fbcd00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://18111a3b59a17e63fb86561cb2d3c870a9417ef770398936d4374a688c902c78}]}} {{ } {my-hostname-delete-node-hz5zz my-hostname-delete-node- e2e-tests-resize-nodes-kgd1t /api/v1/namespaces/e2e-tests-resize-nodes-kgd1t/pods/my-hostname-delete-node-hz5zz 36007ea7-b893-11e6-ad9e-42010af00015 20231 0 {2016-12-02 05:28:28 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-kgd1t\",\"name\":\"my-hostname-delete-node\",\"uid\":\"037331a3-b893-11e6-ad9e-42010af00015\",\"apiVersion\":\"v1\",\"resourceVersion\":\"20173\"}}\n] [{v1 ReplicationController my-hostname-delete-node 037331a3-b893-11e6-ad9e-42010af00015 0xc8226c3b47}] []} {[{default-token-w4jt5 {<nil> <nil> <nil> <nil> <nil> 0xc8226c5f20 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-w4jt5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8226c3c50 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-b5b1d84e-k825 0xc82175d080 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:28:28 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:28:30 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:28:28 -0800 PST}  }]   10.240.0.2 10.124.2.34 2016-12-02T05:28:28-08:00 [] [{my-hostname-delete-node {<nil> 0xc820fbcd20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f77652650d9c4f6eec87cd218204dd3c7029cc9571cc0e13cda92a20703da83f}]}}]}",
    }
    failed to wait for pods responding: pod with UID 037551a5-b893-11e6-ad9e-42010af00015 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-kgd1t/pods 20381} [{{ } {my-hostname-delete-node-66sv5 my-hostname-delete-node- e2e-tests-resize-nodes-kgd1t /api/v1/namespaces/e2e-tests-resize-nodes-kgd1t/pods/my-hostname-delete-node-66sv5 0374f047-b893-11e6-ad9e-42010af00015 20093 0 {2016-12-02 05:27:03 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-kgd1t","name":"my-hostname-delete-node","uid":"037331a3-b893-11e6-ad9e-42010af00015","apiVersion":"v1","resourceVersion":"20079"}}
    ] [{v1 ReplicationController my-hostname-delete-node 037331a3-b893-11e6-ad9e-42010af00015 0xc8226c33f7}] []} {[{default-token-w4jt5 {<nil> <nil> <nil> <nil> <nil> 0xc8226c5e60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-w4jt5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8226c3500 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-b5b1d84e-zsi0 0xc82175ce40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:27:03 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:27:04 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:27:03 -0800 PST}  }]   10.240.0.4 10.124.3.110 2016-12-02T05:27:03-08:00 [] [{my-hostname-delete-node {<nil> 0xc820fbcce0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ecf2fe6dad0145102d7fef7a38f9892302992135201c6033e5255d3e56e0e857}]}} {{ } {my-hostname-delete-node-fx038 my-hostname-delete-node- e2e-tests-resize-nodes-kgd1t /api/v1/namespaces/e2e-tests-resize-nodes-kgd1t/pods/my-hostname-delete-node-fx038 36055274-b893-11e6-ad9e-42010af00015 20229 0 {2016-12-02 05:28:28 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-kgd1t","name":"my-hostname-delete-node","uid":"037331a3-b893-11e6-ad9e-42010af00015","apiVersion":"v1","resourceVersion":"20173"}}
    ] [{v1 ReplicationController my-hostname-delete-node 037331a3-b893-11e6-ad9e-42010af00015 0xc8226c37a7}] []} {[{default-token-w4jt5 {<nil> <nil> <nil> <nil> <nil> 0xc8226c5ec0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-w4jt5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8226c38a0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-b5b1d84e-zsi0 0xc82175cf80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:28:28 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:28:29 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:28:28 -0800 PST}  }]   10.240.0.4 10.124.3.111 2016-12-02T05:28:28-08:00 [] [{my-hostname-delete-node {<nil> 0xc820fbcd00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://18111a3b59a17e63fb86561cb2d3c870a9417ef770398936d4374a688c902c78}]}} {{ } {my-hostname-delete-node-hz5zz my-hostname-delete-node- e2e-tests-resize-nodes-kgd1t /api/v1/namespaces/e2e-tests-resize-nodes-kgd1t/pods/my-hostname-delete-node-hz5zz 36007ea7-b893-11e6-ad9e-42010af00015 20231 0 {2016-12-02 05:28:28 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-kgd1t","name":"my-hostname-delete-node","uid":"037331a3-b893-11e6-ad9e-42010af00015","apiVersion":"v1","resourceVersion":"20173"}}
    ] [{v1 ReplicationController my-hostname-delete-node 037331a3-b893-11e6-ad9e-42010af00015 0xc8226c3b47}] []} {[{default-token-w4jt5 {<nil> <nil> <nil> <nil> <nil> 0xc8226c5f20 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-w4jt5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8226c3c50 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-b5b1d84e-k825 0xc82175d080 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:28:28 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:28:30 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-02 05:28:28 -0800 PST}  }]   10.240.0.2 10.124.2.34 2016-12-02T05:28:28-08:00 [] [{my-hostname-delete-node {<nil> 0xc820fbcd20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f77652650d9c4f6eec87cd218204dd3c7029cc9571cc0e13cda92a20703da83f}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070 #34383

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster/406/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Expected error:
    <errors.aggregate | len:3, cap:4>: [
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-85ac9718-u4j2\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-85ac9718-9rp1\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-85ac9718-f3ej\" is not ready yet",
        },
    ]
    [Resource usage on node "gke-jenkins-e2e-default-pool-85ac9718-u4j2" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-85ac9718-9rp1" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-85ac9718-f3ej" is not ready yet]
not to have occurred

Issues about this test specifically: #26784 #28384 #31935 #33023

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc82169fa50>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.151.43 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-rb875 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-rb875\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-rb875/services/redis-master\", \"uid\":\"084d0a46-b8ca-11e6-88e9-42010af0002c\", \"resourceVersion\":\"25947\", \"creationTimestamp\":\"2016-12-02T20:00:54Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.127.247.206\", \"type\":\"ClusterIP\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8207e7da0 exit status 1 <nil> true [0xc820dcc2a0 0xc820dcc2b8 0xc820dcc2d0] [0xc820dcc2a0 0xc820dcc2b8 0xc820dcc2d0] [0xc820dcc2b0 0xc820dcc2c8] [0xa975d0 0xa975d0] 0xc822b1f080}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-rb875\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-rb875/services/redis-master\", \"uid\":\"084d0a46-b8ca-11e6-88e9-42010af0002c\", \"resourceVersion\":\"25947\", \"creationTimestamp\":\"2016-12-02T20:00:54Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.127.247.206\", \"type\":\"ClusterIP\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.151.43 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-rb875 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-rb875", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-rb875/services/redis-master", "uid":"084d0a46-b8ca-11e6-88e9-42010af0002c", "resourceVersion":"25947", "creationTimestamp":"2016-12-02T20:00:54Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.127.247.206", "type":"ClusterIP"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8207e7da0 exit status 1 <nil> true [0xc820dcc2a0 0xc820dcc2b8 0xc820dcc2d0] [0xc820dcc2a0 0xc820dcc2b8 0xc820dcc2d0] [0xc820dcc2b0 0xc820dcc2c8] [0xa975d0 0xa975d0] 0xc822b1f080}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-rb875", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-rb875/services/redis-master", "uid":"084d0a46-b8ca-11e6-88e9-42010af0002c", "resourceVersion":"25947", "creationTimestamp":"2016-12-02T20:00:54Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.127.247.206", "type":"ClusterIP"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
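
Triage note: the jsonpath lookup fails because the service the test reads back is `type: ClusterIP` (see the object dump above), so `.spec.ports[0].nodePort` simply is not present on the object. A minimal guarded lookup for manual triage looks roughly like this (the namespace is a placeholder, not one from this run, and this is not the suite's own helper):

```bash
#!/usr/bin/env bash
# Minimal sketch: why the jsonpath query has nothing to read, and a guarded
# lookup. The namespace below is a placeholder, not one from the test run.
set -euo pipefail

NS="e2e-tests-kubectl-example"   # hypothetical namespace
SVC="redis-master"

# The service the test read back was type ClusterIP (see the object dump above).
TYPE=$(kubectl get service "$SVC" --namespace="$NS" -o jsonpath='{.spec.type}')
echo "service type: $TYPE"

if [ "$TYPE" = "NodePort" ] || [ "$TYPE" = "LoadBalancer" ]; then
  # Only these service types carry .spec.ports[*].nodePort.
  kubectl get service "$SVC" --namespace="$NS" -o jsonpath='{.spec.ports[0].nodePort}'
  echo
else
  echo "no nodePort on a $TYPE service; the jsonpath lookup errors out" >&2
fi
```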

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:70
Expected error:
    <*errors.errorString | 0xc822ae0740>: {
        s: "error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment test-rollover-deployment status to match expectation: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26509 #26834 #29780 #35355
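
Triage note: the suite timed out waiting for `test-rollover-deployment` to reach the expected status. For poking at a live cluster, something along these lines shows where the rollout is stuck (a sketch; the namespace is a placeholder):

```bash
#!/usr/bin/env bash
# Triage sketch for a stalled deployment rollout. Namespace is a placeholder.
set -euo pipefail

NS="e2e-tests-deployment-example"   # hypothetical namespace
DEPLOY="test-rollover-deployment"

# Replica counters the e2e expectation is built on.
kubectl get deployment "$DEPLOY" --namespace="$NS" \
  -o jsonpath='{.status.replicas} total, {.status.updatedReplicas} updated, {.status.availableReplicas} available'
echo

# Events and ReplicaSet history usually show why pods are not coming up.
kubectl describe deployment "$DEPLOY" --namespace="$NS"
kubectl get replicasets --namespace="$NS" -o wide
kubectl get pods --namespace="$NS" -o wide
```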

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070
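
Triage note: the Job and V1Job failures above are the same symptom: the test gave up waiting for the job to report a failed pod. A hand-rolled version of that wait, for a live cluster, would look roughly like this (job name and namespace are placeholders, not the suite's own helper):

```bash
#!/usr/bin/env bash
# Sketch: poll a Job's status counters until it reports a failed pod.
# Names are placeholders; this is not the e2e suite's own wait helper.
set -euo pipefail

NS="e2e-tests-job-example"   # hypothetical namespace
JOB="foo"                    # hypothetical job name

for attempt in $(seq 1 30); do
  ACTIVE=$(kubectl get job "$JOB" --namespace="$NS" -o jsonpath='{.status.active}')
  FAILED=$(kubectl get job "$JOB" --namespace="$NS" -o jsonpath='{.status.failed}')
  echo "attempt $attempt: active=${ACTIVE:-0} failed=${FAILED:-0}"
  if [ "${FAILED:-0}" -ge 1 ]; then
    echo "job reported a failed pod"
    exit 0
  fi
  sleep 10
done
echo "timed out waiting for the job to fail" >&2
exit 1
```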

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:420
Expected
    <string>: 
to equal
    <string>: 226801073847695080

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Expected error:
    <errors.aggregate | len:3, cap:4>: [
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-85ac9718-9rp1\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-85ac9718-f3ej\" is not ready yet",
        },
        {
            s: "Resource usage on node \"gke-jenkins-e2e-default-pool-85ac9718-u4j2\" is not ready yet",
        },
    ]
    [Resource usage on node "gke-jenkins-e2e-default-pool-85ac9718-9rp1" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-85ac9718-f3ej" is not ready yet, Resource usage on node "gke-jenkins-e2e-default-pool-85ac9718-u4j2" is not ready yet]
not to have occurred

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster/407/

Multiple broken tests:

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: kubectl version {e2e.go}

exit status 1

Issues about this test specifically: #34378

Failed: IsUp {e2e.go}

exit status 1

Issues about this test specifically: #33702

Failed: DiffResources {e2e.go}

Error: 3 leaked resources
+k8s-fw-a702cda81b8e311e685c842010af0002  jenkins-e2e  0.0.0.0/0     tcp:80                                  gke-jenkins-e2e-abe1d09a-node
+a702cda81b8e311e685c842010af0002  us-central1  104.154.19.196  TCP          us-central1/targetPools/a702cda81b8e311e685c842010af0002
+a702cda81b8e311e685c842010af0002  us-central1

Issues about this test specifically: #33373 #33416 #34060
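
Triage note: DiffResources is reporting a leaked firewall rule, forwarding rule, and target pool from a load-balanced service that was not torn down. If they have to be removed by hand, roughly the following works (resource names are copied from the diff above; the project id is a placeholder, and the forwarding rule has to go before the target pool it references):

```bash
#!/usr/bin/env bash
# Manual cleanup sketch for the leaked GCE resources above.
# Resource names come from the DiffResources output; the project is a placeholder.
set -euo pipefail

PROJECT="my-e2e-project"   # placeholder project id
REGION="us-central1"

gcloud compute firewall-rules delete k8s-fw-a702cda81b8e311e685c842010af0002 \
  --project="$PROJECT" --quiet
# Delete the forwarding rule before the target pool it points at.
gcloud compute forwarding-rules delete a702cda81b8e311e685c842010af0002 \
  --project="$PROJECT" --region="$REGION" --quiet
gcloud compute target-pools delete a702cda81b8e311e685c842010af0002 \
  --project="$PROJECT" --region="$REGION" --quiet
```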

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc420c2c150>: {
        s: "error running gcloud [container clusters --project=gke-up-g1-3-g1-5-up-clu --zone=us-central1-a upgrade jenkins-e2e --master --cluster-version=1.5.0-beta.2.52+116e7fb5b906d1 --quiet]; got error exit status 1, stdout \"\", stderr \"Upgrading jenkins-e2e...\\n.....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.\\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\\n name: u'operation-1480719826654-4dd8059e'\\n operationType: OperationTypeValueValuesEnum(UPGRADE_MASTER, 3)\\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/1073366066365/zones/us-central1-a/operations/operation-1480719826654-4dd8059e'\\n status: StatusValueValuesEnum(DONE, 3)\\n statusMessage: u'Timed out waiting for cluster initialization. Cluster API may not be available.'\\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/1073366066365/zones/us-central1-a/clusters/jenkins-e2e'\\n zone: u'us-central1-a'>] finished with error: Timed out waiting for cluster initialization. Cluster API may not be available.\\n\"",
    }
    error running gcloud [container clusters --project=gke-up-g1-3-g1-5-up-clu --zone=us-central1-a upgrade jenkins-e2e --master --cluster-version=1.5.0-beta.2.52+116e7fb5b906d1 --quiet]; got error exit status 1, stdout "", stderr "Upgrading jenkins-e2e...\n.................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\n name: u'operation-1480719826654-4dd8059e'\n operationType: OperationTypeValueValuesEnum(UPGRADE_MASTER, 3)\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/1073366066365/zones/us-central1-a/operations/operation-1480719826654-4dd8059e'\n status: StatusValueValuesEnum(DONE, 3)\n statusMessage: u'Timed out waiting for cluster initialization. Cluster API may not be available.'\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/1073366066365/zones/us-central1-a/clusters/jenkins-e2e'\n zone: u'us-central1-a'>] finished with error: Timed out waiting for cluster initialization. Cluster API may not be available.\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:91
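
Triage note: the master upgrade was accepted but gcloud timed out waiting for the cluster API to come back. The operation named in the error can be inspected directly; a rough sketch follows (project, zone, cluster, and operation name are taken from the log, and the job actually talks to a sandboxed Container Engine endpoint, so the same API override would be needed):

```bash
#!/usr/bin/env bash
# Sketch: inspect the stuck master-upgrade operation from the error above.
# Assumes credentials for the test project; the e2e job uses a sandboxed
# API endpoint, so point gcloud at the same endpoint before running this.
set -euo pipefail

PROJECT="gke-up-g1-3-g1-5-up-clu"
ZONE="us-central1-a"

# Final status and statusMessage of the upgrade operation.
gcloud container operations describe operation-1480719826654-4dd8059e \
  --project="$PROJECT" --zone="$ZONE"

# Whether the master came back, and at what version.
gcloud container clusters describe jenkins-e2e \
  --project="$PROJECT" --zone="$ZONE" \
  --format='value(status, currentMasterVersion, endpoint)'
```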

@k8s-github-robot
Author

@fejta fejta closed this as completed Dec 7, 2016