
ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master: broken test run #38482

Closed
k8s-github-robot opened this issue Dec 9, 2016 · 220 comments
Labels: kind/flake, sig/cluster-lifecycle, sig/node

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/21/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  7 14:50:17.857: Memory usage exceeding limits:
node gke-bootstrap-e2e-default-pool-ed6c28d6-c2di:
  container "runtime": expected RSS memory (MB) < 314572800; got 522596352
node gke-bootstrap-e2e-default-pool-ed6c28d6-jzii:
  container "runtime": expected RSS memory (MB) < 314572800; got 541659136
node gke-bootstrap-e2e-default-pool-ed6c28d6-zlm6:
  container "runtime": expected RSS memory (MB) < 314572800; got 522506240

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
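
The limit in this message is in bytes despite the "(MB)" label: 314572800 = 300 * 1024 * 1024, i.e. a 300 MiB cap on the docker runtime container's RSS, and every node here is roughly 200 MB over it. A minimal Go sketch of the comparison this check performs; the names are hypothetical and the real logic lives in test/e2e/kubelet_perf.go:

    package main

    import "fmt"

    // verifyMemoryLimits sketches the per-container RSS check: compare each
    // container's resident set size against a byte cap and collect failures.
    func verifyMemoryLimits(node string, rssBytes, capBytes map[string]uint64) []string {
        var errs []string
        for container, rss := range rssBytes {
            limit, ok := capBytes[container]
            if !ok {
                continue // no expectation configured for this container
            }
            if rss > limit {
                // Same shape as the failure above; the cap is in bytes
                // (300 * 1024 * 1024 = 314572800) despite the "(MB)" label.
                errs = append(errs, fmt.Sprintf("node %s:\n container %q: expected RSS memory (MB) < %d; got %d",
                    node, container, limit, rss))
            }
        }
        return errs
    }

    func main() {
        caps := map[string]uint64{"runtime": 300 * 1024 * 1024}
        usage := map[string]uint64{"runtime": 522596352} // value from the first node above
        for _, e := range verifyMemoryLimits("gke-bootstrap-e2e-default-pool-ed6c28d6-c2di", usage, caps) {
            fmt.Println(e)
        }
    }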

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821ded570>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.55.111 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-fwc27 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"resourceVersion\":\"36679\", \"creationTimestamp\":\"2016-12-08T00:25:13Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-fwc27\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-fwc27/services/redis-master\", \"uid\":\"c90ff239-bcdc-11e6-8404-42010af0002c\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.247.56\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820f99cc0 exit status 1 <nil> true [0xc8222cc5b8 0xc8222cc5d0 0xc8222cc5e8] [0xc8222cc5b8 0xc8222cc5d0 0xc8222cc5e8] [0xc8222cc5c8 0xc8222cc5e0] [0xa97590 0xa97590] 0xc82144d260}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"resourceVersion\":\"36679\", \"creationTimestamp\":\"2016-12-08T00:25:13Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-fwc27\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-fwc27/services/redis-master\", \"uid\":\"c90ff239-bcdc-11e6-8404-42010af0002c\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.247.56\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.55.111 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-fwc27 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"resourceVersion":"36679", "creationTimestamp":"2016-12-08T00:25:13Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-fwc27", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-fwc27/services/redis-master", "uid":"c90ff239-bcdc-11e6-8404-42010af0002c"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.247.56", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820f99cc0 exit status 1 <nil> true [0xc8222cc5b8 0xc8222cc5d0 0xc8222cc5e8] [0xc8222cc5b8 0xc8222cc5d0 0xc8222cc5e8] [0xc8222cc5c8 0xc8222cc5e0] [0xa97590 0xa97590] 0xc82144d260}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"resourceVersion":"36679", "creationTimestamp":"2016-12-08T00:25:13Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-fwc27", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-fwc27/services/redis-master", "uid":"c90ff239-bcdc-11e6-8404-42010af0002c"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.247.56", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
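
The jsonpath failure is explained by the object dumped alongside it: the service came back as "type":"ClusterIP", and only NodePort and LoadBalancer services carry .spec.ports[*].nodePort, so the template has nothing to resolve. In other words, the apply appears to have dropped the service back to ClusterIP; kubectl's query itself is fine. A sketch that reproduces the error with the same jsonpath package kubectl uses (k8s.io/client-go/util/jsonpath); the inline map is a trimmed, hypothetical stand-in for the service in the log:

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/util/jsonpath"
    )

    func main() {
        // Trimmed stand-in for the ClusterIP service dumped in the failure above.
        svc := map[string]interface{}{
            "spec": map[string]interface{}{
                "type": "ClusterIP",
                "ports": []interface{}{
                    // No "nodePort" key: ClusterIP services never get one.
                    map[string]interface{}{"protocol": "TCP", "port": 6379, "targetPort": "redis-server"},
                },
            },
        }

        jp := jsonpath.New("nodePort")
        if err := jp.Parse("{.spec.ports[0].nodePort}"); err != nil {
            fmt.Fprintln(os.Stderr, "parse:", err)
            return
        }
        // Execute fails with "nodePort is not found", the same error the
        // kubectl run above surfaces.
        if err := jp.Execute(os.Stdout, svc); err != nil {
            fmt.Fprintln(os.Stderr, "error executing jsonpath:", err)
        }
    }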

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954
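
"timed out waiting for the condition" is the stock wait.ErrWaitTimeout from Kubernetes' wait utilities, so on its own it only says a polled condition never became true before the deadline; the informative part is which test polled it (here, waiting for the init-container pod to reach the expected state). A minimal sketch of the polling pattern that produces it; the import path is the current k8s.io/apimachinery location, while 1.5-era code had the package under k8s.io/kubernetes/pkg/util/wait:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll every second for five seconds. If the condition never returns
        // true, Poll returns wait.ErrWaitTimeout, whose text is exactly
        // "timed out waiting for the condition".
        err := wait.Poll(1*time.Second, 5*time.Second, func() (bool, error) {
            podReady := false // stand-in for "pod reached the expected phase"
            return podReady, nil
        })
        if err != nil {
            fmt.Println(err) // timed out waiting for the condition
        }
    }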

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131
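
The bare "Expected <bool>: false / to be true" is what Gomega prints when a test asserts a raw boolean, which is why these init-container failures carry no detail beyond the pods.go line number. A sketch of the assertion shape that produces this output, using a recent Gomega (the e2e suite of this era used the package-level Expect, with the same output):

    package main

    import (
        "fmt"

        "github.com/onsi/gomega"
    )

    func main() {
        // Route Gomega failures to stdout so the message is visible outside
        // a test binary.
        g := gomega.NewGomega(func(message string, _ ...int) {
            fmt.Println(message)
        })
        initContainersInvoked := false // stand-in for the condition pods.go checks
        // Prints:
        //   Expected
        //       <bool>: false
        //   to be true
        g.Expect(initContainersInvoked).To(gomega.BeTrue())
    }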

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Dec  7 11:11:49.535: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2562

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc8214470a0>: {
        s: "service verification failed for: 10.99.244.137\nexpected [service1-8f5jj service1-kj44d service1-mm2p0]\nreceived [service1-kj44d service1-mm2p0]",
    }
    service verification failed for: 10.99.244.137
    expected [service1-8f5jj service1-kj44d service1-mm2p0]
    received [service1-kj44d service1-mm2p0]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298
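
These "service verification failed" errors share one pattern across the runs in this issue: the test repeatedly hits the service IP and records which backend pods answer, and exactly one expected pod name never shows up (service1-8f5jj here). That points at a single endpoint that never became ready or was never wired into the service, not at the VIP itself being broken. A sketch of the final expected-vs-received comparison; the helper is hypothetical and the real check lives in test/e2e/service.go:

    package main

    import (
        "fmt"
        "sort"
    )

    // missingBackends reports which expected pod names never answered via the
    // service IP. Only the set comparison is sketched here; the real test also
    // performs the repeated HTTP polling that fills `received`.
    func missingBackends(expected, received []string) []string {
        seen := make(map[string]bool, len(received))
        for _, name := range received {
            seen[name] = true
        }
        var missing []string
        for _, name := range expected {
            if !seen[name] {
                missing = append(missing, name)
            }
        }
        sort.Strings(missing)
        return missing
    }

    func main() {
        expected := []string{"service1-8f5jj", "service1-kj44d", "service1-mm2p0"}
        received := []string{"service1-kj44d", "service1-mm2p0"}
        fmt.Println("missing:", missingBackends(expected, received)) // [service1-8f5jj]
    }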

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc820860060>: {
        s: "service verification failed for: 10.99.241.158\nexpected [service2-9zdhp service2-jcn4s service2-z18sj]\nreceived [service2-9zdhp service2-z18sj]",
    }
    service verification failed for: 10.99.241.158
    expected [service2-9zdhp service2-jcn4s service2-z18sj]
    received [service2-9zdhp service2-z18sj]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

@k8s-github-robot added the kind/flake and priority/P2 labels on Dec 9, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/22/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  7 20:52:34.181: Memory usage exceeding limits:
node gke-bootstrap-e2e-default-pool-f1b5e9e1-pnwx:
  container "runtime": expected RSS memory (MB) < 314572800; got 527097856
node gke-bootstrap-e2e-default-pool-f1b5e9e1-xoo5:
  container "runtime": expected RSS memory (MB) < 314572800; got 531619840
node gke-bootstrap-e2e-default-pool-f1b5e9e1-mavj:
  container "runtime": expected RSS memory (MB) < 314572800; got 513667072

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
    <*errors.errorString | 0xc821384b10>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:725
Dec  7 19:43:19.360: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready

Issues about this test specifically: #26134

Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Dec  7 22:16:39.140: Couldn't delete ns "e2e-tests-nslifetest-83-r0fcp": Operation cannot be fulfilled on namespaces "e2e-tests-nslifetest-83-r0fcp": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.

Issues about this test specifically: #27957
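
That message is the apiserver refusing a write while the namespace is already in the Terminating phase: deletion only completes once every object in the namespace has been finalized and removed. The robust pattern is to issue the delete once and then poll for the namespace to disappear rather than retrying the delete. A sketch of that wait with client-go; assumptions are a current, context-taking client-go API and an already-constructed clientset:

    package main

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNamespaceGone polls until a GET on the namespace returns NotFound,
    // i.e. finalization has completed and the namespace has been purged.
    func waitForNamespaceGone(c kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.Poll(2*time.Second, timeout, func() (bool, error) {
            _, err := c.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return true, nil // fully removed
            }
            return false, nil // still Terminating (or transient error): keep polling
        })
    }

    func main() {
        // Building a real clientset needs a kubeconfig and a live cluster, so
        // it is omitted; this file exists to show the wait pattern compiling.
    }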

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820706740>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.187.226 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-85ffq -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"resourceVersion\":\"7758\", \"creationTimestamp\":\"2016-12-08T02:56:17Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-85ffq\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-85ffq/services/redis-master\", \"uid\":\"e3c0283a-bcf1-11e6-a73f-42010af0001a\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.137\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8208767c0 exit status 1 <nil> true [0xc8201967d0 0xc820196808 0xc820196828] [0xc8201967d0 0xc820196808 0xc820196828] [0xc8201967f0 0xc820196820] [0xa97590 0xa97590] 0xc820fb4de0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"resourceVersion\":\"7758\", \"creationTimestamp\":\"2016-12-08T02:56:17Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-85ffq\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-85ffq/services/redis-master\", \"uid\":\"e3c0283a-bcf1-11e6-a73f-42010af0001a\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.137\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.187.226 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-85ffq -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"resourceVersion":"7758", "creationTimestamp":"2016-12-08T02:56:17Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-85ffq", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-85ffq/services/redis-master", "uid":"e3c0283a-bcf1-11e6-a73f-42010af0001a"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.137", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8208767c0 exit status 1 <nil> true [0xc8201967d0 0xc820196808 0xc820196828] [0xc8201967d0 0xc820196808 0xc820196828] [0xc8201967f0 0xc820196820] [0xa97590 0xa97590] 0xc820fb4de0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"resourceVersion":"7758", "creationTimestamp":"2016-12-08T02:56:17Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-85ffq", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-85ffq/services/redis-master", "uid":"e3c0283a-bcf1-11e6-a73f-42010af0001a"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.137", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc820a7ff30>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-07 22:36:57 -0800 PST} FinishedAt:{Time:2016-12-07 22:37:07 -0800 PST} ContainerID:docker://0c5f77720a22e16dc63be293d24ba369b22127fbc479a5453fc65460e4d2843c}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-07 22:36:57 -0800 PST} FinishedAt:{Time:2016-12-07 22:37:07 -0800 PST} ContainerID:docker://0c5f77720a22e16dc63be293d24ba369b22127fbc479a5453fc65460e4d2843c}
not to have occurred

Issues about this test specifically: #30131 #31402

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/23/

Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820787a40>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.70.250 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-6gxjt -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.246.106\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-6gxjt/services/redis-master\", \"uid\":\"1c56605e-bd24-11e6-b686-42010af00038\", \"resourceVersion\":\"3413\", \"creationTimestamp\":\"2016-12-08T08:55:47Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-6gxjt\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82086ed00 exit status 1 <nil> true [0xc82157e508 0xc82157e520 0xc82157e538] [0xc82157e508 0xc82157e520 0xc82157e538] [0xc82157e518 0xc82157e530] [0xa97590 0xa97590] 0xc821a223c0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.246.106\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-6gxjt/services/redis-master\", \"uid\":\"1c56605e-bd24-11e6-b686-42010af00038\", \"resourceVersion\":\"3413\", \"creationTimestamp\":\"2016-12-08T08:55:47Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-6gxjt\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.70.250 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-6gxjt -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.246.106", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-6gxjt/services/redis-master", "uid":"1c56605e-bd24-11e6-b686-42010af00038", "resourceVersion":"3413", "creationTimestamp":"2016-12-08T08:55:47Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-6gxjt"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82086ed00 exit status 1 <nil> true [0xc82157e508 0xc82157e520 0xc82157e538] [0xc82157e508 0xc82157e520 0xc82157e538] [0xc82157e518 0xc82157e530] [0xa97590 0xa97590] 0xc821a223c0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.246.106", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-6gxjt/services/redis-master", "uid":"1c56605e-bd24-11e6-b686-42010af00038", "resourceVersion":"3413", "creationTimestamp":"2016-12-08T08:55:47Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-6gxjt"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  8 05:00:53.459: Memory usage exceeding limits:
node gke-bootstrap-e2e-default-pool-847b14e8-1dgd:
  container "runtime": expected RSS memory (MB) < 314572800; got 533291008
node gke-bootstrap-e2e-default-pool-847b14e8-4bex:
  container "runtime": expected RSS memory (MB) < 314572800; got 520409088
node gke-bootstrap-e2e-default-pool-847b14e8-xvan:
  container "runtime": expected RSS memory (MB) < 314572800; got 540319744

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc824865510>: {
        s: "error while stopping RC: service2: Scaling the resource failed with: Get https://104.197.70.250/api/v1/namespaces/e2e-tests-services-q8dww/replicationcontrollers/service2: dial tcp 104.197.70.250:443: getsockopt: connection refused; Current resource version Unknown",
    }
    error while stopping RC: service2: Scaling the resource failed with: Get https://104.197.70.250/api/v1/namespaces/e2e-tests-services-q8dww/replicationcontrollers/service2: dial tcp 104.197.70.250:443: getsockopt: connection refused; Current resource version Unknown
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/24/

Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  8 10:35:41.806: Memory usage exceeding limits:
node gke-bootstrap-e2e-default-pool-6442bb47-0u66:
  container "runtime": expected RSS memory (MB) < 314572800; got 531984384
node gke-bootstrap-e2e-default-pool-6442bb47-36bt:
  container "runtime": expected RSS memory (MB) < 314572800; got 521142272
node gke-bootstrap-e2e-default-pool-6442bb47-h0yk:
  container "runtime": expected RSS memory (MB) < 314572800; got 541286400

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc8212d8d40>: {
        s: "failed to wait for pods responding: pod with UID f04b1e6d-bd69-11e6-937f-42010af00017 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-wjlp8/pods 14477} [{{ } {my-hostname-delete-node-gr2gs my-hostname-delete-node- e2e-tests-resize-nodes-wjlp8 /api/v1/namespaces/e2e-tests-resize-nodes-wjlp8/pods/my-hostname-delete-node-gr2gs f04b3b3e-bd69-11e6-937f-42010af00017 14166 0 {2016-12-08 09:15:38 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-wjlp8\",\"name\":\"my-hostname-delete-node\",\"uid\":\"f0491548-bd69-11e6-937f-42010af00017\",\"apiVersion\":\"v1\",\"resourceVersion\":\"14151\"}}\n] [{v1 ReplicationController my-hostname-delete-node f0491548-bd69-11e6-937f-42010af00017 0xc8213b8f57}] []} {[{default-token-r2bqv {<nil> <nil> <nil> <nil> <nil> 0xc82113b980 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-r2bqv true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8213b9050 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-6442bb47-0u66 0xc820f4ccc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:15:38 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:15:39 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:15:38 -0800 PST}  }]   10.240.0.3 10.96.1.3 2016-12-08T09:15:38-08:00 [] [{my-hostname-delete-node {<nil> 0xc82152c300 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://366bf6601ece565c2800c19d6061a002639051bb1350de0a065e465a0f409f3b}]}} {{ } {my-hostname-delete-node-kk66r my-hostname-delete-node- e2e-tests-resize-nodes-wjlp8 /api/v1/namespaces/e2e-tests-resize-nodes-wjlp8/pods/my-hostname-delete-node-kk66r f04b50fd-bd69-11e6-937f-42010af00017 14164 0 {2016-12-08 09:15:38 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-wjlp8\",\"name\":\"my-hostname-delete-node\",\"uid\":\"f0491548-bd69-11e6-937f-42010af00017\",\"apiVersion\":\"v1\",\"resourceVersion\":\"14151\"}}\n] [{v1 ReplicationController my-hostname-delete-node f0491548-bd69-11e6-937f-42010af00017 0xc8213b9357}] []} {[{default-token-r2bqv {<nil> <nil> <nil> <nil> <nil> 0xc82113b9e0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-r2bqv true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8213b9560 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-6442bb47-0u66 0xc820f4cd80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:15:38 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-08 09:15:39 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:15:38 -0800 PST}  }]   10.240.0.3 10.96.1.4 2016-12-08T09:15:38-08:00 [] [{my-hostname-delete-node {<nil> 0xc82152c320 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://07cbf791f774bb0150f44432a1831a17ca6f282a4529a2452617e6774e827c6b}]}} {{ } {my-hostname-delete-node-kt6l5 my-hostname-delete-node- e2e-tests-resize-nodes-wjlp8 /api/v1/namespaces/e2e-tests-resize-nodes-wjlp8/pods/my-hostname-delete-node-kt6l5 23af174e-bd6a-11e6-937f-42010af00017 14328 0 {2016-12-08 09:17:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-wjlp8\",\"name\":\"my-hostname-delete-node\",\"uid\":\"f0491548-bd69-11e6-937f-42010af00017\",\"apiVersion\":\"v1\",\"resourceVersion\":\"14241\"}}\n] [{v1 ReplicationController my-hostname-delete-node f0491548-bd69-11e6-937f-42010af00017 0xc8213b98f7}] []} {[{default-token-r2bqv {<nil> <nil> <nil> <nil> <nil> 0xc82113ba40 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-r2bqv true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8213b99f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-6442bb47-h0yk 0xc820f4ce40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:17:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:17:05 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:17:04 -0800 PST}  }]   10.240.0.4 10.96.2.4 2016-12-08T09:17:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc82152c340 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://801d4a37da56f97332496c14bed3dd35f87194726fb27d2bc1d25ff20135c35c}]}}]}",
    }
    failed to wait for pods responding: pod with UID f04b1e6d-bd69-11e6-937f-42010af00017 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-wjlp8/pods 14477} [{{ } {my-hostname-delete-node-gr2gs my-hostname-delete-node- e2e-tests-resize-nodes-wjlp8 /api/v1/namespaces/e2e-tests-resize-nodes-wjlp8/pods/my-hostname-delete-node-gr2gs f04b3b3e-bd69-11e6-937f-42010af00017 14166 0 {2016-12-08 09:15:38 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-wjlp8","name":"my-hostname-delete-node","uid":"f0491548-bd69-11e6-937f-42010af00017","apiVersion":"v1","resourceVersion":"14151"}}
    ] [{v1 ReplicationController my-hostname-delete-node f0491548-bd69-11e6-937f-42010af00017 0xc8213b8f57}] []} {[{default-token-r2bqv {<nil> <nil> <nil> <nil> <nil> 0xc82113b980 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-r2bqv true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8213b9050 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-6442bb47-0u66 0xc820f4ccc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:15:38 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:15:39 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:15:38 -0800 PST}  }]   10.240.0.3 10.96.1.3 2016-12-08T09:15:38-08:00 [] [{my-hostname-delete-node {<nil> 0xc82152c300 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://366bf6601ece565c2800c19d6061a002639051bb1350de0a065e465a0f409f3b}]}} {{ } {my-hostname-delete-node-kk66r my-hostname-delete-node- e2e-tests-resize-nodes-wjlp8 /api/v1/namespaces/e2e-tests-resize-nodes-wjlp8/pods/my-hostname-delete-node-kk66r f04b50fd-bd69-11e6-937f-42010af00017 14164 0 {2016-12-08 09:15:38 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-wjlp8","name":"my-hostname-delete-node","uid":"f0491548-bd69-11e6-937f-42010af00017","apiVersion":"v1","resourceVersion":"14151"}}
    ] [{v1 ReplicationController my-hostname-delete-node f0491548-bd69-11e6-937f-42010af00017 0xc8213b9357}] []} {[{default-token-r2bqv {<nil> <nil> <nil> <nil> <nil> 0xc82113b9e0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-r2bqv true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8213b9560 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-6442bb47-0u66 0xc820f4cd80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:15:38 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:15:39 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:15:38 -0800 PST}  }]   10.240.0.3 10.96.1.4 2016-12-08T09:15:38-08:00 [] [{my-hostname-delete-node {<nil> 0xc82152c320 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://07cbf791f774bb0150f44432a1831a17ca6f282a4529a2452617e6774e827c6b}]}} {{ } {my-hostname-delete-node-kt6l5 my-hostname-delete-node- e2e-tests-resize-nodes-wjlp8 /api/v1/namespaces/e2e-tests-resize-nodes-wjlp8/pods/my-hostname-delete-node-kt6l5 23af174e-bd6a-11e6-937f-42010af00017 14328 0 {2016-12-08 09:17:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-wjlp8","name":"my-hostname-delete-node","uid":"f0491548-bd69-11e6-937f-42010af00017","apiVersion":"v1","resourceVersion":"14241"}}
    ] [{v1 ReplicationController my-hostname-delete-node f0491548-bd69-11e6-937f-42010af00017 0xc8213b98f7}] []} {[{default-token-r2bqv {<nil> <nil> <nil> <nil> <nil> 0xc82113ba40 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-r2bqv true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8213b99f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-6442bb47-h0yk 0xc820f4ce40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:17:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:17:05 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 09:17:04 -0800 PST}  }]   10.240.0.4 10.96.2.4 2016-12-08T09:17:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc82152c340 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://801d4a37da56f97332496c14bed3dd35f87194726fb27d2bc1d25ff20135c35c}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc824109690>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.37.143 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-8fdcr -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-8fdcr\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-8fdcr/services/redis-master\", \"uid\":\"9120ec4c-bd86-11e6-b423-42010af00017\", \"resourceVersion\":\"40331\", \"creationTimestamp\":\"2016-12-08T20:40:33Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.243.76\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820165100 exit status 1 <nil> true [0xc8214d2480 0xc8214d2498 0xc8214d24b0] [0xc8214d2480 0xc8214d2498 0xc8214d24b0] [0xc8214d2490 0xc8214d24a8] [0xa97590 0xa97590] 0xc821ad98c0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-8fdcr\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-8fdcr/services/redis-master\", \"uid\":\"9120ec4c-bd86-11e6-b423-42010af00017\", \"resourceVersion\":\"40331\", \"creationTimestamp\":\"2016-12-08T20:40:33Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.243.76\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.37.143 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-8fdcr -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-8fdcr", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-8fdcr/services/redis-master", "uid":"9120ec4c-bd86-11e6-b423-42010af00017", "resourceVersion":"40331", "creationTimestamp":"2016-12-08T20:40:33Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.243.76"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820165100 exit status 1 <nil> true [0xc8214d2480 0xc8214d2498 0xc8214d24b0] [0xc8214d2480 0xc8214d2498 0xc8214d24b0] [0xc8214d2490 0xc8214d24a8] [0xa97590 0xa97590] 0xc821ad98c0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-8fdcr", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-8fdcr/services/redis-master", "uid":"9120ec4c-bd86-11e6-b423-42010af00017", "resourceVersion":"40331", "creationTimestamp":"2016-12-08T20:40:33Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.243.76"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc8217c5d50>: {
        s: "service verification failed for: 10.99.246.13\nexpected [service1-fbv10 service1-mk0qc service1-z1hvl]\nreceived [service1-mk0qc service1-z1hvl]",
    }
    service verification failed for: 10.99.246.13
    expected [service1-fbv10 service1-mk0qc service1-z1hvl]
    received [service1-mk0qc service1-z1hvl]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/25/

Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc821332900>: {
        s: "service verification failed for: 10.99.245.64\nexpected [service3-5kc7w service3-5qxl6 service3-zn3l2]\nreceived [service3-5qxl6 service3-zn3l2]",
    }
    service verification failed for: 10.99.245.64
    expected [service3-5kc7w service3-5qxl6 service3-zn3l2]
    received [service3-5qxl6 service3-zn3l2]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820a4c200>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.28.75 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-5l903 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-08T23:28:39Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-5l903\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-5l903/services/redis-master\", \"uid\":\"0cc0bc3d-bd9e-11e6-a90a-42010af00031\", \"resourceVersion\":\"9896\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.99.240.100\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82140dbc0 exit status 1 <nil> true [0xc821616058 0xc821616070 0xc821616088] [0xc821616058 0xc821616070 0xc821616088] [0xc821616068 0xc821616080] [0xa97590 0xa97590] 0xc82199a720}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-08T23:28:39Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-5l903\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-5l903/services/redis-master\", \"uid\":\"0cc0bc3d-bd9e-11e6-a90a-42010af00031\", \"resourceVersion\":\"9896\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.99.240.100\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.28.75 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-5l903 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-08T23:28:39Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-5l903", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-5l903/services/redis-master", "uid":"0cc0bc3d-bd9e-11e6-a90a-42010af00031", "resourceVersion":"9896"}, "spec":map[string]interface {}{"clusterIP":"10.99.240.100", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82140dbc0 exit status 1 <nil> true [0xc821616058 0xc821616070 0xc821616088] [0xc821616058 0xc821616070 0xc821616088] [0xc821616068 0xc821616080] [0xa97590 0xa97590] 0xc82199a720}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-08T23:28:39Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-5l903", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-5l903/services/redis-master", "uid":"0cc0bc3d-bd9e-11e6-a90a-42010af00031", "resourceVersion":"9896"}, "spec":map[string]interface {}{"clusterIP":"10.99.240.100", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
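
The nodePort jsonpath failures share one proximate cause: `nodePort` is only serialized on Services of type NodePort or LoadBalancer, and every object dumped above is `"type":"ClusterIP"`, so the port map has no such key and the jsonpath engine bails out. A sketch reproducing the error with the same jsonpath package kubectl uses (`k8s.io/client-go/util/jsonpath`):

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/util/jsonpath"
)

func main() {
	// The Service from the dump, reduced to the fields that matter: a
	// ClusterIP service whose port entry carries no "nodePort" key.
	svc := map[string]interface{}{
		"spec": map[string]interface{}{
			"type": "ClusterIP",
			"ports": []interface{}{
				map[string]interface{}{"protocol": "TCP", "port": 6379, "targetPort": "redis-server"},
			},
		},
	}
	jp := jsonpath.New("nodePort")
	if err := jp.Parse("{.spec.ports[0].nodePort}"); err != nil {
		panic(err)
	}
	// Fails with `nodePort is not found`, the same error the test hit,
	// because the field only exists on NodePort/LoadBalancer services.
	if err := jp.Execute(os.Stdout, svc); err != nil {
		fmt.Println("error:", err)
	}
}
```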

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  8 19:22:22.636: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-8e8ba453-5w2f:
 container "runtime": expected RSS memory (MB) < 314572800; got 541962240
node gke-bootstrap-e2e-default-pool-8e8ba453-hqo0:
 container "runtime": expected RSS memory (MB) < 314572800; got 510496768
node gke-bootstrap-e2e-default-pool-8e8ba453-u7aw:
 container "runtime": expected RSS memory (MB) < 314572800; got 534151168

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
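
Note the units in this message: despite the "(MB)" label, both figures are bytes, so the limit is 300 MiB and the "runtime" container is sitting at roughly 500 MiB of RSS on every node. The arithmetic:

```go
package main

import "fmt"

func main() {
	const limit = 314572800 // bytes: 300 * 1024 * 1024, despite the "(MB)" label
	const got = 541962240   // first node in the run above
	fmt.Printf("limit %d MiB, got %d MiB (over by %d MiB)\n",
		limit>>20, got>>20, (got-limit)>>20) // limit 300 MiB, got 516 MiB (over by 216 MiB)
}
```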

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:420
Expected
    <string>: 
to equal
    <string>: 3307697297500895331

Issues about this test specifically: #26127 #28081
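
The empty string here is the tell: the test writes what looks like a random int64 token to a file on the PD, deletes and recreates the pod, and asserts the file reads back identically, so an empty read-back means the data written by the first pod never made it to (or back from) the disk. A local sketch of the write-then-verify pattern, with /tmp standing in for the PD mount (an assumption; the real test goes through the pod's volume):

```go
package main

import (
	"fmt"
	"math/rand"
	"os"
)

func main() {
	token := fmt.Sprintf("%d", rand.Int63()) // e.g. 3307697297500895331
	path := "/tmp/pd-test-file"              // stand-in for a file on the PD mount
	if err := os.WriteFile(path, []byte(token), 0644); err != nil {
		panic(err)
	}
	// ... pod deleted and recreated here in the real test ...
	readBack, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	if string(readBack) != token {
		fmt.Printf("Expected\n    <string>: %s\nto equal\n    <string>: %s\n", readBack, token)
	}
}
```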

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

@k8s-github-robot
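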

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/26/

Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc821c06c50>: {
        s: "service verification failed for: 10.99.247.243\nexpected [service1-13r38 service1-6dfv2 service1-sm0hz]\nreceived [service1-6dfv2 service1-sm0hz]",
    }
    service verification failed for: 10.99.247.243
    expected [service1-13r38 service1-6dfv2 service1-sm0hz]
    received [service1-6dfv2 service1-sm0hz]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc8219f5980>: {
        s: "failed to wait for pods responding: pod with UID 36207b9b-bde2-11e6-802a-42010af00016 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-13hq3/pods 21860} [{{ } {my-hostname-delete-node-922mj my-hostname-delete-node- e2e-tests-resize-nodes-13hq3 /api/v1/namespaces/e2e-tests-resize-nodes-13hq3/pods/my-hostname-delete-node-922mj 3623afbc-bde2-11e6-802a-42010af00016 21513 0 {2016-12-08 23:36:34 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-13hq3\",\"name\":\"my-hostname-delete-node\",\"uid\":\"361d78bb-bde2-11e6-802a-42010af00016\",\"apiVersion\":\"v1\",\"resourceVersion\":\"21498\"}}\n] [{v1 ReplicationController my-hostname-delete-node 361d78bb-bde2-11e6-802a-42010af00016 0xc8239a5597}] []} {[{default-token-61f1v {<nil> <nil> <nil> <nil> <nil> 0xc82185c210 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-61f1v true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8239a56a0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-0fb1531f-99g7 0xc8218d9100 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:36:35 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:36:35 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:36:35 -0800 PST}  }]   10.240.0.3 10.96.2.5 2016-12-08T23:36:35-08:00 [] [{my-hostname-delete-node {<nil> 0xc8237cda20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://73fae2dad458f229ff57e4b2538ca56d8f96184dbc7a22109774380304237a63}]}} {{ } {my-hostname-delete-node-jq0nq my-hostname-delete-node- e2e-tests-resize-nodes-13hq3 /api/v1/namespaces/e2e-tests-resize-nodes-13hq3/pods/my-hostname-delete-node-jq0nq 73726bef-bde2-11e6-802a-42010af00016 21702 0 {2016-12-08 23:38:17 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-13hq3\",\"name\":\"my-hostname-delete-node\",\"uid\":\"361d78bb-bde2-11e6-802a-42010af00016\",\"apiVersion\":\"v1\",\"resourceVersion\":\"21603\"}}\n] [{v1 ReplicationController my-hostname-delete-node 361d78bb-bde2-11e6-802a-42010af00016 0xc8239a5987}] []} {[{default-token-61f1v {<nil> <nil> <nil> <nil> <nil> 0xc82185c270 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-61f1v true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8239a5a80 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-0fb1531f-ypkk 0xc8218d91c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:38:17 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-08 23:38:18 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:38:17 -0800 PST}  }]   10.240.0.5 10.96.3.3 2016-12-08T23:38:17-08:00 [] [{my-hostname-delete-node {<nil> 0xc8237cda40 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://e9d56b736183dd6ebfe15c79fcf50734e5dfa3dfd44e5d1bccb8097b7525a771}]}} {{ } {my-hostname-delete-node-vqrpq my-hostname-delete-node- e2e-tests-resize-nodes-13hq3 /api/v1/namespaces/e2e-tests-resize-nodes-13hq3/pods/my-hostname-delete-node-vqrpq 362044f3-bde2-11e6-802a-42010af00016 21511 0 {2016-12-08 23:36:34 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-13hq3\",\"name\":\"my-hostname-delete-node\",\"uid\":\"361d78bb-bde2-11e6-802a-42010af00016\",\"apiVersion\":\"v1\",\"resourceVersion\":\"21498\"}}\n] [{v1 ReplicationController my-hostname-delete-node 361d78bb-bde2-11e6-802a-42010af00016 0xc8239a5d37}] []} {[{default-token-61f1v {<nil> <nil> <nil> <nil> <nil> 0xc82185c2d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-61f1v true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8239a5e50 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-0fb1531f-99g7 0xc8218d9300 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:36:35 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:36:35 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:36:35 -0800 PST}  }]   10.240.0.3 10.96.2.3 2016-12-08T23:36:35-08:00 [] [{my-hostname-delete-node {<nil> 0xc8237cda60 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://4f3e6adf0e65b365bd8800868e42476bfa6ef05b74360a1add99020872b3d020}]}}]}",
    }
    failed to wait for pods responding: pod with UID 36207b9b-bde2-11e6-802a-42010af00016 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-13hq3/pods 21860} [{{ } {my-hostname-delete-node-922mj my-hostname-delete-node- e2e-tests-resize-nodes-13hq3 /api/v1/namespaces/e2e-tests-resize-nodes-13hq3/pods/my-hostname-delete-node-922mj 3623afbc-bde2-11e6-802a-42010af00016 21513 0 {2016-12-08 23:36:34 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-13hq3","name":"my-hostname-delete-node","uid":"361d78bb-bde2-11e6-802a-42010af00016","apiVersion":"v1","resourceVersion":"21498"}}
    ] [{v1 ReplicationController my-hostname-delete-node 361d78bb-bde2-11e6-802a-42010af00016 0xc8239a5597}] []} {[{default-token-61f1v {<nil> <nil> <nil> <nil> <nil> 0xc82185c210 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-61f1v true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8239a56a0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-0fb1531f-99g7 0xc8218d9100 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:36:35 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:36:35 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:36:35 -0800 PST}  }]   10.240.0.3 10.96.2.5 2016-12-08T23:36:35-08:00 [] [{my-hostname-delete-node {<nil> 0xc8237cda20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://73fae2dad458f229ff57e4b2538ca56d8f96184dbc7a22109774380304237a63}]}} {{ } {my-hostname-delete-node-jq0nq my-hostname-delete-node- e2e-tests-resize-nodes-13hq3 /api/v1/namespaces/e2e-tests-resize-nodes-13hq3/pods/my-hostname-delete-node-jq0nq 73726bef-bde2-11e6-802a-42010af00016 21702 0 {2016-12-08 23:38:17 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-13hq3","name":"my-hostname-delete-node","uid":"361d78bb-bde2-11e6-802a-42010af00016","apiVersion":"v1","resourceVersion":"21603"}}
    ] [{v1 ReplicationController my-hostname-delete-node 361d78bb-bde2-11e6-802a-42010af00016 0xc8239a5987}] []} {[{default-token-61f1v {<nil> <nil> <nil> <nil> <nil> 0xc82185c270 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-61f1v true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8239a5a80 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-0fb1531f-ypkk 0xc8218d91c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:38:17 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:38:18 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:38:17 -0800 PST}  }]   10.240.0.5 10.96.3.3 2016-12-08T23:38:17-08:00 [] [{my-hostname-delete-node {<nil> 0xc8237cda40 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://e9d56b736183dd6ebfe15c79fcf50734e5dfa3dfd44e5d1bccb8097b7525a771}]}} {{ } {my-hostname-delete-node-vqrpq my-hostname-delete-node- e2e-tests-resize-nodes-13hq3 /api/v1/namespaces/e2e-tests-resize-nodes-13hq3/pods/my-hostname-delete-node-vqrpq 362044f3-bde2-11e6-802a-42010af00016 21511 0 {2016-12-08 23:36:34 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-13hq3","name":"my-hostname-delete-node","uid":"361d78bb-bde2-11e6-802a-42010af00016","apiVersion":"v1","resourceVersion":"21498"}}
    ] [{v1 ReplicationController my-hostname-delete-node 361d78bb-bde2-11e6-802a-42010af00016 0xc8239a5d37}] []} {[{default-token-61f1v {<nil> <nil> <nil> <nil> <nil> 0xc82185c2d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-61f1v true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8239a5e50 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-0fb1531f-99g7 0xc8218d9300 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:36:35 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:36:35 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-08 23:36:35 -0800 PST}  }]   10.240.0.3 10.96.2.3 2016-12-08T23:36:35-08:00 [] [{my-hostname-delete-node {<nil> 0xc8237cda60 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://4f3e6adf0e65b365bd8800868e42476bfa6ef05b74360a1add99020872b3d020}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
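
The resize test records each pod's UID before deleting a node and then requires those same UIDs to respond afterwards. A replacement pod keeps the my-hostname-delete-node- name prefix but gets a fresh UID, so the recorded UID 36207b9b-bde2-11e6-802a-42010af00016 is simply absent from the replica set dumped above, which is exactly what the error says. The check reduces to a set-membership test:

```go
package main

import "fmt"

func main() {
	// UID recorded before the node deletion (from the error above).
	before := []string{"36207b9b-bde2-11e6-802a-42010af00016"}
	// UIDs of the pods actually in the replica set afterwards (from the dump).
	after := map[string]bool{
		"3623afbc-bde2-11e6-802a-42010af00016": true, // my-hostname-delete-node-922mj
		"73726bef-bde2-11e6-802a-42010af00016": true, // my-hostname-delete-node-jq0nq
		"362044f3-bde2-11e6-802a-42010af00016": true, // my-hostname-delete-node-vqrpq
	}
	for _, uid := range before {
		if !after[uid] {
			fmt.Printf("pod with UID %s is no longer a member of the replica set\n", uid)
		}
	}
}
```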

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  8 22:22:01.761: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-0fb1531f-12lz:
 container "runtime": expected RSS memory (MB) < 314572800; got 532832256
node gke-bootstrap-e2e-default-pool-0fb1531f-3w5v:
 container "runtime": expected RSS memory (MB) < 314572800; got 523395072
node gke-bootstrap-e2e-default-pool-0fb1531f-99g7:
 container "runtime": expected RSS memory (MB) < 314572800; got 511787008

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821a49110>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.35.124 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-vh432 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-09T07:08:34Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-vh432\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-vh432/services/redis-master\", \"uid\":\"4c93e7c6-bdde-11e6-802a-42010af00016\", \"resourceVersion\":\"18172\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.252.134\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820df9f00 exit status 1 <nil> true [0xc820f587c0 0xc820f587d8 0xc820f587f0] [0xc820f587c0 0xc820f587d8 0xc820f587f0] [0xc820f587d0 0xc820f587e8] [0xa97590 0xa97590] 0xc821330b40}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-09T07:08:34Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-vh432\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-vh432/services/redis-master\", \"uid\":\"4c93e7c6-bdde-11e6-802a-42010af00016\", \"resourceVersion\":\"18172\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.252.134\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.35.124 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-vh432 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-09T07:08:34Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-vh432", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-vh432/services/redis-master", "uid":"4c93e7c6-bdde-11e6-802a-42010af00016", "resourceVersion":"18172"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.252.134"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820df9f00 exit status 1 <nil> true [0xc820f587c0 0xc820f587d8 0xc820f587f0] [0xc820f587c0 0xc820f587d8 0xc820f587f0] [0xc820f587d0 0xc820f587e8] [0xa97590 0xa97590] 0xc821330b40}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-09T07:08:34Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-vh432", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-vh432/services/redis-master", "uid":"4c93e7c6-bdde-11e6-802a-42010af00016", "resourceVersion":"18172"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.252.134"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/27/

Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc822652370>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.35.124 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-hqkks -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"db384d99-be28-11e6-a857-42010af0001e\", \"resourceVersion\":\"31342\", \"creationTimestamp\":\"2016-12-09T16:02:16Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-hqkks\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-hqkks/services/redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.99.249.104\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821e18720 exit status 1 <nil> true [0xc82093a000 0xc82093a028 0xc82093a048] [0xc82093a000 0xc82093a028 0xc82093a048] [0xc82093a018 0xc82093a040] [0xa97590 0xa97590] 0xc821c881e0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"db384d99-be28-11e6-a857-42010af0001e\", \"resourceVersion\":\"31342\", \"creationTimestamp\":\"2016-12-09T16:02:16Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-hqkks\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-hqkks/services/redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.99.249.104\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.35.124 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-hqkks -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"db384d99-be28-11e6-a857-42010af0001e", "resourceVersion":"31342", "creationTimestamp":"2016-12-09T16:02:16Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-hqkks", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-hqkks/services/redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.99.249.104", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821e18720 exit status 1 <nil> true [0xc82093a000 0xc82093a028 0xc82093a048] [0xc82093a000 0xc82093a028 0xc82093a048] [0xc82093a018 0xc82093a040] [0xa97590 0xa97590] 0xc821c881e0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"db384d99-be28-11e6-a857-42010af0001e", "resourceVersion":"31342", "creationTimestamp":"2016-12-09T16:02:16Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-hqkks", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-hqkks/services/redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.99.249.104", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc821e9e210>: {
        s: "service verification failed for: 10.99.244.249\nexpected [service1-21ddn service1-3shjm service1-kzmg4]\nreceived [service1-21ddn service1-kzmg4]",
    }
    service verification failed for: 10.99.244.249
    expected [service1-21ddn service1-3shjm service1-kzmg4]
    received [service1-21ddn service1-kzmg4]
not to have occurred

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  9 06:38:11.232: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-d554cc65-wcju:
 container "runtime": expected RSS memory (MB) < 314572800; got 530599936
node gke-bootstrap-e2e-default-pool-d554cc65-qpzi:
 container "runtime": expected RSS memory (MB) < 314572800; got 522235904
node gke-bootstrap-e2e-default-pool-d554cc65-rnoo:
 container "runtime": expected RSS memory (MB) < 314572800; got 517201920

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc820b5c4f0>: {
        s: "service verification failed for: 10.99.241.27\nexpected [service3-2x1wn service3-44b6s service3-n0rkp]\nreceived [service3-44b6s service3-n0rkp]",
    }
    service verification failed for: 10.99.241.27
    expected [service3-2x1wn service3-44b6s service3-n0rkp]
    received [service3-44b6s service3-n0rkp]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc821d30950>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-09 08:25:48 -0800 PST} FinishedAt:{Time:2016-12-09 08:25:58 -0800 PST} ContainerID:docker://15ec855a500f4b3bab57a03daf712102f1b26023b3e59f65eca31e58b4308e54}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-09 08:25:48 -0800 PST} FinishedAt:{Time:2016-12-09 08:25:58 -0800 PST} ContainerID:docker://15ec855a500f4b3bab57a03daf712102f1b26023b3e59f65eca31e58b4308e54}
not to have occurred

Issues about this test specifically: #30131 #31402
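
The granular-check failure is a cross-node probe: a wget pod scheduled on a different node fetches from the target pod's IP, and the roughly 10 s gap between StartedAt and FinishedAt plus ExitCode:1 says the request never completed. In Go the probe amounts to the following (the target IP:port is hypothetical; the flake does not include it):

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second} // matches the ~10 s Started/Finished gap above
	resp, err := client.Get("http://10.96.2.3:8080/") // hypothetical pod IP:port on the other node
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // surfaces as ExitCode:1 in the container status
	}
	resp.Body.Close()
	fmt.Println("reached pod on the other node")
}
```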

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/28/

Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc820dd59b0>: {
        s: "failed to wait for pods responding: pod with UID 55e042a4-be47-11e6-8b4d-42010af00023 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-sblfb/pods 12926} [{{ } {my-hostname-delete-node-6n76b my-hostname-delete-node- e2e-tests-resize-nodes-sblfb /api/v1/namespaces/e2e-tests-resize-nodes-sblfb/pods/my-hostname-delete-node-6n76b 55e18094-be47-11e6-8b4d-42010af00023 12565 0 {2016-12-09 11:40:27 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-sblfb\",\"name\":\"my-hostname-delete-node\",\"uid\":\"55d9de28-be47-11e6-8b4d-42010af00023\",\"apiVersion\":\"v1\",\"resourceVersion\":\"12546\"}}\n] [{v1 ReplicationController my-hostname-delete-node 55d9de28-be47-11e6-8b4d-42010af00023 0xc820b31397}] []} {[{default-token-w9dfg {<nil> <nil> <nil> <nil> <nil> 0xc8215bdb60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-w9dfg true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820b31490 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-5e7346d0-z7cu 0xc821db6c80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:40:27 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:40:29 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:40:27 -0800 PST}  }]   10.240.0.2 10.96.2.3 2016-12-09T11:40:27-08:00 [] [{my-hostname-delete-node {<nil> 0xc82132be40 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://d88f6b1dd0e7b56a08fcc921dc2d140ca849c0a54e697337fdae373929003e6e}]}} {{ } {my-hostname-delete-node-dx5zj my-hostname-delete-node- e2e-tests-resize-nodes-sblfb /api/v1/namespaces/e2e-tests-resize-nodes-sblfb/pods/my-hostname-delete-node-dx5zj 917ff1f2-be47-11e6-8b4d-42010af00023 12770 0 {2016-12-09 11:42:07 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-sblfb\",\"name\":\"my-hostname-delete-node\",\"uid\":\"55d9de28-be47-11e6-8b4d-42010af00023\",\"apiVersion\":\"v1\",\"resourceVersion\":\"12657\"}}\n] [{v1 ReplicationController my-hostname-delete-node 55d9de28-be47-11e6-8b4d-42010af00023 0xc820b318d7}] []} {[{default-token-w9dfg {<nil> <nil> <nil> <nil> <nil> 0xc8215bdbc0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-w9dfg true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820b31a00 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-5e7346d0-rzyk 0xc821db6d40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:42:07 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-09 11:42:08 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:42:07 -0800 PST}  }]   10.240.0.5 10.96.3.4 2016-12-09T11:42:07-08:00 [] [{my-hostname-delete-node {<nil> 0xc82132bf00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://745bdeef035fc65820563b94534be72c7a876204aace020395a4e63b15f8043a}]}} {{ } {my-hostname-delete-node-rcm4d my-hostname-delete-node- e2e-tests-resize-nodes-sblfb /api/v1/namespaces/e2e-tests-resize-nodes-sblfb/pods/my-hostname-delete-node-rcm4d 55de3aa5-be47-11e6-8b4d-42010af00023 12563 0 {2016-12-09 11:40:27 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-sblfb\",\"name\":\"my-hostname-delete-node\",\"uid\":\"55d9de28-be47-11e6-8b4d-42010af00023\",\"apiVersion\":\"v1\",\"resourceVersion\":\"12546\"}}\n] [{v1 ReplicationController my-hostname-delete-node 55d9de28-be47-11e6-8b4d-42010af00023 0xc820b31e07}] []} {[{default-token-w9dfg {<nil> <nil> <nil> <nil> <nil> 0xc8215bdc20 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-w9dfg true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820b31f00 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-5e7346d0-z7cu 0xc821db6e00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:40:27 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:40:29 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:40:27 -0800 PST}  }]   10.240.0.2 10.96.2.4 2016-12-09T11:40:27-08:00 [] [{my-hostname-delete-node {<nil> 0xc82132bf20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://beb0327ff82a4d43a50080a44e6b6765023fd0529c80594d5f4657ef70d9ec52}]}}]}",
    }
    failed to wait for pods responding: pod with UID 55e042a4-be47-11e6-8b4d-42010af00023 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-sblfb/pods 12926} [{{ } {my-hostname-delete-node-6n76b my-hostname-delete-node- e2e-tests-resize-nodes-sblfb /api/v1/namespaces/e2e-tests-resize-nodes-sblfb/pods/my-hostname-delete-node-6n76b 55e18094-be47-11e6-8b4d-42010af00023 12565 0 {2016-12-09 11:40:27 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-sblfb","name":"my-hostname-delete-node","uid":"55d9de28-be47-11e6-8b4d-42010af00023","apiVersion":"v1","resourceVersion":"12546"}}
    ] [{v1 ReplicationController my-hostname-delete-node 55d9de28-be47-11e6-8b4d-42010af00023 0xc820b31397}] []} {[{default-token-w9dfg {<nil> <nil> <nil> <nil> <nil> 0xc8215bdb60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-w9dfg true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820b31490 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-5e7346d0-z7cu 0xc821db6c80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:40:27 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:40:29 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:40:27 -0800 PST}  }]   10.240.0.2 10.96.2.3 2016-12-09T11:40:27-08:00 [] [{my-hostname-delete-node {<nil> 0xc82132be40 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://d88f6b1dd0e7b56a08fcc921dc2d140ca849c0a54e697337fdae373929003e6e}]}} {{ } {my-hostname-delete-node-dx5zj my-hostname-delete-node- e2e-tests-resize-nodes-sblfb /api/v1/namespaces/e2e-tests-resize-nodes-sblfb/pods/my-hostname-delete-node-dx5zj 917ff1f2-be47-11e6-8b4d-42010af00023 12770 0 {2016-12-09 11:42:07 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-sblfb","name":"my-hostname-delete-node","uid":"55d9de28-be47-11e6-8b4d-42010af00023","apiVersion":"v1","resourceVersion":"12657"}}
    ] [{v1 ReplicationController my-hostname-delete-node 55d9de28-be47-11e6-8b4d-42010af00023 0xc820b318d7}] []} {[{default-token-w9dfg {<nil> <nil> <nil> <nil> <nil> 0xc8215bdbc0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-w9dfg true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820b31a00 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-5e7346d0-rzyk 0xc821db6d40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:42:07 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:42:08 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:42:07 -0800 PST}  }]   10.240.0.5 10.96.3.4 2016-12-09T11:42:07-08:00 [] [{my-hostname-delete-node {<nil> 0xc82132bf00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://745bdeef035fc65820563b94534be72c7a876204aace020395a4e63b15f8043a}]}} {{ } {my-hostname-delete-node-rcm4d my-hostname-delete-node- e2e-tests-resize-nodes-sblfb /api/v1/namespaces/e2e-tests-resize-nodes-sblfb/pods/my-hostname-delete-node-rcm4d 55de3aa5-be47-11e6-8b4d-42010af00023 12563 0 {2016-12-09 11:40:27 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-sblfb","name":"my-hostname-delete-node","uid":"55d9de28-be47-11e6-8b4d-42010af00023","apiVersion":"v1","resourceVersion":"12546"}}
    ] [{v1 ReplicationController my-hostname-delete-node 55d9de28-be47-11e6-8b4d-42010af00023 0xc820b31e07}] []} {[{default-token-w9dfg {<nil> <nil> <nil> <nil> <nil> 0xc8215bdc20 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-w9dfg true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820b31f00 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-5e7346d0-z7cu 0xc821db6e00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:40:27 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:40:29 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 11:40:27 -0800 PST}  }]   10.240.0.2 10.96.2.4 2016-12-09T11:40:27-08:00 [] [{my-hostname-delete-node {<nil> 0xc82132bf20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://beb0327ff82a4d43a50080a44e6b6765023fd0529c80594d5f4657ef70d9ec52}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820950800>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.35.124 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-nvq6s -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.127\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-nvq6s\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-nvq6s/services/redis-master\", \"uid\":\"8631b501-be3e-11e6-8b4d-42010af00023\", \"resourceVersion\":\"3884\", \"creationTimestamp\":\"2016-12-09T18:37:23Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820825cc0 exit status 1 <nil> true [0xc8200c6928 0xc8200c6a80 0xc8200c6aa0] [0xc8200c6928 0xc8200c6a80 0xc8200c6aa0] [0xc8200c6a78 0xc8200c6a98] [0xa97590 0xa97590] 0xc820c30840}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.127\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-nvq6s\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-nvq6s/services/redis-master\", \"uid\":\"8631b501-be3e-11e6-8b4d-42010af00023\", \"resourceVersion\":\"3884\", \"creationTimestamp\":\"2016-12-09T18:37:23Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.35.124 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-nvq6s -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.127", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-nvq6s", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-nvq6s/services/redis-master", "uid":"8631b501-be3e-11e6-8b4d-42010af00023", "resourceVersion":"3884", "creationTimestamp":"2016-12-09T18:37:23Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820825cc0 exit status 1 <nil> true [0xc8200c6928 0xc8200c6a80 0xc8200c6aa0] [0xc8200c6928 0xc8200c6a80 0xc8200c6aa0] [0xc8200c6a78 0xc8200c6a98] [0xa97590 0xa97590] 0xc820c30840}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.127", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-nvq6s", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-nvq6s/services/redis-master", "uid":"8631b501-be3e-11e6-8b4d-42010af00023", "resourceVersion":"3884", "creationTimestamp":"2016-12-09T18:37:23Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
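
All of the `nodePort is not found` failures in this thread have the same shape: the re-applied `redis-master` Service comes back as `"type":"ClusterIP"`, and a ClusterIP service never has a `nodePort` allocated, so the jsonpath `{.spec.ports[0].nodePort}` has no key to resolve. A minimal, self-contained Go sketch of that situation (the structs below are a hypothetical cut-down of the v1 Service JSON quoted above, not client-go types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// servicePort models only the fields the jsonpath query touches.
type servicePort struct {
	Protocol string `json:"protocol"`
	Port     int    `json:"port"`
	NodePort int    `json:"nodePort,omitempty"` // allocated only for NodePort/LoadBalancer services
}

type serviceSpec struct {
	Type  string        `json:"type"`
	Ports []servicePort `json:"ports"`
}

func main() {
	// Spec as it appears in the failure above: type ClusterIP, no nodePort key.
	raw := `{"type":"ClusterIP","ports":[{"protocol":"TCP","port":6379,"targetPort":"redis-server"}]}`
	var spec serviceSpec
	if err := json.Unmarshal([]byte(raw), &spec); err != nil {
		panic(err)
	}
	// jsonpath {.spec.ports[0].nodePort} fails on exactly this object:
	// the key is absent, so the engine reports "nodePort is not found".
	fmt.Printf("type=%s port=%d nodePort=%d (0 means unset)\n",
		spec.Type, spec.Ports[0].Port, spec.Ports[0].NodePort)
}
```

So jsonpath is not flaking: the Service the test reads back is plain ClusterIP, and the field genuinely is not there.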

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc821123fe0>: {
        s: "service verification failed for: 10.99.243.18\nexpected [service2-5rp8j service2-9cnml service2-ms9v1]\nreceived [service2-5rp8j service2-9cnml]",
    }
    service verification failed for: 10.99.243.18
    expected [service2-5rp8j service2-9cnml service2-ms9v1]
    received [service2-5rp8j service2-9cnml]
not to have occurred

Issues about this test specifically: #29514 #38288
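
A `service verification failed` error means the service VIP answered, but repeated requests never saw one of the expected endpoint pods (here `service2-ms9v1`). A hedged sketch of the underlying set comparison; the function name is invented for illustration, the real check lives in the e2e service test utilities:

```go
package main

import (
	"fmt"
	"strings"
)

// verifyEndpoints mirrors the "expected [...] received [...]" comparison:
// every expected pod name must appear among the hostnames returned by
// repeatedly hitting the service VIP.
func verifyEndpoints(vip string, expected, received []string) error {
	seen := make(map[string]bool, len(received))
	for _, r := range received {
		seen[r] = true
	}
	var missing []string
	for _, e := range expected {
		if !seen[e] {
			missing = append(missing, e)
		}
	}
	if len(missing) > 0 {
		return fmt.Errorf("service verification failed for: %s\nmissing %s",
			vip, strings.Join(missing, " "))
	}
	return nil
}

func main() {
	expected := []string{"service2-5rp8j", "service2-9cnml", "service2-ms9v1"}
	received := []string{"service2-5rp8j", "service2-9cnml"} // ms9v1 never answered
	fmt.Println(verifyEndpoints("10.99.243.18", expected, received))
}
```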

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070
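
`timed out waiting for the condition` is the generic error the e2e wait helpers return when a polled condition (here, the Job acquiring a failure condition) never becomes true before the deadline. A stripped-down sketch of that polling pattern, using only the standard library rather than the framework's wait package:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errTimeout reuses the wording seen throughout the failures above.
var errTimeout = errors.New("timed out waiting for the condition")

// pollUntil runs cond every interval until it reports done or the
// timeout elapses, the same shape as the framework's wait loop.
func pollUntil(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		done, err := cond()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(interval)
	}
	return errTimeout
}

func main() {
	// Toy condition that never fires, reproducing the error string:
	err := pollUntil(10*time.Millisecond, 50*time.Millisecond, func() (bool, error) {
		return false, nil // e.g. "job has a Failed condition" is never observed
	})
	fmt.Println(err)
}
```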

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true
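
The bare `Expected <bool>: false / to be true` failures come from asserting on the pod's `Initialized` condition after the init containers should have completed. A minimal sketch of that status check (the condition type below is a hypothetical stand-in for `v1.PodCondition`, not the real API struct):

```go
package main

import "fmt"

// podCondition is a cut-down stand-in for the pod status conditions
// printed in the dumps elsewhere in this thread ({Initialized True ...}).
type podCondition struct {
	Type   string
	Status string
}

// isInitialized reports whether the Initialized condition is True,
// i.e. every init container ran to completion.
func isInitialized(conds []podCondition) bool {
	for _, c := range conds {
		if c.Type == "Initialized" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	conds := []podCondition{{Type: "Initialized", Status: "False"}}
	fmt.Println(isInitialized(conds)) // false, hence "Expected false to be true"
}
```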

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc8207aca30>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-09 15:09:13 -0800 PST} FinishedAt:{Time:2016-12-09 15:09:23 -0800 PST} ContainerID:docker://7f049362a985183c23cc7f141b52095e9a4f40437f3deb1a06116e269fb6651f}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-09 15:09:13 -0800 PST} FinishedAt:{Time:2016-12-09 15:09:23 -0800 PST} ContainerID:docker://7f049362a985183c23cc7f141b52095e9a4f40437f3deb1a06116e269fb6651f}
not to have occurred

Issues about this test specifically: #30131 #31402
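
Here the helper pod `different-node-wget` exits 1: its `wget` against a pod IP on another node never connected, so cross-node pod networking was broken at that moment rather than the test logic. A rough Go equivalent of that probe (the target address is a placeholder; the real test shells out to wget inside a helper container):

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// Placeholder for the peer pod's IP:port on the other node.
	target := "http://10.96.2.4:8080/"

	// Short timeout, like the wget invocation in the test.
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(target)
	if err != nil {
		fmt.Fprintln(os.Stderr, "probe failed:", err)
		os.Exit(1) // mirrors the ExitCode:1 in the failure above
	}
	defer resp.Body.Close()
	fmt.Println("pod-to-pod connectivity OK:", resp.Status)
}
```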

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  9 13:39:03.168: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-5e7346d0-rzyk:
 container "runtime": expected RSS memory (MB) < 314572800; got 542572544
node gke-bootstrap-e2e-default-pool-5e7346d0-xw61:
 container "runtime": expected RSS memory (MB) < 314572800; got 523104256
node gke-bootstrap-e2e-default-pool-5e7346d0-z7cu:
 container "runtime": expected RSS memory (MB) < 314572800; got 533041152

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
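
Despite the `(MB)` label, both numbers in these messages are raw bytes: the limit 314572800 is exactly 300 MiB, and the `runtime` (docker) container's observed RSS of roughly 510–540 MB is what trips it on every node. A small sketch spelling the units out (the helper is hypothetical, not the framework's code):

```go
package main

import "fmt"

const miB = 1 << 20 // 1048576 bytes

// rssWithinLimit compares raw byte counts; the e2e message prints these
// same byte values under a misleading "(MB)" label.
func rssWithinLimit(rssBytes, limitBytes uint64) bool {
	return rssBytes < limitBytes
}

func main() {
	limit := uint64(300 * miB) // 314572800, the number in the log
	got := uint64(542572544)   // the "runtime" container RSS on node ...-rzyk
	fmt.Printf("limit=%d MiB, got=%d MiB, within=%v\n",
		limit/miB, got/miB, rssWithinLimit(got, limit))
}
```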

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/29/

Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc82247aac0>: {
        s: "service verification failed for: 10.99.241.69\nexpected [service1-03v8x service1-cdd83 service1-n6q7q]\nreceived [service1-03v8x service1-cdd83]",
    }
    service verification failed for: 10.99.241.69
    expected [service1-03v8x service1-cdd83 service1-n6q7q]
    received [service1-03v8x service1-cdd83]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  9 20:08:28.097: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-03ffa51c-gpiu:
 container "runtime": expected RSS memory (MB) < 314572800; got 529297408
node gke-bootstrap-e2e-default-pool-03ffa51c-dih2:
 container "runtime": expected RSS memory (MB) < 314572800; got 528080896
node gke-bootstrap-e2e-default-pool-03ffa51c-gltu:
 container "runtime": expected RSS memory (MB) < 314572800; got 512270336

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
    <*errors.errorString | 0xc821ccd5d0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc82098ae20>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-09 17:14:29 -0800 PST} FinishedAt:{Time:2016-12-09 17:14:39 -0800 PST} ContainerID:docker://f6f67955748fa22f9fa032ec47a81224b1fd04996510c66fd18cbcc33ed4cb0b}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-09 17:14:29 -0800 PST} FinishedAt:{Time:2016-12-09 17:14:39 -0800 PST} ContainerID:docker://f6f67955748fa22f9fa032ec47a81224b1fd04996510c66fd18cbcc33ed4cb0b}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc8214234d0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://107.178.214.88 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-n6ccm -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"uid\":\"1098d2c9-bea2-11e6-8528-42010af00018\", \"resourceVersion\":\"41638\", \"creationTimestamp\":\"2016-12-10T06:29:55Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-n6ccm\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-n6ccm/services/redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.99.244.4\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc822efe3a0 exit status 1 <nil> true [0xc8212fa028 0xc8212fa070 0xc8212fa2d8] [0xc8212fa028 0xc8212fa070 0xc8212fa2d8] [0xc8212fa060 0xc8212fa2d0] [0xa97590 0xa97590] 0xc821db1980}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"uid\":\"1098d2c9-bea2-11e6-8528-42010af00018\", \"resourceVersion\":\"41638\", \"creationTimestamp\":\"2016-12-10T06:29:55Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-n6ccm\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-n6ccm/services/redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.99.244.4\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://107.178.214.88 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-n6ccm -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"uid":"1098d2c9-bea2-11e6-8528-42010af00018", "resourceVersion":"41638", "creationTimestamp":"2016-12-10T06:29:55Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-n6ccm", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-n6ccm/services/redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.99.244.4", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc822efe3a0 exit status 1 <nil> true [0xc8212fa028 0xc8212fa070 0xc8212fa2d8] [0xc8212fa028 0xc8212fa070 0xc8212fa2d8] [0xc8212fa060 0xc8212fa2d0] [0xa97590 0xa97590] 0xc821db1980}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"uid":"1098d2c9-bea2-11e6-8528-42010af00018", "resourceVersion":"41638", "creationTimestamp":"2016-12-10T06:29:55Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-n6ccm", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-n6ccm/services/redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.99.244.4", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/30/

Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d80b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d80b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc820943a50>: {
        s: "service verification failed for: 10.99.251.177\nexpected [service1-1z1tz service1-sltvd service1-sz52t]\nreceived [service1-sltvd service1-sz52t]",
    }
    service verification failed for: 10.99.251.177
    expected [service1-1z1tz service1-sltvd service1-sz52t]
    received [service1-sltvd service1-sz52t]
not to have occurred

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821b2c450>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.54.75 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-g11p7 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-g11p7/services/redis-master\", \"uid\":\"f740e7d2-bedc-11e6-a7b9-42010af00032\", \"resourceVersion\":\"43754\", \"creationTimestamp\":\"2016-12-10T13:31:33Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-g11p7\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.242.16\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82286f7a0 exit status 1 <nil> true [0xc8219fc0a0 0xc8219fc0b8 0xc8219fc0d0] [0xc8219fc0a0 0xc8219fc0b8 0xc8219fc0d0] [0xc8219fc0b0 0xc8219fc0c8] [0xa97590 0xa97590] 0xc822768a20}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-g11p7/services/redis-master\", \"uid\":\"f740e7d2-bedc-11e6-a7b9-42010af00032\", \"resourceVersion\":\"43754\", \"creationTimestamp\":\"2016-12-10T13:31:33Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-g11p7\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.242.16\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.54.75 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-g11p7 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-g11p7/services/redis-master", "uid":"f740e7d2-bedc-11e6-a7b9-42010af00032", "resourceVersion":"43754", "creationTimestamp":"2016-12-10T13:31:33Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-g11p7"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.242.16", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82286f7a0 exit status 1 <nil> true [0xc8219fc0a0 0xc8219fc0b8 0xc8219fc0d0] [0xc8219fc0a0 0xc8219fc0b8 0xc8219fc0d0] [0xc8219fc0b0 0xc8219fc0c8] [0xa97590 0xa97590] 0xc822768a20}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-g11p7/services/redis-master", "uid":"f740e7d2-bedc-11e6-a7b9-42010af00032", "resourceVersion":"43754", "creationTimestamp":"2016-12-10T13:31:33Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-g11p7"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.242.16", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 10 02:47:05.574: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-b1f22772-y8ha:
 container "runtime": expected RSS memory (MB) < 314572800; got 536145920
node gke-bootstrap-e2e-default-pool-b1f22772-k9wh:
 container "runtime": expected RSS memory (MB) < 314572800; got 523137024
node gke-bootstrap-e2e-default-pool-b1f22772-tobe:
 container "runtime": expected RSS memory (MB) < 314572800; got 517951488

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
    <*errors.errorString | 0xc8224e85d0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc820afdba0>: {
        s: "service verification failed for: 10.99.250.41\nexpected [service1-hlj8x service1-jsw7z service1-kd4gz]\nreceived [service1-hlj8x service1-kd4gz]",
    }
    service verification failed for: 10.99.250.41
    expected [service1-hlj8x service1-jsw7z service1-kd4gz]
    received [service1-hlj8x service1-kd4gz]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/31/

Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc8214f69e0>: {
        s: "service verification failed for: 10.99.252.0\nexpected [service1-7jh51 service1-ftj3q service1-gmkw3]\nreceived [service1-7jh51 service1-gmkw3]",
    }
    service verification failed for: 10.99.252.0
    expected [service1-7jh51 service1-ftj3q service1-gmkw3]
    received [service1-7jh51 service1-gmkw3]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820a00020>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.49.216 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-f1hd1 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-10T17:40:53Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-f1hd1\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-f1hd1/services/redis-master\", \"uid\":\"cc5439c4-beff-11e6-8457-42010af00020\", \"resourceVersion\":\"22657\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.248.55\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82078e180 exit status 1 <nil> true [0xc8214b8450 0xc8214b8468 0xc8214b84f0] [0xc8214b8450 0xc8214b8468 0xc8214b84f0] [0xc8214b8460 0xc8214b84e8] [0xa97590 0xa97590] 0xc820e20600}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-10T17:40:53Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-f1hd1\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-f1hd1/services/redis-master\", \"uid\":\"cc5439c4-beff-11e6-8457-42010af00020\", \"resourceVersion\":\"22657\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.248.55\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.49.216 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-f1hd1 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-10T17:40:53Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-f1hd1", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-f1hd1/services/redis-master", "uid":"cc5439c4-beff-11e6-8457-42010af00020", "resourceVersion":"22657"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.248.55", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82078e180 exit status 1 <nil> true [0xc8214b8450 0xc8214b8468 0xc8214b84f0] [0xc8214b8450 0xc8214b8468 0xc8214b84f0] [0xc8214b8460 0xc8214b84e8] [0xa97590 0xa97590] 0xc820e20600}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-10T17:40:53Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-f1hd1", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-f1hd1/services/redis-master", "uid":"cc5439c4-beff-11e6-8457-42010af00020", "resourceVersion":"22657"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.248.55", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc820d93450>: {
        s: "failed to wait for pods responding: pod with UID a0cbfa1a-bf06-11e6-8457-42010af00020 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-vlhw1/pods 30101} [{{ } {my-hostname-delete-node-mxk27 my-hostname-delete-node- e2e-tests-resize-nodes-vlhw1 /api/v1/namespaces/e2e-tests-resize-nodes-vlhw1/pods/my-hostname-delete-node-mxk27 a0cc1a27-bf06-11e6-8457-42010af00020 29691 0 {2016-12-10 10:29:46 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-vlhw1\",\"name\":\"my-hostname-delete-node\",\"uid\":\"a0ca18e5-bf06-11e6-8457-42010af00020\",\"apiVersion\":\"v1\",\"resourceVersion\":\"29675\"}}\n] [{v1 ReplicationController my-hostname-delete-node a0ca18e5-bf06-11e6-8457-42010af00020 0xc822209897}] []} {[{default-token-4tgzv {<nil> <nil> <nil> <nil> <nil> 0xc8228a1bf0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-4tgzv true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822209990 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-b1f2264a-wzcv 0xc822448f80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:29:47 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:29:48 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:29:46 -0800 PST}  }]   10.240.0.4 10.96.0.3 2016-12-10T10:29:47-08:00 [] [{my-hostname-delete-node {<nil> 0xc821ca6500 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://5382df73c3172efd7b99f4468a896b434f6c4d4bf1d817492001cb526c2e39ad}]}} {{ } {my-hostname-delete-node-ngk73 my-hostname-delete-node- e2e-tests-resize-nodes-vlhw1 /api/v1/namespaces/e2e-tests-resize-nodes-vlhw1/pods/my-hostname-delete-node-ngk73 fa51cdde-bf06-11e6-8457-42010af00020 29942 0 {2016-12-10 10:32:17 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-vlhw1\",\"name\":\"my-hostname-delete-node\",\"uid\":\"a0ca18e5-bf06-11e6-8457-42010af00020\",\"apiVersion\":\"v1\",\"resourceVersion\":\"29847\"}}\n] [{v1 ReplicationController my-hostname-delete-node a0ca18e5-bf06-11e6-8457-42010af00020 0xc822209c27}] []} {[{default-token-4tgzv {<nil> <nil> <nil> <nil> <nil> 0xc8228a1c50 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-4tgzv true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822209d20 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-b1f2264a-q9mw 0xc822449040 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:32:17 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-10 10:32:18 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:32:17 -0800 PST}  }]   10.240.0.2 10.96.1.3 2016-12-10T10:32:17-08:00 [] [{my-hostname-delete-node {<nil> 0xc821ca6520 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://7c2b6ab4bd938862bed695c94ba5f65baabb227702cb3661abdaca99e2c0270f}]}} {{ } {my-hostname-delete-node-s9200 my-hostname-delete-node- e2e-tests-resize-nodes-vlhw1 /api/v1/namespaces/e2e-tests-resize-nodes-vlhw1/pods/my-hostname-delete-node-s9200 a0cc3a2a-bf06-11e6-8457-42010af00020 29693 0 {2016-12-10 10:29:46 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-vlhw1\",\"name\":\"my-hostname-delete-node\",\"uid\":\"a0ca18e5-bf06-11e6-8457-42010af00020\",\"apiVersion\":\"v1\",\"resourceVersion\":\"29675\"}}\n] [{v1 ReplicationController my-hostname-delete-node a0ca18e5-bf06-11e6-8457-42010af00020 0xc822209fb7}] []} {[{default-token-4tgzv {<nil> <nil> <nil> <nil> <nil> 0xc8228a1cb0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-4tgzv true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d92180 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-b1f2264a-wzcv 0xc822449100 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:29:47 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:29:48 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:29:47 -0800 PST}  }]   10.240.0.4 10.96.0.4 2016-12-10T10:29:47-08:00 [] [{my-hostname-delete-node {<nil> 0xc821ca6540 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://954237dab4187505efcd4f6bb138f248c07bb186f88e9f0bc2091e8b77745a19}]}}]}",
    }
    failed to wait for pods responding: pod with UID a0cbfa1a-bf06-11e6-8457-42010af00020 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-vlhw1/pods 30101} [{{ } {my-hostname-delete-node-mxk27 my-hostname-delete-node- e2e-tests-resize-nodes-vlhw1 /api/v1/namespaces/e2e-tests-resize-nodes-vlhw1/pods/my-hostname-delete-node-mxk27 a0cc1a27-bf06-11e6-8457-42010af00020 29691 0 {2016-12-10 10:29:46 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-vlhw1","name":"my-hostname-delete-node","uid":"a0ca18e5-bf06-11e6-8457-42010af00020","apiVersion":"v1","resourceVersion":"29675"}}
    ] [{v1 ReplicationController my-hostname-delete-node a0ca18e5-bf06-11e6-8457-42010af00020 0xc822209897}] []} {[{default-token-4tgzv {<nil> <nil> <nil> <nil> <nil> 0xc8228a1bf0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-4tgzv true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822209990 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-b1f2264a-wzcv 0xc822448f80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:29:47 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:29:48 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:29:46 -0800 PST}  }]   10.240.0.4 10.96.0.3 2016-12-10T10:29:47-08:00 [] [{my-hostname-delete-node {<nil> 0xc821ca6500 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://5382df73c3172efd7b99f4468a896b434f6c4d4bf1d817492001cb526c2e39ad}]}} {{ } {my-hostname-delete-node-ngk73 my-hostname-delete-node- e2e-tests-resize-nodes-vlhw1 /api/v1/namespaces/e2e-tests-resize-nodes-vlhw1/pods/my-hostname-delete-node-ngk73 fa51cdde-bf06-11e6-8457-42010af00020 29942 0 {2016-12-10 10:32:17 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-vlhw1","name":"my-hostname-delete-node","uid":"a0ca18e5-bf06-11e6-8457-42010af00020","apiVersion":"v1","resourceVersion":"29847"}}
    ] [{v1 ReplicationController my-hostname-delete-node a0ca18e5-bf06-11e6-8457-42010af00020 0xc822209c27}] []} {[{default-token-4tgzv {<nil> <nil> <nil> <nil> <nil> 0xc8228a1c50 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-4tgzv true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822209d20 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-b1f2264a-q9mw 0xc822449040 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:32:17 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:32:18 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:32:17 -0800 PST}  }]   10.240.0.2 10.96.1.3 2016-12-10T10:32:17-08:00 [] [{my-hostname-delete-node {<nil> 0xc821ca6520 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://7c2b6ab4bd938862bed695c94ba5f65baabb227702cb3661abdaca99e2c0270f}]}} {{ } {my-hostname-delete-node-s9200 my-hostname-delete-node- e2e-tests-resize-nodes-vlhw1 /api/v1/namespaces/e2e-tests-resize-nodes-vlhw1/pods/my-hostname-delete-node-s9200 a0cc3a2a-bf06-11e6-8457-42010af00020 29693 0 {2016-12-10 10:29:46 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-vlhw1","name":"my-hostname-delete-node","uid":"a0ca18e5-bf06-11e6-8457-42010af00020","apiVersion":"v1","resourceVersion":"29675"}}
    ] [{v1 ReplicationController my-hostname-delete-node a0ca18e5-bf06-11e6-8457-42010af00020 0xc822209fb7}] []} {[{default-token-4tgzv {<nil> <nil> <nil> <nil> <nil> 0xc8228a1cb0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-4tgzv true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d92180 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-b1f2264a-wzcv 0xc822449100 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:29:47 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:29:48 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 10:29:47 -0800 PST}  }]   10.240.0.4 10.96.0.4 2016-12-10T10:29:47-08:00 [] [{my-hostname-delete-node {<nil> 0xc821ca6540 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://954237dab4187505efcd4f6bb138f248c07bb186f88e9f0bc2091e8b77745a19}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
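
"pod with UID ... is no longer a member of the replica set" means the wait loop re-listed the controller's pods after the resize and the UID it had recorded was gone: the pod was recreated, so it carries a new UID even though a same-named sibling is Running. A toy version of that membership check (types and helper invented for illustration; UIDs taken from the dump above):

```go
package main

import "fmt"

type pod struct {
	Name string
	UID  string
}

// stillMember reports whether a UID captured before the disruption is
// still present in the freshly listed replica set pods.
func stillMember(uid string, current []pod) bool {
	for _, p := range current {
		if p.UID == uid {
			return true
		}
	}
	return false
}

func main() {
	before := "a0cbfa1a-bf06-11e6-8457-42010af00020" // UID recorded before the node was deleted
	current := []pod{
		{"my-hostname-delete-node-mxk27", "a0cc1a27-bf06-11e6-8457-42010af00020"},
		{"my-hostname-delete-node-ngk73", "fa51cdde-bf06-11e6-8457-42010af00020"},
	}
	if !stillMember(before, current) {
		fmt.Println("pod is no longer a member of the replica set; must have been restarted")
	}
}
```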

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 10 12:28:22.706: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-b1f2264a-fke9:
 container "runtime": expected RSS memory (MB) < 314572800; got 510578688
node gke-bootstrap-e2e-default-pool-b1f2264a-q9mw:
 container "runtime": expected RSS memory (MB) < 314572800; got 535379968
node gke-bootstrap-e2e-default-pool-b1f2264a-wzcv:
 container "runtime": expected RSS memory (MB) < 314572800; got 540868608

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/32/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 10 17:07:34.847: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-12bb8a91-5gns:
 container "runtime": expected RSS memory (MB) < 314572800; got 514785280
node gke-bootstrap-e2e-default-pool-12bb8a91-5s5r:
 container "runtime": expected RSS memory (MB) < 314572800; got 522010624
node gke-bootstrap-e2e-default-pool-12bb8a91-y7zv:
 container "runtime": expected RSS memory (MB) < 314572800; got 530599936

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc820a2dcc0>: {
        s: "failed to wait for pods responding: pod with UID 491fc143-bf29-11e6-be61-42010af00036 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-xgffc/pods 14261} [{{ } {my-hostname-delete-node-51lwl my-hostname-delete-node- e2e-tests-resize-nodes-xgffc /api/v1/namespaces/e2e-tests-resize-nodes-xgffc/pods/my-hostname-delete-node-51lwl 491ef799-bf29-11e6-be61-42010af00036 13925 0 {2016-12-10 14:37:52 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-xgffc\",\"name\":\"my-hostname-delete-node\",\"uid\":\"491c384c-bf29-11e6-be61-42010af00036\",\"apiVersion\":\"v1\",\"resourceVersion\":\"13908\"}}\n] [{v1 ReplicationController my-hostname-delete-node 491c384c-bf29-11e6-be61-42010af00036 0xc82199b607}] []} {[{default-token-dlglb {<nil> <nil> <nil> <nil> <nil> 0xc82052cd20 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-dlglb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82199b800 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-12bb8a91-y7zv 0xc821e37a00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:37:52 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:37:53 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:37:52 -0800 PST}  }]   10.240.0.4 10.96.2.3 2016-12-10T14:37:52-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c49b00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://fca91bfce4cb0f456ee95ca1333be374f5878b3ad87ca25beb5e62ded0510a29}]}} {{ } {my-hostname-delete-node-z13vs my-hostname-delete-node- e2e-tests-resize-nodes-xgffc /api/v1/namespaces/e2e-tests-resize-nodes-xgffc/pods/my-hostname-delete-node-z13vs 491f699b-bf29-11e6-be61-42010af00036 13921 0 {2016-12-10 14:37:52 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-xgffc\",\"name\":\"my-hostname-delete-node\",\"uid\":\"491c384c-bf29-11e6-be61-42010af00036\",\"apiVersion\":\"v1\",\"resourceVersion\":\"13908\"}}\n] [{v1 ReplicationController my-hostname-delete-node 491c384c-bf29-11e6-be61-42010af00036 0xc82199bbd7}] []} {[{default-token-dlglb {<nil> <nil> <nil> <nil> <nil> 0xc82052cd80 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-dlglb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82199bd20 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-12bb8a91-5s5r 0xc821e37ac0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:37:52 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-10 14:37:53 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:37:52 -0800 PST}  }]   10.240.0.2 10.96.1.3 2016-12-10T14:37:52-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c49b20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://c2c7493ef9ece16c2c1bb039f3969002157d76f36994e2471572eb074737bd2d}]}} {{ } {my-hostname-delete-node-zggsc my-hostname-delete-node- e2e-tests-resize-nodes-xgffc /api/v1/namespaces/e2e-tests-resize-nodes-xgffc/pods/my-hostname-delete-node-zggsc 8788accc-bf29-11e6-be61-42010af00036 14104 0 {2016-12-10 14:39:36 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-xgffc\",\"name\":\"my-hostname-delete-node\",\"uid\":\"491c384c-bf29-11e6-be61-42010af00036\",\"apiVersion\":\"v1\",\"resourceVersion\":\"14019\"}}\n] [{v1 ReplicationController my-hostname-delete-node 491c384c-bf29-11e6-be61-42010af00036 0xc82199bfb7}] []} {[{default-token-dlglb {<nil> <nil> <nil> <nil> <nil> 0xc82052cde0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-dlglb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821a700b0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-12bb8a91-5s5r 0xc821e37b80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:39:37 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:39:38 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:39:37 -0800 PST}  }]   10.240.0.2 10.96.1.4 2016-12-10T14:39:37-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c49b80 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://8283c90195f3b060ecb21b24b945a50ead16d0df1bdd4461f66e2b75f6895909}]}}]}",
    }
    failed to wait for pods responding: pod with UID 491fc143-bf29-11e6-be61-42010af00036 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-xgffc/pods 14261} [{{ } {my-hostname-delete-node-51lwl my-hostname-delete-node- e2e-tests-resize-nodes-xgffc /api/v1/namespaces/e2e-tests-resize-nodes-xgffc/pods/my-hostname-delete-node-51lwl 491ef799-bf29-11e6-be61-42010af00036 13925 0 {2016-12-10 14:37:52 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-xgffc","name":"my-hostname-delete-node","uid":"491c384c-bf29-11e6-be61-42010af00036","apiVersion":"v1","resourceVersion":"13908"}}
    ] [{v1 ReplicationController my-hostname-delete-node 491c384c-bf29-11e6-be61-42010af00036 0xc82199b607}] []} {[{default-token-dlglb {<nil> <nil> <nil> <nil> <nil> 0xc82052cd20 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-dlglb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82199b800 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-12bb8a91-y7zv 0xc821e37a00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:37:52 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:37:53 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:37:52 -0800 PST}  }]   10.240.0.4 10.96.2.3 2016-12-10T14:37:52-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c49b00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://fca91bfce4cb0f456ee95ca1333be374f5878b3ad87ca25beb5e62ded0510a29}]}} {{ } {my-hostname-delete-node-z13vs my-hostname-delete-node- e2e-tests-resize-nodes-xgffc /api/v1/namespaces/e2e-tests-resize-nodes-xgffc/pods/my-hostname-delete-node-z13vs 491f699b-bf29-11e6-be61-42010af00036 13921 0 {2016-12-10 14:37:52 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-xgffc","name":"my-hostname-delete-node","uid":"491c384c-bf29-11e6-be61-42010af00036","apiVersion":"v1","resourceVersion":"13908"}}
    ] [{v1 ReplicationController my-hostname-delete-node 491c384c-bf29-11e6-be61-42010af00036 0xc82199bbd7}] []} {[{default-token-dlglb {<nil> <nil> <nil> <nil> <nil> 0xc82052cd80 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-dlglb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82199bd20 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-12bb8a91-5s5r 0xc821e37ac0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:37:52 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:37:53 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:37:52 -0800 PST}  }]   10.240.0.2 10.96.1.3 2016-12-10T14:37:52-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c49b20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://c2c7493ef9ece16c2c1bb039f3969002157d76f36994e2471572eb074737bd2d}]}} {{ } {my-hostname-delete-node-zggsc my-hostname-delete-node- e2e-tests-resize-nodes-xgffc /api/v1/namespaces/e2e-tests-resize-nodes-xgffc/pods/my-hostname-delete-node-zggsc 8788accc-bf29-11e6-be61-42010af00036 14104 0 {2016-12-10 14:39:36 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-xgffc","name":"my-hostname-delete-node","uid":"491c384c-bf29-11e6-be61-42010af00036","apiVersion":"v1","resourceVersion":"14019"}}
    ] [{v1 ReplicationController my-hostname-delete-node 491c384c-bf29-11e6-be61-42010af00036 0xc82199bfb7}] []} {[{default-token-dlglb {<nil> <nil> <nil> <nil> <nil> 0xc82052cde0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-dlglb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821a700b0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-12bb8a91-5s5r 0xc821e37b80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:39:37 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:39:38 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-10 14:39:37 -0800 PST}  }]   10.240.0.2 10.96.1.4 2016-12-10T14:39:37-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c49b80 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://8283c90195f3b060ecb21b24b945a50ead16d0df1bdd4461f66e2b75f6895909}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
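
For what the membership check above actually does: the test records the RC's pod UIDs before the resize, then polls until each recorded pod responds; if a recorded UID disappears from the current pod list, the wait aborts with the error shown. A minimal, self-contained sketch of that comparison (illustrative only, not the e2e framework code), using the UIDs from this dump:

```go
package main

import "fmt"

// missingFromReplicaSet returns recorded pod UIDs that are absent from the
// current pod list; a non-empty result means a pod was replaced (e.g. its
// node was deleted), which is what aborts the "pods responding" wait.
func missingFromReplicaSet(recorded []string, current map[string]bool) []string {
	var gone []string
	for _, uid := range recorded {
		if !current[uid] {
			gone = append(gone, uid)
		}
	}
	return gone
}

func main() {
	recorded := []string{"491fc143-bf29-11e6-be61-42010af00036"} // UID recorded before the resize
	current := map[string]bool{ // UIDs present in the replica set dump above
		"491ef799-bf29-11e6-be61-42010af00036": true,
		"491f699b-bf29-11e6-be61-42010af00036": true,
		"8788accc-bf29-11e6-be61-42010af00036": true,
	}
	fmt.Println(missingFromReplicaSet(recorded, current)) // [491fc143-bf29-11e6-be61-42010af00036]
}
```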

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465
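
The recurring "timed out waiting for the condition" string in these failures is the generic sentinel returned by the poll-until-timeout helpers, so by itself it only says that some condition never became true within the deadline. A minimal sketch of that pattern (illustrative only, not the framework's wait package):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errWaitTimeout mirrors the sentinel error seen throughout these reports.
var errWaitTimeout = errors.New("timed out waiting for the condition")

// pollUntil calls condition every interval until it returns true, returns an
// error, or the timeout elapses.
func pollUntil(interval, timeout time.Duration, condition func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		done, err := condition()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(interval)
	}
	return errWaitTimeout
}

func main() {
	// A condition that never becomes true, e.g. a pod that never reaches the
	// expected phase, produces exactly the error string above.
	err := pollUntil(10*time.Millisecond, 50*time.Millisecond, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err) // timed out waiting for the condition
}
```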

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc82293c5c0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-tj3b8 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"resourceVersion\":\"34999\", \"creationTimestamp\":\"2016-12-11T01:51:37Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-tj3b8\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-tj3b8/services/redis-master\", \"uid\":\"5a11767a-bf44-11e6-a3ef-42010af0001b\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.244.222\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82180aa60 exit status 1 <nil> true [0xc821e6c0a8 0xc821e6c0d8 0xc821e6c0f0] [0xc821e6c0a8 0xc821e6c0d8 0xc821e6c0f0] [0xc821e6c0b8 0xc821e6c0e8] [0xa97590 0xa97590] 0xc8219c93e0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"resourceVersion\":\"34999\", \"creationTimestamp\":\"2016-12-11T01:51:37Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-tj3b8\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-tj3b8/services/redis-master\", \"uid\":\"5a11767a-bf44-11e6-a3ef-42010af0001b\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.244.222\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-tj3b8 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"resourceVersion":"34999", "creationTimestamp":"2016-12-11T01:51:37Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-tj3b8", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-tj3b8/services/redis-master", "uid":"5a11767a-bf44-11e6-a3ef-42010af0001b"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.244.222", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82180aa60 exit status 1 <nil> true [0xc821e6c0a8 0xc821e6c0d8 0xc821e6c0f0] [0xc821e6c0a8 0xc821e6c0d8 0xc821e6c0f0] [0xc821e6c0b8 0xc821e6c0e8] [0xa97590 0xa97590] 0xc8219c93e0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"resourceVersion":"34999", "creationTimestamp":"2016-12-11T01:51:37Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-tj3b8", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-tj3b8/services/redis-master", "uid":"5a11767a-bf44-11e6-a3ef-42010af0001b"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.244.222", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
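
On the jsonpath failure itself: the service object in the dump came back with "type":"ClusterIP", and the API server only allocates a nodePort for NodePort/LoadBalancer services, so {.spec.ports[0].nodePort} has nothing to resolve. A self-contained sketch of why the field is absent (illustrative only):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// The service from the failure above, reduced to the relevant fields.
const clusterIPService = `{
  "spec": {
    "type": "ClusterIP",
    "ports": [{"protocol": "TCP", "port": 6379, "targetPort": "redis-server"}]
  }
}`

func main() {
	var svc struct {
		Spec struct {
			Type  string `json:"type"`
			Ports []struct {
				NodePort *int32 `json:"nodePort"`
			} `json:"ports"`
		} `json:"spec"`
	}
	if err := json.Unmarshal([]byte(clusterIPService), &svc); err != nil {
		panic(err)
	}
	// For a ClusterIP service no nodePort is ever allocated, so the field is
	// simply missing, and the template {.spec.ports[0].nodePort} errors out.
	fmt.Println(svc.Spec.Ports[0].NodePort == nil) // true
}
```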

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc820d28fa0>: {
        s: "service verification failed for: 10.99.242.17\nexpected [service1-3qnj5 service1-ht6zq service1-rvbw3]\nreceived [service1-3qnj5 service1-rvbw3]",
    }
    service verification failed for: 10.99.242.17
    expected [service1-3qnj5 service1-ht6zq service1-rvbw3]
    received [service1-3qnj5 service1-rvbw3]
not to have occurred

Issues about this test specifically: #29514 #38288
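
The "service verification failed" message is a set difference between the endpoints the test expected to answer through the service VIP and the ones that actually did; after the kube-proxy restart, service1-ht6zq never responded. A minimal sketch of that comparison (illustrative only), using the names from this run:

```go
package main

import "fmt"

// missing returns the expected endpoints that never answered through the
// service IP; a non-empty result fails the verification above.
func missing(expected, received []string) []string {
	got := make(map[string]bool, len(received))
	for _, name := range received {
		got[name] = true
	}
	var out []string
	for _, name := range expected {
		if !got[name] {
			out = append(out, name)
		}
	}
	return out
}

func main() {
	expected := []string{"service1-3qnj5", "service1-ht6zq", "service1-rvbw3"}
	received := []string{"service1-3qnj5", "service1-rvbw3"}
	fmt.Println(missing(expected, received)) // [service1-ht6zq]
}
```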

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true
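
The bare "Expected <bool>: false to be true" output matches a Gomega boolean assertion with no annotation, which is why these init-container failures carry no detail about which condition was false. A small reproduction of the message format (assuming the assertion is Gomega's BeTrue; the import below is the standard Gomega package):

```go
package main

import (
	"fmt"

	"github.com/onsi/gomega"
)

func main() {
	// Print assertion failures instead of aborting so the message is visible.
	gomega.RegisterFailHandler(func(message string, callerSkip ...int) {
		fmt.Println(message)
	})
	// An unannotated boolean assertion yields exactly the context-free
	// "Expected <bool>: false to be true" seen in the reports above.
	gomega.Expect(false).To(gomega.BeTrue())
}
```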

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:725
Dec 10 13:28:44.835: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready

Issues about this test specifically: #26134

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc8207aa300>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-10 13:48:31 -0800 PST} FinishedAt:{Time:2016-12-10 13:48:41 -0800 PST} ContainerID:docker://f2db2df03ab2e3076b1083f179b9357c3930897f81ad653c0cad9cbd83a0b356}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-10 13:48:31 -0800 PST} FinishedAt:{Time:2016-12-10 13:48:41 -0800 PST} ContainerID:docker://f2db2df03ab2e3076b1083f179b9357c3930897f81ad653c0cad9cbd83a0b356}
not to have occurred

Issues about this test specifically: #30131 #31402
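
Here the probe pod itself exited 1 after roughly ten seconds, i.e. the cross-node fetch failed rather than the test harness. A stand-in for that probe (illustrative only; the real test runs a wget container, and the target below is a hypothetical peer pod IP:port):

```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	target := "10.96.1.3:8080" // hypothetical pod IP:port on the other node
	conn, err := net.DialTimeout("tcp", target, 10*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // surfaces as ContainerStateTerminated{ExitCode:1}, as above
	}
	conn.Close()
}
```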

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:281
Expected
    <bool>: false
to be true

Issues about this test specifically: #28426 #32168 #33756 #34797

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/33/

Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 10 22:52:57.134: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-6e3d2277-p3bi:
 container "runtime": expected RSS memory (MB) < 314572800; got 535220224
node gke-bootstrap-e2e-default-pool-6e3d2277-x4po:
 container "runtime": expected RSS memory (MB) < 314572800; got 542044160
node gke-bootstrap-e2e-default-pool-6e3d2277-9p3n:
 container "runtime": expected RSS memory (MB) < 314572800; got 526708736

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
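
Note the units in this check: despite the "(MB)" label, both sides of the comparison are raw bytes, and 314572800 bytes is exactly 300 MiB. Converting this run's numbers (a quick worked check, not framework code):

```go
package main

import "fmt"

func main() {
	const limitBytes = 314572800 // the test's RSS limit: 300*1024*1024, i.e. 300 MiB despite the "(MB)" label
	observed := []int64{535220224, 542044160, 526708736} // "runtime" container RSS reported for the three nodes above
	for _, b := range observed {
		fmt.Printf("%d bytes = %.0f MiB (limit %.0f MiB, over by %.0f%%)\n",
			b, float64(b)/(1<<20), float64(limitBytes)/(1<<20),
			100*(float64(b)/float64(limitBytes)-1))
	}
}
```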

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:56
Dec 10 20:49:55.580: Failed to read from kubectl exec stdout: EOF

Issues about this test specifically: #27023 #34604 #38550

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821f5cbc0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.70.38 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-42tck -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.247.156\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"resourceVersion\":\"11984\", \"creationTimestamp\":\"2016-12-11T05:29:52Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-42tck\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-42tck/services/redis-master\", \"uid\":\"d77d5c43-bf62-11e6-9fa3-42010af00027\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82117c120 exit status 1 <nil> true [0xc821652010 0xc821652028 0xc821652048] [0xc821652010 0xc821652028 0xc821652048] [0xc821652020 0xc821652040] [0xa97590 0xa97590] 0xc821ed5f20}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.247.156\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"resourceVersion\":\"11984\", \"creationTimestamp\":\"2016-12-11T05:29:52Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-42tck\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-42tck/services/redis-master\", \"uid\":\"d77d5c43-bf62-11e6-9fa3-42010af00027\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.70.38 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-42tck -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.247.156", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"11984", "creationTimestamp":"2016-12-11T05:29:52Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-42tck", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-42tck/services/redis-master", "uid":"d77d5c43-bf62-11e6-9fa3-42010af00027"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82117c120 exit status 1 <nil> true [0xc821652010 0xc821652028 0xc821652048] [0xc821652010 0xc821652028 0xc821652048] [0xc821652020 0xc821652040] [0xa97590 0xa97590] 0xc821ed5f20}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.247.156", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"11984", "creationTimestamp":"2016-12-11T05:29:52Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-42tck", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-42tck/services/redis-master", "uid":"d77d5c43-bf62-11e6-9fa3-42010af00027"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/34/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82155ebd0>: {
        s: "Namespace e2e-tests-port-forwarding-gcm1m is active",
    }
    Namespace e2e-tests-port-forwarding-gcm1m is active
not to have occurred

Issues about this test specifically: #29816 #30018 #33974
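
All of these [Serial] SchedulerPredicates failures share one precondition error: before running, the test waits for every earlier test's e2e namespace to finish deleting, and here e2e-tests-port-forwarding-gcm1m was still terminating. A minimal sketch of that gate (illustrative only, assuming the e2e-tests- name prefix is the filter):

```go
package main

import (
	"fmt"
	"strings"
)

// activeE2ENamespaces mirrors the precondition: [Serial] scheduler tests
// refuse to start while any earlier test's e2e namespace is still active.
func activeE2ENamespaces(namespaces []string) []string {
	var active []string
	for _, ns := range namespaces {
		if strings.HasPrefix(ns, "e2e-tests-") {
			active = append(active, ns)
		}
	}
	return active
}

func main() {
	current := []string{"default", "kube-system", "e2e-tests-port-forwarding-gcm1m"}
	if leaked := activeE2ENamespaces(current); len(leaked) > 0 {
		fmt.Printf("Namespace %s is active\n", leaked[0]) // matches the failure text above
	}
}
```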

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820aad280>: {
        s: "Namespace e2e-tests-port-forwarding-gcm1m is active",
    }
    Namespace e2e-tests-port-forwarding-gcm1m is active
not to have occurred

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820afed90>: {
        s: "Namespace e2e-tests-port-forwarding-gcm1m is active",
    }
    Namespace e2e-tests-port-forwarding-gcm1m is active
not to have occurred

Issues about this test specifically: #35279

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c5060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82079d850>: {
        s: "Namespace e2e-tests-port-forwarding-gcm1m is active",
    }
    Namespace e2e-tests-port-forwarding-gcm1m is active
not to have occurred

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 11 05:16:45.448: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-ccc160c0-41cz:
 container "runtime": expected RSS memory (MB) < 314572800; got 511213568
node gke-bootstrap-e2e-default-pool-ccc160c0-c9ct:
 container "runtime": expected RSS memory (MB) < 314572800; got 534622208
node gke-bootstrap-e2e-default-pool-ccc160c0-djht:
 container "runtime": expected RSS memory (MB) < 314572800; got 525463552

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc824769850>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.217 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-pfzsw -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-pfzsw/services/redis-master\", \"uid\":\"b3befd05-bfa4-11e6-b2ce-42010af00027\", \"resourceVersion\":\"23072\", \"creationTimestamp\":\"2016-12-11T13:21:19Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-pfzsw\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.179\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821599ea0 exit status 1 <nil> true [0xc820fa0350 0xc820fa0368 0xc820fa0380] [0xc820fa0350 0xc820fa0368 0xc820fa0380] [0xc820fa0360 0xc820fa0378] [0xa97590 0xa97590] 0xc8218e37a0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-pfzsw/services/redis-master\", \"uid\":\"b3befd05-bfa4-11e6-b2ce-42010af00027\", \"resourceVersion\":\"23072\", \"creationTimestamp\":\"2016-12-11T13:21:19Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-pfzsw\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.179\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.136.217 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-pfzsw -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-pfzsw/services/redis-master", "uid":"b3befd05-bfa4-11e6-b2ce-42010af00027", "resourceVersion":"23072", "creationTimestamp":"2016-12-11T13:21:19Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-pfzsw"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.179", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821599ea0 exit status 1 <nil> true [0xc820fa0350 0xc820fa0368 0xc820fa0380] [0xc820fa0350 0xc820fa0368 0xc820fa0380] [0xc820fa0360 0xc820fa0378] [0xa97590 0xa97590] 0xc8218e37a0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-pfzsw/services/redis-master", "uid":"b3befd05-bfa4-11e6-b2ce-42010af00027", "resourceVersion":"23072", "creationTimestamp":"2016-12-11T13:21:19Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-pfzsw"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.179", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c5060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Dec 11 02:13:31.825: All nodes should be ready after test, Get https://104.154.136.217/api/v1/nodes: read tcp 172.17.0.10:55732->104.154.136.217:443: read: connection reset by peer

Issues about this test specifically: #26955

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/35/

Multiple broken tests:

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc8208755f0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.227.227 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-8v3ph -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-8v3ph\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-8v3ph/services/redis-master\", \"uid\":\"ceca12f0-bfc5-11e6-a645-42010af00027\", \"resourceVersion\":\"4441\", \"creationTimestamp\":\"2016-12-11T17:18:18Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.99.244.76\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820ebbb80 exit status 1 <nil> true [0xc82019ea98 0xc82019eab8 0xc82019eae0] [0xc82019ea98 0xc82019eab8 0xc82019eae0] [0xc82019eab0 0xc82019ead8] [0xa97590 0xa97590] 0xc820fefe60}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-8v3ph\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-8v3ph/services/redis-master\", \"uid\":\"ceca12f0-bfc5-11e6-a645-42010af00027\", \"resourceVersion\":\"4441\", \"creationTimestamp\":\"2016-12-11T17:18:18Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.99.244.76\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.227.227 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-8v3ph -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-8v3ph", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-8v3ph/services/redis-master", "uid":"ceca12f0-bfc5-11e6-a645-42010af00027", "resourceVersion":"4441", "creationTimestamp":"2016-12-11T17:18:18Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.99.244.76", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820ebbb80 exit status 1 <nil> true [0xc82019ea98 0xc82019eab8 0xc82019eae0] [0xc82019ea98 0xc82019eab8 0xc82019eae0] [0xc82019eab0 0xc82019ead8] [0xa97590 0xa97590] 0xc820fefe60}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-8v3ph", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-8v3ph/services/redis-master", "uid":"ceca12f0-bfc5-11e6-a645-42010af00027", "resourceVersion":"4441", "creationTimestamp":"2016-12-11T17:18:18Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.99.244.76", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 11 10:49:18.744: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-9eec89df-5igi:
 container "runtime": expected RSS memory (MB) < 314572800; got 514121728
node gke-bootstrap-e2e-default-pool-9eec89df-lbnb:
 container "runtime": expected RSS memory (MB) < 314572800; got 528236544
node gke-bootstrap-e2e-default-pool-9eec89df-n0ef:
 container "runtime": expected RSS memory (MB) < 314572800; got 528109568

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc82695d070>: {
        s: "failed to wait for pods responding: pod with UID 2aea2eaf-bfef-11e6-a645-42010af00027 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-539tt/pods 37695} [{{ } {my-hostname-delete-node-ch5rz my-hostname-delete-node- e2e-tests-resize-nodes-539tt /api/v1/namespaces/e2e-tests-resize-nodes-539tt/pods/my-hostname-delete-node-ch5rz 5eda67d6-bfef-11e6-a645-42010af00027 37545 0 {2016-12-11 14:15:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-539tt\",\"name\":\"my-hostname-delete-node\",\"uid\":\"2ae6ee3e-bfef-11e6-a645-42010af00027\",\"apiVersion\":\"v1\",\"resourceVersion\":\"37487\"}}\n] [{v1 ReplicationController my-hostname-delete-node 2ae6ee3e-bfef-11e6-a645-42010af00027 0xc8219c7727}] []} {[{default-token-p48nf {<nil> <nil> <nil> <nil> <nil> 0xc821b86180 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-p48nf true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8219c7820 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9eec89df-lbnb 0xc826930040 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:15:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:15:49 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:15:49 -0800 PST}  }]   10.240.0.4 10.96.2.7 2016-12-11T14:15:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc821fb0200 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://4ca36eb56714505ac90298f25bbe1c20b82ca9fabb59873a6d18aa16f7ccb24c}]}} {{ } {my-hostname-delete-node-mzt7f my-hostname-delete-node- e2e-tests-resize-nodes-539tt /api/v1/namespaces/e2e-tests-resize-nodes-539tt/pods/my-hostname-delete-node-mzt7f 5ed644f2-bfef-11e6-a645-42010af00027 37547 0 {2016-12-11 14:15:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-539tt\",\"name\":\"my-hostname-delete-node\",\"uid\":\"2ae6ee3e-bfef-11e6-a645-42010af00027\",\"apiVersion\":\"v1\",\"resourceVersion\":\"37487\"}}\n] [{v1 ReplicationController my-hostname-delete-node 2ae6ee3e-bfef-11e6-a645-42010af00027 0xc8219c7b27}] []} {[{default-token-p48nf {<nil> <nil> <nil> <nil> <nil> 0xc821b86390 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-p48nf true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8219c7c40 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9eec89df-n0ef 0xc826930180 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:15:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-11 14:15:50 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:15:49 -0800 PST}  }]   10.240.0.2 10.96.1.3 2016-12-11T14:15:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc821fb0220 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://166a57de493b9f7a0ea6ba4f8ebb977f217301fa954f0e58d4f0dfab3a7e422e}]}} {{ } {my-hostname-delete-node-xk786 my-hostname-delete-node- e2e-tests-resize-nodes-539tt /api/v1/namespaces/e2e-tests-resize-nodes-539tt/pods/my-hostname-delete-node-xk786 2aea198d-bfef-11e6-a645-42010af00027 37412 0 {2016-12-11 14:14:21 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-539tt\",\"name\":\"my-hostname-delete-node\",\"uid\":\"2ae6ee3e-bfef-11e6-a645-42010af00027\",\"apiVersion\":\"v1\",\"resourceVersion\":\"37395\"}}\n] [{v1 ReplicationController my-hostname-delete-node 2ae6ee3e-bfef-11e6-a645-42010af00027 0xc8219c7ff7}] []} {[{default-token-p48nf {<nil> <nil> <nil> <nil> <nil> 0xc821b863f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-p48nf true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82695c120 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9eec89df-lbnb 0xc8269302c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:14:21 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:14:23 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:14:21 -0800 PST}  }]   10.240.0.4 10.96.2.3 2016-12-11T14:14:21-08:00 [] [{my-hostname-delete-node {<nil> 0xc821fb0260 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://367537d4d4327071df03d4b5289cc651b08a931394de64617b0b8d1c330d4c1e}]}}]}",
    }
    failed to wait for pods responding: pod with UID 2aea2eaf-bfef-11e6-a645-42010af00027 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-539tt/pods 37695} [{{ } {my-hostname-delete-node-ch5rz my-hostname-delete-node- e2e-tests-resize-nodes-539tt /api/v1/namespaces/e2e-tests-resize-nodes-539tt/pods/my-hostname-delete-node-ch5rz 5eda67d6-bfef-11e6-a645-42010af00027 37545 0 {2016-12-11 14:15:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-539tt","name":"my-hostname-delete-node","uid":"2ae6ee3e-bfef-11e6-a645-42010af00027","apiVersion":"v1","resourceVersion":"37487"}}
    ] [{v1 ReplicationController my-hostname-delete-node 2ae6ee3e-bfef-11e6-a645-42010af00027 0xc8219c7727}] []} {[{default-token-p48nf {<nil> <nil> <nil> <nil> <nil> 0xc821b86180 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-p48nf true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8219c7820 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9eec89df-lbnb 0xc826930040 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:15:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:15:49 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:15:49 -0800 PST}  }]   10.240.0.4 10.96.2.7 2016-12-11T14:15:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc821fb0200 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://4ca36eb56714505ac90298f25bbe1c20b82ca9fabb59873a6d18aa16f7ccb24c}]}} {{ } {my-hostname-delete-node-mzt7f my-hostname-delete-node- e2e-tests-resize-nodes-539tt /api/v1/namespaces/e2e-tests-resize-nodes-539tt/pods/my-hostname-delete-node-mzt7f 5ed644f2-bfef-11e6-a645-42010af00027 37547 0 {2016-12-11 14:15:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-539tt","name":"my-hostname-delete-node","uid":"2ae6ee3e-bfef-11e6-a645-42010af00027","apiVersion":"v1","resourceVersion":"37487"}}
    ] [{v1 ReplicationController my-hostname-delete-node 2ae6ee3e-bfef-11e6-a645-42010af00027 0xc8219c7b27}] []} {[{default-token-p48nf {<nil> <nil> <nil> <nil> <nil> 0xc821b86390 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-p48nf true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8219c7c40 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9eec89df-n0ef 0xc826930180 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:15:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:15:50 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:15:49 -0800 PST}  }]   10.240.0.2 10.96.1.3 2016-12-11T14:15:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc821fb0220 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://166a57de493b9f7a0ea6ba4f8ebb977f217301fa954f0e58d4f0dfab3a7e422e}]}} {{ } {my-hostname-delete-node-xk786 my-hostname-delete-node- e2e-tests-resize-nodes-539tt /api/v1/namespaces/e2e-tests-resize-nodes-539tt/pods/my-hostname-delete-node-xk786 2aea198d-bfef-11e6-a645-42010af00027 37412 0 {2016-12-11 14:14:21 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-539tt","name":"my-hostname-delete-node","uid":"2ae6ee3e-bfef-11e6-a645-42010af00027","apiVersion":"v1","resourceVersion":"37395"}}
    ] [{v1 ReplicationController my-hostname-delete-node 2ae6ee3e-bfef-11e6-a645-42010af00027 0xc8219c7ff7}] []} {[{default-token-p48nf {<nil> <nil> <nil> <nil> <nil> 0xc821b863f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-p48nf true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82695c120 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9eec89df-lbnb 0xc8269302c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:14:21 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:14:23 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 14:14:21 -0800 PST}  }]   10.240.0.4 10.96.2.3 2016-12-11T14:14:21-08:00 [] [{my-hostname-delete-node {<nil> 0xc821fb0260 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://367537d4d4327071df03d4b5289cc651b08a931394de64617b0b8d1c330d4c1e}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/36/

Multiple broken tests:

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391
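
For reference, the recurring "timed out waiting for the condition" text in these job and pod failures is the generic error a poll-until-timeout helper returns when its condition function never reports success before the deadline. A minimal self-contained sketch of that pattern (the names and durations below are illustrative, not the e2e suite's actual `wait` code):

```go
// Minimal sketch of the poll-until-timeout pattern behind the generic
// "timed out waiting for the condition" error seen throughout this issue.
// pollCondition and its interval/timeout values are illustrative.
package main

import (
	"errors"
	"fmt"
	"time"
)

// errWaitTimeout mirrors the wording quoted in the failures above.
var errWaitTimeout = errors.New("timed out waiting for the condition")

// pollCondition calls cond every interval until it returns true or the
// timeout elapses. The job/pod tests fail with errWaitTimeout when the
// watched object never reaches the expected state.
func pollCondition(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		done, err := cond()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(interval)
	}
	return errWaitTimeout
}

func main() {
	// A condition that never becomes true, e.g. "job has a failed pod".
	err := pollCondition(100*time.Millisecond, 500*time.Millisecond, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err) // timed out waiting for the condition
}
```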

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131
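
The init-container tests here poll the pod until a predicate flips to true (every init container terminated cleanly before the app containers started), so the bare "Expected &lt;bool&gt;: false to be true" means that predicate never flipped within the watch window. A self-contained sketch of such a predicate, using local stand-in types rather than the real API structs:

```go
// Sketch of the kind of predicate the init-container tests poll: true only
// once every init container has terminated with exit code 0. containerStatus
// is a local stand-in, not client-go's type.
package main

import "fmt"

type containerStatus struct {
	Name     string
	ExitCode int
	Done     bool
}

// initContainersSucceeded stays false while any init container is still
// running or has failed, which is what a timed-out watch reports as
// "Expected <bool>: false to be true".
func initContainersSucceeded(statuses []containerStatus) bool {
	for _, s := range statuses {
		if !s.Done || s.ExitCode != 0 {
			return false
		}
	}
	return len(statuses) > 0
}

func main() {
	statuses := []containerStatus{
		{Name: "init1", ExitCode: 0, Done: true},
		{Name: "init2", Done: false}, // still running: predicate stays false
	}
	fmt.Println(initContainersSucceeded(statuses)) // false
}
```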

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820cc3530>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.207.205 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-300f3 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"6a86e975-c02d-11e6-802e-42010af00027\", \"resourceVersion\":\"45035\", \"creationTimestamp\":\"2016-12-12T05:39:57Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-300f3\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-300f3/services/redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.163\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8234b2360 exit status 1 <nil> true [0xc82088c580 0xc82088c598 0xc82088c5b0] [0xc82088c580 0xc82088c598 0xc82088c5b0] [0xc82088c590 0xc82088c5a8] [0xa97590 0xa97590] 0xc821035e60}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"6a86e975-c02d-11e6-802e-42010af00027\", \"resourceVersion\":\"45035\", \"creationTimestamp\":\"2016-12-12T05:39:57Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-300f3\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-300f3/services/redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.163\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.207.205 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-300f3 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"6a86e975-c02d-11e6-802e-42010af00027", "resourceVersion":"45035", "creationTimestamp":"2016-12-12T05:39:57Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-300f3", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-300f3/services/redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.163"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8234b2360 exit status 1 <nil> true [0xc82088c580 0xc82088c598 0xc82088c5b0] [0xc82088c580 0xc82088c598 0xc82088c5b0] [0xc82088c590 0xc82088c5a8] [0xa97590 0xa97590] 0xc821035e60}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"6a86e975-c02d-11e6-802e-42010af00027", "resourceVersion":"45035", "creationTimestamp":"2016-12-12T05:39:57Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-300f3", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-300f3/services/redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.163"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
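
The jsonpath failure itself is mechanical: the service dumped above is `"type":"ClusterIP"`, so its port entry carries no `nodePort` field at all and `{.spec.ports[0].nodePort}` has nothing to resolve; the test, which expects the applied service to keep a nodePort, then fails. A guarded lookup over the same JSON shape (the field names match the dump; the struct and helper below are illustrative):

```go
// Decode the service spec shape shown in the failure and guard the nodePort
// read instead of assuming it exists. Field names match the dumped object;
// everything else here is a sketch.
package main

import (
	"encoding/json"
	"fmt"
)

type servicePort struct {
	Protocol string `json:"protocol"`
	Port     int    `json:"port"`
	NodePort int    `json:"nodePort,omitempty"` // absent on ClusterIP services
}

type serviceSpec struct {
	Type  string        `json:"type"`
	Ports []servicePort `json:"ports"`
}

func main() {
	// The spec exactly as dumped by the failing test (targetPort omitted).
	raw := `{"type":"ClusterIP","ports":[{"protocol":"TCP","port":6379}]}`
	var spec serviceSpec
	if err := json.Unmarshal([]byte(raw), &spec); err != nil {
		panic(err)
	}
	if spec.Type != "NodePort" || len(spec.Ports) == 0 || spec.Ports[0].NodePort == 0 {
		fmt.Printf("service is %s: no nodePort to read\n", spec.Type)
		return
	}
	fmt.Println(spec.Ports[0].NodePort)
}
```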

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc82104bb60>: {
        s: "failed to wait for pods responding: pod with UID 91488ba3-c00b-11e6-b518-42010af00027 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-h899f/pods 16740} [{{ } {my-hostname-delete-node-dcxjn my-hostname-delete-node- e2e-tests-resize-nodes-h899f /api/v1/namespaces/e2e-tests-resize-nodes-h899f/pods/my-hostname-delete-node-dcxjn c410b5e3-c00b-11e6-b518-42010af00027 16580 0 {2016-12-11 17:39:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-h899f\",\"name\":\"my-hostname-delete-node\",\"uid\":\"9145fc7c-c00b-11e6-b518-42010af00027\",\"apiVersion\":\"v1\",\"resourceVersion\":\"16489\"}}\n] [{v1 ReplicationController my-hostname-delete-node 9145fc7c-c00b-11e6-b518-42010af00027 0xc821770fb7}] []} {[{default-token-f511v {<nil> <nil> <nil> <nil> <nil> 0xc820e2b1a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f511v true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8217710b0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8cead9e7-5wg3 0xc8213bb100 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:39:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:39:06 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:39:04 -0800 PST}  }]   10.240.0.2 10.96.0.3 2016-12-11T17:39:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc820fe96c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://59a7ff02e50e530bbd84e54ebfa6f808214b9e6f22bf0e4877af96e908808449}]}} {{ } {my-hostname-delete-node-l9d84 my-hostname-delete-node- e2e-tests-resize-nodes-h899f /api/v1/namespaces/e2e-tests-resize-nodes-h899f/pods/my-hostname-delete-node-l9d84 914abd72-c00b-11e6-b518-42010af00027 16417 0 {2016-12-11 17:37:39 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-h899f\",\"name\":\"my-hostname-delete-node\",\"uid\":\"9145fc7c-c00b-11e6-b518-42010af00027\",\"apiVersion\":\"v1\",\"resourceVersion\":\"16401\"}}\n] [{v1 ReplicationController my-hostname-delete-node 9145fc7c-c00b-11e6-b518-42010af00027 0xc821771357}] []} {[{default-token-f511v {<nil> <nil> <nil> <nil> <nil> 0xc820e2b200 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f511v true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821771450 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8cead9e7-pvx0 0xc8213bb1c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:37:39 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-11 17:37:40 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:37:39 -0800 PST}  }]   10.240.0.3 10.96.2.5 2016-12-11T17:37:39-08:00 [] [{my-hostname-delete-node {<nil> 0xc820fe96e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ffc80aa8dc2fc479dbad25c5f9633c479471e684d2ef2909b6b9642b584da44a}]}} {{ } {my-hostname-delete-node-nfxqj my-hostname-delete-node- e2e-tests-resize-nodes-h899f /api/v1/namespaces/e2e-tests-resize-nodes-h899f/pods/my-hostname-delete-node-nfxqj 91484019-c00b-11e6-b518-42010af00027 16415 0 {2016-12-11 17:37:39 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-h899f\",\"name\":\"my-hostname-delete-node\",\"uid\":\"9145fc7c-c00b-11e6-b518-42010af00027\",\"apiVersion\":\"v1\",\"resourceVersion\":\"16401\"}}\n] [{v1 ReplicationController my-hostname-delete-node 9145fc7c-c00b-11e6-b518-42010af00027 0xc821771707}] []} {[{default-token-f511v {<nil> <nil> <nil> <nil> <nil> 0xc820e2b260 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f511v true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821771810 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8cead9e7-pvx0 0xc8213bb2c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:37:39 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:37:40 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:37:39 -0800 PST}  }]   10.240.0.3 10.96.2.4 2016-12-11T17:37:39-08:00 [] [{my-hostname-delete-node {<nil> 0xc820fe9700 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f60df46c2d60de0f7f59cc050f1e645de5b9a4d30dd39d52e84196b06c1b8c03}]}}]}",
    }
    failed to wait for pods responding: pod with UID 91488ba3-c00b-11e6-b518-42010af00027 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-h899f/pods 16740} [{{ } {my-hostname-delete-node-dcxjn my-hostname-delete-node- e2e-tests-resize-nodes-h899f /api/v1/namespaces/e2e-tests-resize-nodes-h899f/pods/my-hostname-delete-node-dcxjn c410b5e3-c00b-11e6-b518-42010af00027 16580 0 {2016-12-11 17:39:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-h899f","name":"my-hostname-delete-node","uid":"9145fc7c-c00b-11e6-b518-42010af00027","apiVersion":"v1","resourceVersion":"16489"}}
    ] [{v1 ReplicationController my-hostname-delete-node 9145fc7c-c00b-11e6-b518-42010af00027 0xc821770fb7}] []} {[{default-token-f511v {<nil> <nil> <nil> <nil> <nil> 0xc820e2b1a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f511v true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8217710b0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8cead9e7-5wg3 0xc8213bb100 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:39:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:39:06 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:39:04 -0800 PST}  }]   10.240.0.2 10.96.0.3 2016-12-11T17:39:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc820fe96c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://59a7ff02e50e530bbd84e54ebfa6f808214b9e6f22bf0e4877af96e908808449}]}} {{ } {my-hostname-delete-node-l9d84 my-hostname-delete-node- e2e-tests-resize-nodes-h899f /api/v1/namespaces/e2e-tests-resize-nodes-h899f/pods/my-hostname-delete-node-l9d84 914abd72-c00b-11e6-b518-42010af00027 16417 0 {2016-12-11 17:37:39 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-h899f","name":"my-hostname-delete-node","uid":"9145fc7c-c00b-11e6-b518-42010af00027","apiVersion":"v1","resourceVersion":"16401"}}
    ] [{v1 ReplicationController my-hostname-delete-node 9145fc7c-c00b-11e6-b518-42010af00027 0xc821771357}] []} {[{default-token-f511v {<nil> <nil> <nil> <nil> <nil> 0xc820e2b200 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f511v true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821771450 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8cead9e7-pvx0 0xc8213bb1c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:37:39 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:37:40 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:37:39 -0800 PST}  }]   10.240.0.3 10.96.2.5 2016-12-11T17:37:39-08:00 [] [{my-hostname-delete-node {<nil> 0xc820fe96e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ffc80aa8dc2fc479dbad25c5f9633c479471e684d2ef2909b6b9642b584da44a}]}} {{ } {my-hostname-delete-node-nfxqj my-hostname-delete-node- e2e-tests-resize-nodes-h899f /api/v1/namespaces/e2e-tests-resize-nodes-h899f/pods/my-hostname-delete-node-nfxqj 91484019-c00b-11e6-b518-42010af00027 16415 0 {2016-12-11 17:37:39 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-h899f","name":"my-hostname-delete-node","uid":"9145fc7c-c00b-11e6-b518-42010af00027","apiVersion":"v1","resourceVersion":"16401"}}
    ] [{v1 ReplicationController my-hostname-delete-node 9145fc7c-c00b-11e6-b518-42010af00027 0xc821771707}] []} {[{default-token-f511v {<nil> <nil> <nil> <nil> <nil> 0xc820e2b260 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f511v true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821771810 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8cead9e7-pvx0 0xc8213bb2c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:37:39 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:37:40 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 17:37:39 -0800 PST}  }]   10.240.0.3 10.96.2.4 2016-12-11T17:37:39-08:00 [] [{my-hostname-delete-node {<nil> 0xc820fe9700 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f60df46c2d60de0f7f59cc050f1e645de5b9a4d30dd39d52e84196b06c1b8c03}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
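
What this resize failure reports: the test records the UIDs of the ReplicationController's pods before deleting a node, then waits for those same pods to respond afterwards; a pod the RC recreated after the deletion carries a new UID and no longer matches the recorded set. A sketch of that UID comparison, with illustrative local types:

```go
// Sketch of the UID check behind "pod with UID ... is no longer a member of
// the replica set": recorded UIDs are compared against the live pod list
// after the node deletion. pod and findStalePods are illustrative names.
package main

import "fmt"

type pod struct {
	Name string
	UID  string
}

// findStalePods returns recorded UIDs that no longer appear in the live
// replica set; any hit produces the failure quoted above.
func findStalePods(recorded []string, live []pod) []string {
	liveSet := make(map[string]bool, len(live))
	for _, p := range live {
		liveSet[p.UID] = true
	}
	var stale []string
	for _, uid := range recorded {
		if !liveSet[uid] {
			stale = append(stale, uid)
		}
	}
	return stale
}

func main() {
	recorded := []string{"uid-a", "uid-b", "uid-c"}
	live := []pod{
		{"my-hostname-delete-node-1", "uid-a"},
		{"my-hostname-delete-node-2", "uid-b"},
		{"my-hostname-delete-node-new", "uid-d"}, // RC replacement, new UID
	}
	fmt.Println(findStalePods(recorded, live)) // [uid-c]
}
```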

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:281
Expected
    <bool>: false
to be true

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 11 15:56:37.535: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-8cead9e7-5wg3:
 container "runtime": expected RSS memory (MB) < 314572800; got 522739712
node gke-bootstrap-e2e-default-pool-8cead9e7-pvx0:
 container "runtime": expected RSS memory (MB) < 314572800; got 520814592
node gke-bootstrap-e2e-default-pool-8cead9e7-22vm:
 container "runtime": expected RSS memory (MB) < 314572800; got 522518528

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
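
One quirk worth noting in these messages: the threshold is labeled "(MB)" but both numbers are raw byte counts; 314572800 bytes is exactly 300 MiB, and the observed values are roughly 500 MiB. Quick conversion of the figures quoted above:

```go
// The failure text says "expected RSS memory (MB)" but compares raw byte
// counts: 314572800 bytes is exactly 300 MiB. Converting the numbers quoted
// in the log above for readability.
package main

import "fmt"

func toMiB(bytes int64) float64 { return float64(bytes) / (1 << 20) }

func main() {
	fmt.Printf("limit:    %.1f MiB\n", toMiB(314572800)) // 300.0 MiB
	fmt.Printf("observed: %.1f MiB\n", toMiB(522739712)) // ~498.5 MiB
}
```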

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/37/

Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc820c01770>: {
        s: "service verification failed for: 10.99.255.67\nexpected [service2-6d5gm service2-gx7dn service2-nfl3c]\nreceived [service2-gx7dn service2-nfl3c]",
    }
    service verification failed for: 10.99.255.67
    expected [service2-6d5gm service2-gx7dn service2-nfl3c]
    received [service2-gx7dn service2-nfl3c]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298
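
The "service verification failed" error above means the test kept querying the service VIP and collected hostnames from only two of the three serve_hostname backends. A rough self-contained sketch of that verification loop (`fetchHostname` stands in for the HTTP GET against the ClusterIP; names are illustrative):

```go
// Sketch of the endpoint verification that fails above: hit the service VIP
// repeatedly, collect the hostnames echoed back by serve_hostname pods, and
// fail if some expected backend never answers.
package main

import (
	"fmt"
	"sort"
)

func verifyService(expected []string, fetchHostname func() string, attempts int) error {
	seen := make(map[string]bool)
	for i := 0; i < attempts; i++ {
		seen[fetchHostname()] = true
	}
	var missing []string
	for _, name := range expected {
		if !seen[name] {
			missing = append(missing, name)
		}
	}
	if len(missing) > 0 {
		sort.Strings(missing)
		return fmt.Errorf("service verification failed: never heard from %v", missing)
	}
	return nil
}

func main() {
	// Simulate the situation in the log: service2-6d5gm never responds.
	responses := []string{"service2-gx7dn", "service2-nfl3c"}
	i := 0
	err := verifyService(
		[]string{"service2-6d5gm", "service2-gx7dn", "service2-nfl3c"},
		func() string { s := responses[i%len(responses)]; i++; return s },
		10,
	)
	fmt.Println(err)
}
```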

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820bfa4f0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.180.35 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-zqs1k -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-zqs1k/services/redis-master\", \"uid\":\"4f2a5159-c066-11e6-a7e7-42010af0001c\", \"resourceVersion\":\"45638\", \"creationTimestamp\":\"2016-12-12T12:27:12Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-zqs1k\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.244.207\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8219b22e0 exit status 1 <nil> true [0xc82003a908 0xc82003a920 0xc82003a968] [0xc82003a908 0xc82003a920 0xc82003a968] [0xc82003a918 0xc82003a960] [0xa97590 0xa97590] 0xc821ebc4e0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-zqs1k/services/redis-master\", \"uid\":\"4f2a5159-c066-11e6-a7e7-42010af0001c\", \"resourceVersion\":\"45638\", \"creationTimestamp\":\"2016-12-12T12:27:12Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-zqs1k\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.244.207\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.180.35 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-zqs1k -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-zqs1k/services/redis-master", "uid":"4f2a5159-c066-11e6-a7e7-42010af0001c", "resourceVersion":"45638", "creationTimestamp":"2016-12-12T12:27:12Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-zqs1k"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.244.207", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8219b22e0 exit status 1 <nil> true [0xc82003a908 0xc82003a920 0xc82003a968] [0xc82003a908 0xc82003a920 0xc82003a968] [0xc82003a918 0xc82003a960] [0xa97590 0xa97590] 0xc821ebc4e0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-zqs1k/services/redis-master", "uid":"4f2a5159-c066-11e6-a7e7-42010af0001c", "resourceVersion":"45638", "creationTimestamp":"2016-12-12T12:27:12Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-zqs1k"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.244.207", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc820baf5a0>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-11 22:45:37 -0800 PST} FinishedAt:{Time:2016-12-11 22:45:47 -0800 PST} ContainerID:docker://b1e70fa57a58555a31f64b29b89a8c4f810cb61bd29c39467e7e33307d0d342b}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-11 22:45:37 -0800 PST} FinishedAt:{Time:2016-12-11 22:45:47 -0800 PST} ContainerID:docker://b1e70fa57a58555a31f64b29b89a8c4f810cb61bd29c39467e7e33307d0d342b}
not to have occurred

Issues about this test specifically: #30131 #31402
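
In this granular check, a client pod on one node fetches an endpoint served by a pod on a different node, and any non-zero exit is treated as a cross-node connectivity failure. The probe half of that, as a plain Go sketch (the target address is illustrative; in the test it is another pod's IP reached across nodes):

```go
// Sketch of the cross-node connectivity probe: GET an HTTP endpoint served
// by a pod on another node and exit non-zero on failure, matching the
// ExitCode:1 reported for the 'different-node-wget' pod above.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	// Illustrative target: a peer pod's IP:port on a different node.
	resp, err := client.Get("http://10.96.2.3:8080/")
	if err != nil {
		fmt.Fprintln(os.Stderr, "connectivity check failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("reached peer pod, status:", resp.Status)
}
```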

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc82184d380>: {
        s: "failed to wait for pods responding: pod with UID fec93168-c038-11e6-a345-42010af0001c is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-dbsg1/pods 10159} [{{ } {my-hostname-delete-node-cb7ld my-hostname-delete-node- e2e-tests-resize-nodes-dbsg1 /api/v1/namespaces/e2e-tests-resize-nodes-dbsg1/pods/my-hostname-delete-node-cb7ld fec91476-c038-11e6-a345-42010af0001c 9795 0 {2016-12-11 23:02:50 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-dbsg1\",\"name\":\"my-hostname-delete-node\",\"uid\":\"fec74caf-c038-11e6-a345-42010af0001c\",\"apiVersion\":\"v1\",\"resourceVersion\":\"9776\"}}\n] [{v1 ReplicationController my-hostname-delete-node fec74caf-c038-11e6-a345-42010af0001c 0xc820f87087}] []} {[{default-token-mncxn {<nil> <nil> <nil> <nil> <nil> 0xc8212df800 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mncxn true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820f871f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-72dd2e11-3wmf 0xc82197ca40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:02:50 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:02:52 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:02:50 -0800 PST}  }]   10.240.0.2 10.96.0.3 2016-12-11T23:02:50-08:00 [] [{my-hostname-delete-node {<nil> 0xc82104d720 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3441340c0810516062ff07998a97ed6adbb63a35b24ae97f9216d88a4f3c2f2a}]}} {{ } {my-hostname-delete-node-cpvs8 my-hostname-delete-node- e2e-tests-resize-nodes-dbsg1 /api/v1/namespaces/e2e-tests-resize-nodes-dbsg1/pods/my-hostname-delete-node-cpvs8 4299a7ea-c039-11e6-a345-42010af0001c 9999 0 {2016-12-11 23:04:44 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-dbsg1\",\"name\":\"my-hostname-delete-node\",\"uid\":\"fec74caf-c038-11e6-a345-42010af0001c\",\"apiVersion\":\"v1\",\"resourceVersion\":\"9910\"}}\n] [{v1 ReplicationController my-hostname-delete-node fec74caf-c038-11e6-a345-42010af0001c 0xc820f875a7}] []} {[{default-token-mncxn {<nil> <nil> <nil> <nil> <nil> 0xc8212df860 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mncxn true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820f87700 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-72dd2e11-lr1q 0xc82197cb00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:04:44 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 
23:04:46 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:04:44 -0800 PST}  }]   10.240.0.3 10.96.2.5 2016-12-11T23:04:44-08:00 [] [{my-hostname-delete-node {<nil> 0xc82104d740 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://204a9f569bf4836b4d9341d6c7f6501196630c5c3cea9e4ba7e0373f00ef1768}]}} {{ } {my-hostname-delete-node-vz09x my-hostname-delete-node- e2e-tests-resize-nodes-dbsg1 /api/v1/namespaces/e2e-tests-resize-nodes-dbsg1/pods/my-hostname-delete-node-vz09x fec903d9-c038-11e6-a345-42010af0001c 9790 0 {2016-12-11 23:02:50 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-dbsg1\",\"name\":\"my-hostname-delete-node\",\"uid\":\"fec74caf-c038-11e6-a345-42010af0001c\",\"apiVersion\":\"v1\",\"resourceVersion\":\"9776\"}}\n] [{v1 ReplicationController my-hostname-delete-node fec74caf-c038-11e6-a345-42010af0001c 0xc820f87a47}] []} {[{default-token-mncxn {<nil> <nil> <nil> <nil> <nil> 0xc8212df8c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mncxn true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820f87b40 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-72dd2e11-lr1q 0xc82197cbc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:02:50 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:02:51 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:02:50 -0800 PST}  }]   10.240.0.3 10.96.2.3 2016-12-11T23:02:50-08:00 [] [{my-hostname-delete-node {<nil> 0xc82104d760 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://68b0fce5ebc85c2fa5b35aca0908216aebbe09ace1cf73428ae7b8636b949c22}]}}]}",
    }
    failed to wait for pods responding: pod with UID fec93168-c038-11e6-a345-42010af0001c is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-dbsg1/pods 10159} [{{ } {my-hostname-delete-node-cb7ld my-hostname-delete-node- e2e-tests-resize-nodes-dbsg1 /api/v1/namespaces/e2e-tests-resize-nodes-dbsg1/pods/my-hostname-delete-node-cb7ld fec91476-c038-11e6-a345-42010af0001c 9795 0 {2016-12-11 23:02:50 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-dbsg1","name":"my-hostname-delete-node","uid":"fec74caf-c038-11e6-a345-42010af0001c","apiVersion":"v1","resourceVersion":"9776"}}
    ] [{v1 ReplicationController my-hostname-delete-node fec74caf-c038-11e6-a345-42010af0001c 0xc820f87087}] []} {[{default-token-mncxn {<nil> <nil> <nil> <nil> <nil> 0xc8212df800 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mncxn true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820f871f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-72dd2e11-3wmf 0xc82197ca40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:02:50 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:02:52 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:02:50 -0800 PST}  }]   10.240.0.2 10.96.0.3 2016-12-11T23:02:50-08:00 [] [{my-hostname-delete-node {<nil> 0xc82104d720 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3441340c0810516062ff07998a97ed6adbb63a35b24ae97f9216d88a4f3c2f2a}]}} {{ } {my-hostname-delete-node-cpvs8 my-hostname-delete-node- e2e-tests-resize-nodes-dbsg1 /api/v1/namespaces/e2e-tests-resize-nodes-dbsg1/pods/my-hostname-delete-node-cpvs8 4299a7ea-c039-11e6-a345-42010af0001c 9999 0 {2016-12-11 23:04:44 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-dbsg1","name":"my-hostname-delete-node","uid":"fec74caf-c038-11e6-a345-42010af0001c","apiVersion":"v1","resourceVersion":"9910"}}
    ] [{v1 ReplicationController my-hostname-delete-node fec74caf-c038-11e6-a345-42010af0001c 0xc820f875a7}] []} {[{default-token-mncxn {<nil> <nil> <nil> <nil> <nil> 0xc8212df860 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mncxn true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820f87700 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-72dd2e11-lr1q 0xc82197cb00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:04:44 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:04:46 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:04:44 -0800 PST}  }]   10.240.0.3 10.96.2.5 2016-12-11T23:04:44-08:00 [] [{my-hostname-delete-node {<nil> 0xc82104d740 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://204a9f569bf4836b4d9341d6c7f6501196630c5c3cea9e4ba7e0373f00ef1768}]}} {{ } {my-hostname-delete-node-vz09x my-hostname-delete-node- e2e-tests-resize-nodes-dbsg1 /api/v1/namespaces/e2e-tests-resize-nodes-dbsg1/pods/my-hostname-delete-node-vz09x fec903d9-c038-11e6-a345-42010af0001c 9790 0 {2016-12-11 23:02:50 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-dbsg1","name":"my-hostname-delete-node","uid":"fec74caf-c038-11e6-a345-42010af0001c","apiVersion":"v1","resourceVersion":"9776"}}
    ] [{v1 ReplicationController my-hostname-delete-node fec74caf-c038-11e6-a345-42010af0001c 0xc820f87a47}] []} {[{default-token-mncxn {<nil> <nil> <nil> <nil> <nil> 0xc8212df8c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mncxn true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820f87b40 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-72dd2e11-lr1q 0xc82197cbc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:02:50 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:02:51 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 23:02:50 -0800 PST}  }]   10.240.0.3 10.96.2.3 2016-12-11T23:02:50-08:00 [] [{my-hostname-delete-node {<nil> 0xc82104d760 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://68b0fce5ebc85c2fa5b35aca0908216aebbe09ace1cf73428ae7b8636b949c22}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 12 00:18:35.889: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-72dd2e11-lr1q:
 container "runtime": expected RSS memory (MB) < 314572800; got 533975040
node gke-bootstrap-e2e-default-pool-72dd2e11-3wmf:
 container "runtime": expected RSS memory (MB) < 314572800; got 541880320
node gke-bootstrap-e2e-default-pool-72dd2e11-4hkr:
 container "runtime": expected RSS memory (MB) < 314572800; got 513961984

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/38/

Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc8213e97a0>: {
        s: "failed to wait for pods responding: pod with UID e8af0a74-c08c-11e6-8901-42010af00031 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-qls6s/pods 33249} [{{ } {my-hostname-delete-node-4l08d my-hostname-delete-node- e2e-tests-resize-nodes-qls6s /api/v1/namespaces/e2e-tests-resize-nodes-qls6s/pods/my-hostname-delete-node-4l08d e8aed86a-c08c-11e6-8901-42010af00031 32919 0 {2016-12-12 09:03:31 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-qls6s\",\"name\":\"my-hostname-delete-node\",\"uid\":\"e8ac558e-c08c-11e6-8901-42010af00031\",\"apiVersion\":\"v1\",\"resourceVersion\":\"32906\"}}\n] [{v1 ReplicationController my-hostname-delete-node e8ac558e-c08c-11e6-8901-42010af00031 0xc822987237}] []} {[{default-token-4g5r4 {<nil> <nil> <nil> <nil> <nil> 0xc8211b3290 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-4g5r4 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822987400 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9aeddeb0-p5ng 0xc82454b340 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:03:31 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:03:32 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:03:31 -0800 PST}  }]   10.240.0.5 10.96.3.2 2016-12-12T09:03:31-08:00 [] [{my-hostname-delete-node {<nil> 0xc821d177c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a3ab39f5de2ba7056adf765585a43e5396621834a8672561e9e6495390092059}]}} {{ } {my-hostname-delete-node-r890r my-hostname-delete-node- e2e-tests-resize-nodes-qls6s /api/v1/namespaces/e2e-tests-resize-nodes-qls6s/pods/my-hostname-delete-node-r890r e8afa73d-c08c-11e6-8901-42010af00031 32923 0 {2016-12-12 09:03:31 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-qls6s\",\"name\":\"my-hostname-delete-node\",\"uid\":\"e8ac558e-c08c-11e6-8901-42010af00031\",\"apiVersion\":\"v1\",\"resourceVersion\":\"32906\"}}\n] [{v1 ReplicationController my-hostname-delete-node e8ac558e-c08c-11e6-8901-42010af00031 0xc822987697}] []} {[{default-token-4g5r4 {<nil> <nil> <nil> <nil> <nil> 0xc8211b32f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-4g5r4 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822987790 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9aeddeb0-fxb0 0xc82454b680 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:03:31 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-12 09:03:32 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:03:31 -0800 PST}  }]   10.240.0.3 10.96.0.3 2016-12-12T09:03:31-08:00 [] [{my-hostname-delete-node {<nil> 0xc821d177e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://31df1b6c7e982e4ae706651b8bae9f162557b31ae5930b1201847ffd90fca20f}]}} {{ } {my-hostname-delete-node-xv7nd my-hostname-delete-node- e2e-tests-resize-nodes-qls6s /api/v1/namespaces/e2e-tests-resize-nodes-qls6s/pods/my-hostname-delete-node-xv7nd 1dca9a71-c08d-11e6-8901-42010af00031 33091 0 {2016-12-12 09:05:00 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-qls6s\",\"name\":\"my-hostname-delete-node\",\"uid\":\"e8ac558e-c08c-11e6-8901-42010af00031\",\"apiVersion\":\"v1\",\"resourceVersion\":\"32995\"}}\n] [{v1 ReplicationController my-hostname-delete-node e8ac558e-c08c-11e6-8901-42010af00031 0xc822987a27}] []} {[{default-token-4g5r4 {<nil> <nil> <nil> <nil> <nil> 0xc8211b3350 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-4g5r4 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822987b20 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9aeddeb0-p5ng 0xc82454b7c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:05:00 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:05:01 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:05:00 -0800 PST}  }]   10.240.0.5 10.96.3.4 2016-12-12T09:05:00-08:00 [] [{my-hostname-delete-node {<nil> 0xc821d17800 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://799ba52906dcc41ff9972fdd2945e23582079bb3a4725522c3ab1e037ac1f875}]}}]}",
    }
    failed to wait for pods responding: pod with UID e8af0a74-c08c-11e6-8901-42010af00031 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-qls6s/pods 33249} [{{ } {my-hostname-delete-node-4l08d my-hostname-delete-node- e2e-tests-resize-nodes-qls6s /api/v1/namespaces/e2e-tests-resize-nodes-qls6s/pods/my-hostname-delete-node-4l08d e8aed86a-c08c-11e6-8901-42010af00031 32919 0 {2016-12-12 09:03:31 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-qls6s","name":"my-hostname-delete-node","uid":"e8ac558e-c08c-11e6-8901-42010af00031","apiVersion":"v1","resourceVersion":"32906"}}
    ] [{v1 ReplicationController my-hostname-delete-node e8ac558e-c08c-11e6-8901-42010af00031 0xc822987237}] []} {[{default-token-4g5r4 {<nil> <nil> <nil> <nil> <nil> 0xc8211b3290 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-4g5r4 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822987400 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9aeddeb0-p5ng 0xc82454b340 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:03:31 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:03:32 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:03:31 -0800 PST}  }]   10.240.0.5 10.96.3.2 2016-12-12T09:03:31-08:00 [] [{my-hostname-delete-node {<nil> 0xc821d177c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a3ab39f5de2ba7056adf765585a43e5396621834a8672561e9e6495390092059}]}} {{ } {my-hostname-delete-node-r890r my-hostname-delete-node- e2e-tests-resize-nodes-qls6s /api/v1/namespaces/e2e-tests-resize-nodes-qls6s/pods/my-hostname-delete-node-r890r e8afa73d-c08c-11e6-8901-42010af00031 32923 0 {2016-12-12 09:03:31 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-qls6s","name":"my-hostname-delete-node","uid":"e8ac558e-c08c-11e6-8901-42010af00031","apiVersion":"v1","resourceVersion":"32906"}}
    ] [{v1 ReplicationController my-hostname-delete-node e8ac558e-c08c-11e6-8901-42010af00031 0xc822987697}] []} {[{default-token-4g5r4 {<nil> <nil> <nil> <nil> <nil> 0xc8211b32f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-4g5r4 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822987790 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9aeddeb0-fxb0 0xc82454b680 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:03:31 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:03:32 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:03:31 -0800 PST}  }]   10.240.0.3 10.96.0.3 2016-12-12T09:03:31-08:00 [] [{my-hostname-delete-node {<nil> 0xc821d177e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://31df1b6c7e982e4ae706651b8bae9f162557b31ae5930b1201847ffd90fca20f}]}} {{ } {my-hostname-delete-node-xv7nd my-hostname-delete-node- e2e-tests-resize-nodes-qls6s /api/v1/namespaces/e2e-tests-resize-nodes-qls6s/pods/my-hostname-delete-node-xv7nd 1dca9a71-c08d-11e6-8901-42010af00031 33091 0 {2016-12-12 09:05:00 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-qls6s","name":"my-hostname-delete-node","uid":"e8ac558e-c08c-11e6-8901-42010af00031","apiVersion":"v1","resourceVersion":"32995"}}
    ] [{v1 ReplicationController my-hostname-delete-node e8ac558e-c08c-11e6-8901-42010af00031 0xc822987a27}] []} {[{default-token-4g5r4 {<nil> <nil> <nil> <nil> <nil> 0xc8211b3350 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-4g5r4 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822987b20 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9aeddeb0-p5ng 0xc82454b7c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:05:00 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:05:01 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:05:00 -0800 PST}  }]   10.240.0.5 10.96.3.4 2016-12-12T09:05:00-08:00 [] [{my-hostname-delete-node {<nil> 0xc821d17800 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://799ba52906dcc41ff9972fdd2945e23582079bb3a4725522c3ab1e037ac1f875}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954
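
A note on the init-container failures in this run: init containers execute sequentially and must all exit successfully before any app container starts. Under RestartPolicy Never a failed init container marks the whole pod failed, while under RestartPolicy Always the kubelet keeps restarting the failed init container and the app containers never start. Each of these tests polls the pod until the expected state appears, so the "<bool>: false" and "timed out waiting for the condition" assertions above mean the pod never reached that state within the test's deadline.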

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc822975f80>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.54.75 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-1r5jx -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-1r5jx/services/redis-master\", \"uid\":\"c602353b-c07d-11e6-8901-42010af00031\", \"resourceVersion\":\"22145\", \"creationTimestamp\":\"2016-12-12T15:15:10Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-1r5jx\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.227\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82033ea20 exit status 1 <nil> true [0xc820f86578 0xc820f86590 0xc820f865a8] [0xc820f86578 0xc820f86590 0xc820f865a8] [0xc820f86588 0xc820f865a0] [0xa97590 0xa97590] 0xc821ed8e40}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-1r5jx/services/redis-master\", \"uid\":\"c602353b-c07d-11e6-8901-42010af00031\", \"resourceVersion\":\"22145\", \"creationTimestamp\":\"2016-12-12T15:15:10Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-1r5jx\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.227\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.54.75 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-1r5jx -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-1r5jx/services/redis-master", "uid":"c602353b-c07d-11e6-8901-42010af00031", "resourceVersion":"22145", "creationTimestamp":"2016-12-12T15:15:10Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-1r5jx"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.227", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82033ea20 exit status 1 <nil> true [0xc820f86578 0xc820f86590 0xc820f865a8] [0xc820f86578 0xc820f86590 0xc820f865a8] [0xc820f86588 0xc820f865a0] [0xa97590 0xa97590] 0xc821ed8e40}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-1r5jx/services/redis-master", "uid":"c602353b-c07d-11e6-8901-42010af00031", "resourceVersion":"22145", "creationTimestamp":"2016-12-12T15:15:10Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-1r5jx"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.227", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
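
For context, the "nodePort is not found" failure above is easy to reproduce outside the suite: the service under test is still of type ClusterIP, so the object handed to the jsonpath engine has no nodePort key under spec.ports[0]. Below is a minimal Go sketch of the same lookup, assuming the jsonpath package vendored by this release line (k8s.io/kubernetes/pkg/util/jsonpath); the service literal is illustrative, not the actual test fixture:

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/kubernetes/pkg/util/jsonpath"
    )

    func main() {
    	// A ClusterIP service as kubectl decodes it: ports[0] has no nodePort key.
    	svc := map[string]interface{}{
    		"spec": map[string]interface{}{
    			"type":  "ClusterIP",
    			"ports": []interface{}{map[string]interface{}{"protocol": "TCP", "port": 6379}},
    		},
    	}
    	jp := jsonpath.New("nodePort")
    	if err := jp.Parse("{.spec.ports[0].nodePort}"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Execute fails with "nodePort is not found", matching the kubectl output above.
    	if err := jp.Execute(os.Stdout, svc); err != nil {
    		fmt.Fprintln(os.Stderr, "error:", err)
    	}
    }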

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391
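
The "timed out waiting for the condition" string in the Job/V1Job failures is not specific to jobs: it is the generic timeout error from the Kubernetes wait utility (vendored as k8s.io/kubernetes/pkg/util/wait in this release line), returned whenever a polled condition never becomes true before the deadline. A minimal sketch, with interval and timeout values chosen only for illustration:

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/kubernetes/pkg/util/wait"
    )

    func main() {
    	// The condition never succeeds, e.g. a job that never reports failure.
    	err := wait.Poll(100*time.Millisecond, 500*time.Millisecond, func() (bool, error) {
    		return false, nil
    	})
    	fmt.Println(err) // prints: timed out waiting for the condition
    }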

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 12 05:31:38.440: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-9aeddeb0-6o2y:
 container "runtime": expected RSS memory (MB) < 314572800; got 521224192
node gke-bootstrap-e2e-default-pool-9aeddeb0-a9bz:
 container "runtime": expected RSS memory (MB) < 314572800; got 523255808
node gke-bootstrap-e2e-default-pool-9aeddeb0-fxb0:
 container "runtime": expected RSS memory (MB) < 314572800; got 527822848

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
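
A unit note on the kubelet resource-tracking failures: despite the "(MB)" label, both numbers in the message are bytes. The limit 314572800 is exactly 300 MiB (300 × 1024 × 1024), and the observed runtime-container RSS values here (521224192 to 527822848 bytes, about 497 to 503 MiB) put the docker runtime cgroup at roughly 1.7× its budget on every node, which is why the same check trips in run after run.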

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/39/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc82171bdf0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.58.18 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-kt7kw -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"resourceVersion\":\"23421\", \"creationTimestamp\":\"2016-12-12T23:07:05Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-kt7kw\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-kt7kw/services/redis-master\", \"uid\":\"b32735da-c0bf-11e6-bdd2-42010af00014\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.252.88\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82109d860 exit status 1 <nil> true [0xc820d1c0f0 0xc820d1c110 0xc820d1c180] [0xc820d1c0f0 0xc820d1c110 0xc820d1c180] [0xc820d1c108 0xc820d1c178] [0xa97590 0xa97590] 0xc8217d5140}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"resourceVersion\":\"23421\", \"creationTimestamp\":\"2016-12-12T23:07:05Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-kt7kw\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-kt7kw/services/redis-master\", \"uid\":\"b32735da-c0bf-11e6-bdd2-42010af00014\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.252.88\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.58.18 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-kt7kw -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"23421", "creationTimestamp":"2016-12-12T23:07:05Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-kt7kw", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-kt7kw/services/redis-master", "uid":"b32735da-c0bf-11e6-bdd2-42010af00014"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.252.88", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82109d860 exit status 1 <nil> true [0xc820d1c0f0 0xc820d1c110 0xc820d1c180] [0xc820d1c0f0 0xc820d1c110 0xc820d1c180] [0xc820d1c108 0xc820d1c178] [0xa97590 0xa97590] 0xc8217d5140}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"23421", "creationTimestamp":"2016-12-12T23:07:05Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-kt7kw", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-kt7kw/services/redis-master", "uid":"b32735da-c0bf-11e6-bdd2-42010af00014"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.252.88", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:725
Dec 12 16:49:56.257: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready

Issues about this test specifically: #26134

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Dec 12 11:25:58.127: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2562

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 12 15:58:28.283: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-31225ae9-sei4:
 container "runtime": expected RSS memory (MB) < 314572800; got 524947456
node gke-bootstrap-e2e-default-pool-31225ae9-vkvo:
 container "runtime": expected RSS memory (MB) < 314572800; got 510754816
node gke-bootstrap-e2e-default-pool-31225ae9-pfwr:
 container "runtime": expected RSS memory (MB) < 314572800; got 533725184

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc821a872d0>: {
        s: "service verification failed for: 10.99.252.161\nexpected [service1-20fps service1-j4jv5 service1-k391j]\nreceived [service1-20fps service1-k391j]",
    }
    service verification failed for: 10.99.252.161
    expected [service1-20fps service1-j4jv5 service1-k391j]
    received [service1-20fps service1-k391j]
not to have occurred

Issues about this test specifically: #29514 #38288
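
The "service verification failed" message comes from the suite repeatedly hitting the service's cluster IP and recording which serve_hostname pods answer; the run fails when an expected endpoint (here service1-j4jv5) never responds after kube-proxy restarts. Below is a rough, self-contained sketch of that style of check; the function name, URL, retry cadence, and deadline are illustrative, not the suite's actual helper:

    package main

    import (
    	"fmt"
    	"io/ioutil"
    	"net/http"
    	"sort"
    	"strings"
    	"time"
    )

    // verifyService polls url until every expected hostname has answered at least
    // once or the deadline passes. serve_hostname replies with its pod name, so
    // the set of response bodies approximates the set of live endpoints.
    func verifyService(url string, expected []string, deadline time.Duration) error {
    	client := &http.Client{Timeout: 2 * time.Second}
    	seen := map[string]bool{}
    	for stop := time.Now().Add(deadline); time.Now().Before(stop); {
    		if resp, err := client.Get(url); err == nil {
    			body, _ := ioutil.ReadAll(resp.Body)
    			resp.Body.Close()
    			seen[strings.TrimSpace(string(body))] = true
    		}
    		if len(missingHosts(expected, seen)) == 0 {
    			return nil
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("service verification failed: never saw %v", missingHosts(expected, seen))
    }

    func missingHosts(expected []string, seen map[string]bool) []string {
    	var missing []string
    	for _, h := range expected {
    		if !seen[h] {
    			missing = append(missing, h)
    		}
    	}
    	sort.Strings(missing)
    	return missing
    }

    func main() {
    	err := verifyService("http://10.99.252.161:80",
    		[]string{"service1-20fps", "service1-j4jv5", "service1-k391j"}, 30*time.Second)
    	fmt.Println(err)
    }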

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/40/

Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Dec 12 22:16:47.483: Node gke-bootstrap-e2e-default-pool-9f5224b4-vnqp did not become ready within 2m0s

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821df0330>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.69.160 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-hvfqz -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-hvfqz\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-hvfqz/services/redis-master\", \"uid\":\"11dcccfa-c0e3-11e6-9a44-42010af0002d\", \"resourceVersion\":\"8500\", \"creationTimestamp\":\"2016-12-13T03:20:17Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.99.244.168\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8218546e0 exit status 1 <nil> true [0xc820096038 0xc820096210 0xc820096230] [0xc820096038 0xc820096210 0xc820096230] [0xc820096208 0xc820096228] [0xa97590 0xa97590] 0xc821eee1e0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-hvfqz\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-hvfqz/services/redis-master\", \"uid\":\"11dcccfa-c0e3-11e6-9a44-42010af0002d\", \"resourceVersion\":\"8500\", \"creationTimestamp\":\"2016-12-13T03:20:17Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.99.244.168\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.69.160 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-hvfqz -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-hvfqz", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-hvfqz/services/redis-master", "uid":"11dcccfa-c0e3-11e6-9a44-42010af0002d", "resourceVersion":"8500", "creationTimestamp":"2016-12-13T03:20:17Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.99.244.168", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8218546e0 exit status 1 <nil> true [0xc820096038 0xc820096210 0xc820096230] [0xc820096038 0xc820096210 0xc820096230] [0xc820096208 0xc820096228] [0xa97590 0xa97590] 0xc821eee1e0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-hvfqz", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-hvfqz/services/redis-master", "uid":"11dcccfa-c0e3-11e6-9a44-42010af0002d", "resourceVersion":"8500", "creationTimestamp":"2016-12-13T03:20:17Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.99.244.168", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc8214021b0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.69.160 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-wndw8 host-test-container-pod -- /bin/sh -c curl -q 'http://10.96.0.4:8080/dial?request=hostName&protocol=udp&host=10.96.1.3&port=8081&tries=5'] []  <nil>    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:08 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:10 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:11 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:12 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:13 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:14 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:15 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:16 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:17 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:18 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:19 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:20 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:21 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:22 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:23 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:24 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:25 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:26 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:27 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:28 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:29 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:30 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:31 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:32 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:33 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:34 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:35 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:36 --:--:--     0\r  0     0    0     0    0     0      0      0 
--:--:--  0:00:37 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:38 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:39 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:40 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:41 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:42 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:43 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:44 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:45 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:46 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:47 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:48 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:49 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:50 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:51 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:52 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:53 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:54 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:55 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:56 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:57 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:58 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:59 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:00 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:01 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:02 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:03 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:04 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:05 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:06 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:07 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:08 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:09 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:10 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:11 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:12 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:13 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:14 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:15 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:16 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:17 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:18 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:19 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:20 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:21 --:--:--     0\r 
 0     0    0     0    0     0      0      0 --:--:--  0:01:22 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:23 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:24 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:25 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:26 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:27 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:28 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:29 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:30 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:31 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:32 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:33 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:34 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:35 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:36 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:37 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:38 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:39 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:40 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:41 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:42 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:43 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:44 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:45 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:46 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:47 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:48 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:49 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:50 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:51 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:52 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:53 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:54 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:55 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:56 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:57 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:58 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:59 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:00 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:01 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:02 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:03 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:04 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:05 --:--:--     0\r  0     0    0     0    0     0     
 0      0 --:--:--  0:02:06 --:--:--     0curl: (7) Failed to connect to 10.96.0.4 port 8080: Operation timed out\nerror: error executing remote command: error executing command in container: Error executing in Docker Container: 7\n [] <nil> 0xc82017fd40 exit status 1 <nil> true [0xc82119c550 0xc82119c568 0xc82119c580] [0xc82119c550 0xc82119c568 0xc82119c580] [0xc82119c560 0xc82119c578] [0xa97590 0xa97590] 0xc821eed500}:\nCommand stdout:\n\nstderr:\n  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:04 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:06 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:07 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:08 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:09 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:10 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:11 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:12 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:13 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:14 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:15 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:16 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:17 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:18 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:19 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:20 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:21 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:22 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:23 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:24 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:25 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:26 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:27 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:28 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:29 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:30 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:31 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:32 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:33 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:34 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:35 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  
0:00:36 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:37 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:38 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:39 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:40 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:41 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:42 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:43 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:44 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:45 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:46 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:47 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:48 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:49 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:50 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:51 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:52 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:53 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:54 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:55 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:56 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:57 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:58 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:00:59 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:00 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:01 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:02 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:03 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:04 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:05 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:06 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:07 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:08 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:09 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:10 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:11 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:12 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:13 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:14 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:15 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:16 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:17 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:18 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:19 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:20 --:--:--     0\r  0     0  
  0     0    0     0      0      0 --:--:--  0:01:21 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:22 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:23 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:24 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:25 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:26 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:27 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:28 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:29 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:30 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:31 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:32 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:33 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:34 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:35 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:36 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:37 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:38 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:39 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:40 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:41 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:42 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:43 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:44 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:45 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:46 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:47 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:48 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:49 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:50 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:51 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:52 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:53 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:54 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:55 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:56 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:57 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:58 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:01:59 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:00 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:01 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:02 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:03 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:04 --:--:--     0\r  0     0    0     0    0     0      0      0 
--:--:--  0:02:05 --:--:--     0\r  0     0    0     0    0     0      0      0 --:--:--  0:02:06 --:--:--     0curl: (7) Failed to connect to 10.96.0.4 port 8080: Operation timed out\nerror: error executing remote command: error executing command in container: Error executing in Docker Container: 7\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.69.160 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-e2e-kubeproxy-wndw8 host-test-container-pod -- /bin/sh -c curl -q 'http://10.96.0.4:8080/dial?request=hostName&protocol=udp&host=10.96.1.3&port=8081&tries=5'] []  <nil>    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    
  [curl progress meter elided: one line per second for ~2m06s, 0 bytes transferred]
  0     0    0     0    0     0      0      0 --:--:--  0:02:06 --:--:--     0curl: (7) Failed to connect to 10.96.0.4 port 8080: Operation timed out
    error: error executing remote command: error executing command in container: Error executing in Docker Container: 7
     [] <nil> 0xc82017fd40 exit status 1 <nil> true [0xc82119c550 0xc82119c568 0xc82119c580] [0xc82119c550 0xc82119c568 0xc82119c580] [0xc82119c560 0xc82119c578] [0xa97590 0xa97590] 0xc821eed500}:
    Command stdout:
    
    stderr:
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    
  [curl progress meter elided: 0 bytes transferred over 2 minutes 6 seconds]
curl: (7) Failed to connect to 10.96.0.4 port 8080: Operation timed out
    error: error executing remote command: error executing command in container: Error executing in Docker Container: 7
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #26490 #33669
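
For triage: the two-minute stall above is just curl's progress meter while the TCP connect to the kube-proxy-forwarded endpoint hangs, and exit code 7 means the connect itself never succeeded. A minimal manual reproduction, assuming a still-running test pod; the 10.96.0.4:8080 target comes from the log above, while the pod, namespace, and path are placeholders:

    # Bound the connect wait so a broken iptables rule fails fast
    # instead of after the two-minute TCP timeout seen in the log.
    kubectl exec <test-pod> --namespace=<test-ns> -- \
        curl -s --connect-timeout 10 http://10.96.0.4:8080/

A repeat failure with exit code 7 points at kube-proxy's iptables programming on the source node rather than at the target container.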

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391
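
"timed out waiting for the condition" in the job tests is the e2e framework polling the Job's status until the expected number of failed pods shows up. The same condition can be watched by hand (job name and namespace are placeholders; note jsonpath errors with "is not found" while .status.failed is still unset):

    kubectl --namespace=<test-ns> get job <job-name> \
        -o jsonpath='{.status.failed} failed, {.status.active} active'

If the failed count never moves, kubectl describe job <job-name> usually shows why: pods stuck pulling images, or a controller-manager that is lagging during the upgrade window.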

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954
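
The init-container failures in this run (both the <bool> assertions and the timeouts) reduce to pods never reporting init-container status. Against a 1.5 API server the status is a first-class field, so a quick check looks like this (pod name and namespace are placeholders):

    kubectl --namespace=<test-ns> get pod <pod-name> \
        -o jsonpath='{range .status.initContainerStatuses[*]}{.name}: {.state}{"\n"}{end}'

One unverified possibility specific to this skewed 1.3-node/1.5-master setup: kubelets still on 1.3 may not populate that field at all, which would make the "Expected <bool>: false to be true" assertions fail exactly this way. That is a hypothesis to check against the node versions, not a confirmed root cause.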

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc820fda790>: {
        s: "service verification failed for: 10.99.242.238\nexpected [service1-fq6xd service1-j05nk service1-nl4k8]\nreceived [service1-fq6xd service1-j05nk]",
    }
    service verification failed for: 10.99.242.238
    expected [service1-fq6xd service1-j05nk service1-nl4k8]
    received [service1-fq6xd service1-j05nk]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298
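
"service verification failed" means repeated requests against the ClusterIP only ever returned two of the three expected backend hostnames. The first thing to check is whether the missing pod (service1-nl4k8 above) ever made it into the Endpoints object; the service name comes from the log, while the namespace and selector are placeholders:

    kubectl --namespace=<test-ns> get endpoints service1 \
        -o jsonpath='{.subsets[*].addresses[*].ip}'
    # Compare against the pod IPs behind the service:
    kubectl --namespace=<test-ns> get pods -l <service1-selector> -o wide

If the missing pod's IP is in the endpoints but never answers, suspect node-level forwarding (kube-proxy/iptables); if it is absent, suspect the endpoints controller or the pod's readiness.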

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 12 21:17:20.461: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-9f5224b4-rge9:
 container "runtime": expected RSS memory (MB) < 314572800; got 525524992
node gke-bootstrap-e2e-default-pool-9f5224b4-trty:
 container "runtime": expected RSS memory (MB) < 314572800; got 528445440
node gke-bootstrap-e2e-default-pool-9f5224b4-vnqp:
 container "runtime": expected RSS memory (MB) < 314572800; got 514641920

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
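
A reading note on these messages: the limit is in bytes despite the "(MB)" label, since 314572800 = 300 * 1024 * 1024, i.e. a 300 MiB RSS cap on the "runtime" (docker) container, and these nodes are landing around 500 MiB. To sample the same counter the test reads, the kubelet summary API on an affected node can be queried directly; this assumes the read-only kubelet port (10255) that GKE nodes of this era expose is still enabled:

    # From the node itself, e.g. over SSH:
    curl -s http://localhost:10255/stats/summary | python -m json.tool | grep -i rssbytes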

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/41/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 13 03:19:48.040: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-d32a57fd-t9i5:
 container "runtime": expected RSS memory (MB) < 314572800; got 524849152
node gke-bootstrap-e2e-default-pool-d32a57fd-v1o1:
 container "runtime": expected RSS memory (MB) < 314572800; got 528449536
node gke-bootstrap-e2e-default-pool-d32a57fd-dqyu:
 container "runtime": expected RSS memory (MB) < 314572800; got 517287936

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820740a90>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.69.160 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-sfqtd -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-sfqtd/services/redis-master\", \"uid\":\"edf8f4bc-c10c-11e6-a7bf-42010af0002f\", \"resourceVersion\":\"557\", \"creationTimestamp\":\"2016-12-13T08:19:55Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-sfqtd\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.248.195\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82077a860 exit status 1 <nil> true [0xc820aea000 0xc820aea038 0xc820aea050] [0xc820aea000 0xc820aea038 0xc820aea050] [0xc820aea030 0xc820aea048] [0xa97590 0xa97590] 0xc8208a61e0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-sfqtd/services/redis-master\", \"uid\":\"edf8f4bc-c10c-11e6-a7bf-42010af0002f\", \"resourceVersion\":\"557\", \"creationTimestamp\":\"2016-12-13T08:19:55Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-sfqtd\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.248.195\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.69.160 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-sfqtd -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-sfqtd/services/redis-master", "uid":"edf8f4bc-c10c-11e6-a7bf-42010af0002f", "resourceVersion":"557", "creationTimestamp":"2016-12-13T08:19:55Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-sfqtd"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.248.195", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82077a860 exit status 1 <nil> true [0xc820aea000 0xc820aea038 0xc820aea050] [0xc820aea000 0xc820aea038 0xc820aea050] [0xc820aea030 0xc820aea048] [0xa97590 0xa97590] 0xc8208a61e0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-sfqtd/services/redis-master", "uid":"edf8f4bc-c10c-11e6-a7bf-42010af0002f", "resourceVersion":"557", "creationTimestamp":"2016-12-13T08:19:55Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-sfqtd"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.248.195", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
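
The jsonpath failure is legible once the object dump is read: the re-applied Service came back as type ClusterIP, and ClusterIP services carry no .spec.ports[*].nodePort, so the template has nothing to find. A quick check (namespace is a placeholder; the service name is from the log):

    kubectl --namespace=<test-ns> get service redis-master \
        -o jsonpath='{.spec.type}'

Only NodePort and LoadBalancer services populate nodePort, so the interesting question for this flake is why the service ended up ClusterIP instead of keeping its allocated port, which is exactly what the test asserts.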

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Dec 13 00:14:41.617: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2562

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc821083060>: {
        s: "service verification failed for: 10.99.254.228\nexpected [service1-2wrct service1-xd2sc service1-z8kls]\nreceived [service1-2wrct service1-z8kls]",
    }
    service verification failed for: 10.99.254.228
    expected [service1-2wrct service1-xd2sc service1-z8kls]
    received [service1-2wrct service1-z8kls]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc8221b3250>: {
        s: "failed to wait for pods responding: pod with UID 74c92d89-c12d-11e6-a7bf-42010af0002f is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-wp0s5/pods 26533} [{{ } {my-hostname-delete-node-756dx my-hostname-delete-node- e2e-tests-resize-nodes-wp0s5 /api/v1/namespaces/e2e-tests-resize-nodes-wp0s5/pods/my-hostname-delete-node-756dx 74c96874-c12d-11e6-a7bf-42010af0002f 26259 0 {2016-12-13 04:12:45 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-wp0s5\",\"name\":\"my-hostname-delete-node\",\"uid\":\"74c7766c-c12d-11e6-a7bf-42010af0002f\",\"apiVersion\":\"v1\",\"resourceVersion\":\"26241\"}}\n] [{v1 ReplicationController my-hostname-delete-node 74c7766c-c12d-11e6-a7bf-42010af0002f 0xc82170d227}] []} {[{default-token-q1tr0 {<nil> <nil> <nil> <nil> <nil> 0xc821911800 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-q1tr0 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82170d320 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-d32a57fd-t9i5 0xc821e23340 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:12:45 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:12:48 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:12:45 -0800 PST}  }]   10.240.0.2 10.96.1.3 2016-12-13T04:12:45-08:00 [] [{my-hostname-delete-node {<nil> 0xc8219fa0a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://6098e27e9cb5174ee8a9a9ddbc0b40114191e549a4c2807cc14eeb51ce0c8770}]}} {{ } {my-hostname-delete-node-cqm6h my-hostname-delete-node- e2e-tests-resize-nodes-wp0s5 /api/v1/namespaces/e2e-tests-resize-nodes-wp0s5/pods/my-hostname-delete-node-cqm6h 74c9a8fa-c12d-11e6-a7bf-42010af0002f 26255 0 {2016-12-13 04:12:45 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-wp0s5\",\"name\":\"my-hostname-delete-node\",\"uid\":\"74c7766c-c12d-11e6-a7bf-42010af0002f\",\"apiVersion\":\"v1\",\"resourceVersion\":\"26241\"}}\n] [{v1 ReplicationController my-hostname-delete-node 74c7766c-c12d-11e6-a7bf-42010af0002f 0xc82170d5b7}] []} {[{default-token-q1tr0 {<nil> <nil> <nil> <nil> <nil> 0xc821911860 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-q1tr0 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82170d6b0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-d32a57fd-v1o1 0xc821e23400 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:12:45 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-13 04:12:46 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:12:45 -0800 PST}  }]   10.240.0.5 10.96.3.2 2016-12-13T04:12:45-08:00 [] [{my-hostname-delete-node {<nil> 0xc8219fa0c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://8a22dd8143d24476882ad9af464f110c93bda942623d6b131018d902df2d2640}]}} {{ } {my-hostname-delete-node-fg2ch my-hostname-delete-node- e2e-tests-resize-nodes-wp0s5 /api/v1/namespaces/e2e-tests-resize-nodes-wp0s5/pods/my-hostname-delete-node-fg2ch aba5e242-c12d-11e6-a7bf-42010af0002f 26389 0 {2016-12-13 04:14:17 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-wp0s5\",\"name\":\"my-hostname-delete-node\",\"uid\":\"74c7766c-c12d-11e6-a7bf-42010af0002f\",\"apiVersion\":\"v1\",\"resourceVersion\":\"26332\"}}\n] [{v1 ReplicationController my-hostname-delete-node 74c7766c-c12d-11e6-a7bf-42010af0002f 0xc82170d947}] []} {[{default-token-q1tr0 {<nil> <nil> <nil> <nil> <nil> 0xc8219118c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-q1tr0 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82170da40 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-d32a57fd-t9i5 0xc821e234c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:14:17 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:14:18 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:14:17 -0800 PST}  }]   10.240.0.2 10.96.1.4 2016-12-13T04:14:17-08:00 [] [{my-hostname-delete-node {<nil> 0xc8219fa0e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a1898ff83406b264ad6a73e93d62ba710ea85bcee30ca26de99d4ef824b60c02}]}}]}",
    }
    failed to wait for pods responding: pod with UID 74c92d89-c12d-11e6-a7bf-42010af0002f is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-wp0s5/pods 26533} [{{ } {my-hostname-delete-node-756dx my-hostname-delete-node- e2e-tests-resize-nodes-wp0s5 /api/v1/namespaces/e2e-tests-resize-nodes-wp0s5/pods/my-hostname-delete-node-756dx 74c96874-c12d-11e6-a7bf-42010af0002f 26259 0 {2016-12-13 04:12:45 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-wp0s5","name":"my-hostname-delete-node","uid":"74c7766c-c12d-11e6-a7bf-42010af0002f","apiVersion":"v1","resourceVersion":"26241"}}
    ] [{v1 ReplicationController my-hostname-delete-node 74c7766c-c12d-11e6-a7bf-42010af0002f 0xc82170d227}] []} {[{default-token-q1tr0 {<nil> <nil> <nil> <nil> <nil> 0xc821911800 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-q1tr0 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82170d320 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-d32a57fd-t9i5 0xc821e23340 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:12:45 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:12:48 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:12:45 -0800 PST}  }]   10.240.0.2 10.96.1.3 2016-12-13T04:12:45-08:00 [] [{my-hostname-delete-node {<nil> 0xc8219fa0a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://6098e27e9cb5174ee8a9a9ddbc0b40114191e549a4c2807cc14eeb51ce0c8770}]}} {{ } {my-hostname-delete-node-cqm6h my-hostname-delete-node- e2e-tests-resize-nodes-wp0s5 /api/v1/namespaces/e2e-tests-resize-nodes-wp0s5/pods/my-hostname-delete-node-cqm6h 74c9a8fa-c12d-11e6-a7bf-42010af0002f 26255 0 {2016-12-13 04:12:45 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-wp0s5","name":"my-hostname-delete-node","uid":"74c7766c-c12d-11e6-a7bf-42010af0002f","apiVersion":"v1","resourceVersion":"26241"}}
    ] [{v1 ReplicationController my-hostname-delete-node 74c7766c-c12d-11e6-a7bf-42010af0002f 0xc82170d5b7}] []} {[{default-token-q1tr0 {<nil> <nil> <nil> <nil> <nil> 0xc821911860 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-q1tr0 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82170d6b0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-d32a57fd-v1o1 0xc821e23400 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:12:45 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:12:46 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:12:45 -0800 PST}  }]   10.240.0.5 10.96.3.2 2016-12-13T04:12:45-08:00 [] [{my-hostname-delete-node {<nil> 0xc8219fa0c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://8a22dd8143d24476882ad9af464f110c93bda942623d6b131018d902df2d2640}]}} {{ } {my-hostname-delete-node-fg2ch my-hostname-delete-node- e2e-tests-resize-nodes-wp0s5 /api/v1/namespaces/e2e-tests-resize-nodes-wp0s5/pods/my-hostname-delete-node-fg2ch aba5e242-c12d-11e6-a7bf-42010af0002f 26389 0 {2016-12-13 04:14:17 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-wp0s5","name":"my-hostname-delete-node","uid":"74c7766c-c12d-11e6-a7bf-42010af0002f","apiVersion":"v1","resourceVersion":"26332"}}
    ] [{v1 ReplicationController my-hostname-delete-node 74c7766c-c12d-11e6-a7bf-42010af0002f 0xc82170d947}] []} {[{default-token-q1tr0 {<nil> <nil> <nil> <nil> <nil> 0xc8219118c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-q1tr0 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82170da40 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-d32a57fd-t9i5 0xc821e234c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:14:17 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:14:18 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 04:14:17 -0800 PST}  }]   10.240.0.2 10.96.1.4 2016-12-13T04:14:17-08:00 [] [{my-hostname-delete-node {<nil> 0xc8219fa0e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a1898ff83406b264ad6a73e93d62ba710ea85bcee30ca26de99d4ef824b60c02}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
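
"no longer a member of the replica set" means the ReplicationController replaced pod 74c92d89-... while the test was still polling the original UID; the dump even shows the replacement, my-hostname-delete-node-fg2ch, created at 04:14:17 after the node was deleted. The survivors can be listed with the label taken from the dump (namespace is a placeholder):

    kubectl --namespace=<test-ns> get pods -l name=my-hostname-delete-node -o wide

So the replica count itself recovered; the failure is about a pod being restarted during the resize, not about the RC staying short.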

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/42/

Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200b5060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200b5060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820777a70>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.169.47 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-fbq79 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.60\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-fbq79\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-fbq79/services/redis-master\", \"uid\":\"9064e56f-c162-11e6-84be-42010af00014\", \"resourceVersion\":\"25800\", \"creationTimestamp\":\"2016-12-13T18:32:55Z\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820827d80 exit status 1 <nil> true [0xc82043a938 0xc82043a978 0xc82043a990] [0xc82043a938 0xc82043a978 0xc82043a990] [0xc82043a970 0xc82043a988] [0xa97590 0xa97590] 0xc8211553e0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.60\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-fbq79\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-fbq79/services/redis-master\", \"uid\":\"9064e56f-c162-11e6-84be-42010af00014\", \"resourceVersion\":\"25800\", \"creationTimestamp\":\"2016-12-13T18:32:55Z\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.169.47 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-fbq79 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.60", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-fbq79", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-fbq79/services/redis-master", "uid":"9064e56f-c162-11e6-84be-42010af00014", "resourceVersion":"25800", "creationTimestamp":"2016-12-13T18:32:55Z"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820827d80 exit status 1 <nil> true [0xc82043a938 0xc82043a978 0xc82043a990] [0xc82043a938 0xc82043a978 0xc82043a990] [0xc82043a970 0xc82043a988] [0xa97590 0xa97590] 0xc8211553e0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.60", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-fbq79", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-fbq79/services/redis-master", "uid":"9064e56f-c162-11e6-84be-42010af00014", "resourceVersion":"25800", "creationTimestamp":"2016-12-13T18:32:55Z"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200b5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 13 10:59:07.319: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-a02af5fc-80p7:
 container "runtime": expected RSS memory (MB) < 314572800; got 510742528
node gke-bootstrap-e2e-default-pool-a02af5fc-iufl:
 container "runtime": expected RSS memory (MB) < 314572800; got 535355392
node gke-bootstrap-e2e-default-pool-a02af5fc-vyg5:
 container "runtime": expected RSS memory (MB) < 314572800; got 535568384

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200b5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/43/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 13 15:42:26.618: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-bc1ef049-pm8i:
 container "runtime": expected RSS memory (MB) < 314572800; got 527831040
node gke-bootstrap-e2e-default-pool-bc1ef049-xhkc:
 container "runtime": expected RSS memory (MB) < 314572800; got 529993728
node gke-bootstrap-e2e-default-pool-bc1ef049-mq1n:
 container "runtime": expected RSS memory (MB) < 314572800; got 536166400

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc8211ca980>: {
        s: "service verification failed for: 10.99.248.190\nexpected [service1-9qtg3 service1-vnh0d service1-wt7sr]\nreceived [service1-9qtg3 service1-vnh0d]",
    }
    service verification failed for: 10.99.248.190
    expected [service1-9qtg3 service1-vnh0d service1-wt7sr]
    received [service1-9qtg3 service1-vnh0d]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821ea6ec0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.198.143 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-30760 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-30760/services/redis-master\", \"uid\":\"4041cff4-c1ac-11e6-8861-42010af0002e\", \"resourceVersion\":\"40197\", \"creationTimestamp\":\"2016-12-14T03:20:23Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-30760\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.245.108\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8224a88e0 exit status 1 <nil> true [0xc820db6840 0xc820db6858 0xc820db6870] [0xc820db6840 0xc820db6858 0xc820db6870] [0xc820db6850 0xc820db6868] [0xa97590 0xa97590] 0xc820cd34a0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-30760/services/redis-master\", \"uid\":\"4041cff4-c1ac-11e6-8861-42010af0002e\", \"resourceVersion\":\"40197\", \"creationTimestamp\":\"2016-12-14T03:20:23Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-30760\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.245.108\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.198.143 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-30760 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-30760/services/redis-master", "uid":"4041cff4-c1ac-11e6-8861-42010af0002e", "resourceVersion":"40197", "creationTimestamp":"2016-12-14T03:20:23Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-30760"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.245.108", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8224a88e0 exit status 1 <nil> true [0xc820db6840 0xc820db6858 0xc820db6870] [0xc820db6840 0xc820db6858 0xc820db6870] [0xc820db6850 0xc820db6868] [0xa97590 0xa97590] 0xc820cd34a0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-30760/services/redis-master", "uid":"4041cff4-c1ac-11e6-8861-42010af0002e", "resourceVersion":"40197", "creationTimestamp":"2016-12-14T03:20:23Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-30760"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.245.108", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/44/

Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true
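
The bare "Expected <bool>: false / to be true" output in these init-container failures is what an unannotated Gomega assertion prints on failure, so the log itself gives no hint of which condition went false. A hypothetical reduction of the failure mode:

```go
package pods_test

import (
	"testing"

	. "github.com/onsi/gomega"
)

// Failing this test prints Gomega's default message:
//   Expected
//       <bool>: false
//   to be true
// which is all the e2e log shows, since the assertion has no annotation.
func TestInitContainersInvoked(t *testing.T) {
	g := NewWithT(t)
	invoked := false // stand-in for the watched init-container condition
	g.Expect(invoked).To(BeTrue())
}
```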

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:139
Expected
    <string>: 
to equal
    <string>: 17503464020920738

Issues about this test specifically: #28984 #33827 #36917
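
The empty string on the "Expected" side means the read-back after rescheduling returned nothing: the PD tests write a random token to the mounted disk and expect the identical token from the pod's next incarnation. A simplified sketch of that round-trip check (the file name and the writeToken/verifyToken helpers are hypothetical, not the suite's own code):

```go
package main

import (
	"fmt"
	"math/rand"
	"os"
	"path/filepath"
)

// writeToken writes a random token to the mounted disk and returns it.
func writeToken(mountPath string) (string, error) {
	token := fmt.Sprintf("%d", rand.Int63())
	err := os.WriteFile(filepath.Join(mountPath, "data"), []byte(token), 0644)
	return token, err
}

// verifyToken re-reads the file after the pod is rescheduled; an empty
// or missing read produces exactly the `"" != <number>` mismatch above.
func verifyToken(mountPath, want string) error {
	got, err := os.ReadFile(filepath.Join(mountPath, "data"))
	if err != nil {
		return err
	}
	if string(got) != want {
		return fmt.Errorf("expected %q, got %q", want, got)
	}
	return nil
}

func main() {
	dir, _ := os.MkdirTemp("", "pd") // stands in for the PD mount point
	defer os.RemoveAll(dir)
	token, err := writeToken(dir)
	if err != nil {
		panic(err)
	}
	if err := verifyToken(dir, token); err != nil {
		panic(err)
	}
	fmt.Println("round-trip ok:", token)
}
```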

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc822d91010>: {
        s: "service verification failed for: 10.99.242.50\nexpected [service2-3q7np service2-s9gw0 service2-sb7s6]\nreceived [service2-3q7np service2-s9gw0]",
    }
    service verification failed for: 10.99.242.50
    expected [service2-3q7np service2-s9gw0 service2-sb7s6]
    received [service2-3q7np service2-s9gw0]
not to have occurred

Issues about this test specifically: #29514 #38288
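
These "service verification failed" messages all have the same shape: the check hits the service's cluster IP repeatedly and expects every backend pod's hostname to appear at least once, and here one of the three endpoints never answered. A simplified sketch of such a check (hypothetical helper, not the e2e suite's exact code; port 80 and the attempt count are assumptions, and the serve_hostname backends echo their pod name):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// verifyService polls a serve_hostname service and reports which
// expected pod names never showed up in the responses, the condition
// behind the "service verification failed" errors above.
func verifyService(serviceIP string, expected []string, attempts int) error {
	seen := map[string]bool{}
	for i := 0; i < attempts; i++ {
		resp, err := http.Get("http://" + serviceIP + ":80")
		if err != nil {
			continue // transient errors just consume an attempt
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		seen[strings.TrimSpace(string(body))] = true
	}
	var missing []string
	for _, name := range expected {
		if !seen[name] {
			missing = append(missing, name)
		}
	}
	if len(missing) > 0 {
		return fmt.Errorf("service verification failed for: %s, missing %v", serviceIP, missing)
	}
	return nil
}

func main() {
	err := verifyService("10.99.242.50",
		[]string{"service2-3q7np", "service2-s9gw0", "service2-sb7s6"}, 20)
	fmt.Println(err)
}
```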

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821887240>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.151.135 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-z2c70 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"bf7efb16-c1ca-11e6-8f05-42010af0002d\", \"resourceVersion\":\"15115\", \"creationTimestamp\":\"2016-12-14T06:58:42Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-z2c70\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-z2c70/services/redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.247.204\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8212ef900 exit status 1 <nil> true [0xc82018a770 0xc82018a790 0xc82018a8e8] [0xc82018a770 0xc82018a790 0xc82018a8e8] [0xc82018a788 0xc82018a7a8] [0xa97590 0xa97590] 0xc821a1c360}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"bf7efb16-c1ca-11e6-8f05-42010af0002d\", \"resourceVersion\":\"15115\", \"creationTimestamp\":\"2016-12-14T06:58:42Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-z2c70\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-z2c70/services/redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.247.204\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.151.135 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-z2c70 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"bf7efb16-c1ca-11e6-8f05-42010af0002d", "resourceVersion":"15115", "creationTimestamp":"2016-12-14T06:58:42Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-z2c70", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-z2c70/services/redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.247.204", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8212ef900 exit status 1 <nil> true [0xc82018a770 0xc82018a790 0xc82018a8e8] [0xc82018a770 0xc82018a790 0xc82018a8e8] [0xc82018a788 0xc82018a7a8] [0xa97590 0xa97590] 0xc821a1c360}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"bf7efb16-c1ca-11e6-8f05-42010af0002d", "resourceVersion":"15115", "creationTimestamp":"2016-12-14T06:58:42Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-z2c70", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-z2c70/services/redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.247.204", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 14 02:33:09.596: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-9af4957d-gs4u:
 container "runtime": expected RSS memory (MB) < 314572800; got 535183360
node gke-bootstrap-e2e-default-pool-9af4957d-n6ea:
 container "runtime": expected RSS memory (MB) < 314572800; got 531615744
node gke-bootstrap-e2e-default-pool-9af4957d-xmq4:
 container "runtime": expected RSS memory (MB) < 314572800; got 525778944

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
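
One note on reading these numbers: despite the "(MB)" label, both sides of the comparison are raw bytes. The 314572800 limit is 300 MiB, and the observed runtime RSS of roughly 515-540 MB in these runs is about 1.7x over it. A quick check:

```go
package main

import "fmt"

func main() {
	const limitBytes = 314572800                 // the "(MB)" limit is really bytes
	fmt.Println(limitBytes / (1024 * 1024))      // 300 MiB
	fmt.Printf("%.2f\n", 535183360.0/limitBytes) // ~1.70x over, per the first node above
}
```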

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/45/

Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 14 04:59:48.887: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-15a80d11-h2o4:
 container "runtime": expected RSS memory (MB) < 314572800; got 528797696
node gke-bootstrap-e2e-default-pool-15a80d11-em3z:
 container "runtime": expected RSS memory (MB) < 314572800; got 516517888
node gke-bootstrap-e2e-default-pool-15a80d11-g9g5:
 container "runtime": expected RSS memory (MB) < 314572800; got 522969088

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820563f30>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.151.135 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-bvjbw -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-bvjbw\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-bvjbw/services/redis-master\", \"uid\":\"525a282a-c1f3-11e6-b555-42010af00016\", \"resourceVersion\":\"4103\", \"creationTimestamp\":\"2016-12-14T11:49:08Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.252.174\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82084b500 exit status 1 <nil> true [0xc820f601e0 0xc820f601f8 0xc820f60210] [0xc820f601e0 0xc820f601f8 0xc820f60210] [0xc820f601f0 0xc820f60208] [0xa97590 0xa97590] 0xc8207d9c20}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-bvjbw\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-bvjbw/services/redis-master\", \"uid\":\"525a282a-c1f3-11e6-b555-42010af00016\", \"resourceVersion\":\"4103\", \"creationTimestamp\":\"2016-12-14T11:49:08Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.252.174\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.151.135 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-bvjbw -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-bvjbw", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-bvjbw/services/redis-master", "uid":"525a282a-c1f3-11e6-b555-42010af00016", "resourceVersion":"4103", "creationTimestamp":"2016-12-14T11:49:08Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.252.174"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82084b500 exit status 1 <nil> true [0xc820f601e0 0xc820f601f8 0xc820f60210] [0xc820f601e0 0xc820f601f8 0xc820f60210] [0xc820f601f0 0xc820f60208] [0xa97590 0xa97590] 0xc8207d9c20}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-bvjbw", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-bvjbw/services/redis-master", "uid":"525a282a-c1f3-11e6-b555-42010af00016", "resourceVersion":"4103", "creationTimestamp":"2016-12-14T11:49:08Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.252.174"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc820921450>: {
        s: "failed to wait for pods responding: pod with UID 5ca51813-c220-11e6-b555-42010af00016 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-pxrs1/pods 42928} [{{ } {my-hostname-delete-node-2sjcw my-hostname-delete-node- e2e-tests-resize-nodes-pxrs1 /api/v1/namespaces/e2e-tests-resize-nodes-pxrs1/pods/my-hostname-delete-node-2sjcw 5ca5748a-c220-11e6-b555-42010af00016 42638 0 {2016-12-14 09:11:33 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-pxrs1\",\"name\":\"my-hostname-delete-node\",\"uid\":\"5ca35b0e-c220-11e6-b555-42010af00016\",\"apiVersion\":\"v1\",\"resourceVersion\":\"42625\"}}\n] [{v1 ReplicationController my-hostname-delete-node 5ca35b0e-c220-11e6-b555-42010af00016 0xc8211e18a7}] []} {[{default-token-cfz23 {<nil> <nil> <nil> <nil> <nil> 0xc8216eb350 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-cfz23 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8211e1a50 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-15a80d11-g9g5 0xc8210e0d80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:11:33 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:11:33 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:11:33 -0800 PST}  }]   10.240.0.2 10.96.2.111 2016-12-14T09:11:33-08:00 [] [{my-hostname-delete-node {<nil> 0xc821971b40 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://0d4ac74a2261c3145259dc7886357f46f3a8333d7234f8523508e1ed1a02c707}]}} {{ } {my-hostname-delete-node-d1qt2 my-hostname-delete-node- e2e-tests-resize-nodes-pxrs1 /api/v1/namespaces/e2e-tests-resize-nodes-pxrs1/pods/my-hostname-delete-node-d1qt2 5ca5493f-c220-11e6-b555-42010af00016 42643 0 {2016-12-14 09:11:33 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-pxrs1\",\"name\":\"my-hostname-delete-node\",\"uid\":\"5ca35b0e-c220-11e6-b555-42010af00016\",\"apiVersion\":\"v1\",\"resourceVersion\":\"42625\"}}\n] [{v1 ReplicationController my-hostname-delete-node 5ca35b0e-c220-11e6-b555-42010af00016 0xc8211e1e17}] []} {[{default-token-cfz23 {<nil> <nil> <nil> <nil> <nil> 0xc8216eb3b0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-cfz23 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8211e1f60 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-15a80d11-h2o4 0xc8210e0e40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:11:33 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-14 09:11:34 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:11:33 -0800 PST}  }]   10.240.0.3 10.96.1.12 2016-12-14T09:11:33-08:00 [] [{my-hostname-delete-node {<nil> 0xc821971b60 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://18231eb07efc265187cbaaa86335d8393648b84c18bab44955d478aa9f40ca55}]}} {{ } {my-hostname-delete-node-x9v47 my-hostname-delete-node- e2e-tests-resize-nodes-pxrs1 /api/v1/namespaces/e2e-tests-resize-nodes-pxrs1/pods/my-hostname-delete-node-x9v47 9a6eac37-c220-11e6-b555-42010af00016 42791 0 {2016-12-14 09:13:16 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-pxrs1\",\"name\":\"my-hostname-delete-node\",\"uid\":\"5ca35b0e-c220-11e6-b555-42010af00016\",\"apiVersion\":\"v1\",\"resourceVersion\":\"42726\"}}\n] [{v1 ReplicationController my-hostname-delete-node 5ca35b0e-c220-11e6-b555-42010af00016 0xc820e6e1f7}] []} {[{default-token-cfz23 {<nil> <nil> <nil> <nil> <nil> 0xc8216eb410 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-cfz23 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820e6e320 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-15a80d11-h2o4 0xc8210e0f00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:13:16 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:13:18 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:13:16 -0800 PST}  }]   10.240.0.3 10.96.1.14 2016-12-14T09:13:16-08:00 [] [{my-hostname-delete-node {<nil> 0xc821971b80 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b6f04b65283d5e0f881f7eb5ff2f8303c4f07ed18a1c01d5f69c9750021e2fd4}]}}]}",
    }
    failed to wait for pods responding: pod with UID 5ca51813-c220-11e6-b555-42010af00016 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-pxrs1/pods 42928} [{{ } {my-hostname-delete-node-2sjcw my-hostname-delete-node- e2e-tests-resize-nodes-pxrs1 /api/v1/namespaces/e2e-tests-resize-nodes-pxrs1/pods/my-hostname-delete-node-2sjcw 5ca5748a-c220-11e6-b555-42010af00016 42638 0 {2016-12-14 09:11:33 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-pxrs1","name":"my-hostname-delete-node","uid":"5ca35b0e-c220-11e6-b555-42010af00016","apiVersion":"v1","resourceVersion":"42625"}}
    ] [{v1 ReplicationController my-hostname-delete-node 5ca35b0e-c220-11e6-b555-42010af00016 0xc8211e18a7}] []} {[{default-token-cfz23 {<nil> <nil> <nil> <nil> <nil> 0xc8216eb350 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-cfz23 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8211e1a50 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-15a80d11-g9g5 0xc8210e0d80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:11:33 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:11:33 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:11:33 -0800 PST}  }]   10.240.0.2 10.96.2.111 2016-12-14T09:11:33-08:00 [] [{my-hostname-delete-node {<nil> 0xc821971b40 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://0d4ac74a2261c3145259dc7886357f46f3a8333d7234f8523508e1ed1a02c707}]}} {{ } {my-hostname-delete-node-d1qt2 my-hostname-delete-node- e2e-tests-resize-nodes-pxrs1 /api/v1/namespaces/e2e-tests-resize-nodes-pxrs1/pods/my-hostname-delete-node-d1qt2 5ca5493f-c220-11e6-b555-42010af00016 42643 0 {2016-12-14 09:11:33 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-pxrs1","name":"my-hostname-delete-node","uid":"5ca35b0e-c220-11e6-b555-42010af00016","apiVersion":"v1","resourceVersion":"42625"}}
    ] [{v1 ReplicationController my-hostname-delete-node 5ca35b0e-c220-11e6-b555-42010af00016 0xc8211e1e17}] []} {[{default-token-cfz23 {<nil> <nil> <nil> <nil> <nil> 0xc8216eb3b0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-cfz23 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8211e1f60 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-15a80d11-h2o4 0xc8210e0e40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:11:33 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:11:34 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:11:33 -0800 PST}  }]   10.240.0.3 10.96.1.12 2016-12-14T09:11:33-08:00 [] [{my-hostname-delete-node {<nil> 0xc821971b60 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://18231eb07efc265187cbaaa86335d8393648b84c18bab44955d478aa9f40ca55}]}} {{ } {my-hostname-delete-node-x9v47 my-hostname-delete-node- e2e-tests-resize-nodes-pxrs1 /api/v1/namespaces/e2e-tests-resize-nodes-pxrs1/pods/my-hostname-delete-node-x9v47 9a6eac37-c220-11e6-b555-42010af00016 42791 0 {2016-12-14 09:13:16 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-pxrs1","name":"my-hostname-delete-node","uid":"5ca35b0e-c220-11e6-b555-42010af00016","apiVersion":"v1","resourceVersion":"42726"}}
    ] [{v1 ReplicationController my-hostname-delete-node 5ca35b0e-c220-11e6-b555-42010af00016 0xc820e6e1f7}] []} {[{default-token-cfz23 {<nil> <nil> <nil> <nil> <nil> 0xc8216eb410 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-cfz23 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820e6e320 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-15a80d11-h2o4 0xc8210e0f00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:13:16 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:13:18 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-14 09:13:16 -0800 PST}  }]   10.240.0.3 10.96.1.14 2016-12-14T09:13:16-08:00 [] [{my-hostname-delete-node {<nil> 0xc821971b80 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b6f04b65283d5e0f881f7eb5ff2f8303c4f07ed18a1c01d5f69c9750021e2fd4}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
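
The giant dump above boils down to one check: after the resize, the test waits for each originally-tracked pod UID to respond, and the pod with UID 5ca51813-... was replaced rather than kept, so it no longer appears in the controller's current pod list. A sketch of that membership check against client-go (the clientset wiring, namespace, and selector are assumptions; the polling loop around it is omitted):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podStillMember reports whether a pod with the given UID is still part
// of the controller's current pod set; the failure above is this check
// coming back false for a pod the test was still waiting on.
func podStillMember(ctx context.Context, c kubernetes.Interface, ns, selector, uid string) (bool, error) {
	pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if string(p.UID) == uid {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Wiring a real clientset is omitted; shown for shape only.
	fmt.Println("see podStillMember")
}
```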

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/241/

Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  1 20:47:14.974: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-46623b01-4pq7:
 container "runtime": expected RSS memory (MB) < 314572800; got 515325952
node gke-bootstrap-e2e-default-pool-46623b01-4tff:
 container "runtime": expected RSS memory (MB) < 314572800; got 524242944
node gke-bootstrap-e2e-default-pool-46623b01-5d2s:
 container "runtime": expected RSS memory (MB) < 314572800; got 531390464

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26490 #33669

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/242/

Multiple broken tests:

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820081f90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:420
Expected
    <string>: 
to equal
    <string>: 4329152383571182427

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820081f90>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc821a72080>: {
        s: "service verification failed for: 10.99.243.166\nexpected [service2-855h4 service2-p1njj service2-q5q7j]\nreceived [service2-855h4 service2-p1njj]",
    }
    service verification failed for: 10.99.243.166
    expected [service2-855h4 service2-p1njj service2-q5q7j]
    received [service2-855h4 service2-p1njj]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820081f90>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc821524e20>: {
        s: "service verification failed for: 10.99.241.157\nexpected [service2-8p1h1 service2-b5qwc service2-gnd7p]\nreceived [service2-8p1h1 service2-gnd7p]",
    }
    service verification failed for: 10.99.241.157
    expected [service2-8p1h1 service2-b5qwc service2-gnd7p]
    received [service2-8p1h1 service2-gnd7p]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820081f90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  2 03:48:26.762: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-02a53c51-bscr:
 container "runtime": expected RSS memory (MB) < 314572800; got 521797632
node gke-bootstrap-e2e-default-pool-02a53c51-jdnx:
 container "runtime": expected RSS memory (MB) < 314572800; got 532672512
node gke-bootstrap-e2e-default-pool-02a53c51-zs8c:
 container "runtime": expected RSS memory (MB) < 314572800; got 512225280

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/243/

Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc821ed8c40>: {
        s: "service verification failed for: 10.99.241.182\nexpected [service1-0s4bt service1-3qv29 service1-5r8mt]\nreceived [service1-0s4bt service1-5r8mt]",
    }
    service verification failed for: 10.99.241.182
    expected [service1-0s4bt service1-3qv29 service1-5r8mt]
    received [service1-0s4bt service1-5r8mt]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc820554030>: {
        s: "service verification failed for: 10.99.241.152\nexpected [service1-5f50k service1-k88j4 service1-rdv4v]\nreceived [service1-k88j4 service1-rdv4v]",
    }
    service verification failed for: 10.99.241.152
    expected [service1-5f50k service1-k88j4 service1-rdv4v]
    received [service1-k88j4 service1-rdv4v]
not to have occurred

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  2 10:07:35.329: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1f2240ca-4c14:
 container "runtime": expected RSS memory (MB) < 314572800; got 525320192
node gke-bootstrap-e2e-default-pool-1f2240ca-l1h8:
 container "runtime": expected RSS memory (MB) < 314572800; got 516771840
node gke-bootstrap-e2e-default-pool-1f2240ca-mwg6:
 container "runtime": expected RSS memory (MB) < 314572800; got 529059840

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/244/

Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Feb  2 20:50:59.439: Node gke-bootstrap-e2e-default-pool-d00f40e8-t74p did not become ready within 2m0s

Issues about this test specifically: #27324 #35852 #35880
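
"Did not become ready within 2m0s" means the NodeReady condition never flipped back to True after the node was made unreachable and rejoined. The predicate being polled reduces to roughly this (a sketch over the core/v1 types; the polling loop and client wiring are omitted):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// nodeIsReady checks the NodeReady condition, the predicate such a test
// polls until the node reports Ready or the 2m0s budget above runs out.
func nodeIsReady(node *v1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	n := &v1.Node{Status: v1.NodeStatus{Conditions: []v1.NodeCondition{
		{Type: v1.NodeReady, Status: v1.ConditionFalse},
	}}}
	fmt.Println(nodeIsReady(n)) // false, the state the test timed out in
}
```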

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  2 19:04:12.355: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-d00f40e8-p9gc:
 container "runtime": expected RSS memory (MB) < 314572800; got 523087872
node gke-bootstrap-e2e-default-pool-d00f40e8-t74p:
 container "runtime": expected RSS memory (MB) < 314572800; got 518488064
node gke-bootstrap-e2e-default-pool-d00f40e8-w0tw:
 container "runtime": expected RSS memory (MB) < 314572800; got 529940480

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/245/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc8214f9e60>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-02-03 00:23:45 -0800 PST} FinishedAt:{Time:2017-02-03 00:23:55 -0800 PST} ContainerID:docker://75e42c5a15f2e64ee94fd93487359d1a83268fb369227866db8c960c78e9034e}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-02-03 00:23:45 -0800 PST} FinishedAt:{Time:2017-02-03 00:23:55 -0800 PST} ContainerID:docker://75e42c5a15f2e64ee94fd93487359d1a83268fb369227866db8c960c78e9034e}
not to have occurred

Issues about this test specifically: #30131 #31402
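
This one is a plain connectivity failure: the test runs a wget pod on one node pointed at a pod on another, and the container exiting 1 (after the ~10s between StartedAt and FinishedAt) means the fetch never succeeded. The pass/fail decision reduces to inspecting the terminated container state, roughly as below (a sketch over the core/v1 types; how the pod is retrieved is omitted):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// terminatedWithFailure mirrors the check behind the error above: a
// completed test pod whose container exited non-zero fails the case,
// and the ContainerStateTerminated struct is what gets printed.
func terminatedWithFailure(pod *v1.Pod) (bool, string) {
	for _, cs := range pod.Status.ContainerStatuses {
		if t := cs.State.Terminated; t != nil && t.ExitCode != 0 {
			return true, fmt.Sprintf("exit %d, reason %q", t.ExitCode, t.Reason)
		}
	}
	return false, ""
}

func main() {
	pod := &v1.Pod{Status: v1.PodStatus{ContainerStatuses: []v1.ContainerStatus{{
		State: v1.ContainerState{Terminated: &v1.ContainerStateTerminated{ExitCode: 1, Reason: "Error"}},
	}}}}
	failed, why := terminatedWithFailure(pod)
	fmt.Println(failed, why) // true exit 1, reason "Error"
}
```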

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200b5060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200b5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200b5060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  3 00:11:38.233: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-6fe87ce1-1gc5:
 container "runtime": expected RSS memory (MB) < 314572800; got 525840384
node gke-bootstrap-e2e-default-pool-6fe87ce1-r8nr:
 container "runtime": expected RSS memory (MB) < 314572800; got 510513152
node gke-bootstrap-e2e-default-pool-6fe87ce1-rvzf:
 container "runtime": expected RSS memory (MB) < 314572800; got 526651392

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc8200b5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200b5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/246/

Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200cb060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200cb060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc822d3df20>: {
        s: "service verification failed for: 10.99.250.139\nexpected [service1-6pjpv service1-xtzxq service1-zr5mw]\nreceived [service1-6pjpv service1-xtzxq]",
    }
    service verification failed for: 10.99.250.139
    expected [service1-6pjpv service1-xtzxq service1-zr5mw]
    received [service1-6pjpv service1-xtzxq]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200cb060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  3 08:37:10.371: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-44ad942a-j0q6:
 container "runtime": expected RSS memory (MB) < 314572800; got 525791232
node gke-bootstrap-e2e-default-pool-44ad942a-wd6w:
 container "runtime": expected RSS memory (MB) < 314572800; got 521760768
node gke-bootstrap-e2e-default-pool-44ad942a-wtk5:
 container "runtime": expected RSS memory (MB) < 314572800; got 542695424

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200cb060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

@k8s-github-robot (Author)

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/247/

Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc8222d3f60>: {
        s: "service verification failed for: 10.99.240.153\nexpected [service3-4rcl8 service3-mb3b7 service3-mtlrt]\nreceived [service3-4rcl8 service3-mtlrt]",
    }
    service verification failed for: 10.99.240.153
    expected [service3-4rcl8 service3-mb3b7 service3-mtlrt]
    received [service3-4rcl8 service3-mtlrt]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  3 16:31:14.093: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-36024ddc-897b:
 container "runtime": expected RSS memory (MB) < 314572800; got 536559616
node gke-bootstrap-e2e-default-pool-36024ddc-mvlt:
 container "runtime": expected RSS memory (MB) < 314572800; got 532910080
node gke-bootstrap-e2e-default-pool-36024ddc-ntz9:
 container "runtime": expected RSS memory (MB) < 314572800; got 526839808

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Feb  3 10:50:45.261: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2649

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745 #40486

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/248/
Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc82287eec0>: {
        s: "service verification failed for: 10.99.242.76\nexpected [service1-01m6l service1-0kjc6 service1-k8mcf]\nreceived [service1-01m6l service1-0kjc6]",
    }
    service verification failed for: 10.99.242.76
    expected [service1-01m6l service1-0kjc6 service1-k8mcf]
    received [service1-01m6l service1-0kjc6]
not to have occurred

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc82050df00>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-02-03 18:42:43 -0800 PST} FinishedAt:{Time:2017-02-03 18:42:53 -0800 PST} ContainerID:docker://b62253b38a8c9750a5d623aea72a68058d6eaf307bfa0a462bf03245fd2dd135}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-02-03 18:42:43 -0800 PST} FinishedAt:{Time:2017-02-03 18:42:53 -0800 PST} ContainerID:docker://b62253b38a8c9750a5d623aea72a68058d6eaf307bfa0a462bf03245fd2dd135}
not to have occurred

Issues about this test specifically: #30131 #31402
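
Here the granular check runs a wget pod on one node against a serve_hostname pod on another; the probe pod finishing with ExitCode:1 after its ~10s run means cross-node pod traffic never got through. Roughly, the test reduces the probe pod to pass/fail like this (a simplified stand-in, not the real networking.go logic):

```go
package e2echeck

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// probeResult treats any container that terminated with a nonzero
// exit code as a connectivity failure, printing the terminated state
// struct verbatim, which is the shape of the logged error above.
func probeResult(pod *v1.Pod) error {
	for _, cs := range pod.Status.ContainerStatuses {
		if t := cs.State.Terminated; t != nil && t.ExitCode != 0 {
			return fmt.Errorf("pod '%s' terminated with failure: %+v", pod.Name, *t)
		}
	}
	return nil
}
```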

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  4 00:36:25.181: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-7fa67051-f3hm:
 container "runtime": expected RSS memory (MB) < 314572800; got 526581760
node gke-bootstrap-e2e-default-pool-7fa67051-mlm9:
 container "runtime": expected RSS memory (MB) < 314572800; got 511365120
node gke-bootstrap-e2e-default-pool-7fa67051-8mcr:
 container "runtime": expected RSS memory (MB) < 314572800; got 538271744

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
    <*errors.errorString | 0xc821282120>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820081f80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc820530650>: {
        s: "service verification failed for: 10.99.252.242\nexpected [service1-4cmzz service1-gqgll service1-z65ng]\nreceived [service1-gqgll service1-z65ng]",
    }
    service verification failed for: 10.99.252.242
    expected [service1-4cmzz service1-gqgll service1-z65ng]
    received [service1-gqgll service1-z65ng]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/249/
Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  4 04:36:07.946: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-3d569fa6-4qsh:
 container "runtime": expected RSS memory (MB) < 314572800; got 541024256
node gke-bootstrap-e2e-default-pool-3d569fa6-54kk:
 container "runtime": expected RSS memory (MB) < 314572800; got 530419712
node gke-bootstrap-e2e-default-pool-3d569fa6-zwlx:
 container "runtime": expected RSS memory (MB) < 314572800; got 521723904

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/250/
Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745 #40486

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Feb  4 07:45:58.482: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2649

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:360
Expected
    <string>: 
to equal
    <string>: 6591562028860864187

Issues about this test specifically: #28010 #28427 #33997 #37952
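
This PD failure is the read-back step coming up empty: the test writes a random token to a file on the disk from one pod, deletes that pod, mounts the disk in a fresh pod, and expects identical contents; here it read an empty string instead of 6591562028860864187. A sketch of the cycle, with hypothetical in-memory helpers standing in for the kubectl-exec plumbing the real test/e2e/pd.go uses:

```go
package main

import (
	"fmt"
	"math/rand"
)

// Hypothetical stand-ins; a real run would exec into the pods.
var files = map[string]string{}

func writeFileViaPod(pod, path, data string) error { files[path] = data; return nil }
func readFileViaPod(pod, path string) (string, error) { return files[path], nil }
func deletePod(pod string)                            {}

func verifyPDContents(path string) error {
	want := fmt.Sprintf("%d", rand.Int63())
	if err := writeFileViaPod("pd-writer", path, want); err != nil {
		return err
	}
	// Delete the writer; the PD itself persists, so the data must survive.
	deletePod("pd-writer")
	got, err := readFileViaPod("pd-reader", path)
	if err != nil {
		return err
	}
	if got != want {
		// An empty `got`, as in this run, yields the
		// "Expected <string>: to equal <string>: ..." failure.
		return fmt.Errorf("expected %q, got %q", want, got)
	}
	return nil
}

func main() {
	fmt.Println(verifyPDContents("/testpd/tracker"))
}
```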

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  4 09:09:21.565: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-9ab778d3-43d3:
 container "runtime": expected RSS memory (MB) < 314572800; got 523980800
node gke-bootstrap-e2e-default-pool-9ab778d3-vncj:
 container "runtime": expected RSS memory (MB) < 314572800; got 519569408
node gke-bootstrap-e2e-default-pool-9ab778d3-xp47:
 container "runtime": expected RSS memory (MB) < 314572800; got 529047552

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/251/
Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8203b7e10>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #28853 #31585
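
All of the SchedulerPredicates failures in this run are the same precondition tripping: before each case, the framework waits up to 5m0s for every kube-system pod to be running and ready, and bails with this message when something (often collateral from an earlier [Disruptive] test) is still recovering. A simplified stand-in for that gate, written against pre-context client-go; readiness logic is condensed:

```go
package e2echeck

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitKubeSystemReady(c kubernetes.Interface, timeout time.Duration) error {
	err := wait.Poll(10*time.Second, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods("kube-system").List(metav1.ListOptions{})
		if err != nil {
			return false, nil // retry on transient API errors
		}
		for _, p := range pods.Items {
			if p.Status.Phase != v1.PodRunning || !podReady(&p) {
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		return fmt.Errorf("Not all pods in namespace 'kube-system' running and ready within %v", timeout)
	}
	return nil
}

// podReady reports whether the PodReady condition is True.
func podReady(p *v1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}
```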

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  4 15:13:19.755: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-9760aa28-0s25:
 container "runtime": expected RSS memory (MB) < 314572800; got 539291648
node gke-bootstrap-e2e-default-pool-9760aa28-r1hc:
 container "runtime": expected RSS memory (MB) < 314572800; got 512208896
node gke-bootstrap-e2e-default-pool-9760aa28-zds0:
 container "runtime": expected RSS memory (MB) < 314572800; got 516837376

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc823606580>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #33883

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Feb  4 16:53:10.825: At least one pod wasn't running and ready or succeeded at test start.

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc822dd31d0>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8207e02a0>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:431
Expected error:
    <*errors.errorString | 0xc823748520>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8220e3190>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82076aa30>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #28019

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82017ace0>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8208f1d70>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821b9cef0>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #29516

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc820dc1940>: {
        s: "service verification failed for: 10.99.250.167\nexpected [service1-7035x service1-7xd2w service1-bw257]\nreceived [service1-7035x service1-bw257]",
    }
    service verification failed for: 10.99.250.167
    expected [service1-7035x service1-7xd2w service1-bw257]
    received [service1-7035x service1-bw257]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/252/
Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  5 04:11:01.827: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-f8f18712-6h5f:
 container "runtime": expected RSS memory (MB) < 314572800; got 529674240
node gke-bootstrap-e2e-default-pool-f8f18712-v86n:
 container "runtime": expected RSS memory (MB) < 314572800; got 526671872
node gke-bootstrap-e2e-default-pool-f8f18712-xl7v:
 container "runtime": expected RSS memory (MB) < 314572800; got 531681280

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/253/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  5 09:19:26.068: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-e1dcfa94-3wq9:
 container "runtime": expected RSS memory (MB) < 314572800; got 519090176
node gke-bootstrap-e2e-default-pool-e1dcfa94-fgsj:
 container "runtime": expected RSS memory (MB) < 314572800; got 539729920
node gke-bootstrap-e2e-default-pool-e1dcfa94-hxkx:
 container "runtime": expected RSS memory (MB) < 314572800; got 536002560

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc821c6f420>: {
        s: "failed to wait for pods responding: pod with UID ba3449ec-ebb2-11e6-ad7a-42010af00014 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-kl3gr/pods 10359} [{{ } {my-hostname-delete-node-dlx2j my-hostname-delete-node- e2e-tests-resize-nodes-kl3gr /api/v1/namespaces/e2e-tests-resize-nodes-kl3gr/pods/my-hostname-delete-node-dlx2j ba3480fd-ebb2-11e6-ad7a-42010af00014 9965 0 {2017-02-05 06:52:34 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-kl3gr\",\"name\":\"my-hostname-delete-node\",\"uid\":\"ba32ba50-ebb2-11e6-ad7a-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"9951\"}}\n] [{v1 ReplicationController my-hostname-delete-node ba32ba50-ebb2-11e6-ad7a-42010af00014 0xc821f192a7}] []} {[{default-token-1gp5s {<nil> <nil> <nil> <nil> <nil> 0xc821a0bc50 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-1gp5s true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821f193a0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-e1dcfa94-fgsj 0xc821717f80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:52:34 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:52:34 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:52:34 -0800 PST}  }]   10.240.0.3 10.96.0.4 2017-02-05T06:52:34-08:00 [] [{my-hostname-delete-node {<nil> 0xc820e96620 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://12c1958e85cd5e307a1def8441b7247418e7a8d4068cb159ccd052beed3ea03b}]}} {{ } {my-hostname-delete-node-h6gfp my-hostname-delete-node- e2e-tests-resize-nodes-kl3gr /api/v1/namespaces/e2e-tests-resize-nodes-kl3gr/pods/my-hostname-delete-node-h6gfp fff0f966-ebb2-11e6-ad7a-42010af00014 10197 0 {2017-02-05 06:54:31 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-kl3gr\",\"name\":\"my-hostname-delete-node\",\"uid\":\"ba32ba50-ebb2-11e6-ad7a-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"10083\"}}\n] [{v1 ReplicationController my-hostname-delete-node ba32ba50-ebb2-11e6-ad7a-42010af00014 0xc821f19637}] []} {[{default-token-1gp5s {<nil> <nil> <nil> <nil> <nil> 0xc821a0bcb0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-1gp5s true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821f19730 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-e1dcfa94-hxkx 0xc82131c040 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:54:31 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 
06:54:32 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:54:31 -0800 PST}  }]   10.240.0.2 10.96.1.3 2017-02-05T06:54:31-08:00 [] [{my-hostname-delete-node {<nil> 0xc820e96640 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f602eef59f02372aff8a438abdabcf2c70ace74f2bfdaa67cbea643342849872}]}} {{ } {my-hostname-delete-node-zgdl4 my-hostname-delete-node- e2e-tests-resize-nodes-kl3gr /api/v1/namespaces/e2e-tests-resize-nodes-kl3gr/pods/my-hostname-delete-node-zgdl4 ba345f4a-ebb2-11e6-ad7a-42010af00014 9963 0 {2017-02-05 06:52:34 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-kl3gr\",\"name\":\"my-hostname-delete-node\",\"uid\":\"ba32ba50-ebb2-11e6-ad7a-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"9951\"}}\n] [{v1 ReplicationController my-hostname-delete-node ba32ba50-ebb2-11e6-ad7a-42010af00014 0xc821f199c7}] []} {[{default-token-1gp5s {<nil> <nil> <nil> <nil> <nil> 0xc821a0bd10 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-1gp5s true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821f19ac0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-e1dcfa94-fgsj 0xc82131c100 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:52:34 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:52:34 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:52:34 -0800 PST}  }]   10.240.0.3 10.96.0.3 2017-02-05T06:52:34-08:00 [] [{my-hostname-delete-node {<nil> 0xc820e96660 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3f2cf010893156397cbc534989fe543c069ae103a8c17c0659400070f4172630}]}}]}",
    }
    failed to wait for pods responding: pod with UID ba3449ec-ebb2-11e6-ad7a-42010af00014 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-kl3gr/pods 10359} [{{ } {my-hostname-delete-node-dlx2j my-hostname-delete-node- e2e-tests-resize-nodes-kl3gr /api/v1/namespaces/e2e-tests-resize-nodes-kl3gr/pods/my-hostname-delete-node-dlx2j ba3480fd-ebb2-11e6-ad7a-42010af00014 9965 0 {2017-02-05 06:52:34 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-kl3gr","name":"my-hostname-delete-node","uid":"ba32ba50-ebb2-11e6-ad7a-42010af00014","apiVersion":"v1","resourceVersion":"9951"}}
    ] [{v1 ReplicationController my-hostname-delete-node ba32ba50-ebb2-11e6-ad7a-42010af00014 0xc821f192a7}] []} {[{default-token-1gp5s {<nil> <nil> <nil> <nil> <nil> 0xc821a0bc50 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-1gp5s true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821f193a0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-e1dcfa94-fgsj 0xc821717f80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:52:34 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:52:34 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:52:34 -0800 PST}  }]   10.240.0.3 10.96.0.4 2017-02-05T06:52:34-08:00 [] [{my-hostname-delete-node {<nil> 0xc820e96620 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://12c1958e85cd5e307a1def8441b7247418e7a8d4068cb159ccd052beed3ea03b}]}} {{ } {my-hostname-delete-node-h6gfp my-hostname-delete-node- e2e-tests-resize-nodes-kl3gr /api/v1/namespaces/e2e-tests-resize-nodes-kl3gr/pods/my-hostname-delete-node-h6gfp fff0f966-ebb2-11e6-ad7a-42010af00014 10197 0 {2017-02-05 06:54:31 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-kl3gr","name":"my-hostname-delete-node","uid":"ba32ba50-ebb2-11e6-ad7a-42010af00014","apiVersion":"v1","resourceVersion":"10083"}}
    ] [{v1 ReplicationController my-hostname-delete-node ba32ba50-ebb2-11e6-ad7a-42010af00014 0xc821f19637}] []} {[{default-token-1gp5s {<nil> <nil> <nil> <nil> <nil> 0xc821a0bcb0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-1gp5s true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821f19730 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-e1dcfa94-hxkx 0xc82131c040 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:54:31 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:54:32 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:54:31 -0800 PST}  }]   10.240.0.2 10.96.1.3 2017-02-05T06:54:31-08:00 [] [{my-hostname-delete-node {<nil> 0xc820e96640 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f602eef59f02372aff8a438abdabcf2c70ace74f2bfdaa67cbea643342849872}]}} {{ } {my-hostname-delete-node-zgdl4 my-hostname-delete-node- e2e-tests-resize-nodes-kl3gr /api/v1/namespaces/e2e-tests-resize-nodes-kl3gr/pods/my-hostname-delete-node-zgdl4 ba345f4a-ebb2-11e6-ad7a-42010af00014 9963 0 {2017-02-05 06:52:34 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-kl3gr","name":"my-hostname-delete-node","uid":"ba32ba50-ebb2-11e6-ad7a-42010af00014","apiVersion":"v1","resourceVersion":"9951"}}
    ] [{v1 ReplicationController my-hostname-delete-node ba32ba50-ebb2-11e6-ad7a-42010af00014 0xc821f199c7}] []} {[{default-token-1gp5s {<nil> <nil> <nil> <nil> <nil> 0xc821a0bd10 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-1gp5s true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821f19ac0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-e1dcfa94-fgsj 0xc82131c100 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:52:34 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:52:34 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 06:52:34 -0800 PST}  }]   10.240.0.3 10.96.0.3 2017-02-05T06:52:34-08:00 [] [{my-hostname-delete-node {<nil> 0xc820e96660 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3f2cf010893156397cbc534989fe543c069ae103a8c17c0659400070f4172630}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/254/
Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745 #40486

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  5 14:01:20.284: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-a3e3f362-xq2n:
 container "runtime": expected RSS memory (MB) < 314572800; got 524681216
node gke-bootstrap-e2e-default-pool-a3e3f362-kv1k:
 container "runtime": expected RSS memory (MB) < 314572800; got 513589248
node gke-bootstrap-e2e-default-pool-a3e3f362-pb1s:
 container "runtime": expected RSS memory (MB) < 314572800; got 530821120

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Feb  5 12:03:42.005: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2649

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26490 #33669

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/255/
Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc822cc5340>: {
        s: "failed to wait for pods responding: pod with UID 9a1f6a0e-ec3e-11e6-8a1a-42010af0002d is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-zr3ph/pods 33015} [{{ } {my-hostname-delete-node-3r8g6 my-hostname-delete-node- e2e-tests-resize-nodes-zr3ph /api/v1/namespaces/e2e-tests-resize-nodes-zr3ph/pods/my-hostname-delete-node-3r8g6 e1190dd6-ec3e-11e6-8a1a-42010af0002d 32868 0 {2017-02-05 23:35:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-zr3ph\",\"name\":\"my-hostname-delete-node\",\"uid\":\"9a1bc61b-ec3e-11e6-8a1a-42010af0002d\",\"apiVersion\":\"v1\",\"resourceVersion\":\"32858\"}}\n] [{v1 ReplicationController my-hostname-delete-node 9a1bc61b-ec3e-11e6-8a1a-42010af0002d 0xc8225cd217}] []} {[{default-token-z1c9n {<nil> <nil> <nil> <nil> <nil> 0xc8229597a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-z1c9n true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8225cd310 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-52b6b036-qxg4 0xc822c88bc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:35:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:35:49 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:35:49 -0800 PST}  }]   10.240.0.2 10.96.1.4 2017-02-05T23:35:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc82293e320 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://93a006f397e01349231469a41c0d7d7ba7847808f46598189d721301cf18d261}]}} {{ } {my-hostname-delete-node-hkz98 my-hostname-delete-node- e2e-tests-resize-nodes-zr3ph /api/v1/namespaces/e2e-tests-resize-nodes-zr3ph/pods/my-hostname-delete-node-hkz98 e115f04b-ec3e-11e6-8a1a-42010af0002d 32866 0 {2017-02-05 23:35:48 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-zr3ph\",\"name\":\"my-hostname-delete-node\",\"uid\":\"9a1bc61b-ec3e-11e6-8a1a-42010af0002d\",\"apiVersion\":\"v1\",\"resourceVersion\":\"32788\"}}\n] [{v1 ReplicationController my-hostname-delete-node 9a1bc61b-ec3e-11e6-8a1a-42010af0002d 0xc8225cd5a7}] []} {[{default-token-z1c9n {<nil> <nil> <nil> <nil> <nil> 0xc822959890 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-z1c9n true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8225cd6a0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-52b6b036-lqmd 0xc822c88c80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:35:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2017-02-05 23:35:49 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:35:49 -0800 PST}  }]   10.240.0.3 10.96.0.3 2017-02-05T23:35:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc82293e340 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://5036e6f648d65a73a63f1a0c4d01b270c52cd5370137ab1a59884c7d7af90a65}]}} {{ } {my-hostname-delete-node-rg0l9 my-hostname-delete-node- e2e-tests-resize-nodes-zr3ph /api/v1/namespaces/e2e-tests-resize-nodes-zr3ph/pods/my-hostname-delete-node-rg0l9 9a1eba51-ec3e-11e6-8a1a-42010af0002d 32690 0 {2017-02-05 23:33:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-zr3ph\",\"name\":\"my-hostname-delete-node\",\"uid\":\"9a1bc61b-ec3e-11e6-8a1a-42010af0002d\",\"apiVersion\":\"v1\",\"resourceVersion\":\"32676\"}}\n] [{v1 ReplicationController my-hostname-delete-node 9a1bc61b-ec3e-11e6-8a1a-42010af0002d 0xc8225cd967}] []} {[{default-token-z1c9n {<nil> <nil> <nil> <nil> <nil> 0xc822959c20 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-z1c9n true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8225cda60 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-52b6b036-qxg4 0xc822c88d40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:33:50 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:33:51 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:33:49 -0800 PST}  }]   10.240.0.2 10.96.1.3 2017-02-05T23:33:50-08:00 [] [{my-hostname-delete-node {<nil> 0xc82293e360 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f27aa52666c95c355afdfbb1eeb2e44dcde0438b58ac970d1ae400a602a28b08}]}}]}",
    }
    failed to wait for pods responding: pod with UID 9a1f6a0e-ec3e-11e6-8a1a-42010af0002d is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-zr3ph/pods 33015} [{{ } {my-hostname-delete-node-3r8g6 my-hostname-delete-node- e2e-tests-resize-nodes-zr3ph /api/v1/namespaces/e2e-tests-resize-nodes-zr3ph/pods/my-hostname-delete-node-3r8g6 e1190dd6-ec3e-11e6-8a1a-42010af0002d 32868 0 {2017-02-05 23:35:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-zr3ph","name":"my-hostname-delete-node","uid":"9a1bc61b-ec3e-11e6-8a1a-42010af0002d","apiVersion":"v1","resourceVersion":"32858"}}
    ] [{v1 ReplicationController my-hostname-delete-node 9a1bc61b-ec3e-11e6-8a1a-42010af0002d 0xc8225cd217}] []} {[{default-token-z1c9n {<nil> <nil> <nil> <nil> <nil> 0xc8229597a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-z1c9n true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8225cd310 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-52b6b036-qxg4 0xc822c88bc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:35:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:35:49 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:35:49 -0800 PST}  }]   10.240.0.2 10.96.1.4 2017-02-05T23:35:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc82293e320 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://93a006f397e01349231469a41c0d7d7ba7847808f46598189d721301cf18d261}]}} {{ } {my-hostname-delete-node-hkz98 my-hostname-delete-node- e2e-tests-resize-nodes-zr3ph /api/v1/namespaces/e2e-tests-resize-nodes-zr3ph/pods/my-hostname-delete-node-hkz98 e115f04b-ec3e-11e6-8a1a-42010af0002d 32866 0 {2017-02-05 23:35:48 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-zr3ph","name":"my-hostname-delete-node","uid":"9a1bc61b-ec3e-11e6-8a1a-42010af0002d","apiVersion":"v1","resourceVersion":"32788"}}
    ] [{v1 ReplicationController my-hostname-delete-node 9a1bc61b-ec3e-11e6-8a1a-42010af0002d 0xc8225cd5a7}] []} {[{default-token-z1c9n {<nil> <nil> <nil> <nil> <nil> 0xc822959890 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-z1c9n true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8225cd6a0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-52b6b036-lqmd 0xc822c88c80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:35:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:35:49 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:35:49 -0800 PST}  }]   10.240.0.3 10.96.0.3 2017-02-05T23:35:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc82293e340 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://5036e6f648d65a73a63f1a0c4d01b270c52cd5370137ab1a59884c7d7af90a65}]}} {{ } {my-hostname-delete-node-rg0l9 my-hostname-delete-node- e2e-tests-resize-nodes-zr3ph /api/v1/namespaces/e2e-tests-resize-nodes-zr3ph/pods/my-hostname-delete-node-rg0l9 9a1eba51-ec3e-11e6-8a1a-42010af0002d 32690 0 {2017-02-05 23:33:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-zr3ph","name":"my-hostname-delete-node","uid":"9a1bc61b-ec3e-11e6-8a1a-42010af0002d","apiVersion":"v1","resourceVersion":"32676"}}
    ] [{v1 ReplicationController my-hostname-delete-node 9a1bc61b-ec3e-11e6-8a1a-42010af0002d 0xc8225cd967}] []} {[{default-token-z1c9n {<nil> <nil> <nil> <nil> <nil> 0xc822959c20 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-z1c9n true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8225cda60 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-52b6b036-qxg4 0xc822c88d40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:33:50 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:33:51 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-05 23:33:49 -0800 PST}  }]   10.240.0.2 10.96.1.3 2017-02-05T23:33:50-08:00 [] [{my-hostname-delete-node {<nil> 0xc82293e360 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f27aa52666c95c355afdfbb1eeb2e44dcde0438b58ac970d1ae400a602a28b08}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
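
For context on the failure above: the resize test records the UIDs of the pods its ReplicationController created, then waits for each of those pods to respond; the error fires when a recorded UID is no longer present in the controller's current pod list (the original pod was apparently replaced, with a new UID, rather than restarted in place). A minimal, self-contained sketch of that membership check follows; the helper name is hypothetical, not the e2e framework's actual API:

    package main

    import "fmt"

    // podUIDsMissingFrom reports which of the originally recorded pod UIDs are
    // absent from the controller's current pod set. Illustrative only; the real
    // e2e code performs this check inside its "pods responding" wait loop.
    func podUIDsMissingFrom(current map[string]bool, recorded []string) []string {
        var missing []string
        for _, uid := range recorded {
            if !current[uid] {
                missing = append(missing, uid)
            }
        }
        return missing
    }

    func main() {
        current := map[string]bool{"uid-a": true, "uid-b": true}
        recorded := []string{"uid-a", "uid-b", "uid-c"} // uid-c's pod was replaced
        fmt.Println(podUIDsMissingFrom(current, recorded)) // prints [uid-c]
    }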

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954
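
The "timed out waiting for the condition" text in these init-container failures is the generic error a Kubernetes polling wait returns when it gives up before the watched condition (here, the pod's status reaching the expected state) becomes true. A self-contained sketch of such a poll loop, with illustrative interval and timeout values:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // errWaitTimeout mirrors the error text quoted in the failures above.
    var errWaitTimeout = errors.New("timed out waiting for the condition")

    // pollUntil runs cond every interval until it reports done or the timeout
    // elapses, whichever comes first.
    func pollUntil(interval, timeout time.Duration, cond func() (bool, error)) error {
        deadline := time.Now().Add(timeout)
        for {
            done, err := cond()
            if err != nil {
                return err
            }
            if done {
                return nil
            }
            if time.Now().After(deadline) {
                return errWaitTimeout
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // A condition that never becomes true, so the poll times out.
        err := pollUntil(10*time.Millisecond, 50*time.Millisecond,
            func() (bool, error) { return false, nil })
        fmt.Println(err) // timed out waiting for the condition
    }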

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc821b3bbe0>: {
        s: "service verification failed for: 10.99.240.8\nexpected [service2-3kjfg service2-8vgs5 service2-95c2h]\nreceived [service2-3kjfg service2-8vgs5]",
    }
    service verification failed for: 10.99.240.8
    expected [service2-3kjfg service2-8vgs5 service2-95c2h]
    received [service2-3kjfg service2-8vgs5]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
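
The "service verification failed" errors in this run follow one pattern: the test repeatedly queries the service's ClusterIP, collects the hostnames of the backends that answer, and fails when some expected pod never shows up among the responders (above, service2-95c2h was never reached). A rough sketch of that verification logic; the function and its fetch callback are illustrative stand-ins, since the real test issues the requests from inside the cluster:

    package main

    import (
        "fmt"
        "sort"
    )

    // verifyService hits the service `attempts` times via fetch (a stand-in for
    // an HTTP request to the ClusterIP) and checks that every expected backend
    // hostname was observed at least once.
    func verifyService(expected []string, attempts int, fetch func() string) error {
        seen := map[string]bool{}
        for i := 0; i < attempts; i++ {
            seen[fetch()] = true
        }
        var received []string
        for name := range seen {
            received = append(received, name)
        }
        sort.Strings(received)
        for _, name := range expected {
            if !seen[name] {
                return fmt.Errorf("service verification failed:\nexpected %v\nreceived %v",
                    expected, received)
            }
        }
        return nil
    }

    func main() {
        // Two of three backends answer; the third is gone, as in the failure above.
        answers := []string{"service2-3kjfg", "service2-8vgs5"}
        i := 0
        err := verifyService([]string{"service2-3kjfg", "service2-8vgs5", "service2-95c2h"},
            10, func() string { i++; return answers[i%len(answers)] })
        fmt.Println(err)
    }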

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc8229335e0>: {
        s: "service verification failed for: 10.99.240.5\nexpected [service3-n835h service3-r0l60 service3-x86w9]\nreceived [service3-n835h service3-x86w9]",
    }
    service verification failed for: 10.99.240.5
    expected [service3-n835h service3-r0l60 service3-x86w9]
    received [service3-n835h service3-x86w9]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  6 00:10:15.029: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-52b6b036-qxg4:
 container "runtime": expected RSS memory (MB) < 314572800; got 537567232
node gke-bootstrap-e2e-default-pool-52b6b036-17ws:
 container "runtime": expected RSS memory (MB) < 314572800; got 511975424
node gke-bootstrap-e2e-default-pool-52b6b036-lqmd:
 container "runtime": expected RSS memory (MB) < 314572800; got 536133632

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
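
A note on units in these kubelet resource-tracking failures: despite the "(MB)" label, the numbers are bytes. The limit 314572800 is exactly 300 MiB (300 * 1024 * 1024), and the observed "runtime" container RSS values of roughly 512-538 million bytes (about 488-513 MiB) therefore sit well above it. A trivial check of the arithmetic:

    package main

    import "fmt"

    const mib = 1024 * 1024

    func main() {
        limit := int64(300 * mib) // 314572800, as printed in the failure message
        got := int64(537567232)   // one of the observed "runtime" RSS values
        fmt.Printf("limit: %d bytes (%d MiB)\n", limit, limit/mib)
        fmt.Printf("got:   %d bytes (~%d MiB), exceeds limit: %v\n",
            got, got/mib, got > limit)
    }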

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/256/
Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc821e293b0>: {
        s: "failed to wait for pods responding: pod with UID 8491a9c9-ec77-11e6-b608-42010af00003 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-57f8b/pods 35423} [{{ } {my-hostname-delete-node-1f6xz my-hostname-delete-node- e2e-tests-resize-nodes-57f8b /api/v1/namespaces/e2e-tests-resize-nodes-57f8b/pods/my-hostname-delete-node-1f6xz 84918aad-ec77-11e6-b608-42010af00003 35113 0 {2017-02-06 06:21:15 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-57f8b\",\"name\":\"my-hostname-delete-node\",\"uid\":\"848fd9d4-ec77-11e6-b608-42010af00003\",\"apiVersion\":\"v1\",\"resourceVersion\":\"35097\"}}\n] [{v1 ReplicationController my-hostname-delete-node 848fd9d4-ec77-11e6-b608-42010af00003 0xc820d98fd7}] []} {[{default-token-2tbzb {<nil> <nil> <nil> <nil> <nil> 0xc82243d170 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2tbzb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d990d0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-134a870c-rvjm 0xc820b56e40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:21:15 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:21:16 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:21:15 -0800 PST}  }]   10.240.0.4 10.96.2.4 2017-02-06T06:21:15-08:00 [] [{my-hostname-delete-node {<nil> 0xc821cfdae0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://365ed201f89060292e485cb25b25a4366399f47d14d362418378db9606a50171}]}} {{ } {my-hostname-delete-node-51trh my-hostname-delete-node- e2e-tests-resize-nodes-57f8b /api/v1/namespaces/e2e-tests-resize-nodes-57f8b/pods/my-hostname-delete-node-51trh ce0ea148-ec77-11e6-b608-42010af00003 35286 0 {2017-02-06 06:23:18 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-57f8b\",\"name\":\"my-hostname-delete-node\",\"uid\":\"848fd9d4-ec77-11e6-b608-42010af00003\",\"apiVersion\":\"v1\",\"resourceVersion\":\"35211\"}}\n] [{v1 ReplicationController my-hostname-delete-node 848fd9d4-ec77-11e6-b608-42010af00003 0xc820d99367}] []} {[{default-token-2tbzb {<nil> <nil> <nil> <nil> <nil> 0xc82243d1d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2tbzb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d99460 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-134a870c-rvjm 0xc820b56f00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:23:18 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2017-02-06 06:23:19 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:23:18 -0800 PST}  }]   10.240.0.4 10.96.2.8 2017-02-06T06:23:18-08:00 [] [{my-hostname-delete-node {<nil> 0xc821cfdb00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://655d630fa72c8715b001e384e93be30eb0fd6af4db5a5e27c88bb13d72c8ce2a}]}} {{ } {my-hostname-delete-node-qz0vj my-hostname-delete-node- e2e-tests-resize-nodes-57f8b /api/v1/namespaces/e2e-tests-resize-nodes-57f8b/pods/my-hostname-delete-node-qz0vj 8491beb9-ec77-11e6-b608-42010af00003 35110 0 {2017-02-06 06:21:15 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-57f8b\",\"name\":\"my-hostname-delete-node\",\"uid\":\"848fd9d4-ec77-11e6-b608-42010af00003\",\"apiVersion\":\"v1\",\"resourceVersion\":\"35097\"}}\n] [{v1 ReplicationController my-hostname-delete-node 848fd9d4-ec77-11e6-b608-42010af00003 0xc820d996f7}] []} {[{default-token-2tbzb {<nil> <nil> <nil> <nil> <nil> 0xc82243d230 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2tbzb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d997f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-134a870c-gjkw 0xc820b56fc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:21:15 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:21:16 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:21:15 -0800 PST}  }]   10.240.0.2 10.96.0.3 2017-02-06T06:21:15-08:00 [] [{my-hostname-delete-node {<nil> 0xc821cfdb20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://693bc807c730885a448424577b936953909a0c7c201ef6456c91a0e818bc326c}]}}]}",
    }
    failed to wait for pods responding: pod with UID 8491a9c9-ec77-11e6-b608-42010af00003 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-57f8b/pods 35423} [{{ } {my-hostname-delete-node-1f6xz my-hostname-delete-node- e2e-tests-resize-nodes-57f8b /api/v1/namespaces/e2e-tests-resize-nodes-57f8b/pods/my-hostname-delete-node-1f6xz 84918aad-ec77-11e6-b608-42010af00003 35113 0 {2017-02-06 06:21:15 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-57f8b","name":"my-hostname-delete-node","uid":"848fd9d4-ec77-11e6-b608-42010af00003","apiVersion":"v1","resourceVersion":"35097"}}
    ] [{v1 ReplicationController my-hostname-delete-node 848fd9d4-ec77-11e6-b608-42010af00003 0xc820d98fd7}] []} {[{default-token-2tbzb {<nil> <nil> <nil> <nil> <nil> 0xc82243d170 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2tbzb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d990d0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-134a870c-rvjm 0xc820b56e40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:21:15 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:21:16 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:21:15 -0800 PST}  }]   10.240.0.4 10.96.2.4 2017-02-06T06:21:15-08:00 [] [{my-hostname-delete-node {<nil> 0xc821cfdae0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://365ed201f89060292e485cb25b25a4366399f47d14d362418378db9606a50171}]}} {{ } {my-hostname-delete-node-51trh my-hostname-delete-node- e2e-tests-resize-nodes-57f8b /api/v1/namespaces/e2e-tests-resize-nodes-57f8b/pods/my-hostname-delete-node-51trh ce0ea148-ec77-11e6-b608-42010af00003 35286 0 {2017-02-06 06:23:18 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-57f8b","name":"my-hostname-delete-node","uid":"848fd9d4-ec77-11e6-b608-42010af00003","apiVersion":"v1","resourceVersion":"35211"}}
    ] [{v1 ReplicationController my-hostname-delete-node 848fd9d4-ec77-11e6-b608-42010af00003 0xc820d99367}] []} {[{default-token-2tbzb {<nil> <nil> <nil> <nil> <nil> 0xc82243d1d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2tbzb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d99460 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-134a870c-rvjm 0xc820b56f00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:23:18 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:23:19 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:23:18 -0800 PST}  }]   10.240.0.4 10.96.2.8 2017-02-06T06:23:18-08:00 [] [{my-hostname-delete-node {<nil> 0xc821cfdb00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://655d630fa72c8715b001e384e93be30eb0fd6af4db5a5e27c88bb13d72c8ce2a}]}} {{ } {my-hostname-delete-node-qz0vj my-hostname-delete-node- e2e-tests-resize-nodes-57f8b /api/v1/namespaces/e2e-tests-resize-nodes-57f8b/pods/my-hostname-delete-node-qz0vj 8491beb9-ec77-11e6-b608-42010af00003 35110 0 {2017-02-06 06:21:15 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-57f8b","name":"my-hostname-delete-node","uid":"848fd9d4-ec77-11e6-b608-42010af00003","apiVersion":"v1","resourceVersion":"35097"}}
    ] [{v1 ReplicationController my-hostname-delete-node 848fd9d4-ec77-11e6-b608-42010af00003 0xc820d996f7}] []} {[{default-token-2tbzb {<nil> <nil> <nil> <nil> <nil> 0xc82243d230 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2tbzb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820d997f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-134a870c-gjkw 0xc820b56fc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:21:15 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:21:16 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-06 06:21:15 -0800 PST}  }]   10.240.0.2 10.96.0.3 2017-02-06T06:21:15-08:00 [] [{my-hostname-delete-node {<nil> 0xc821cfdb20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://693bc807c730885a448424577b936953909a0c7c201ef6456c91a0e818bc326c}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  6 04:01:41.771: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-134a870c-ckm2:
 container "runtime": expected RSS memory (MB) < 314572800; got 524021760
node gke-bootstrap-e2e-default-pool-134a870c-gjkw:
 container "runtime": expected RSS memory (MB) < 314572800; got 537493504
node gke-bootstrap-e2e-default-pool-134a870c-rvjm:
 container "runtime": expected RSS memory (MB) < 314572800; got 541638656

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/257/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  6 14:14:46.716: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-6426b75a-h6ht:
 container "runtime": expected RSS memory (MB) < 314572800; got 524259328
node gke-bootstrap-e2e-default-pool-6426b75a-mbsp:
 container "runtime": expected RSS memory (MB) < 314572800; got 539533312
node gke-bootstrap-e2e-default-pool-6426b75a-w3wn:
 container "runtime": expected RSS memory (MB) < 314572800; got 531333120

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/258/
Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc8223ee180>: {
        s: "service verification failed for: 10.99.244.212\nexpected [service3-84tqx service3-fw114 service3-lfgzt]\nreceived [service3-fw114 service3-lfgzt]",
    }
    service verification failed for: 10.99.244.212
    expected [service3-84tqx service3-fw114 service3-lfgzt]
    received [service3-fw114 service3-lfgzt]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: list nodes {e2e.go}

exit status 1

Issues about this test specifically: #38667

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  6 19:39:56.765: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-24131328-wc0n:
 container "runtime": expected RSS memory (MB) < 314572800; got 528240640
node gke-bootstrap-e2e-default-pool-24131328-8bvz:
 container "runtime": expected RSS memory (MB) < 314572800; got 528384000
node gke-bootstrap-e2e-default-pool-24131328-kwjw:
 container "runtime": expected RSS memory (MB) < 314572800; got 511414272

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-container_vm-1.5-upgrade-master/259/
Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:725
Feb  7 03:11:56.861: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready

Issues about this test specifically: #26134

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Feb  7 01:24:29.675: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-15d35f18-svfs:
 container "runtime": expected RSS memory (MB) < 314572800; got 524951552
node gke-bootstrap-e2e-default-pool-15d35f18-8ngc:
 container "runtime": expected RSS memory (MB) < 314572800; got 536502272
node gke-bootstrap-e2e-default-pool-15d35f18-m0kw:
 container "runtime": expected RSS memory (MB) < 314572800; got 535994368

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668 #41048

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc821878fe0>: {
        s: "service verification failed for: 10.99.247.136\nexpected [service3-2355h service3-sdhxv service3-xzn1b]\nreceived [service3-sdhxv service3-xzn1b]",
    }
    service verification failed for: 10.99.247.136
    expected [service3-2355h service3-sdhxv service3-xzn1b]
    received [service3-sdhxv service3-xzn1b]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc82178b910>: {
        s: "failed to wait for pods responding: pod with UID 0937650f-ed1f-11e6-87eb-42010af00014 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-w25k6/pods 30994} [{{ } {my-hostname-delete-node-28h9f my-hostname-delete-node- e2e-tests-resize-nodes-w25k6 /api/v1/namespaces/e2e-tests-resize-nodes-w25k6/pods/my-hostname-delete-node-28h9f 093718eb-ed1f-11e6-87eb-42010af00014 30624 0 {2017-02-07 02:20:23 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-w25k6\",\"name\":\"my-hostname-delete-node\",\"uid\":\"09354c2e-ed1f-11e6-87eb-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"30612\"}}\n] [{v1 ReplicationController my-hostname-delete-node 09354c2e-ed1f-11e6-87eb-42010af00014 0xc82109b4e7}] []} {[{default-token-kqbhh {<nil> <nil> <nil> <nil> <nil> 0xc82218b4a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-kqbhh true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82109b680 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-15d35f18-svfs 0xc821ce2fc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:20:23 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:20:24 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:20:23 -0800 PST}  }]   10.240.0.4 10.96.2.3 2017-02-07T02:20:23-08:00 [] [{my-hostname-delete-node {<nil> 0xc8224fc000 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://90e4040d01b91f71415d20eb227b5a3b9b23aa9771125223f9aaad1c00bd6d05}]}} {{ } {my-hostname-delete-node-40gvr my-hostname-delete-node- e2e-tests-resize-nodes-w25k6 /api/v1/namespaces/e2e-tests-resize-nodes-w25k6/pods/my-hostname-delete-node-40gvr 4535f5a4-ed1f-11e6-87eb-42010af00014 30831 0 {2017-02-07 02:22:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-w25k6\",\"name\":\"my-hostname-delete-node\",\"uid\":\"09354c2e-ed1f-11e6-87eb-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"30715\"}}\n] [{v1 ReplicationController my-hostname-delete-node 09354c2e-ed1f-11e6-87eb-42010af00014 0xc82109b937}] []} {[{default-token-kqbhh {<nil> <nil> <nil> <nil> <nil> 0xc82218b500 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-kqbhh true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82109ba30 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-15d35f18-m0kw 0xc821ce3080 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:22:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2017-02-07 02:22:05 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:22:04 -0800 PST}  }]   10.240.0.3 10.96.1.3 2017-02-07T02:22:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc8224fc020 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://7de53ec691c5491ea86075013e8a23641ba43c822fe6081971f7feb2438ad334}]}} {{ } {my-hostname-delete-node-phm6d my-hostname-delete-node- e2e-tests-resize-nodes-w25k6 /api/v1/namespaces/e2e-tests-resize-nodes-w25k6/pods/my-hostname-delete-node-phm6d 09393e2e-ed1f-11e6-87eb-42010af00014 30627 0 {2017-02-07 02:20:23 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-w25k6\",\"name\":\"my-hostname-delete-node\",\"uid\":\"09354c2e-ed1f-11e6-87eb-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"30612\"}}\n] [{v1 ReplicationController my-hostname-delete-node 09354c2e-ed1f-11e6-87eb-42010af00014 0xc82109bcc7}] []} {[{default-token-kqbhh {<nil> <nil> <nil> <nil> <nil> 0xc82218b560 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-kqbhh true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82109bdf0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-15d35f18-svfs 0xc821ce3140 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:20:23 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:20:24 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:20:23 -0800 PST}  }]   10.240.0.4 10.96.2.4 2017-02-07T02:20:23-08:00 [] [{my-hostname-delete-node {<nil> 0xc8224fc040 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://7aa23556a55097195545fa9aa2c38217847b849e31aa394c137c256bd434f34a}]}}]}",
    }
    failed to wait for pods responding: pod with UID 0937650f-ed1f-11e6-87eb-42010af00014 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-w25k6/pods 30994} [{{ } {my-hostname-delete-node-28h9f my-hostname-delete-node- e2e-tests-resize-nodes-w25k6 /api/v1/namespaces/e2e-tests-resize-nodes-w25k6/pods/my-hostname-delete-node-28h9f 093718eb-ed1f-11e6-87eb-42010af00014 30624 0 {2017-02-07 02:20:23 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-w25k6","name":"my-hostname-delete-node","uid":"09354c2e-ed1f-11e6-87eb-42010af00014","apiVersion":"v1","resourceVersion":"30612"}}
    ] [{v1 ReplicationController my-hostname-delete-node 09354c2e-ed1f-11e6-87eb-42010af00014 0xc82109b4e7}] []} {[{default-token-kqbhh {<nil> <nil> <nil> <nil> <nil> 0xc82218b4a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-kqbhh true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82109b680 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-15d35f18-svfs 0xc821ce2fc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:20:23 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:20:24 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:20:23 -0800 PST}  }]   10.240.0.4 10.96.2.3 2017-02-07T02:20:23-08:00 [] [{my-hostname-delete-node {<nil> 0xc8224fc000 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://90e4040d01b91f71415d20eb227b5a3b9b23aa9771125223f9aaad1c00bd6d05}]}} {{ } {my-hostname-delete-node-40gvr my-hostname-delete-node- e2e-tests-resize-nodes-w25k6 /api/v1/namespaces/e2e-tests-resize-nodes-w25k6/pods/my-hostname-delete-node-40gvr 4535f5a4-ed1f-11e6-87eb-42010af00014 30831 0 {2017-02-07 02:22:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-w25k6","name":"my-hostname-delete-node","uid":"09354c2e-ed1f-11e6-87eb-42010af00014","apiVersion":"v1","resourceVersion":"30715"}}
    ] [{v1 ReplicationController my-hostname-delete-node 09354c2e-ed1f-11e6-87eb-42010af00014 0xc82109b937}] []} {[{default-token-kqbhh {<nil> <nil> <nil> <nil> <nil> 0xc82218b500 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-kqbhh true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82109ba30 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-15d35f18-m0kw 0xc821ce3080 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:22:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:22:05 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:22:04 -0800 PST}  }]   10.240.0.3 10.96.1.3 2017-02-07T02:22:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc8224fc020 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://7de53ec691c5491ea86075013e8a23641ba43c822fe6081971f7feb2438ad334}]}} {{ } {my-hostname-delete-node-phm6d my-hostname-delete-node- e2e-tests-resize-nodes-w25k6 /api/v1/namespaces/e2e-tests-resize-nodes-w25k6/pods/my-hostname-delete-node-phm6d 09393e2e-ed1f-11e6-87eb-42010af00014 30627 0 {2017-02-07 02:20:23 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-w25k6","name":"my-hostname-delete-node","uid":"09354c2e-ed1f-11e6-87eb-42010af00014","apiVersion":"v1","resourceVersion":"30612"}}
    ] [{v1 ReplicationController my-hostname-delete-node 09354c2e-ed1f-11e6-87eb-42010af00014 0xc82109bcc7}] []} {[{default-token-kqbhh {<nil> <nil> <nil> <nil> <nil> 0xc82218b560 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-kqbhh true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82109bdf0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-15d35f18-svfs 0xc821ce3140 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:20:23 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:20:24 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-02-07 02:20:23 -0800 PST}  }]   10.240.0.4 10.96.2.4 2017-02-07T02:20:23-08:00 [] [{my-hostname-delete-node {<nil> 0xc8224fc040 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://7aa23556a55097195545fa9aa2c38217847b849e31aa394c137c256bd434f34a}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

@calebamiles modified the milestone: v1.6 Mar 3, 2017
@grodrigues3 added the sig/node and sig/cluster-lifecycle labels Mar 11, 2017
@dchen1107

GKE-specific integration issue. cc/ @roberthbailey @fabioy

@calebamiles modified the milestones: v1.6, v1.5 Mar 13, 2017
@roberthbailey

Closing as obsolete.
