ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master: broken test run #38488

Closed · k8s-github-robot opened this issue Dec 9, 2016 · 192 comments
Labels: kind/flake (Categorizes issue or PR as related to a flaky test.)

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/18/

Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070
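
The "timed out waiting for the condition" message is the generic error a polling wait returns when the expected state never shows up before the deadline; in this test the wait at batch_v1_jobs.go:202 never saw the Job reach a failed state. The Go sketch below shows the general shape of such a wait loop. It is a minimal illustration only, not the e2e suite's actual helper, and the `jobFailed` condition is a hypothetical stand-in.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errTimedOut mirrors the generic message seen in the failures above.
var errTimedOut = errors.New("timed out waiting for the condition")

// pollUntil calls cond every interval until it returns true, returns a
// non-nil error, or the timeout elapses. A minimal sketch of a wait loop,
// not the real e2e wait helper.
func pollUntil(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		done, err := cond()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(interval)
	}
	return errTimedOut
}

func main() {
	// Hypothetical condition: in the V1Job test this would check whether
	// the controller has marked the Job as failed.
	jobFailed := func() (bool, error) { return false, nil }

	if err := pollUntil(time.Second, 5*time.Second, jobFailed); err != nil {
		fmt.Println("Expected error:", err) // prints the timeout message
	}
}
```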

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820c94050>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.248.198 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-69gz3 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.248.53\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-69gz3/services/redis-master\", \"uid\":\"e66d5bc0-bca5-11e6-815f-42010af00013\", \"resourceVersion\":\"3568\", \"creationTimestamp\":\"2016-12-07T17:52:20Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-69gz3\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820fd4d00 exit status 1 <nil> true [0xc8200382a8 0xc8200382d0 0xc8200382f8] [0xc8200382a8 0xc8200382d0 0xc8200382f8] [0xc8200382b8 0xc8200382f0] [0xa97590 0xa97590] 0xc820913bc0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.248.53\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-69gz3/services/redis-master\", \"uid\":\"e66d5bc0-bca5-11e6-815f-42010af00013\", \"resourceVersion\":\"3568\", \"creationTimestamp\":\"2016-12-07T17:52:20Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-69gz3\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.248.198 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-69gz3 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.248.53", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-69gz3/services/redis-master", "uid":"e66d5bc0-bca5-11e6-815f-42010af00013", "resourceVersion":"3568", "creationTimestamp":"2016-12-07T17:52:20Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-69gz3"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820fd4d00 exit status 1 <nil> true [0xc8200382a8 0xc8200382d0 0xc8200382f8] [0xc8200382a8 0xc8200382d0 0xc8200382f8] [0xc8200382b8 0xc8200382f0] [0xa97590 0xa97590] 0xc820913bc0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.248.53", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-69gz3/services/redis-master", "uid":"e66d5bc0-bca5-11e6-815f-42010af00013", "resourceVersion":"3568", "creationTimestamp":"2016-12-07T17:52:20Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-69gz3"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
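
This failure comes from reading `{.spec.ports[0].nodePort}` off a Service that the apiserver returned as type ClusterIP, so the port entry carries no `nodePort` field and kubectl exits with status 1. The sketch below reproduces that distinction outside the suite. It is a rough illustration assuming `kubectl` is on the PATH and a kubeconfig is already configured; the namespace and service name are placeholders (the e2e run used a generated `e2e-tests-kubectl-*` namespace).

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Placeholder namespace/service; assumes kubectl and a working kubeconfig.
	cmd := exec.Command("kubectl", "get", "service", "redis-master",
		"--namespace=default",
		"-o", "jsonpath={.spec.ports[0].nodePort}")

	out, err := cmd.CombinedOutput()
	if err != nil {
		// kubectl exits non-zero when the jsonpath field is missing, which is
		// exactly what happens when the Service is ClusterIP rather than
		// NodePort (ClusterIP ports carry no nodePort).
		if strings.Contains(string(out), "nodePort is not found") {
			fmt.Println("service has no nodePort; it is not of type NodePort")
			return
		}
		fmt.Println("kubectl failed:", err, string(out))
		return
	}
	fmt.Println("nodePort:", strings.TrimSpace(string(out)))
}
```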

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc8216bdb80>: {
        s: "service verification failed for: 10.99.251.87\nexpected [service1-7mgtz service1-ffw4w service1-sz6sh]\nreceived [service1-7mgtz service1-ffw4w]",
    }
    service verification failed for: 10.99.251.87
    expected [service1-7mgtz service1-ffw4w service1-sz6sh]
    received [service1-7mgtz service1-ffw4w]
not to have occurred

Issues about this test specifically: #29514 #38288
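
The "service verification failed" message means the test kept hitting the service address but one expected backend (service1-sz6sh) never answered after kube-proxy was restarted. Below is a rough sketch of that style of check: hit the address repeatedly, collect the hostnames the backends report, and compare against the expected set. It assumes each backend returns its own hostname in the HTTP response body, as the e2e serve-hostname containers do; the address, port, and pod names are placeholders taken from the output above, and this is not the suite's actual verification code.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// verifyService hits url repeatedly and records which backends answered,
// assuming each backend replies with its own hostname in the body.
func verifyService(url string, expected []string, attempts int) error {
	seen := map[string]bool{}
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err != nil {
			continue // transient failures are simply retried
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		seen[strings.TrimSpace(string(body))] = true
	}
	var missing []string
	for _, name := range expected {
		if !seen[name] {
			missing = append(missing, name)
		}
	}
	if len(missing) > 0 {
		return fmt.Errorf("service verification failed for %s: never saw %v", url, missing)
	}
	return nil
}

func main() {
	// Placeholder address and pod names modeled on the failure output above.
	expected := []string{"service1-7mgtz", "service1-ffw4w", "service1-sz6sh"}
	if err := verifyService("http://10.99.251.87:80", expected, 50); err != nil {
		fmt.Println(err)
	}
}
```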

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc821460290>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-07 15:32:30 -0800 PST} FinishedAt:{Time:2016-12-07 15:32:40 -0800 PST} ContainerID:docker://a09e5de8484c324201683065c33efd05c3f37dce71c341613f521955ba3e7e36}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-07 15:32:30 -0800 PST} FinishedAt:{Time:2016-12-07 15:32:40 -0800 PST} ContainerID:docker://a09e5de8484c324201683065c33efd05c3f37dce71c341613f521955ba3e7e36}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  7 12:01:54.980: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-0c03b542-dkwj:
 container "runtime": expected RSS memory (MB) < 314572800; got 522022912
node gke-bootstrap-e2e-default-pool-0c03b542-yizl:
 container "runtime": expected RSS memory (MB) < 314572800; got 515284992
node gke-bootstrap-e2e-default-pool-0c03b542-6ruk:
 container "runtime": expected RSS memory (MB) < 314572800; got 534851584

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
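
Note that despite the "(MB)" label, the numbers in these lines are bytes: the limit 314572800 is 300 MiB, and the observed values (~490-510 MiB) exceed it. A tiny sketch of that comparison, with the readings from this run plugged in as placeholders:

```go
package main

import "fmt"

func main() {
	// Despite the "(MB)" label in the log lines above, the values are bytes:
	// 314572800 bytes = 300 MiB, and the observed readings exceed it.
	const rssLimitBytes = 300 * 1024 * 1024

	// Placeholder readings modeled on the failure output above.
	observed := map[string]uint64{
		"gke-bootstrap-e2e-default-pool-0c03b542-dkwj": 522022912,
		"gke-bootstrap-e2e-default-pool-0c03b542-yizl": 515284992,
		"gke-bootstrap-e2e-default-pool-0c03b542-6ruk": 534851584,
	}

	for node, rss := range observed {
		if rss > rssLimitBytes {
			fmt.Printf("node %s: container \"runtime\": expected RSS < %d bytes; got %d (%.0f MiB)\n",
				node, rssLimitBytes, rss, float64(rss)/(1024*1024))
		}
	}
}
```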

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

@k8s-github-robot added the kind/flake and priority/P2 labels on Dec 9, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/19/

Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:280
0 (0; 226.34313ms): path /api/v1/namespaces/e2e-tests-proxy-nk2r4/pods/proxy-service-gdqx1-76pp2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'EOF'\nTrying to reach: 'http://10.96.2.3:80/'" field:"" > retryAfterSeconds:0  Code:503}

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc82070a6d0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.225.202 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-65rtj -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-65rtj/services/redis-master\", \"uid\":\"182e084d-bd02-11e6-b617-42010af00031\", \"resourceVersion\":\"35453\", \"creationTimestamp\":\"2016-12-08T04:52:17Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-65rtj\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.254.233\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821fc2f60 exit status 1 <nil> true [0xc8200c4338 0xc8200c4368 0xc8200c4388] [0xc8200c4338 0xc8200c4368 0xc8200c4388] [0xc8200c4358 0xc8200c4380] [0xa97590 0xa97590] 0xc826bbbbc0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-65rtj/services/redis-master\", \"uid\":\"182e084d-bd02-11e6-b617-42010af00031\", \"resourceVersion\":\"35453\", \"creationTimestamp\":\"2016-12-08T04:52:17Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-65rtj\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.254.233\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.225.202 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-65rtj -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-65rtj/services/redis-master", "uid":"182e084d-bd02-11e6-b617-42010af00031", "resourceVersion":"35453", "creationTimestamp":"2016-12-08T04:52:17Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-65rtj"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.254.233", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821fc2f60 exit status 1 <nil> true [0xc8200c4338 0xc8200c4368 0xc8200c4388] [0xc8200c4338 0xc8200c4368 0xc8200c4388] [0xc8200c4358 0xc8200c4380] [0xa97590 0xa97590] 0xc826bbbbc0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-65rtj/services/redis-master", "uid":"182e084d-bd02-11e6-b617-42010af00031", "resourceVersion":"35453", "creationTimestamp":"2016-12-08T04:52:17Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-65rtj"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.254.233", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc821abb060>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-07 22:00:32 -0800 PST} FinishedAt:{Time:2016-12-07 22:00:42 -0800 PST} ContainerID:docker://18b27447f8500a7c86cf2891a26f78409ab7569d0a46c571ff9f6bbc471a7c36}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-07 22:00:32 -0800 PST} FinishedAt:{Time:2016-12-07 22:00:42 -0800 PST} ContainerID:docker://18b27447f8500a7c86cf2891a26f78409ab7569d0a46c571ff9f6bbc471a7c36}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  7 19:46:25.347: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-291d9bae-w0uh:
 container "runtime": expected RSS memory (MB) < 314572800; got 522989568
node gke-bootstrap-e2e-default-pool-291d9bae-ka3k:
 container "runtime": expected RSS memory (MB) < 314572800; got 513966080
node gke-bootstrap-e2e-default-pool-291d9bae-vw8t:
 container "runtime": expected RSS memory (MB) < 314572800; got 530903040

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/20/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc82257b260>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-08 03:15:41 -0800 PST} FinishedAt:{Time:2016-12-08 03:15:51 -0800 PST} ContainerID:docker://9f2e01b12898e122fc73296b8eba133237ff7ccc7770582b299a7d4347629e9f}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-08 03:15:41 -0800 PST} FinishedAt:{Time:2016-12-08 03:15:51 -0800 PST} ContainerID:docker://9f2e01b12898e122fc73296b8eba133237ff7ccc7770582b299a7d4347629e9f}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc82101b740>: {
        s: "service verification failed for: 10.99.249.175\nexpected [service2-cz7th service2-ll1qn service2-w5t37]\nreceived [service2-cz7th service2-ll1qn]",
    }
    service verification failed for: 10.99.249.175
    expected [service2-cz7th service2-ll1qn service2-w5t37]
    received [service2-cz7th service2-ll1qn]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  8 01:39:27.500: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-54e9d969-3zqu:
 container "runtime": expected RSS memory (MB) < 314572800; got 539230208
node gke-bootstrap-e2e-default-pool-54e9d969-r28s:
 container "runtime": expected RSS memory (MB) < 314572800; got 521019392
node gke-bootstrap-e2e-default-pool-54e9d969-xzzv:
 container "runtime": expected RSS memory (MB) < 314572800; got 536711168

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821f59130>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.37.143 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-gw098 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.252.201\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-gw098\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-gw098/services/redis-master\", \"uid\":\"76864f8a-bd19-11e6-873e-42010af00026\", \"resourceVersion\":\"10486\", \"creationTimestamp\":\"2016-12-08T07:39:34Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821097720 exit status 1 <nil> true [0xc8210fa028 0xc8210fa040 0xc8210fa058] [0xc8210fa028 0xc8210fa040 0xc8210fa058] [0xc8210fa038 0xc8210fa050] [0xa97590 0xa97590] 0xc821c68960}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.252.201\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-gw098\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-gw098/services/redis-master\", \"uid\":\"76864f8a-bd19-11e6-873e-42010af00026\", \"resourceVersion\":\"10486\", \"creationTimestamp\":\"2016-12-08T07:39:34Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.37.143 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-gw098 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.252.201", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-gw098", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-gw098/services/redis-master", "uid":"76864f8a-bd19-11e6-873e-42010af00026", "resourceVersion":"10486", "creationTimestamp":"2016-12-08T07:39:34Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821097720 exit status 1 <nil> true [0xc8210fa028 0xc8210fa040 0xc8210fa058] [0xc8210fa028 0xc8210fa040 0xc8210fa058] [0xc8210fa038 0xc8210fa050] [0xa97590 0xa97590] 0xc821c68960}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.252.201", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-gw098", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-gw098/services/redis-master", "uid":"76864f8a-bd19-11e6-873e-42010af00026", "resourceVersion":"10486", "creationTimestamp":"2016-12-08T07:39:34Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/21/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821812520>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.20.60 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-wf6g6 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-wf6g6\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-wf6g6/services/redis-master\", \"uid\":\"914c3ffd-bd7d-11e6-893f-42010af00019\", \"resourceVersion\":\"44479\", \"creationTimestamp\":\"2016-12-08T19:36:08Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.7\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc822c93a60 exit status 1 <nil> true [0xc820f520c0 0xc820f520e8 0xc820f52108] [0xc820f520c0 0xc820f520e8 0xc820f52108] [0xc820f520d8 0xc820f520f8] [0xa97590 0xa97590] 0xc821346180}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-wf6g6\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-wf6g6/services/redis-master\", \"uid\":\"914c3ffd-bd7d-11e6-893f-42010af00019\", \"resourceVersion\":\"44479\", \"creationTimestamp\":\"2016-12-08T19:36:08Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.7\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.20.60 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-wf6g6 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-wf6g6", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-wf6g6/services/redis-master", "uid":"914c3ffd-bd7d-11e6-893f-42010af00019", "resourceVersion":"44479", "creationTimestamp":"2016-12-08T19:36:08Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.7", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc822c93a60 exit status 1 <nil> true [0xc820f520c0 0xc820f520e8 0xc820f52108] [0xc820f520c0 0xc820f520e8 0xc820f52108] [0xc820f520d8 0xc820f520f8] [0xa97590 0xa97590] 0xc821346180}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-wf6g6", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-wf6g6/services/redis-master", "uid":"914c3ffd-bd7d-11e6-893f-42010af00019", "resourceVersion":"44479", "creationTimestamp":"2016-12-08T19:36:08Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.7", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  8 05:56:14.093: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-db262ce3-v7qi:
 container "runtime": expected RSS memory (MB) < 314572800; got 518856704
node gke-bootstrap-e2e-default-pool-db262ce3-e8m7:
 container "runtime": expected RSS memory (MB) < 314572800; got 515252224
node gke-bootstrap-e2e-default-pool-db262ce3-l15g:
 container "runtime": expected RSS memory (MB) < 314572800; got 525983744

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/22/

Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  8 17:36:12.818: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-191cfff9-yajf:
 container "runtime": expected RSS memory (MB) < 314572800; got 532774912
node gke-bootstrap-e2e-default-pool-191cfff9-osdo:
 container "runtime": expected RSS memory (MB) < 314572800; got 536485888
node gke-bootstrap-e2e-default-pool-191cfff9-pk4q:
 container "runtime": expected RSS memory (MB) < 314572800; got 524419072

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_etc_hosts.go:56
Dec  8 15:07:53.856: Failed to read from kubectl exec stdout: EOF

Issues about this test specifically: #27023 #34604

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821d48e90>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.20.60 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-l8lfp -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-l8lfp\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-l8lfp/services/redis-master\", \"uid\":\"8b1f3302-bda7-11e6-8281-42010af0001e\", \"resourceVersion\":\"29136\", \"creationTimestamp\":\"2016-12-09T00:36:37Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.99.251.0\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8221cee40 exit status 1 <nil> true [0xc820fd2620 0xc820fd2638 0xc820fd2660] [0xc820fd2620 0xc820fd2638 0xc820fd2660] [0xc820fd2630 0xc820fd2650] [0xa97590 0xa97590] 0xc822861ce0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-l8lfp\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-l8lfp/services/redis-master\", \"uid\":\"8b1f3302-bda7-11e6-8281-42010af0001e\", \"resourceVersion\":\"29136\", \"creationTimestamp\":\"2016-12-09T00:36:37Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.99.251.0\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.20.60 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-l8lfp -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-l8lfp", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-l8lfp/services/redis-master", "uid":"8b1f3302-bda7-11e6-8281-42010af0001e", "resourceVersion":"29136", "creationTimestamp":"2016-12-09T00:36:37Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.99.251.0", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8221cee40 exit status 1 <nil> true [0xc820fd2620 0xc820fd2638 0xc820fd2660] [0xc820fd2620 0xc820fd2638 0xc820fd2660] [0xc820fd2630 0xc820fd2650] [0xa97590 0xa97590] 0xc822861ce0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-l8lfp", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-l8lfp/services/redis-master", "uid":"8b1f3302-bda7-11e6-8281-42010af0001e", "resourceVersion":"29136", "creationTimestamp":"2016-12-09T00:36:37Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.99.251.0", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/23/

Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Dec  8 23:49:27.417: Node gke-bootstrap-e2e-default-pool-817914b2-bv7l did not become ready within 2m0s

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820786c60>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.25.66 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-zz331 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.246.202\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-zz331\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-zz331/services/redis-master\", \"uid\":\"e5c44dce-bdb9-11e6-b995-42010af00035\", \"resourceVersion\":\"1172\", \"creationTimestamp\":\"2016-12-09T02:48:00Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820715b00 exit status 1 <nil> true [0xc82087e0a8 0xc82087e0d8 0xc82087e0f8] [0xc82087e0a8 0xc82087e0d8 0xc82087e0f8] [0xc82087e0d0 0xc82087e0f0] [0xa97590 0xa97590] 0xc820846660}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.246.202\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-zz331\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-zz331/services/redis-master\", \"uid\":\"e5c44dce-bdb9-11e6-b995-42010af00035\", \"resourceVersion\":\"1172\", \"creationTimestamp\":\"2016-12-09T02:48:00Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.25.66 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-zz331 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.246.202", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-zz331", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-zz331/services/redis-master", "uid":"e5c44dce-bdb9-11e6-b995-42010af00035", "resourceVersion":"1172", "creationTimestamp":"2016-12-09T02:48:00Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820715b00 exit status 1 <nil> true [0xc82087e0a8 0xc82087e0d8 0xc82087e0f8] [0xc82087e0a8 0xc82087e0d8 0xc82087e0f8] [0xc82087e0d0 0xc82087e0f0] [0xa97590 0xa97590] 0xc820846660}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.246.202", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-zz331", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-zz331/services/redis-master", "uid":"e5c44dce-bdb9-11e6-b995-42010af00035", "resourceVersion":"1172", "creationTimestamp":"2016-12-09T02:48:00Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
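
For context on the recurring kubectl apply failure: the test reads the nodePort back with the jsonpath template {.spec.ports[0].nodePort}, but the Service object dumped above is type ClusterIP and its port entry has no nodePort key, so the lookup has nothing to resolve. A minimal standalone sketch of that lookup (a plain map walk, not the real kubectl jsonpath engine; the map literal just mirrors the dump above):

```go
package main

import "fmt"

// lookupNodePort mimics the jsonpath query {.spec.ports[0].nodePort} against a
// decoded Service object. It is a plain map walk, not the jsonpath engine.
func lookupNodePort(svc map[string]interface{}) (interface{}, bool) {
	spec, ok := svc["spec"].(map[string]interface{})
	if !ok {
		return nil, false
	}
	ports, ok := spec["ports"].([]interface{})
	if !ok || len(ports) == 0 {
		return nil, false
	}
	port, ok := ports[0].(map[string]interface{})
	if !ok {
		return nil, false
	}
	np, ok := port["nodePort"]
	return np, ok
}

func main() {
	// A ClusterIP service, as in the object dumps above: the port entry has
	// protocol/port/targetPort but no nodePort key.
	svc := map[string]interface{}{
		"spec": map[string]interface{}{
			"type": "ClusterIP",
			"ports": []interface{}{
				map[string]interface{}{"protocol": "TCP", "port": 6379, "targetPort": "redis-server"},
			},
		},
	}
	if _, ok := lookupNodePort(svc); !ok {
		fmt.Println("nodePort is not found") // same condition kubectl reports above
	}
}
```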

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954
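
The "timed out waiting for the condition" errors here and in the Job/V1Job failures are the generic message produced when a poll-until-timeout loop gives up before its condition becomes true. A self-contained sketch of that pattern (standard library only, not the e2e framework's wait helpers):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errWaitTimeout matches the message seen in the failures above.
var errWaitTimeout = errors.New("timed out waiting for the condition")

// pollUntil calls condition every interval until it returns true or the
// timeout elapses, in which case errWaitTimeout is returned.
func pollUntil(interval, timeout time.Duration, condition func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		done, err := condition()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(interval)
	}
	return errWaitTimeout
}

func main() {
	// Example: a condition that never becomes true, so the poll times out.
	err := pollUntil(100*time.Millisecond, 500*time.Millisecond, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err) // timed out waiting for the condition
}
```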

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  8 19:15:42.204: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-817914b2-q8b8:
 container "runtime": expected RSS memory (MB) < 314572800; got 534859776
node gke-bootstrap-e2e-default-pool-817914b2-u6sz:
 container "runtime": expected RSS memory (MB) < 314572800; got 516075520
node gke-bootstrap-e2e-default-pool-817914b2-y74t:
 container "runtime": expected RSS memory (MB) < 314572800; got 521330688

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
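
One note on reading these numbers: the limit 314572800 and the reported values appear to be bytes despite the "(MB)" label, i.e. the allowed runtime RSS is 300 MiB and the observed values of roughly 490 to 510 MiB exceed it. A small sketch of the check with an explicit conversion (illustrative helper, not the kubelet_perf.go code):

```go
package main

import "fmt"

const rssLimitBytes = 314572800 // 300 MiB (300 * 1024 * 1024)

// checkRSS reports whether a container's resident set size is within the limit.
func checkRSS(node, container string, rssBytes uint64) error {
	if rssBytes <= rssLimitBytes {
		return nil
	}
	return fmt.Errorf("node %s: container %q: expected RSS memory < %d bytes (%.0f MiB); got %d bytes (%.0f MiB)",
		node, container, uint64(rssLimitBytes), float64(rssLimitBytes)/(1024*1024),
		rssBytes, float64(rssBytes)/(1024*1024))
}

func main() {
	// One of the measurements from the run above.
	if err := checkRSS("gke-bootstrap-e2e-default-pool-817914b2-q8b8", "runtime", 534859776); err != nil {
		fmt.Println(err)
	}
}
```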

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/24/

Multiple broken tests:

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected
    <bool>: false
to be true

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc820a6cf70>: {
        s: "service verification failed for: 10.99.244.119\nexpected [service2-9pnt3 service2-c60d6 service2-kp583]\nreceived [service2-c60d6 service2-kp583]",
    }
    service verification failed for: 10.99.244.119
    expected [service2-9pnt3 service2-c60d6 service2-kp583]
    received [service2-c60d6 service2-kp583]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
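
The service verification failures list which backend pods ever answered through the service IP versus which were expected; here service2-9pnt3 never responded after the apiserver restart. A minimal sketch of that expected-vs-received comparison (plain Go, not the e2e service test helper):

```go
package main

import (
	"fmt"
	"sort"
)

// verifyEndpoints compares the pod names expected behind a service IP with the
// pod names that actually answered, and reports any that were never seen.
func verifyEndpoints(serviceIP string, expected, received []string) error {
	seen := make(map[string]bool, len(received))
	for _, name := range received {
		seen[name] = true
	}
	var missing []string
	for _, name := range expected {
		if !seen[name] {
			missing = append(missing, name)
		}
	}
	if len(missing) == 0 {
		return nil
	}
	sort.Strings(missing)
	return fmt.Errorf("service verification failed for: %s\nexpected %v\nreceived %v\nmissing %v",
		serviceIP, expected, received, missing)
}

func main() {
	err := verifyEndpoints("10.99.244.119",
		[]string{"service2-9pnt3", "service2-c60d6", "service2-kp583"},
		[]string{"service2-c60d6", "service2-kp583"})
	fmt.Println(err)
}
```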

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  9 03:52:30.523: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-7ff02876-ghjd:
 container "runtime": expected RSS memory (MB) < 314572800; got 541073408
node gke-bootstrap-e2e-default-pool-7ff02876-ulvr:
 container "runtime": expected RSS memory (MB) < 314572800; got 513613824
node gke-bootstrap-e2e-default-pool-7ff02876-z9vu:
 container "runtime": expected RSS memory (MB) < 314572800; got 532889600

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc8208747f0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.63.200 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-36b4d -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-36b4d\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-36b4d/services/redis-master\", \"uid\":\"ce6434ef-be02-11e6-8028-42010af00031\", \"resourceVersion\":\"17003\", \"creationTimestamp\":\"2016-12-09T11:29:54Z\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.246.47\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8215afb60 exit status 1 <nil> true [0xc821e963f8 0xc821e96410 0xc821e96428] [0xc821e963f8 0xc821e96410 0xc821e96428] [0xc821e96408 0xc821e96420] [0xa97590 0xa97590] 0xc821a4d080}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-36b4d\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-36b4d/services/redis-master\", \"uid\":\"ce6434ef-be02-11e6-8028-42010af00031\", \"resourceVersion\":\"17003\", \"creationTimestamp\":\"2016-12-09T11:29:54Z\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.246.47\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.63.200 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-36b4d -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-36b4d", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-36b4d/services/redis-master", "uid":"ce6434ef-be02-11e6-8028-42010af00031", "resourceVersion":"17003", "creationTimestamp":"2016-12-09T11:29:54Z"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.246.47", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8215afb60 exit status 1 <nil> true [0xc821e963f8 0xc821e96410 0xc821e96428] [0xc821e963f8 0xc821e96410 0xc821e96428] [0xc821e96408 0xc821e96420] [0xa97590 0xa97590] 0xc821a4d080}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-36b4d", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-36b4d/services/redis-master", "uid":"ce6434ef-be02-11e6-8028-42010af00031", "resourceVersion":"17003", "creationTimestamp":"2016-12-09T11:29:54Z"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.246.47", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc8211815a0>: {
        s: "service verification failed for: 10.99.253.238\nexpected [service3-dbwdh service3-fxv3w service3-vvs5t]\nreceived [service3-dbwdh service3-fxv3w]",
    }
    service verification failed for: 10.99.253.238
    expected [service3-dbwdh service3-fxv3w service3-vvs5t]
    received [service3-dbwdh service3-fxv3w]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/25/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc82025cc80>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.69.160 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-rkgz7 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-rkgz7\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-rkgz7/services/redis-master\", \"uid\":\"c0997e2e-be31-11e6-bb1f-42010af00031\", \"resourceVersion\":\"10077\", \"creationTimestamp\":\"2016-12-09T17:05:57Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.255.146\", \"type\":\"ClusterIP\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8209160a0 exit status 1 <nil> true [0xc820122a10 0xc820122a48 0xc820122a60] [0xc820122a10 0xc820122a48 0xc820122a60] [0xc820122a40 0xc820122a58] [0xa97590 0xa97590] 0xc820fa82a0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-rkgz7\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-rkgz7/services/redis-master\", \"uid\":\"c0997e2e-be31-11e6-bb1f-42010af00031\", \"resourceVersion\":\"10077\", \"creationTimestamp\":\"2016-12-09T17:05:57Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.255.146\", \"type\":\"ClusterIP\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.69.160 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-rkgz7 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-rkgz7", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-rkgz7/services/redis-master", "uid":"c0997e2e-be31-11e6-bb1f-42010af00031", "resourceVersion":"10077", "creationTimestamp":"2016-12-09T17:05:57Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.255.146", "type":"ClusterIP"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8209160a0 exit status 1 <nil> true [0xc820122a10 0xc820122a48 0xc820122a60] [0xc820122a10 0xc820122a48 0xc820122a60] [0xc820122a40 0xc820122a58] [0xa97590 0xa97590] 0xc820fa82a0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-rkgz7", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-rkgz7/services/redis-master", "uid":"c0997e2e-be31-11e6-bb1f-42010af00031", "resourceVersion":"10077", "creationTimestamp":"2016-12-09T17:05:57Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.255.146", "type":"ClusterIP"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d80b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d80b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  9 10:52:15.050: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-3933a7e6-9mdi:
 container "runtime": expected RSS memory (MB) < 314572800; got 542736384
node gke-bootstrap-e2e-default-pool-3933a7e6-m399:
 container "runtime": expected RSS memory (MB) < 314572800; got 515252224
node gke-bootstrap-e2e-default-pool-3933a7e6-vjco:
 container "runtime": expected RSS memory (MB) < 314572800; got 529563648

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc82171d3c0>: {
        s: "service verification failed for: 10.99.254.124\nexpected [service2-0rh3h service2-2qbvk service2-kllkl]\nreceived [service2-2qbvk service2-kllkl]",
    }
    service verification failed for: 10.99.254.124
    expected [service2-0rh3h service2-2qbvk service2-kllkl]
    received [service2-2qbvk service2-kllkl]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d80b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc821d7eaa0>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-09 11:22:14 -0800 PST} FinishedAt:{Time:2016-12-09 11:22:24 -0800 PST} ContainerID:docker://f504125f2b28ea5b7cf0f1b4018a3276ebf48aaf064e170cd0968b7d2390ee02}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-09 11:22:14 -0800 PST} FinishedAt:{Time:2016-12-09 11:22:24 -0800 PST} ContainerID:docker://f504125f2b28ea5b7cf0f1b4018a3276ebf48aaf064e170cd0968b7d2390ee02}
not to have occurred

Issues about this test specifically: #30131 #31402

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/26/

Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc822592ca0>: {
        s: "failed to wait for pods responding: pod with UID bd7ecb87-be77-11e6-a9d7-42010af0001d is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-v9t2s/pods 26022} [{{ } {my-hostname-delete-node-fsm0j my-hostname-delete-node- e2e-tests-resize-nodes-v9t2s /api/v1/namespaces/e2e-tests-resize-nodes-v9t2s/pods/my-hostname-delete-node-fsm0j ee952665-be77-11e6-a9d7-42010af0001d 25769 0 {2016-12-09 17:28:19 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-v9t2s\",\"name\":\"my-hostname-delete-node\",\"uid\":\"bd7cc8c1-be77-11e6-a9d7-42010af0001d\",\"apiVersion\":\"v1\",\"resourceVersion\":\"25759\"}}\n] [{v1 ReplicationController my-hostname-delete-node bd7cc8c1-be77-11e6-a9d7-42010af0001d 0xc822315167}] []} {[{default-token-3vpq8 {<nil> <nil> <nil> <nil> <nil> 0xc8218d4a50 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3vpq8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822315260 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-03b253eb-hmux 0xc82231b100 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:28:19 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:28:20 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:28:19 -0800 PST}  }]   10.240.0.2 10.96.0.8 2016-12-09T17:28:19-08:00 [] [{my-hostname-delete-node {<nil> 0xc8217ba2a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://0ea3efe86490e769d45595589c778f20ce2c967941296771165dfd7b79d93b73}]}} {{ } {my-hostname-delete-node-hzwn4 my-hostname-delete-node- e2e-tests-resize-nodes-v9t2s /api/v1/namespaces/e2e-tests-resize-nodes-v9t2s/pods/my-hostname-delete-node-hzwn4 bd7e9ea8-be77-11e6-a9d7-42010af0001d 25637 0 {2016-12-09 17:26:57 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-v9t2s\",\"name\":\"my-hostname-delete-node\",\"uid\":\"bd7cc8c1-be77-11e6-a9d7-42010af0001d\",\"apiVersion\":\"v1\",\"resourceVersion\":\"25626\"}}\n] [{v1 ReplicationController my-hostname-delete-node bd7cc8c1-be77-11e6-a9d7-42010af0001d 0xc8223155b7}] []} {[{default-token-3vpq8 {<nil> <nil> <nil> <nil> <nil> 0xc8218d4ab0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3vpq8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822315730 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-03b253eb-hmux 0xc82231b240 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:26:57 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-09 17:26:58 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:26:57 -0800 PST}  }]   10.240.0.2 10.96.0.3 2016-12-09T17:26:57-08:00 [] [{my-hostname-delete-node {<nil> 0xc8217ba2c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://859f830113723e21dcf24b722237e369824c04f654e0300a87e6d486378b0d60}]}} {{ } {my-hostname-delete-node-pkbcn my-hostname-delete-node- e2e-tests-resize-nodes-v9t2s /api/v1/namespaces/e2e-tests-resize-nodes-v9t2s/pods/my-hostname-delete-node-pkbcn ee8f9694-be77-11e6-a9d7-42010af0001d 25767 0 {2016-12-09 17:28:19 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-v9t2s\",\"name\":\"my-hostname-delete-node\",\"uid\":\"bd7cc8c1-be77-11e6-a9d7-42010af0001d\",\"apiVersion\":\"v1\",\"resourceVersion\":\"25703\"}}\n] [{v1 ReplicationController my-hostname-delete-node bd7cc8c1-be77-11e6-a9d7-42010af0001d 0xc822315af7}] []} {[{default-token-3vpq8 {<nil> <nil> <nil> <nil> <nil> 0xc8218d4b10 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3vpq8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822315c60 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-03b253eb-tqiw 0xc82231b3c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:28:19 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:28:20 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:28:19 -0800 PST}  }]   10.240.0.3 10.96.1.3 2016-12-09T17:28:19-08:00 [] [{my-hostname-delete-node {<nil> 0xc8217ba2e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://40e98eafb62a2f6b224657cb8b0de77eee0419eaf746c85b66b540badea6ea29}]}}]}",
    }
    failed to wait for pods responding: pod with UID bd7ecb87-be77-11e6-a9d7-42010af0001d is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-v9t2s/pods 26022} [{{ } {my-hostname-delete-node-fsm0j my-hostname-delete-node- e2e-tests-resize-nodes-v9t2s /api/v1/namespaces/e2e-tests-resize-nodes-v9t2s/pods/my-hostname-delete-node-fsm0j ee952665-be77-11e6-a9d7-42010af0001d 25769 0 {2016-12-09 17:28:19 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-v9t2s","name":"my-hostname-delete-node","uid":"bd7cc8c1-be77-11e6-a9d7-42010af0001d","apiVersion":"v1","resourceVersion":"25759"}}
    ] [{v1 ReplicationController my-hostname-delete-node bd7cc8c1-be77-11e6-a9d7-42010af0001d 0xc822315167}] []} {[{default-token-3vpq8 {<nil> <nil> <nil> <nil> <nil> 0xc8218d4a50 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3vpq8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822315260 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-03b253eb-hmux 0xc82231b100 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:28:19 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:28:20 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:28:19 -0800 PST}  }]   10.240.0.2 10.96.0.8 2016-12-09T17:28:19-08:00 [] [{my-hostname-delete-node {<nil> 0xc8217ba2a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://0ea3efe86490e769d45595589c778f20ce2c967941296771165dfd7b79d93b73}]}} {{ } {my-hostname-delete-node-hzwn4 my-hostname-delete-node- e2e-tests-resize-nodes-v9t2s /api/v1/namespaces/e2e-tests-resize-nodes-v9t2s/pods/my-hostname-delete-node-hzwn4 bd7e9ea8-be77-11e6-a9d7-42010af0001d 25637 0 {2016-12-09 17:26:57 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-v9t2s","name":"my-hostname-delete-node","uid":"bd7cc8c1-be77-11e6-a9d7-42010af0001d","apiVersion":"v1","resourceVersion":"25626"}}
    ] [{v1 ReplicationController my-hostname-delete-node bd7cc8c1-be77-11e6-a9d7-42010af0001d 0xc8223155b7}] []} {[{default-token-3vpq8 {<nil> <nil> <nil> <nil> <nil> 0xc8218d4ab0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3vpq8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822315730 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-03b253eb-hmux 0xc82231b240 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:26:57 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:26:58 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:26:57 -0800 PST}  }]   10.240.0.2 10.96.0.3 2016-12-09T17:26:57-08:00 [] [{my-hostname-delete-node {<nil> 0xc8217ba2c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://859f830113723e21dcf24b722237e369824c04f654e0300a87e6d486378b0d60}]}} {{ } {my-hostname-delete-node-pkbcn my-hostname-delete-node- e2e-tests-resize-nodes-v9t2s /api/v1/namespaces/e2e-tests-resize-nodes-v9t2s/pods/my-hostname-delete-node-pkbcn ee8f9694-be77-11e6-a9d7-42010af0001d 25767 0 {2016-12-09 17:28:19 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-v9t2s","name":"my-hostname-delete-node","uid":"bd7cc8c1-be77-11e6-a9d7-42010af0001d","apiVersion":"v1","resourceVersion":"25703"}}
    ] [{v1 ReplicationController my-hostname-delete-node bd7cc8c1-be77-11e6-a9d7-42010af0001d 0xc822315af7}] []} {[{default-token-3vpq8 {<nil> <nil> <nil> <nil> <nil> 0xc8218d4b10 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3vpq8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822315c60 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-03b253eb-tqiw 0xc82231b3c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:28:19 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:28:20 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-09 17:28:19 -0800 PST}  }]   10.240.0.3 10.96.1.3 2016-12-09T17:28:19-08:00 [] [{my-hostname-delete-node {<nil> 0xc8217ba2e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://40e98eafb62a2f6b224657cb8b0de77eee0419eaf746c85b66b540badea6ea29}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
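
The resize failure above boils down to a membership check: the test was waiting for the pod with UID bd7ecb87-be77-11e6-a9d7-42010af0001d to respond, but after the node deletion that UID no longer appears among the replication controller's current pods (fsm0j, hzwn4, pkbcn), so the pod was evidently replaced. A stripped-down sketch of that check (hypothetical podRef type, not the resize_nodes.go helper):

```go
package main

import "fmt"

// podRef is a minimal stand-in for the pod metadata the test cares about.
type podRef struct {
	Name string
	UID  string
}

// stillMember reports whether the pod UID the test was waiting on is still
// part of the currently listed replica set pods.
func stillMember(waitingOn string, current []podRef) bool {
	for _, p := range current {
		if p.UID == waitingOn {
			return true
		}
	}
	return false
}

func main() {
	// Current pods taken from the replica set dump above.
	current := []podRef{
		{Name: "my-hostname-delete-node-fsm0j", UID: "ee952665-be77-11e6-a9d7-42010af0001d"},
		{Name: "my-hostname-delete-node-hzwn4", UID: "bd7e9ea8-be77-11e6-a9d7-42010af0001d"},
		{Name: "my-hostname-delete-node-pkbcn", UID: "ee8f9694-be77-11e6-a9d7-42010af0001d"},
	}
	if !stillMember("bd7ecb87-be77-11e6-a9d7-42010af0001d", current) {
		fmt.Println("pod is no longer a member of the replica set; it must have been restarted")
	}
}
```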

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820e0cb60>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.219.124 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-7zcbs -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"uid\":\"bb44779b-be65-11e6-a9d7-42010af0001d\", \"resourceVersion\":\"6897\", \"creationTimestamp\":\"2016-12-09T23:18:02Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-7zcbs\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-7zcbs/services/redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.252.209\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821721280 exit status 1 <nil> true [0xc820094158 0xc820094268 0xc820094290] [0xc820094158 0xc820094268 0xc820094290] [0xc820094168 0xc820094280] [0xa97590 0xa97590] 0xc821cf28a0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"uid\":\"bb44779b-be65-11e6-a9d7-42010af0001d\", \"resourceVersion\":\"6897\", \"creationTimestamp\":\"2016-12-09T23:18:02Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-7zcbs\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-7zcbs/services/redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.252.209\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.219.124 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-7zcbs -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"uid":"bb44779b-be65-11e6-a9d7-42010af0001d", "resourceVersion":"6897", "creationTimestamp":"2016-12-09T23:18:02Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-7zcbs", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-7zcbs/services/redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.252.209"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821721280 exit status 1 <nil> true [0xc820094158 0xc820094268 0xc820094290] [0xc820094158 0xc820094268 0xc820094290] [0xc820094168 0xc820094280] [0xa97590 0xa97590] 0xc821cf28a0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"uid":"bb44779b-be65-11e6-a9d7-42010af0001d", "resourceVersion":"6897", "creationTimestamp":"2016-12-09T23:18:02Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-7zcbs", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-7zcbs/services/redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.252.209"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  9 17:23:37.806: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-03b253eb-tqiw:
 container "runtime": expected RSS memory (MB) < 314572800; got 537624576
node gke-bootstrap-e2e-default-pool-03b253eb-gebh:
 container "runtime": expected RSS memory (MB) < 314572800; got 509964288
node gke-bootstrap-e2e-default-pool-03b253eb-hmux:
 container "runtime": expected RSS memory (MB) < 314572800; got 537030656

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:47
Dec  9 16:56:26.199: Did not get expected responses within the timeout period of 120.00 seconds.

Issues about this test specifically: #32023

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/28/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 10 00:17:58.804: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-e398c20c-l7f1:
 container "runtime": expected RSS memory (MB) < 314572800; got 519462912
node gke-bootstrap-e2e-default-pool-e398c20c-xg3i:
 container "runtime": expected RSS memory (MB) < 314572800; got 526163968
node gke-bootstrap-e2e-default-pool-e398c20c-e4ki:
 container "runtime": expected RSS memory (MB) < 314572800; got 523964416

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c5060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c5060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc8211e26f0>: {
        s: "service verification failed for: 10.99.240.55\nexpected [service2-33rwz service2-5lp07 service2-p2g0n]\nreceived [service2-33rwz service2-p2g0n]",
    }
    service verification failed for: 10.99.240.55
    expected [service2-33rwz service2-5lp07 service2-p2g0n]
    received [service2-33rwz service2-p2g0n]
not to have occurred

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820932150>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.11.103 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-91761 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-91761\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-91761/services/redis-master\", \"uid\":\"64362482-beb5-11e6-adaa-42010af00037\", \"resourceVersion\":\"22239\", \"creationTimestamp\":\"2016-12-10T08:48:16Z\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.244.213\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820a51820 exit status 1 <nil> true [0xc820036308 0xc820036380 0xc8200363a8] [0xc820036308 0xc820036380 0xc8200363a8] [0xc820036378 0xc820036398] [0xa97590 0xa97590] 0xc8213e66c0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-91761\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-91761/services/redis-master\", \"uid\":\"64362482-beb5-11e6-adaa-42010af00037\", \"resourceVersion\":\"22239\", \"creationTimestamp\":\"2016-12-10T08:48:16Z\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.244.213\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.11.103 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-91761 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-91761", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-91761/services/redis-master", "uid":"64362482-beb5-11e6-adaa-42010af00037", "resourceVersion":"22239", "creationTimestamp":"2016-12-10T08:48:16Z"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.244.213", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820a51820 exit status 1 <nil> true [0xc820036308 0xc820036380 0xc8200363a8] [0xc820036308 0xc820036380 0xc8200363a8] [0xc820036378 0xc820036398] [0xa97590 0xa97590] 0xc8213e66c0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-91761", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-91761/services/redis-master", "uid":"64362482-beb5-11e6-adaa-42010af00037", "resourceVersion":"22239", "creationTimestamp":"2016-12-10T08:48:16Z"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.244.213", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/29/

Multiple broken tests:

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
+a3ed9a764bed511e69b6f42010af0003  us-central1

Issues about this test specifically: #33373 #33416 #34060
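
DiffResources compares the cloud resource listings taken before and after the run and reports anything left over; here a single entry in us-central1 leaked. A toy sketch of that before/after diff (the example lists are made up, not the e2e.go implementation):

```go
package main

import (
	"fmt"
	"sort"
)

// leaked returns resources present after the run that were not present before.
func leaked(before, after []string) []string {
	had := make(map[string]bool, len(before))
	for _, r := range before {
		had[r] = true
	}
	var extra []string
	for _, r := range after {
		if !had[r] {
			extra = append(extra, r)
		}
	}
	sort.Strings(extra)
	return extra
}

func main() {
	// Illustrative listings only; the real check diffs gcloud resource dumps.
	before := []string{"bootstrap-e2e-default-route  us-central1"}
	after := []string{
		"bootstrap-e2e-default-route  us-central1",
		"a3ed9a764bed511e69b6f42010af0003  us-central1",
	}
	for _, r := range leaked(before, after) {
		fmt.Println("+" + r) // leaked resource, printed in diff style
	}
}
```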

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 10 10:19:32.893: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-fcfc109b-ptcr:
 container "runtime": expected RSS memory (MB) < 314572800; got 515723264
node gke-bootstrap-e2e-default-pool-fcfc109b-sxb6:
 container "runtime": expected RSS memory (MB) < 314572800; got 536481792
node gke-bootstrap-e2e-default-pool-fcfc109b-wn12:
 container "runtime": expected RSS memory (MB) < 314572800; got 532217856

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820a3c5c0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.11.103 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-0chwx -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-0chwx\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-0chwx/services/redis-master\", \"uid\":\"91823c4e-bee7-11e6-a89e-42010af00035\", \"resourceVersion\":\"13422\", \"creationTimestamp\":\"2016-12-10T14:47:26Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.240.45\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8207a5a20 exit status 1 <nil> true [0xc8211b4738 0xc8211b4750 0xc8211b4768] [0xc8211b4738 0xc8211b4750 0xc8211b4768] [0xc8211b4748 0xc8211b4760] [0xa97590 0xa97590] 0xc821157800}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-0chwx\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-0chwx/services/redis-master\", \"uid\":\"91823c4e-bee7-11e6-a89e-42010af00035\", \"resourceVersion\":\"13422\", \"creationTimestamp\":\"2016-12-10T14:47:26Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.240.45\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.11.103 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-0chwx -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-0chwx", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-0chwx/services/redis-master", "uid":"91823c4e-bee7-11e6-a89e-42010af00035", "resourceVersion":"13422", "creationTimestamp":"2016-12-10T14:47:26Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.240.45", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8207a5a20 exit status 1 <nil> true [0xc8211b4738 0xc8211b4750 0xc8211b4768] [0xc8211b4738 0xc8211b4750 0xc8211b4768] [0xc8211b4748 0xc8211b4760] [0xa97590 0xa97590] 0xc821157800}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-0chwx", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-0chwx/services/redis-master", "uid":"91823c4e-bee7-11e6-a89e-42010af00035", "resourceVersion":"13422", "creationTimestamp":"2016-12-10T14:47:26Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.240.45", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/30/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 10 16:24:10.079: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-ff5aa7d0-bh6l:
 container "runtime": expected RSS memory (MB) < 314572800; got 511655936
node gke-bootstrap-e2e-default-pool-ff5aa7d0-gxr4:
 container "runtime": expected RSS memory (MB) < 314572800; got 528740352
node gke-bootstrap-e2e-default-pool-ff5aa7d0-t5fk:
 container "runtime": expected RSS memory (MB) < 314572800; got 529129472

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:281
Expected
    <bool>: false
to be true

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc8223f5cb0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.58.18 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-jv0c1 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-jv0c1\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-jv0c1/services/redis-master\", \"uid\":\"5e9493ed-bf30-11e6-8778-42010af00039\", \"resourceVersion\":\"25355\", \"creationTimestamp\":\"2016-12-10T23:28:34Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.241.71\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820092fe0 exit status 1 <nil> true [0xc82169eb18 0xc82169eb30 0xc82169eb48] [0xc82169eb18 0xc82169eb30 0xc82169eb48] [0xc82169eb28 0xc82169eb40] [0xa97590 0xa97590] 0xc82238a720}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-jv0c1\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-jv0c1/services/redis-master\", \"uid\":\"5e9493ed-bf30-11e6-8778-42010af00039\", \"resourceVersion\":\"25355\", \"creationTimestamp\":\"2016-12-10T23:28:34Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.241.71\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.58.18 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-jv0c1 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-jv0c1", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-jv0c1/services/redis-master", "uid":"5e9493ed-bf30-11e6-8778-42010af00039", "resourceVersion":"25355", "creationTimestamp":"2016-12-10T23:28:34Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.241.71", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820092fe0 exit status 1 <nil> true [0xc82169eb18 0xc82169eb30 0xc82169eb48] [0xc82169eb18 0xc82169eb30 0xc82169eb48] [0xc82169eb28 0xc82169eb40] [0xa97590 0xa97590] 0xc82238a720}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-jv0c1", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-jv0c1/services/redis-master", "uid":"5e9493ed-bf30-11e6-8778-42010af00039", "resourceVersion":"25355", "creationTimestamp":"2016-12-10T23:28:34Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.241.71", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/31/

Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007bf80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007bf80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Dec 10 17:45:10.156: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2562

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007bf80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820c125d0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.58.18 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-57f1n -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-11T04:22:48Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-57f1n\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-57f1n/services/redis-master\", \"uid\":\"78cab7d2-bf59-11e6-b6ce-42010af0001c\", \"resourceVersion\":\"18425\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.73\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820ce95e0 exit status 1 <nil> true [0xc820038260 0xc820038298 0xc8200382c0] [0xc820038260 0xc820038298 0xc8200382c0] [0xc820038280 0xc8200382b0] [0xa97590 0xa97590] 0xc8217405a0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-11T04:22:48Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-57f1n\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-57f1n/services/redis-master\", \"uid\":\"78cab7d2-bf59-11e6-b6ce-42010af0001c\", \"resourceVersion\":\"18425\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.251.73\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.58.18 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-57f1n -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-11T04:22:48Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-57f1n", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-57f1n/services/redis-master", "uid":"78cab7d2-bf59-11e6-b6ce-42010af0001c", "resourceVersion":"18425"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.73", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820ce95e0 exit status 1 <nil> true [0xc820038260 0xc820038298 0xc8200382c0] [0xc820038260 0xc820038298 0xc8200382c0] [0xc820038280 0xc8200382b0] [0xa97590 0xa97590] 0xc8217405a0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-11T04:22:48Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-57f1n", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-57f1n/services/redis-master", "uid":"78cab7d2-bf59-11e6-b6ce-42010af0001c", "resourceVersion":"18425"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.251.73", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007bf80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 10 18:49:10.778: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-9ec8ea3f-37nn:
 container "runtime": expected RSS memory (MB) < 314572800; got 522047488
node gke-bootstrap-e2e-default-pool-9ec8ea3f-kbpb:
 container "runtime": expected RSS memory (MB) < 314572800; got 513609728
node gke-bootstrap-e2e-default-pool-9ec8ea3f-woyf:
 container "runtime": expected RSS memory (MB) < 314572800; got 529170432

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/32/

Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc8220b8840>: {
        s: "service verification failed for: 10.99.243.184\nexpected [service1-51lhn service1-q0xpz service1-t6qnn]\nreceived [service1-q0xpz service1-t6qnn]",
    }
    service verification failed for: 10.99.243.184
    expected [service1-51lhn service1-q0xpz service1-t6qnn]
    received [service1-q0xpz service1-t6qnn]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
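
The "service verification failed" errors in this run share a shape: as I understand the test, it repeatedly hits the service's cluster IP and expects to eventually collect every backing pod's hostname, and here one endpoint never answers. A hedged sketch of that expected-vs-received comparison, using the hostnames from the log above:

```go
package main

import "fmt"

// missing returns the expected hostnames that never appeared in received.
func missing(expected, received []string) []string {
	seen := make(map[string]bool, len(received))
	for _, r := range received {
		seen[r] = true
	}
	var out []string
	for _, e := range expected {
		if !seen[e] {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	expected := []string{"service1-51lhn", "service1-q0xpz", "service1-t6qnn"}
	received := []string{"service1-q0xpz", "service1-t6qnn"}
	fmt.Println("never responded:", missing(expected, received))
	// never responded: [service1-51lhn]
}
```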

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820296940>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.11.103 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-0q43v -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"uid\":\"70dbbc96-bf9d-11e6-a262-42010af0001e\", \"resourceVersion\":\"28327\", \"creationTimestamp\":\"2016-12-11T12:29:20Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-0q43v\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-0q43v/services/redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.247.68\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8218f24e0 exit status 1 <nil> true [0xc8200b8020 0xc8200b8048 0xc8200b8068] [0xc8200b8020 0xc8200b8048 0xc8200b8068] [0xc8200b8038 0xc8200b8060] [0xa97590 0xa97590] 0xc820c37ce0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"uid\":\"70dbbc96-bf9d-11e6-a262-42010af0001e\", \"resourceVersion\":\"28327\", \"creationTimestamp\":\"2016-12-11T12:29:20Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-0q43v\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-0q43v/services/redis-master\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.247.68\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.11.103 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-0q43v -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"uid":"70dbbc96-bf9d-11e6-a262-42010af0001e", "resourceVersion":"28327", "creationTimestamp":"2016-12-11T12:29:20Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-0q43v", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-0q43v/services/redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.247.68", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8218f24e0 exit status 1 <nil> true [0xc8200b8020 0xc8200b8048 0xc8200b8068] [0xc8200b8020 0xc8200b8048 0xc8200b8068] [0xc8200b8038 0xc8200b8060] [0xa97590 0xa97590] 0xc820c37ce0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"uid":"70dbbc96-bf9d-11e6-a262-42010af0001e", "resourceVersion":"28327", "creationTimestamp":"2016-12-11T12:29:20Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-0q43v", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-0q43v/services/redis-master"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.247.68", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 11 05:07:35.133: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-29bf7df0-hdk6:
 container "runtime": expected RSS memory (MB) < 314572800; got 537640960
node gke-bootstrap-e2e-default-pool-29bf7df0-plge:
 container "runtime": expected RSS memory (MB) < 314572800; got 510967808
node gke-bootstrap-e2e-default-pool-29bf7df0-q61r:
 container "runtime": expected RSS memory (MB) < 314572800; got 538697728

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc82166f890>: {
        s: "service verification failed for: 10.99.241.41\nexpected [service1-2mw5z service1-6fzhk service1-n96dj]\nreceived [service1-2mw5z service1-6fzhk]",
    }
    service verification failed for: 10.99.241.41
    expected [service1-2mw5z service1-6fzhk service1-n96dj]
    received [service1-2mw5z service1-6fzhk]
not to have occurred

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc821470440>: {
        s: "service verification failed for: 10.99.252.126\nexpected [service1-4k1c8 service1-5bgxg service1-8mjxt]\nreceived [service1-5bgxg service1-8mjxt]",
    }
    service verification failed for: 10.99.252.126
    expected [service1-4k1c8 service1-5bgxg service1-8mjxt]
    received [service1-5bgxg service1-8mjxt]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Dec 11 00:24:04.641: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2562

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc82115d740>: {
        s: "failed to wait for pods responding: pod with UID b117cb29-bf9a-11e6-a262-42010af0001e is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-d3bmc/pods 26478} [{{ } {my-hostname-delete-node-20vbb my-hostname-delete-node- e2e-tests-resize-nodes-d3bmc /api/v1/namespaces/e2e-tests-resize-nodes-d3bmc/pods/my-hostname-delete-node-20vbb b1179f04-bf9a-11e6-a262-42010af0001e 26110 0 {2016-12-11 04:09:39 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-d3bmc\",\"name\":\"my-hostname-delete-node\",\"uid\":\"b115d8a3-bf9a-11e6-a262-42010af0001e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"26095\"}}\n] [{v1 ReplicationController my-hostname-delete-node b115d8a3-bf9a-11e6-a262-42010af0001e 0xc821b2cf27}] []} {[{default-token-2krxl {<nil> <nil> <nil> <nil> <nil> 0xc82290b260 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2krxl true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b2d020 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-29bf7df0-hdk6 0xc822615080 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:09:39 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:09:40 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:09:39 -0800 PST}  }]   10.240.0.2 10.96.1.3 2016-12-11T04:09:39-08:00 [] [{my-hostname-delete-node {<nil> 0xc822652ac0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ea9870f9c18a9b31b3e3f95c2bdba0324cd4ec9bc70bd115ca60646b2b464d20}]}} {{ } {my-hostname-delete-node-9t8qq my-hostname-delete-node- e2e-tests-resize-nodes-d3bmc /api/v1/namespaces/e2e-tests-resize-nodes-d3bmc/pods/my-hostname-delete-node-9t8qq f6cc6974-bf9a-11e6-a262-42010af0001e 26325 0 {2016-12-11 04:11:36 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-d3bmc\",\"name\":\"my-hostname-delete-node\",\"uid\":\"b115d8a3-bf9a-11e6-a262-42010af0001e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"26219\"}}\n] [{v1 ReplicationController my-hostname-delete-node b115d8a3-bf9a-11e6-a262-42010af0001e 0xc821b2d2b7}] []} {[{default-token-2krxl {<nil> <nil> <nil> <nil> <nil> 0xc82290b2c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2krxl true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b2d3b0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-29bf7df0-hdk6 0xc822615140 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:11:36 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-11 04:11:37 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:11:36 -0800 PST}  }]   10.240.0.2 10.96.1.7 2016-12-11T04:11:36-08:00 [] [{my-hostname-delete-node {<nil> 0xc822652ae0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://370a2bf8d86d231a2616a035f353776e370c1135b6f74b652eb2219492a45920}]}} {{ } {my-hostname-delete-node-vjkw4 my-hostname-delete-node- e2e-tests-resize-nodes-d3bmc /api/v1/namespaces/e2e-tests-resize-nodes-d3bmc/pods/my-hostname-delete-node-vjkw4 b117f0c4-bf9a-11e6-a262-42010af0001e 26112 0 {2016-12-11 04:09:39 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-d3bmc\",\"name\":\"my-hostname-delete-node\",\"uid\":\"b115d8a3-bf9a-11e6-a262-42010af0001e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"26095\"}}\n] [{v1 ReplicationController my-hostname-delete-node b115d8a3-bf9a-11e6-a262-42010af0001e 0xc821b2d677}] []} {[{default-token-2krxl {<nil> <nil> <nil> <nil> <nil> 0xc82290b320 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2krxl true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b2d780 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-29bf7df0-q61r 0xc822615200 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:09:39 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:09:40 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:09:39 -0800 PST}  }]   10.240.0.4 10.96.0.3 2016-12-11T04:09:39-08:00 [] [{my-hostname-delete-node {<nil> 0xc822652b00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://06d4f19dbf313e16a2fd4a11eec54d0b5bf6a17ea816e2237ee7b4cbdc8e3a3d}]}}]}",
    }
    failed to wait for pods responding: pod with UID b117cb29-bf9a-11e6-a262-42010af0001e is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-d3bmc/pods 26478} [{{ } {my-hostname-delete-node-20vbb my-hostname-delete-node- e2e-tests-resize-nodes-d3bmc /api/v1/namespaces/e2e-tests-resize-nodes-d3bmc/pods/my-hostname-delete-node-20vbb b1179f04-bf9a-11e6-a262-42010af0001e 26110 0 {2016-12-11 04:09:39 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-d3bmc","name":"my-hostname-delete-node","uid":"b115d8a3-bf9a-11e6-a262-42010af0001e","apiVersion":"v1","resourceVersion":"26095"}}
    ] [{v1 ReplicationController my-hostname-delete-node b115d8a3-bf9a-11e6-a262-42010af0001e 0xc821b2cf27}] []} {[{default-token-2krxl {<nil> <nil> <nil> <nil> <nil> 0xc82290b260 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2krxl true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b2d020 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-29bf7df0-hdk6 0xc822615080 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:09:39 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:09:40 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:09:39 -0800 PST}  }]   10.240.0.2 10.96.1.3 2016-12-11T04:09:39-08:00 [] [{my-hostname-delete-node {<nil> 0xc822652ac0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ea9870f9c18a9b31b3e3f95c2bdba0324cd4ec9bc70bd115ca60646b2b464d20}]}} {{ } {my-hostname-delete-node-9t8qq my-hostname-delete-node- e2e-tests-resize-nodes-d3bmc /api/v1/namespaces/e2e-tests-resize-nodes-d3bmc/pods/my-hostname-delete-node-9t8qq f6cc6974-bf9a-11e6-a262-42010af0001e 26325 0 {2016-12-11 04:11:36 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-d3bmc","name":"my-hostname-delete-node","uid":"b115d8a3-bf9a-11e6-a262-42010af0001e","apiVersion":"v1","resourceVersion":"26219"}}
    ] [{v1 ReplicationController my-hostname-delete-node b115d8a3-bf9a-11e6-a262-42010af0001e 0xc821b2d2b7}] []} {[{default-token-2krxl {<nil> <nil> <nil> <nil> <nil> 0xc82290b2c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2krxl true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b2d3b0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-29bf7df0-hdk6 0xc822615140 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:11:36 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:11:37 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:11:36 -0800 PST}  }]   10.240.0.2 10.96.1.7 2016-12-11T04:11:36-08:00 [] [{my-hostname-delete-node {<nil> 0xc822652ae0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://370a2bf8d86d231a2616a035f353776e370c1135b6f74b652eb2219492a45920}]}} {{ } {my-hostname-delete-node-vjkw4 my-hostname-delete-node- e2e-tests-resize-nodes-d3bmc /api/v1/namespaces/e2e-tests-resize-nodes-d3bmc/pods/my-hostname-delete-node-vjkw4 b117f0c4-bf9a-11e6-a262-42010af0001e 26112 0 {2016-12-11 04:09:39 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-d3bmc","name":"my-hostname-delete-node","uid":"b115d8a3-bf9a-11e6-a262-42010af0001e","apiVersion":"v1","resourceVersion":"26095"}}
    ] [{v1 ReplicationController my-hostname-delete-node b115d8a3-bf9a-11e6-a262-42010af0001e 0xc821b2d677}] []} {[{default-token-2krxl {<nil> <nil> <nil> <nil> <nil> 0xc82290b320 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2krxl true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b2d780 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-29bf7df0-q61r 0xc822615200 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:09:39 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:09:40 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 04:09:39 -0800 PST}  }]   10.240.0.4 10.96.0.3 2016-12-11T04:09:39-08:00 [] [{my-hostname-delete-node {<nil> 0xc822652b00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://06d4f19dbf313e16a2fd4a11eec54d0b5bf6a17ea816e2237ee7b4cbdc8e3a3d}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/33/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc8220cb740>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.11.103 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-h5pph -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.253.181\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-h5pph\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-h5pph/services/redis-master\", \"uid\":\"8e405ad7-bfc6-11e6-a01d-42010af00015\", \"resourceVersion\":\"19265\", \"creationTimestamp\":\"2016-12-11T17:23:39Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821d277e0 exit status 1 <nil> true [0xc821ace0b0 0xc821ace0c8 0xc821ace0e0] [0xc821ace0b0 0xc821ace0c8 0xc821ace0e0] [0xc821ace0c0 0xc821ace0d8] [0xa97590 0xa97590] 0xc822420900}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.253.181\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-h5pph\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-h5pph/services/redis-master\", \"uid\":\"8e405ad7-bfc6-11e6-a01d-42010af00015\", \"resourceVersion\":\"19265\", \"creationTimestamp\":\"2016-12-11T17:23:39Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.11.103 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-h5pph -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.253.181", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-h5pph", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-h5pph/services/redis-master", "uid":"8e405ad7-bfc6-11e6-a01d-42010af00015", "resourceVersion":"19265", "creationTimestamp":"2016-12-11T17:23:39Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821d277e0 exit status 1 <nil> true [0xc821ace0b0 0xc821ace0c8 0xc821ace0e0] [0xc821ace0b0 0xc821ace0c8 0xc821ace0e0] [0xc821ace0c0 0xc821ace0d8] [0xa97590 0xa97590] 0xc822420900}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.253.181", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-h5pph", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-h5pph/services/redis-master", "uid":"8e405ad7-bfc6-11e6-a01d-42010af00015", "resourceVersion":"19265", "creationTimestamp":"2016-12-11T17:23:39Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:280
Expected error:
    <*errors.errorString | 0xc8208b6230>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Expected error:
    <*errors.errorString | 0xc820d03fd0>: {
        s: "Only 0 pods started out of 1",
    }
    Only 0 pods started out of 1
not to have occurred

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070
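
For context, "timed out waiting for the condition" is the generic timeout message surfaced by the polling helpers these e2e tests use while waiting for the Job to reach the expected state. The sketch below shows the general shape of that wait; the helper name and durations are illustrative, not the actual test code:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errWaitTimeout mirrors the message seen in the failures above.
var errWaitTimeout = errors.New("timed out waiting for the condition")

// pollUntil re-checks a condition every interval until it returns true,
// returns an error, or the timeout elapses.
func pollUntil(interval, timeout time.Duration, condition func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		done, err := condition()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errWaitTimeout
		}
		time.Sleep(interval)
	}
}

func main() {
	// The condition never becomes true, so the poll surfaces the timeout error.
	err := pollUntil(10*time.Millisecond, 50*time.Millisecond, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err) // timed out waiting for the condition
}
```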

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc822427670>: {
        s: "failed to wait for pods responding: pod with UID 68a4a123-bfe8-11e6-8de2-42010af00015 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-q7gnm/pods 45701} [{{ } {my-hostname-delete-node-1bzw5 my-hostname-delete-node- e2e-tests-resize-nodes-q7gnm /api/v1/namespaces/e2e-tests-resize-nodes-q7gnm/pods/my-hostname-delete-node-1bzw5 68a48aae-bfe8-11e6-8de2-42010af00015 45355 0 {2016-12-11 13:25:59 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-q7gnm\",\"name\":\"my-hostname-delete-node\",\"uid\":\"68a26e49-bfe8-11e6-8de2-42010af00015\",\"apiVersion\":\"v1\",\"resourceVersion\":\"45339\"}}\n] [{v1 ReplicationController my-hostname-delete-node 68a26e49-bfe8-11e6-8de2-42010af00015 0xc8221f15e7}] []} {[{default-token-2ft89 {<nil> <nil> <nil> <nil> <nil> 0xc8222b3800 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2ft89 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8221f16e0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-12152c32-j3rp 0xc827671180 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:25:59 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:26:00 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:25:59 -0800 PST}  }]   10.240.0.5 10.96.3.2 2016-12-11T13:25:59-08:00 [] [{my-hostname-delete-node {<nil> 0xc8275bfd20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://2e31bbc845171462758366b5ed7754758d420a8e567af419ca21b70cf72a712e}]}} {{ } {my-hostname-delete-node-dzxjd my-hostname-delete-node- e2e-tests-resize-nodes-q7gnm /api/v1/namespaces/e2e-tests-resize-nodes-q7gnm/pods/my-hostname-delete-node-dzxjd 68a45809-bfe8-11e6-8de2-42010af00015 45351 0 {2016-12-11 13:25:59 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-q7gnm\",\"name\":\"my-hostname-delete-node\",\"uid\":\"68a26e49-bfe8-11e6-8de2-42010af00015\",\"apiVersion\":\"v1\",\"resourceVersion\":\"45339\"}}\n] [{v1 ReplicationController my-hostname-delete-node 68a26e49-bfe8-11e6-8de2-42010af00015 0xc8221f1977}] []} {[{default-token-2ft89 {<nil> <nil> <nil> <nil> <nil> 0xc8222b3860 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2ft89 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8221f1a70 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-12152c32-kszf 0xc8276712c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:25:59 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-11 13:25:59 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:25:59 -0800 PST}  }]   10.240.0.4 10.96.0.3 2016-12-11T13:25:59-08:00 [] [{my-hostname-delete-node {<nil> 0xc8275bfd40 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://37bfa6fe273b075e7a4986dca2b44173aa40a364c8881c5d2bee57110e92ba18}]}} {{ } {my-hostname-delete-node-kmn8c my-hostname-delete-node- e2e-tests-resize-nodes-q7gnm /api/v1/namespaces/e2e-tests-resize-nodes-q7gnm/pods/my-hostname-delete-node-kmn8c a50431be-bfe8-11e6-8de2-42010af00015 45538 0 {2016-12-11 13:27:40 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-q7gnm\",\"name\":\"my-hostname-delete-node\",\"uid\":\"68a26e49-bfe8-11e6-8de2-42010af00015\",\"apiVersion\":\"v1\",\"resourceVersion\":\"45422\"}}\n] [{v1 ReplicationController my-hostname-delete-node 68a26e49-bfe8-11e6-8de2-42010af00015 0xc8221f1d07}] []} {[{default-token-2ft89 {<nil> <nil> <nil> <nil> <nil> 0xc8222b38c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2ft89 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8221f1e00 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-12152c32-kszf 0xc827671380 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:27:40 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:27:42 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:27:40 -0800 PST}  }]   10.240.0.4 10.96.0.4 2016-12-11T13:27:40-08:00 [] [{my-hostname-delete-node {<nil> 0xc8275bfd60 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://25295dd0eb46a554b06b7776562aa408f74ad157340848525061df454908143a}]}}]}",
    }
    failed to wait for pods responding: pod with UID 68a4a123-bfe8-11e6-8de2-42010af00015 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-q7gnm/pods 45701} [{{ } {my-hostname-delete-node-1bzw5 my-hostname-delete-node- e2e-tests-resize-nodes-q7gnm /api/v1/namespaces/e2e-tests-resize-nodes-q7gnm/pods/my-hostname-delete-node-1bzw5 68a48aae-bfe8-11e6-8de2-42010af00015 45355 0 {2016-12-11 13:25:59 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-q7gnm","name":"my-hostname-delete-node","uid":"68a26e49-bfe8-11e6-8de2-42010af00015","apiVersion":"v1","resourceVersion":"45339"}}
    ] [{v1 ReplicationController my-hostname-delete-node 68a26e49-bfe8-11e6-8de2-42010af00015 0xc8221f15e7}] []} {[{default-token-2ft89 {<nil> <nil> <nil> <nil> <nil> 0xc8222b3800 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2ft89 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8221f16e0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-12152c32-j3rp 0xc827671180 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:25:59 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:26:00 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:25:59 -0800 PST}  }]   10.240.0.5 10.96.3.2 2016-12-11T13:25:59-08:00 [] [{my-hostname-delete-node {<nil> 0xc8275bfd20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://2e31bbc845171462758366b5ed7754758d420a8e567af419ca21b70cf72a712e}]}} {{ } {my-hostname-delete-node-dzxjd my-hostname-delete-node- e2e-tests-resize-nodes-q7gnm /api/v1/namespaces/e2e-tests-resize-nodes-q7gnm/pods/my-hostname-delete-node-dzxjd 68a45809-bfe8-11e6-8de2-42010af00015 45351 0 {2016-12-11 13:25:59 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-q7gnm","name":"my-hostname-delete-node","uid":"68a26e49-bfe8-11e6-8de2-42010af00015","apiVersion":"v1","resourceVersion":"45339"}}
    ] [{v1 ReplicationController my-hostname-delete-node 68a26e49-bfe8-11e6-8de2-42010af00015 0xc8221f1977}] []} {[{default-token-2ft89 {<nil> <nil> <nil> <nil> <nil> 0xc8222b3860 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2ft89 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8221f1a70 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-12152c32-kszf 0xc8276712c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:25:59 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:25:59 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:25:59 -0800 PST}  }]   10.240.0.4 10.96.0.3 2016-12-11T13:25:59-08:00 [] [{my-hostname-delete-node {<nil> 0xc8275bfd40 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://37bfa6fe273b075e7a4986dca2b44173aa40a364c8881c5d2bee57110e92ba18}]}} {{ } {my-hostname-delete-node-kmn8c my-hostname-delete-node- e2e-tests-resize-nodes-q7gnm /api/v1/namespaces/e2e-tests-resize-nodes-q7gnm/pods/my-hostname-delete-node-kmn8c a50431be-bfe8-11e6-8de2-42010af00015 45538 0 {2016-12-11 13:27:40 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-q7gnm","name":"my-hostname-delete-node","uid":"68a26e49-bfe8-11e6-8de2-42010af00015","apiVersion":"v1","resourceVersion":"45422"}}
    ] [{v1 ReplicationController my-hostname-delete-node 68a26e49-bfe8-11e6-8de2-42010af00015 0xc8221f1d07}] []} {[{default-token-2ft89 {<nil> <nil> <nil> <nil> <nil> 0xc8222b38c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-2ft89 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8221f1e00 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-12152c32-kszf 0xc827671380 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:27:40 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:27:42 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 13:27:40 -0800 PST}  }]   10.240.0.4 10.96.0.4 2016-12-11T13:27:40-08:00 [] [{my-hostname-delete-node {<nil> 0xc8275bfd60 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://25295dd0eb46a554b06b7776562aa408f74ad157340848525061df454908143a}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
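
The "no longer a member of the replica set" wording comes from the test comparing the UIDs of the pods it originally created against the pods currently owned by the replication controller after the node deletion. A hedged sketch of that membership check (illustrative only, using the UIDs from the dump above):

```go
package main

import "fmt"

// checkStillMember reports an error when the originally observed pod UID is
// missing from the current set of pods, i.e. the pod was replaced.
func checkStillMember(originalUID string, currentUIDs []string) error {
	for _, uid := range currentUIDs {
		if uid == originalUID {
			return nil
		}
	}
	return fmt.Errorf("pod with UID %s is no longer a member of the replica set.  "+
		"Must have been restarted for some reason.", originalUID)
}

func main() {
	current := []string{
		"68a48aae-bfe8-11e6-8de2-42010af00015",
		"68a45809-bfe8-11e6-8de2-42010af00015",
		"a50431be-bfe8-11e6-8de2-42010af00015",
	}
	if err := checkStillMember("68a4a123-bfe8-11e6-8de2-42010af00015", current); err != nil {
		fmt.Println(err)
	}
}
```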

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc8218ab1e0>: {
        s: "service verification failed for: 10.99.245.234\nexpected [service2-48jlh service2-c15jc service2-q8nh2]\nreceived [service2-48jlh service2-q8nh2]",
    }
    service verification failed for: 10.99.245.234
    expected [service2-48jlh service2-c15jc service2-q8nh2]
    received [service2-48jlh service2-q8nh2]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
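
"service verification failed" means the test hit the service IP repeatedly, collected the pod hostnames that answered, and at least one expected endpoint (here `service2-c15jc`) never responded after the apiserver restart. A minimal sketch of that comparison, assuming the expected/received lists from the log (not the e2e implementation itself):

```go
package main

import (
	"fmt"
	"sort"
)

// verifyEndpoints requires every expected pod hostname to appear among the
// hostnames that actually answered requests to the service IP.
func verifyEndpoints(serviceIP string, expected, received []string) error {
	got := make(map[string]bool, len(received))
	for _, name := range received {
		got[name] = true
	}
	var missing []string
	for _, name := range expected {
		if !got[name] {
			missing = append(missing, name)
		}
	}
	if len(missing) > 0 {
		sort.Strings(missing)
		return fmt.Errorf("service verification failed for: %s\nexpected %v\nreceived %v (missing %v)",
			serviceIP, expected, received, missing)
	}
	return nil
}

func main() {
	err := verifyEndpoints("10.99.245.234",
		[]string{"service2-48jlh", "service2-c15jc", "service2-q8nh2"},
		[]string{"service2-48jlh", "service2-q8nh2"})
	fmt.Println(err)
}
```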

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
    <*errors.errorString | 0xc821c14c10>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 11 12:17:02.721: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-12152c32-h434:
 container "runtime": expected RSS memory (MB) < 314572800; got 542560256
node gke-bootstrap-e2e-default-pool-12152c32-j3rp:
 container "runtime": expected RSS memory (MB) < 314572800; got 527544320
node gke-bootstrap-e2e-default-pool-12152c32-kszf:
 container "runtime": expected RSS memory (MB) < 314572800; got 524636160

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
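
Despite the "(MB)" label in the message, both numbers are bytes: the limit 314572800 is 300 * 1024 * 1024 (300 MiB), and the observed runtime-container RSS values are roughly 500 MiB. A hedged sketch of the per-container check (illustrative, with names and values taken from the log above):

```go
package main

import "fmt"

// checkRSS fails when a container's resident set size exceeds the configured
// per-container limit. Both limit and usage are in bytes, even though the
// original message labels them "(MB)".
func checkRSS(node, container string, limitBytes, usageBytes uint64) error {
	if usageBytes > limitBytes {
		return fmt.Errorf("node %s:\n container %q: expected RSS memory (MB) < %d; got %d",
			node, container, limitBytes, usageBytes)
	}
	return nil
}

func main() {
	err := checkRSS("gke-bootstrap-e2e-default-pool-12152c32-h434", "runtime", 314572800, 542560256)
	fmt.Println(err)
}
```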

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc821739530>: {
        s: "service verification failed for: 10.99.241.149\nexpected [service3-0brhq service3-6zzs4 service3-zbltc]\nreceived [service3-0brhq service3-6zzs4]",
    }
    service verification failed for: 10.99.241.149
    expected [service3-0brhq service3-6zzs4 service3-zbltc]
    received [service3-0brhq service3-6zzs4]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/34/

Multiple broken tests:

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Dec 11 14:03:42.692: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2562

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d9060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820810800>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.148.2 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-9gsv7 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.255.192\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"020d0778-c016-11e6-91ee-42010af0001c\", \"resourceVersion\":\"35494\", \"creationTimestamp\":\"2016-12-12T02:52:23Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-9gsv7\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-9gsv7/services/redis-master\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82678cd80 exit status 1 <nil> true [0xc8221962a0 0xc8221962c0 0xc822196760] [0xc8221962a0 0xc8221962c0 0xc822196760] [0xc8221962b8 0xc822196740] [0xa97590 0xa97590] 0xc820f8e000}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.255.192\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"020d0778-c016-11e6-91ee-42010af0001c\", \"resourceVersion\":\"35494\", \"creationTimestamp\":\"2016-12-12T02:52:23Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-9gsv7\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-9gsv7/services/redis-master\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.148.2 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-9gsv7 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.255.192", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"020d0778-c016-11e6-91ee-42010af0001c", "resourceVersion":"35494", "creationTimestamp":"2016-12-12T02:52:23Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-9gsv7", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-9gsv7/services/redis-master"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82678cd80 exit status 1 <nil> true [0xc8221962a0 0xc8221962c0 0xc822196760] [0xc8221962a0 0xc8221962c0 0xc822196760] [0xc8221962b8 0xc822196740] [0xa97590 0xa97590] 0xc820f8e000}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.255.192", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"020d0778-c016-11e6-91ee-42010af0001c", "resourceVersion":"35494", "creationTimestamp":"2016-12-12T02:52:23Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-9gsv7", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-9gsv7/services/redis-master"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 11 17:52:13.623: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-f369bb8d-giu3:
 container "runtime": expected RSS memory (MB) < 314572800; got 520912896
node gke-bootstrap-e2e-default-pool-f369bb8d-rfu5:
 container "runtime": expected RSS memory (MB) < 314572800; got 537243648
node gke-bootstrap-e2e-default-pool-f369bb8d-xo8n:
 container "runtime": expected RSS memory (MB) < 314572800; got 539279360

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc821c16c40>: {
        s: "failed to wait for pods responding: pod with UID 7c2702ed-c023-11e6-8498-42010af0001c is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-197tj/pods 45457} [{{ } {my-hostname-delete-node-2qcb0 my-hostname-delete-node- e2e-tests-resize-nodes-197tj /api/v1/namespaces/e2e-tests-resize-nodes-197tj/pods/my-hostname-delete-node-2qcb0 7c26ed21-c023-11e6-8498-42010af0001c 44936 0 {2016-12-11 20:28:52 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-197tj\",\"name\":\"my-hostname-delete-node\",\"uid\":\"7c24a3b4-c023-11e6-8498-42010af0001c\",\"apiVersion\":\"v1\",\"resourceVersion\":\"44916\"}}\n] [{v1 ReplicationController my-hostname-delete-node 7c24a3b4-c023-11e6-8498-42010af0001c 0xc820e5d007}] []} {[{default-token-06k7j {<nil> <nil> <nil> <nil> <nil> 0xc821ee9110 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-06k7j true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820e5d100 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-f369bb8d-rfu5 0xc8265dae80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:28:52 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:28:53 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:28:52 -0800 PST}  }]   10.240.0.3 10.96.0.3 2016-12-11T20:28:52-08:00 [] [{my-hostname-delete-node {<nil> 0xc8267ec120 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://be66fe96fe5732097c2f5950102b3d2c1fd82871aeb2e7037de9a391bc1c1f7e}]}} {{ } {my-hostname-delete-node-k7l1n my-hostname-delete-node- e2e-tests-resize-nodes-197tj /api/v1/namespaces/e2e-tests-resize-nodes-197tj/pods/my-hostname-delete-node-k7l1n 7c271a6d-c023-11e6-8498-42010af0001c 44934 0 {2016-12-11 20:28:52 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-197tj\",\"name\":\"my-hostname-delete-node\",\"uid\":\"7c24a3b4-c023-11e6-8498-42010af0001c\",\"apiVersion\":\"v1\",\"resourceVersion\":\"44916\"}}\n] [{v1 ReplicationController my-hostname-delete-node 7c24a3b4-c023-11e6-8498-42010af0001c 0xc820e5d397}] []} {[{default-token-06k7j {<nil> <nil> <nil> <nil> <nil> 0xc821ee9170 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-06k7j true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820e5d490 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-f369bb8d-xo8n 0xc8265daf40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:28:52 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-11 20:28:53 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:28:52 -0800 PST}  }]   10.240.0.4 10.96.1.4 2016-12-11T20:28:52-08:00 [] [{my-hostname-delete-node {<nil> 0xc8267ec140 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://6622e2694c96fd424384eef468c7373fc6fcc805999beb1086f75df4e8a7937d}]}} {{ } {my-hostname-delete-node-x3426 my-hostname-delete-node- e2e-tests-resize-nodes-197tj /api/v1/namespaces/e2e-tests-resize-nodes-197tj/pods/my-hostname-delete-node-x3426 198db280-c024-11e6-8498-42010af0001c 45308 0 {2016-12-11 20:33:16 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-197tj\",\"name\":\"my-hostname-delete-node\",\"uid\":\"7c24a3b4-c023-11e6-8498-42010af0001c\",\"apiVersion\":\"v1\",\"resourceVersion\":\"45234\"}}\n] [{v1 ReplicationController my-hostname-delete-node 7c24a3b4-c023-11e6-8498-42010af0001c 0xc820e5d747}] []} {[{default-token-06k7j {<nil> <nil> <nil> <nil> <nil> 0xc821ee91d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-06k7j true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820e5d840 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-f369bb8d-xo8n 0xc8265db000 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:33:16 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:33:17 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:33:16 -0800 PST}  }]   10.240.0.4 10.96.1.8 2016-12-11T20:33:16-08:00 [] [{my-hostname-delete-node {<nil> 0xc8267ec160 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://e83bde29948c594e19594365cbf20f041e86e764b187ab3f79912c01ef28f342}]}}]}",
    }
    failed to wait for pods responding: pod with UID 7c2702ed-c023-11e6-8498-42010af0001c is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-197tj/pods 45457} [{{ } {my-hostname-delete-node-2qcb0 my-hostname-delete-node- e2e-tests-resize-nodes-197tj /api/v1/namespaces/e2e-tests-resize-nodes-197tj/pods/my-hostname-delete-node-2qcb0 7c26ed21-c023-11e6-8498-42010af0001c 44936 0 {2016-12-11 20:28:52 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-197tj","name":"my-hostname-delete-node","uid":"7c24a3b4-c023-11e6-8498-42010af0001c","apiVersion":"v1","resourceVersion":"44916"}}
    ] [{v1 ReplicationController my-hostname-delete-node 7c24a3b4-c023-11e6-8498-42010af0001c 0xc820e5d007}] []} {[{default-token-06k7j {<nil> <nil> <nil> <nil> <nil> 0xc821ee9110 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-06k7j true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820e5d100 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-f369bb8d-rfu5 0xc8265dae80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:28:52 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:28:53 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:28:52 -0800 PST}  }]   10.240.0.3 10.96.0.3 2016-12-11T20:28:52-08:00 [] [{my-hostname-delete-node {<nil> 0xc8267ec120 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://be66fe96fe5732097c2f5950102b3d2c1fd82871aeb2e7037de9a391bc1c1f7e}]}} {{ } {my-hostname-delete-node-k7l1n my-hostname-delete-node- e2e-tests-resize-nodes-197tj /api/v1/namespaces/e2e-tests-resize-nodes-197tj/pods/my-hostname-delete-node-k7l1n 7c271a6d-c023-11e6-8498-42010af0001c 44934 0 {2016-12-11 20:28:52 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-197tj","name":"my-hostname-delete-node","uid":"7c24a3b4-c023-11e6-8498-42010af0001c","apiVersion":"v1","resourceVersion":"44916"}}
    ] [{v1 ReplicationController my-hostname-delete-node 7c24a3b4-c023-11e6-8498-42010af0001c 0xc820e5d397}] []} {[{default-token-06k7j {<nil> <nil> <nil> <nil> <nil> 0xc821ee9170 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-06k7j true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820e5d490 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-f369bb8d-xo8n 0xc8265daf40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:28:52 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:28:53 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:28:52 -0800 PST}  }]   10.240.0.4 10.96.1.4 2016-12-11T20:28:52-08:00 [] [{my-hostname-delete-node {<nil> 0xc8267ec140 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://6622e2694c96fd424384eef468c7373fc6fcc805999beb1086f75df4e8a7937d}]}} {{ } {my-hostname-delete-node-x3426 my-hostname-delete-node- e2e-tests-resize-nodes-197tj /api/v1/namespaces/e2e-tests-resize-nodes-197tj/pods/my-hostname-delete-node-x3426 198db280-c024-11e6-8498-42010af0001c 45308 0 {2016-12-11 20:33:16 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-197tj","name":"my-hostname-delete-node","uid":"7c24a3b4-c023-11e6-8498-42010af0001c","apiVersion":"v1","resourceVersion":"45234"}}
    ] [{v1 ReplicationController my-hostname-delete-node 7c24a3b4-c023-11e6-8498-42010af0001c 0xc820e5d747}] []} {[{default-token-06k7j {<nil> <nil> <nil> <nil> <nil> 0xc821ee91d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-06k7j true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820e5d840 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-f369bb8d-xo8n 0xc8265db000 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:33:16 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:33:17 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-11 20:33:16 -0800 PST}  }]   10.240.0.4 10.96.1.8 2016-12-11T20:33:16-08:00 [] [{my-hostname-delete-node {<nil> 0xc8267ec160 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://e83bde29948c594e19594365cbf20f041e86e764b187ab3f79912c01ef28f342}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/35/

Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 11 23:52:23.342: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-49e12a3b-jfh8:
 container "runtime": expected RSS memory (MB) < 314572800; got 511320064
node gke-bootstrap-e2e-default-pool-49e12a3b-o5du:
 container "runtime": expected RSS memory (MB) < 314572800; got 529395712
node gke-bootstrap-e2e-default-pool-49e12a3b-vb4v:
 container "runtime": expected RSS memory (MB) < 314572800; got 528633856

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821d36be0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.161.16 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-m12wb -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"resourceVersion\":\"36656\", \"creationTimestamp\":\"2016-12-12T10:22:32Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-m12wb\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-m12wb/services/redis-master\", \"uid\":\"e498b7c7-c054-11e6-a2e6-42010af00018\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.252.202\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821e45e80 exit status 1 <nil> true [0xc8200dc0a8 0xc8200dc0d0 0xc8200dc100] [0xc8200dc0a8 0xc8200dc0d0 0xc8200dc100] [0xc8200dc0c8 0xc8200dc0f8] [0xa97590 0xa97590] 0xc821f1fa40}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"resourceVersion\":\"36656\", \"creationTimestamp\":\"2016-12-12T10:22:32Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-m12wb\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-m12wb/services/redis-master\", \"uid\":\"e498b7c7-c054-11e6-a2e6-42010af00018\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.252.202\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.161.16 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-m12wb -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"36656", "creationTimestamp":"2016-12-12T10:22:32Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-m12wb", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-m12wb/services/redis-master", "uid":"e498b7c7-c054-11e6-a2e6-42010af00018"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.252.202", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821e45e80 exit status 1 <nil> true [0xc8200dc0a8 0xc8200dc0d0 0xc8200dc100] [0xc8200dc0a8 0xc8200dc0d0 0xc8200dc100] [0xc8200dc0c8 0xc8200dc0f8] [0xa97590 0xa97590] 0xc821f1fa40}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"36656", "creationTimestamp":"2016-12-12T10:22:32Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-m12wb", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-m12wb/services/redis-master", "uid":"e498b7c7-c054-11e6-a2e6-42010af00018"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.252.202", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/36/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 12 06:21:31.394: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-e267eb1f-v4aw:
 container "runtime": expected RSS memory (MB) < 314572800; got 518901760
node gke-bootstrap-e2e-default-pool-e267eb1f-h5rk:
 container "runtime": expected RSS memory (MB) < 314572800; got 534085632
node gke-bootstrap-e2e-default-pool-e267eb1f-onvt:
 container "runtime": expected RSS memory (MB) < 314572800; got 521682944

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc8219e9e40>: {
        s: "failed to wait for pods responding: pod with UID 21a20d41-c08e-11e6-9702-42010af00020 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-628cp/pods 38462} [{{ } {my-hostname-delete-node-0sxnc my-hostname-delete-node- e2e-tests-resize-nodes-628cp /api/v1/namespaces/e2e-tests-resize-nodes-628cp/pods/my-hostname-delete-node-0sxnc 219fe8de-c08e-11e6-9702-42010af00020 38131 0 {2016-12-12 09:12:16 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-628cp\",\"name\":\"my-hostname-delete-node\",\"uid\":\"219d7c24-c08e-11e6-9702-42010af00020\",\"apiVersion\":\"v1\",\"resourceVersion\":\"38118\"}}\n] [{v1 ReplicationController my-hostname-delete-node 219d7c24-c08e-11e6-9702-42010af00020 0xc820f00f37}] []} {[{default-token-g9f2f {<nil> <nil> <nil> <nil> <nil> 0xc821c87590 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-g9f2f true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820f01030 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-e267eb1f-onvt 0xc821f0b380 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:12:16 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:12:16 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:12:16 -0800 PST}  }]   10.240.0.3 10.96.1.3 2016-12-12T09:12:16-08:00 [] [{my-hostname-delete-node {<nil> 0xc821eb82e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a3024e5852223e543231cf9383cc142c2618bb1384089e02aa88ed1f7e199c5f}]}} {{ } {my-hostname-delete-node-658hq my-hostname-delete-node- e2e-tests-resize-nodes-628cp /api/v1/namespaces/e2e-tests-resize-nodes-628cp/pods/my-hostname-delete-node-658hq 21a03ffc-c08e-11e6-9702-42010af00020 38133 0 {2016-12-12 09:12:16 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-628cp\",\"name\":\"my-hostname-delete-node\",\"uid\":\"219d7c24-c08e-11e6-9702-42010af00020\",\"apiVersion\":\"v1\",\"resourceVersion\":\"38118\"}}\n] [{v1 ReplicationController my-hostname-delete-node 219d7c24-c08e-11e6-9702-42010af00020 0xc820f012f7}] []} {[{default-token-g9f2f {<nil> <nil> <nil> <nil> <nil> 0xc821c875f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-g9f2f true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820f013f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-e267eb1f-v4aw 0xc821f0b4c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:12:16 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-12 09:12:17 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:12:16 -0800 PST}  }]   10.240.0.2 10.96.2.3 2016-12-12T09:12:16-08:00 [] [{my-hostname-delete-node {<nil> 0xc821eb8300 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://8cea4595d76dbf49f9834348913d0b5b9804c4e7fce32205558dd18641cfc22f}]}} {{ } {my-hostname-delete-node-xw031 my-hostname-delete-node- e2e-tests-resize-nodes-628cp /api/v1/namespaces/e2e-tests-resize-nodes-628cp/pods/my-hostname-delete-node-xw031 5e0b749e-c08e-11e6-9702-42010af00020 38305 0 {2016-12-12 09:13:57 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-628cp\",\"name\":\"my-hostname-delete-node\",\"uid\":\"219d7c24-c08e-11e6-9702-42010af00020\",\"apiVersion\":\"v1\",\"resourceVersion\":\"38223\"}}\n] [{v1 ReplicationController my-hostname-delete-node 219d7c24-c08e-11e6-9702-42010af00020 0xc820f01687}] []} {[{default-token-g9f2f {<nil> <nil> <nil> <nil> <nil> 0xc821c87650 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-g9f2f true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820f01790 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-e267eb1f-v4aw 0xc821f0b600 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:13:57 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:13:58 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:13:57 -0800 PST}  }]   10.240.0.2 10.96.2.7 2016-12-12T09:13:57-08:00 [] [{my-hostname-delete-node {<nil> 0xc821eb8340 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://55fd0984f804a7196f14fd5887a5412c813123c69b55a22d5b7da6029855fa75}]}}]}",
    }
    failed to wait for pods responding: pod with UID 21a20d41-c08e-11e6-9702-42010af00020 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-628cp/pods 38462} [{{ } {my-hostname-delete-node-0sxnc my-hostname-delete-node- e2e-tests-resize-nodes-628cp /api/v1/namespaces/e2e-tests-resize-nodes-628cp/pods/my-hostname-delete-node-0sxnc 219fe8de-c08e-11e6-9702-42010af00020 38131 0 {2016-12-12 09:12:16 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-628cp","name":"my-hostname-delete-node","uid":"219d7c24-c08e-11e6-9702-42010af00020","apiVersion":"v1","resourceVersion":"38118"}}
    ] [{v1 ReplicationController my-hostname-delete-node 219d7c24-c08e-11e6-9702-42010af00020 0xc820f00f37}] []} {[{default-token-g9f2f {<nil> <nil> <nil> <nil> <nil> 0xc821c87590 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-g9f2f true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820f01030 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-e267eb1f-onvt 0xc821f0b380 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:12:16 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:12:16 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:12:16 -0800 PST}  }]   10.240.0.3 10.96.1.3 2016-12-12T09:12:16-08:00 [] [{my-hostname-delete-node {<nil> 0xc821eb82e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a3024e5852223e543231cf9383cc142c2618bb1384089e02aa88ed1f7e199c5f}]}} {{ } {my-hostname-delete-node-658hq my-hostname-delete-node- e2e-tests-resize-nodes-628cp /api/v1/namespaces/e2e-tests-resize-nodes-628cp/pods/my-hostname-delete-node-658hq 21a03ffc-c08e-11e6-9702-42010af00020 38133 0 {2016-12-12 09:12:16 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-628cp","name":"my-hostname-delete-node","uid":"219d7c24-c08e-11e6-9702-42010af00020","apiVersion":"v1","resourceVersion":"38118"}}
    ] [{v1 ReplicationController my-hostname-delete-node 219d7c24-c08e-11e6-9702-42010af00020 0xc820f012f7}] []} {[{default-token-g9f2f {<nil> <nil> <nil> <nil> <nil> 0xc821c875f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-g9f2f true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820f013f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-e267eb1f-v4aw 0xc821f0b4c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:12:16 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:12:17 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:12:16 -0800 PST}  }]   10.240.0.2 10.96.2.3 2016-12-12T09:12:16-08:00 [] [{my-hostname-delete-node {<nil> 0xc821eb8300 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://8cea4595d76dbf49f9834348913d0b5b9804c4e7fce32205558dd18641cfc22f}]}} {{ } {my-hostname-delete-node-xw031 my-hostname-delete-node- e2e-tests-resize-nodes-628cp /api/v1/namespaces/e2e-tests-resize-nodes-628cp/pods/my-hostname-delete-node-xw031 5e0b749e-c08e-11e6-9702-42010af00020 38305 0 {2016-12-12 09:13:57 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-628cp","name":"my-hostname-delete-node","uid":"219d7c24-c08e-11e6-9702-42010af00020","apiVersion":"v1","resourceVersion":"38223"}}
    ] [{v1 ReplicationController my-hostname-delete-node 219d7c24-c08e-11e6-9702-42010af00020 0xc820f01687}] []} {[{default-token-g9f2f {<nil> <nil> <nil> <nil> <nil> 0xc821c87650 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-g9f2f true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820f01790 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-e267eb1f-v4aw 0xc821f0b600 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:13:57 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:13:58 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 09:13:57 -0800 PST}  }]   10.240.0.2 10.96.2.7 2016-12-12T09:13:57-08:00 [] [{my-hostname-delete-node {<nil> 0xc821eb8340 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://55fd0984f804a7196f14fd5887a5412c813123c69b55a22d5b7da6029855fa75}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
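
Context for the failure above: the resize test checks that every pod originally created by the my-hostname-delete-node replication controller still responds after a node is deleted; the dumped replica set instead contains a replacement pod (my-hostname-delete-node-xw031, created about 90 s after the originals), so the pod with the missing UID was recreated rather than preserved. As a hedged illustration only (namespace and label taken from the dump, not part of the test itself), the current pod names and UIDs of that RC could be listed with:

    # Hypothetical check: list the RC's current pods and their UIDs
    kubectl --namespace=e2e-tests-resize-nodes-628cp get pods -l name=my-hostname-delete-node \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.uid}{"\n"}{end}'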

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820e775a0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.161.16 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-m1jmn -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-12T11:37:45Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-m1jmn\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-m1jmn/services/redis-master\", \"uid\":\"6643cccb-c05f-11e6-8890-42010af00020\", \"resourceVersion\":\"474\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.241.150\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82026ac40 exit status 1 <nil> true [0xc820e8a070 0xc820e8a088 0xc820e8a0a0] [0xc820e8a070 0xc820e8a088 0xc820e8a0a0] [0xc820e8a080 0xc820e8a098] [0xa97590 0xa97590] 0xc8207e3680}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-12T11:37:45Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-m1jmn\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-m1jmn/services/redis-master\", \"uid\":\"6643cccb-c05f-11e6-8890-42010af00020\", \"resourceVersion\":\"474\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.99.241.150\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.161.16 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-m1jmn -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-12T11:37:45Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-m1jmn", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-m1jmn/services/redis-master", "uid":"6643cccb-c05f-11e6-8890-42010af00020", "resourceVersion":"474"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.241.150", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82026ac40 exit status 1 <nil> true [0xc820e8a070 0xc820e8a088 0xc820e8a0a0] [0xc820e8a070 0xc820e8a088 0xc820e8a0a0] [0xc820e8a080 0xc820e8a098] [0xa97590 0xa97590] 0xc8207e3680}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-12T11:37:45Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-m1jmn", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-m1jmn/services/redis-master", "uid":"6643cccb-c05f-11e6-8890-42010af00020", "resourceVersion":"474"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.99.241.150", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
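
In each of these nodePort failures the Service object handed to the jsonpath engine is "type":"ClusterIP" and its port carries no nodePort field, so {.spec.ports[0].nodePort} has nothing to resolve; the apply evidently left the service as ClusterIP when the test read it back. A minimal sketch of the failing query (service name and namespace copied from the log above; this reproduces against any ClusterIP service):

    # On a ClusterIP service the field is absent, so the same error reproduces:
    kubectl --namespace=e2e-tests-kubectl-m1jmn get service redis-master \
      -o jsonpath='{.spec.ports[0].nodePort}'
    # error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    # The field only exists once the service is of type NodePort:
    kubectl --namespace=e2e-tests-kubectl-m1jmn get service redis-master -o jsonpath='{.spec.type}'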

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:406
Dec 12 03:47:25.078: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465
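
These init-container tests assert that a failing init container blocks the app container and, with restartPolicy: Never, fails the pod outright; the timeouts here mean the pods never reached that terminal state. A minimal sketch of the pattern, assuming a cluster where spec.initContainers is a first-class field (the 1.3/1.5 clusters in this job still used the beta annotation) and using hypothetical names:

    # Failing init container on a RestartNever pod: the pod should end up Failed (Init:Error)
    # and the "app" container should never start.
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init-fails
        image: busybox
        command: ["/bin/false"]
      containers:
      - name: app
        image: busybox
        command: ["/bin/true"]
    EOF
    kubectl get pod init-fail-demo   # expect STATUS Init:Error, with the app container never started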

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc82159ff50>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-12 06:51:49 -0800 PST} FinishedAt:{Time:2016-12-12 06:51:59 -0800 PST} ContainerID:docker://98cea3194c7f2608cbe70fb47888592ec8d343e8de1e9b25eddbca50ed00232f}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-12 06:51:49 -0800 PST} FinishedAt:{Time:2016-12-12 06:51:59 -0800 PST} ContainerID:docker://98cea3194c7f2608cbe70fb47888592ec8d343e8de1e9b25eddbca50ed00232f}
not to have occurred

Issues about this test specifically: #30131 #31402
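
Here the different-node-wget pod is the client side of the granular connectivity check: it fetches a peer pod's IP on another node and is expected to exit 0, so exit code 1 after roughly 10 s means the cross-node request never succeeded. A rough reproduction, assuming a shell in any pod and placeholder values for the peer pod's IP and port (both hypothetical, not taken from the log):

    # From inside a pod on node A, fetch a pod IP scheduled on node B:
    wget -T 10 -qO- http://<peer-pod-ip>:<port>/
    echo $?   # 0 on success; 1 here, matching the terminated-with-failure status above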

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc821e157f0>: {
        s: "service verification failed for: 10.99.252.7\nexpected [service2-4qxx9 service2-m1wlf service2-qlcb0]\nreceived [service2-4qxx9 service2-m1wlf]",
    }
    service verification failed for: 10.99.252.7
    expected [service2-4qxx9 service2-m1wlf service2-qlcb0]
    received [service2-4qxx9 service2-m1wlf]
not to have occurred

Issues about this test specifically: #29514 #38288
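
The "service verification failed" message compares the set of backend hostnames seen through the service VIP against the expected pod names; after the kube-proxy restart only two of the three serve_hostname pods ever answered, so service2-qlcb0 is missing from the received list. A hedged sketch of that style of check, assuming a shell inside the cluster (port 80 is an assumption, not taken from the log):

    # Hit the service VIP repeatedly and collect the distinct hostnames that answer:
    for i in $(seq 1 30); do wget -T 5 -qO- http://10.99.252.7:80/; echo; done | sort -u
    # expected: service2-4qxx9 service2-m1wlf service2-qlcb0
    # received: service2-4qxx9 service2-m1wlf   (one endpoint never responded)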

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/37/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 12 13:15:34.877: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-fd4d0c3c-w9yu:
 container "runtime": expected RSS memory (MB) < 314572800; got 529969152
node gke-bootstrap-e2e-default-pool-fd4d0c3c-oyzo:
 container "runtime": expected RSS memory (MB) < 314572800; got 526295040
node gke-bootstrap-e2e-default-pool-fd4d0c3c-prny:
 container "runtime": expected RSS memory (MB) < 314572800; got 538980352

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
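
For scale: despite the "(MB)" label, these figures are bytes. The limit is 314572800 B = 300 MiB (314572800 / 1024^2), while the reported runtime RSS of roughly 515-540 million bytes works out to about 490-515 MiB per node, i.e. around 1.6-1.7x the 300 MiB budget allowed for the "runtime" container.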

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc8210ede10>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.28.75 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-0490s -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-0490s\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-0490s/services/redis-master\", \"uid\":\"22b7dce5-c0be-11e6-a4e2-42010af00031\", \"resourceVersion\":\"33601\", \"creationTimestamp\":\"2016-12-12T22:55:54Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.250.111\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc822d8b380 exit status 1 <nil> true [0xc820094090 0xc8200940b0 0xc8200940c8] [0xc820094090 0xc8200940b0 0xc8200940c8] [0xc8200940a0 0xc8200940c0] [0xa97590 0xa97590] 0xc821663920}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-0490s\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-0490s/services/redis-master\", \"uid\":\"22b7dce5-c0be-11e6-a4e2-42010af00031\", \"resourceVersion\":\"33601\", \"creationTimestamp\":\"2016-12-12T22:55:54Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.250.111\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.28.75 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-0490s -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-0490s", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-0490s/services/redis-master", "uid":"22b7dce5-c0be-11e6-a4e2-42010af00031", "resourceVersion":"33601", "creationTimestamp":"2016-12-12T22:55:54Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.250.111"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc822d8b380 exit status 1 <nil> true [0xc820094090 0xc8200940b0 0xc8200940c8] [0xc820094090 0xc8200940b0 0xc8200940c8] [0xc8200940a0 0xc8200940c0] [0xa97590 0xa97590] 0xc821663920}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-0490s", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-0490s/services/redis-master", "uid":"22b7dce5-c0be-11e6-a4e2-42010af00031", "resourceVersion":"33601", "creationTimestamp":"2016-12-12T22:55:54Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.250.111"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/38/

Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc8221bed20>: {
        s: "service verification failed for: 10.99.244.56\nexpected [service3-74nz7 service3-hzdjw service3-w8cv5]\nreceived [service3-hzdjw service3-w8cv5]",
    }
    service verification failed for: 10.99.244.56
    expected [service3-74nz7 service3-hzdjw service3-w8cv5]
    received [service3-hzdjw service3-w8cv5]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc82156f570>: {
        s: "failed to wait for pods responding: pod with UID 7cbd4501-c0cf-11e6-bc3f-42010af0001b is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-qcdc4/pods 3327} [{{ } {my-hostname-delete-node-8m26q my-hostname-delete-node- e2e-tests-resize-nodes-qcdc4 /api/v1/namespaces/e2e-tests-resize-nodes-qcdc4/pods/my-hostname-delete-node-8m26q 7cbd7ca3-c0cf-11e6-bc3f-42010af0001b 3054 0 {2016-12-12 17:00:06 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-qcdc4\",\"name\":\"my-hostname-delete-node\",\"uid\":\"7cbb3dfb-c0cf-11e6-bc3f-42010af0001b\",\"apiVersion\":\"v1\",\"resourceVersion\":\"3039\"}}\n] [{v1 ReplicationController my-hostname-delete-node 7cbb3dfb-c0cf-11e6-bc3f-42010af0001b 0xc820913db7}] []} {[{default-token-ggdd5 {<nil> <nil> <nil> <nil> <nil> 0xc8205e8c00 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-ggdd5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820913f20 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8c6ce2c0-iuao 0xc82115f080 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:00:06 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:00:08 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:00:06 -0800 PST}  }]   10.240.0.4 10.96.2.6 2016-12-12T17:00:06-08:00 [] [{my-hostname-delete-node {<nil> 0xc8210effa0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b5533b8c2ada2167b7fb7d4f2e1fb93585910ea02d4fe23e2b00d3ea42a3dc48}]}} {{ } {my-hostname-delete-node-k87rd my-hostname-delete-node- e2e-tests-resize-nodes-qcdc4 /api/v1/namespaces/e2e-tests-resize-nodes-qcdc4/pods/my-hostname-delete-node-k87rd ace0429a-c0cf-11e6-bc3f-42010af0001b 3176 0 {2016-12-12 17:01:27 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-qcdc4\",\"name\":\"my-hostname-delete-node\",\"uid\":\"7cbb3dfb-c0cf-11e6-bc3f-42010af0001b\",\"apiVersion\":\"v1\",\"resourceVersion\":\"3128\"}}\n] [{v1 ReplicationController my-hostname-delete-node 7cbb3dfb-c0cf-11e6-bc3f-42010af0001b 0xc820ea0257}] []} {[{default-token-ggdd5 {<nil> <nil> <nil> <nil> <nil> 0xc8205e8c60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-ggdd5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820ea05f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8c6ce2c0-iuao 0xc82115f140 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:01:27 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 
17:01:28 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:01:27 -0800 PST}  }]   10.240.0.4 10.96.2.8 2016-12-12T17:01:27-08:00 [] [{my-hostname-delete-node {<nil> 0xc8210effc0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://7b1b107925468672a388914168f17f450483a811db39106d9f7a50347a102759}]}} {{ } {my-hostname-delete-node-pr0f1 my-hostname-delete-node- e2e-tests-resize-nodes-qcdc4 /api/v1/namespaces/e2e-tests-resize-nodes-qcdc4/pods/my-hostname-delete-node-pr0f1 7cbdb5b3-c0cf-11e6-bc3f-42010af0001b 3056 0 {2016-12-12 17:00:06 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-qcdc4\",\"name\":\"my-hostname-delete-node\",\"uid\":\"7cbb3dfb-c0cf-11e6-bc3f-42010af0001b\",\"apiVersion\":\"v1\",\"resourceVersion\":\"3039\"}}\n] [{v1 ReplicationController my-hostname-delete-node 7cbb3dfb-c0cf-11e6-bc3f-42010af0001b 0xc820ea0b47}] []} {[{default-token-ggdd5 {<nil> <nil> <nil> <nil> <nil> 0xc8205e8cc0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-ggdd5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820ea0ce0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8c6ce2c0-hecm 0xc82115f200 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:00:06 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:00:08 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:00:06 -0800 PST}  }]   10.240.0.3 10.96.1.3 2016-12-12T17:00:06-08:00 [] [{my-hostname-delete-node {<nil> 0xc8210effe0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://15b9b2f713b93f52ce0610fc766909361501fc7851cf4d025eed152fb99bb625}]}}]}",
    }
    failed to wait for pods responding: pod with UID 7cbd4501-c0cf-11e6-bc3f-42010af0001b is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-qcdc4/pods 3327} [{{ } {my-hostname-delete-node-8m26q my-hostname-delete-node- e2e-tests-resize-nodes-qcdc4 /api/v1/namespaces/e2e-tests-resize-nodes-qcdc4/pods/my-hostname-delete-node-8m26q 7cbd7ca3-c0cf-11e6-bc3f-42010af0001b 3054 0 {2016-12-12 17:00:06 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-qcdc4","name":"my-hostname-delete-node","uid":"7cbb3dfb-c0cf-11e6-bc3f-42010af0001b","apiVersion":"v1","resourceVersion":"3039"}}
    ] [{v1 ReplicationController my-hostname-delete-node 7cbb3dfb-c0cf-11e6-bc3f-42010af0001b 0xc820913db7}] []} {[{default-token-ggdd5 {<nil> <nil> <nil> <nil> <nil> 0xc8205e8c00 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-ggdd5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820913f20 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8c6ce2c0-iuao 0xc82115f080 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:00:06 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:00:08 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:00:06 -0800 PST}  }]   10.240.0.4 10.96.2.6 2016-12-12T17:00:06-08:00 [] [{my-hostname-delete-node {<nil> 0xc8210effa0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b5533b8c2ada2167b7fb7d4f2e1fb93585910ea02d4fe23e2b00d3ea42a3dc48}]}} {{ } {my-hostname-delete-node-k87rd my-hostname-delete-node- e2e-tests-resize-nodes-qcdc4 /api/v1/namespaces/e2e-tests-resize-nodes-qcdc4/pods/my-hostname-delete-node-k87rd ace0429a-c0cf-11e6-bc3f-42010af0001b 3176 0 {2016-12-12 17:01:27 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-qcdc4","name":"my-hostname-delete-node","uid":"7cbb3dfb-c0cf-11e6-bc3f-42010af0001b","apiVersion":"v1","resourceVersion":"3128"}}
    ] [{v1 ReplicationController my-hostname-delete-node 7cbb3dfb-c0cf-11e6-bc3f-42010af0001b 0xc820ea0257}] []} {[{default-token-ggdd5 {<nil> <nil> <nil> <nil> <nil> 0xc8205e8c60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-ggdd5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820ea05f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8c6ce2c0-iuao 0xc82115f140 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:01:27 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:01:28 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:01:27 -0800 PST}  }]   10.240.0.4 10.96.2.8 2016-12-12T17:01:27-08:00 [] [{my-hostname-delete-node {<nil> 0xc8210effc0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://7b1b107925468672a388914168f17f450483a811db39106d9f7a50347a102759}]}} {{ } {my-hostname-delete-node-pr0f1 my-hostname-delete-node- e2e-tests-resize-nodes-qcdc4 /api/v1/namespaces/e2e-tests-resize-nodes-qcdc4/pods/my-hostname-delete-node-pr0f1 7cbdb5b3-c0cf-11e6-bc3f-42010af0001b 3056 0 {2016-12-12 17:00:06 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-qcdc4","name":"my-hostname-delete-node","uid":"7cbb3dfb-c0cf-11e6-bc3f-42010af0001b","apiVersion":"v1","resourceVersion":"3039"}}
    ] [{v1 ReplicationController my-hostname-delete-node 7cbb3dfb-c0cf-11e6-bc3f-42010af0001b 0xc820ea0b47}] []} {[{default-token-ggdd5 {<nil> <nil> <nil> <nil> <nil> 0xc8205e8cc0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-ggdd5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820ea0ce0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8c6ce2c0-hecm 0xc82115f200 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:00:06 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:00:08 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-12 17:00:06 -0800 PST}  }]   10.240.0.3 10.96.1.3 2016-12-12T17:00:06-08:00 [] [{my-hostname-delete-node {<nil> 0xc8210effe0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://15b9b2f713b93f52ce0610fc766909361501fc7851cf4d025eed152fb99bb625}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 12 22:46:20.265: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-8c6ce2c0-iuao:
 container "runtime": expected RSS memory (MB) < 314572800; got 514539520
node gke-bootstrap-e2e-default-pool-8c6ce2c0-l78f:
 container "runtime": expected RSS memory (MB) < 314572800; got 544788480
node gke-bootstrap-e2e-default-pool-8c6ce2c0-xf8b:
 container "runtime": expected RSS memory (MB) < 314572800; got 523132928

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
    <*errors.errorString | 0xc821f1a830>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27324 #35852 #35880

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc82156f390>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.203.219 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-xsctb -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-xsctb\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-xsctb/services/redis-master\", \"uid\":\"4e203eee-c0de-11e6-8864-42010af0001b\", \"resourceVersion\":\"14714\", \"creationTimestamp\":\"2016-12-13T02:46:10Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.248.127\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8209378c0 exit status 1 <nil> true [0xc820038420 0xc820038488 0xc8200384f0] [0xc820038420 0xc820038488 0xc8200384f0] [0xc820038470 0xc8200384b0] [0xa97590 0xa97590] 0xc820ed54a0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-xsctb\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-xsctb/services/redis-master\", \"uid\":\"4e203eee-c0de-11e6-8864-42010af0001b\", \"resourceVersion\":\"14714\", \"creationTimestamp\":\"2016-12-13T02:46:10Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.248.127\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.203.219 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-xsctb -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-xsctb", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-xsctb/services/redis-master", "uid":"4e203eee-c0de-11e6-8864-42010af0001b", "resourceVersion":"14714", "creationTimestamp":"2016-12-13T02:46:10Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.248.127"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8209378c0 exit status 1 <nil> true [0xc820038420 0xc820038488 0xc8200384f0] [0xc820038420 0xc820038488 0xc8200384f0] [0xc820038470 0xc8200384b0] [0xa97590 0xa97590] 0xc820ed54a0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-xsctb", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-xsctb/services/redis-master", "uid":"4e203eee-c0de-11e6-8864-42010af0001b", "resourceVersion":"14714", "creationTimestamp":"2016-12-13T02:46:10Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.248.127"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/39/

Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821e97e00>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.184.57 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-3sk4f -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-13T11:58:43Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-3sk4f\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-3sk4f/services/redis-master\", \"uid\":\"7e87f90a-c12b-11e6-b057-42010af0002a\", \"resourceVersion\":\"36342\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.250.172\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82311f380 exit status 1 <nil> true [0xc820192550 0xc820192610 0xc820192898] [0xc820192550 0xc820192610 0xc820192898] [0xc820192578 0xc820192888] [0xa97590 0xa97590] 0xc821a9a720}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-13T11:58:43Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-3sk4f\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-3sk4f/services/redis-master\", \"uid\":\"7e87f90a-c12b-11e6-b057-42010af0002a\", \"resourceVersion\":\"36342\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.250.172\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.184.57 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-3sk4f -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-13T11:58:43Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-3sk4f", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-3sk4f/services/redis-master", "uid":"7e87f90a-c12b-11e6-b057-42010af0002a", "resourceVersion":"36342"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.250.172", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82311f380 exit status 1 <nil> true [0xc820192550 0xc820192610 0xc820192898] [0xc820192550 0xc820192610 0xc820192898] [0xc820192578 0xc820192888] [0xa97590 0xa97590] 0xc821a9a720}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-13T11:58:43Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-3sk4f", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-3sk4f/services/redis-master", "uid":"7e87f90a-c12b-11e6-b057-42010af0002a", "resourceVersion":"36342"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.250.172", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 13 01:45:20.797: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-3c8fb8d7-dact:
 container "runtime": expected RSS memory (MB) < 314572800; got 517435392
node gke-bootstrap-e2e-default-pool-3c8fb8d7-q3nq:
 container "runtime": expected RSS memory (MB) < 314572800; got 524939264
node gke-bootstrap-e2e-default-pool-3c8fb8d7-vp2c:
 container "runtime": expected RSS memory (MB) < 314572800; got 521134080

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007df80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/40/

Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:725
Dec 13 12:04:46.237: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready

Issues about this test specifically: #26134

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc821c6a2e0>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-13 08:04:17 -0800 PST} FinishedAt:{Time:2016-12-13 08:04:27 -0800 PST} ContainerID:docker://56899e7ad343266d058662ce2db17270b81c645d4c88bd06103cad7416e8bb51}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-12-13 08:04:17 -0800 PST} FinishedAt:{Time:2016-12-13 08:04:27 -0800 PST} ContainerID:docker://56899e7ad343266d058662ce2db17270b81c645d4c88bd06103cad7416e8bb51}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
    <*errors.errorString | 0xc823598160>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 13 07:08:26.654: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-2638e86e-q989:
 container "runtime": expected RSS memory (MB) < 314572800; got 532414464
node gke-bootstrap-e2e-default-pool-2638e86e-v5bn:
 container "runtime": expected RSS memory (MB) < 314572800; got 524484608
node gke-bootstrap-e2e-default-pool-2638e86e-j9b7:
 container "runtime": expected RSS memory (MB) < 314572800; got 513622016

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
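
The resource-tracking failure above compares the "runtime" container's RSS on each node against a fixed threshold; despite the "(MB)" label in the message, the numbers are bytes (314572800 bytes is 300 MiB, and the observed values are roughly 500 MiB). A small sketch of that kind of per-node check, using made-up sample data rather than the e2e framework's types:

```go
package main

import "fmt"

// rssLimitBytes is the threshold quoted in the failure message:
// 314572800 bytes == 300 MiB, despite the "(MB)" label in the log line.
const rssLimitBytes = 300 * 1024 * 1024

func main() {
	// Hypothetical per-node RSS samples for the "runtime" container, in bytes.
	usage := map[string]uint64{
		"node-a": 532414464,
		"node-b": 524484608,
		"node-c": 210000000,
	}
	for node, rss := range usage {
		if rss > rssLimitBytes {
			fmt.Printf("node %s: container %q: expected RSS memory (MB) < %d; got %d\n",
				node, "runtime", uint64(rssLimitBytes), rss)
		}
	}
}
```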

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc8214c96f0>: {
        s: "failed to wait for pods responding: pod with UID 75159f60-c157-11e6-bd1f-42010af0001f is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-ssnk2/pods 26375} [{{ } {my-hostname-delete-node-175mk my-hostname-delete-node- e2e-tests-resize-nodes-ssnk2 /api/v1/namespaces/e2e-tests-resize-nodes-ssnk2/pods/my-hostname-delete-node-175mk 7516186e-c157-11e6-bd1f-42010af0001f 26066 0 {2016-12-13 09:13:25 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-ssnk2\",\"name\":\"my-hostname-delete-node\",\"uid\":\"7512303d-c157-11e6-bd1f-42010af0001f\",\"apiVersion\":\"v1\",\"resourceVersion\":\"26052\"}}\n] [{v1 ReplicationController my-hostname-delete-node 7512303d-c157-11e6-bd1f-42010af0001f 0xc8240bd657}] []} {[{default-token-qb6jc {<nil> <nil> <nil> <nil> <nil> 0xc82488d3e0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-qb6jc true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8240bd820 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-2638e86e-v5bn 0xc821dd0c80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:13:25 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:13:26 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:13:25 -0800 PST}  }]   10.240.0.3 10.96.1.3 2016-12-13T09:13:25-08:00 [] [{my-hostname-delete-node {<nil> 0xc821ab1d60 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://bd144921af59417c1cc7624bcf40807fe494fa47c35540687e279ab45fe50d4e}]}} {{ } {my-hostname-delete-node-gpkkt my-hostname-delete-node- e2e-tests-resize-nodes-ssnk2 /api/v1/namespaces/e2e-tests-resize-nodes-ssnk2/pods/my-hostname-delete-node-gpkkt 75165faa-c157-11e6-bd1f-42010af0001f 26068 0 {2016-12-13 09:13:25 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-ssnk2\",\"name\":\"my-hostname-delete-node\",\"uid\":\"7512303d-c157-11e6-bd1f-42010af0001f\",\"apiVersion\":\"v1\",\"resourceVersion\":\"26052\"}}\n] [{v1 ReplicationController my-hostname-delete-node 7512303d-c157-11e6-bd1f-42010af0001f 0xc8240bdad7}] []} {[{default-token-qb6jc {<nil> <nil> <nil> <nil> <nil> 0xc82488d440 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-qb6jc true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8240bdbd0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-2638e86e-q989 0xc821dd0d40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:13:25 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-13 09:13:26 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:13:25 -0800 PST}  }]   10.240.0.4 10.96.2.4 2016-12-13T09:13:25-08:00 [] [{my-hostname-delete-node {<nil> 0xc821ab1d80 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a955d369659f9845d880e1739a7118fa4d52368a697e19042e77c8d73af81116}]}} {{ } {my-hostname-delete-node-jn2gt my-hostname-delete-node- e2e-tests-resize-nodes-ssnk2 /api/v1/namespaces/e2e-tests-resize-nodes-ssnk2/pods/my-hostname-delete-node-jn2gt b6ae974e-c157-11e6-bd1f-42010af0001f 26225 0 {2016-12-13 09:15:15 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-ssnk2\",\"name\":\"my-hostname-delete-node\",\"uid\":\"7512303d-c157-11e6-bd1f-42010af0001f\",\"apiVersion\":\"v1\",\"resourceVersion\":\"26175\"}}\n] [{v1 ReplicationController my-hostname-delete-node 7512303d-c157-11e6-bd1f-42010af0001f 0xc8240bdf27}] []} {[{default-token-qb6jc {<nil> <nil> <nil> <nil> <nil> 0xc82488d4a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-qb6jc true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821cb0040 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-2638e86e-v5bn 0xc821dd0e00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:15:15 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:15:16 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:15:15 -0800 PST}  }]   10.240.0.3 10.96.1.5 2016-12-13T09:15:15-08:00 [] [{my-hostname-delete-node {<nil> 0xc821ab1da0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://c8249ecb5b4a5c57188dc4ecdb055d43f813c843f267aa2dbaf0ae17be23225c}]}}]}",
    }
    failed to wait for pods responding: pod with UID 75159f60-c157-11e6-bd1f-42010af0001f is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-ssnk2/pods 26375} [{{ } {my-hostname-delete-node-175mk my-hostname-delete-node- e2e-tests-resize-nodes-ssnk2 /api/v1/namespaces/e2e-tests-resize-nodes-ssnk2/pods/my-hostname-delete-node-175mk 7516186e-c157-11e6-bd1f-42010af0001f 26066 0 {2016-12-13 09:13:25 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-ssnk2","name":"my-hostname-delete-node","uid":"7512303d-c157-11e6-bd1f-42010af0001f","apiVersion":"v1","resourceVersion":"26052"}}
    ] [{v1 ReplicationController my-hostname-delete-node 7512303d-c157-11e6-bd1f-42010af0001f 0xc8240bd657}] []} {[{default-token-qb6jc {<nil> <nil> <nil> <nil> <nil> 0xc82488d3e0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-qb6jc true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8240bd820 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-2638e86e-v5bn 0xc821dd0c80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:13:25 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:13:26 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:13:25 -0800 PST}  }]   10.240.0.3 10.96.1.3 2016-12-13T09:13:25-08:00 [] [{my-hostname-delete-node {<nil> 0xc821ab1d60 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://bd144921af59417c1cc7624bcf40807fe494fa47c35540687e279ab45fe50d4e}]}} {{ } {my-hostname-delete-node-gpkkt my-hostname-delete-node- e2e-tests-resize-nodes-ssnk2 /api/v1/namespaces/e2e-tests-resize-nodes-ssnk2/pods/my-hostname-delete-node-gpkkt 75165faa-c157-11e6-bd1f-42010af0001f 26068 0 {2016-12-13 09:13:25 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-ssnk2","name":"my-hostname-delete-node","uid":"7512303d-c157-11e6-bd1f-42010af0001f","apiVersion":"v1","resourceVersion":"26052"}}
    ] [{v1 ReplicationController my-hostname-delete-node 7512303d-c157-11e6-bd1f-42010af0001f 0xc8240bdad7}] []} {[{default-token-qb6jc {<nil> <nil> <nil> <nil> <nil> 0xc82488d440 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-qb6jc true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8240bdbd0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-2638e86e-q989 0xc821dd0d40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:13:25 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:13:26 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:13:25 -0800 PST}  }]   10.240.0.4 10.96.2.4 2016-12-13T09:13:25-08:00 [] [{my-hostname-delete-node {<nil> 0xc821ab1d80 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a955d369659f9845d880e1739a7118fa4d52368a697e19042e77c8d73af81116}]}} {{ } {my-hostname-delete-node-jn2gt my-hostname-delete-node- e2e-tests-resize-nodes-ssnk2 /api/v1/namespaces/e2e-tests-resize-nodes-ssnk2/pods/my-hostname-delete-node-jn2gt b6ae974e-c157-11e6-bd1f-42010af0001f 26225 0 {2016-12-13 09:15:15 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-ssnk2","name":"my-hostname-delete-node","uid":"7512303d-c157-11e6-bd1f-42010af0001f","apiVersion":"v1","resourceVersion":"26175"}}
    ] [{v1 ReplicationController my-hostname-delete-node 7512303d-c157-11e6-bd1f-42010af0001f 0xc8240bdf27}] []} {[{default-token-qb6jc {<nil> <nil> <nil> <nil> <nil> 0xc82488d4a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-qb6jc true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821cb0040 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-2638e86e-v5bn 0xc821dd0e00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:15:15 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:15:16 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 09:15:15 -0800 PST}  }]   10.240.0.3 10.96.1.5 2016-12-13T09:15:15-08:00 [] [{my-hostname-delete-node {<nil> 0xc821ab1da0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://c8249ecb5b4a5c57188dc4ecdb055d43f813c843f267aa2dbaf0ae17be23225c}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
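
The resize-nodes failure above is the e2e helper noticing that a pod UID it recorded before the node deletion no longer appears in the replication controller's current pod list, hence "must have been restarted for some reason". A simplified sketch of that membership check (the types and names here are illustrative, not the test's own):

```go
package main

import "fmt"

// pod is a minimal stand-in for the fields the check cares about.
type pod struct {
	Name string
	UID  string
}

// missingUIDs returns the expected UIDs that no longer appear in the
// current pod list, i.e. pods that must have been restarted or replaced.
func missingUIDs(expected []string, current []pod) []string {
	present := make(map[string]bool, len(current))
	for _, p := range current {
		present[p.UID] = true
	}
	var missing []string
	for _, uid := range expected {
		if !present[uid] {
			missing = append(missing, uid)
		}
	}
	return missing
}

func main() {
	expected := []string{"uid-1", "uid-2", "uid-3"}
	current := []pod{
		{"my-hostname-delete-node-a", "uid-1"},
		{"my-hostname-delete-node-b", "uid-3"},
	}
	for _, uid := range missingUIDs(expected, current) {
		fmt.Printf("pod with UID %s is no longer a member of the replica set\n", uid)
	}
}
```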

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc821f35390>: {
        s: "service verification failed for: 10.99.246.154\nexpected [service2-44gpz service2-qbvqn service2-wfv6h]\nreceived [service2-qbvqn service2-wfv6h]",
    }
    service verification failed for: 10.99.246.154
    expected [service2-44gpz service2-qbvqn service2-wfv6h]
    received [service2-qbvqn service2-wfv6h]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298
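
The "service verification failed" errors come from a check that hits the service ClusterIP repeatedly and collects the hostnames returned by the serve_hostname pods, then compares the set of names seen against the expected endpoints; here one of the three backends never answered. A rough self-contained sketch of that idea (the URL, attempt count, and names are placeholders, not the e2e test's values):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sort"
	"time"
)

// hostnamesSeen hits the given service URL up to attempts times and returns
// the distinct response bodies, i.e. the pod names reported by serve_hostname.
func hostnamesSeen(url string, attempts int) []string {
	client := &http.Client{Timeout: 2 * time.Second}
	seen := map[string]bool{}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err != nil {
			continue // transient failures are tolerated; we only need coverage
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err == nil {
			seen[string(body)] = true
		}
	}
	names := make([]string, 0, len(seen))
	for n := range seen {
		names = append(names, n)
	}
	sort.Strings(names)
	return names
}

func main() {
	// Placeholder ClusterIP and endpoint names; in the e2e test these come
	// from the service and replication controller under test.
	received := hostnamesSeen("http://10.99.246.154:80", 50)
	expected := []string{"service2-44gpz", "service2-qbvqn", "service2-wfv6h"}
	if len(received) != len(expected) {
		fmt.Printf("service verification failed: expected %v, received %v\n", expected, received)
	}
}
```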

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc8211a7d00>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-qp0hd -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-qp0hd\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-qp0hd/services/redis-master\", \"uid\":\"9003088d-c14c-11e6-bd1f-42010af0001f\", \"resourceVersion\":\"15219\", \"creationTimestamp\":\"2016-12-13T15:55:25Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.248.194\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820a32ec0 exit status 1 <nil> true [0xc820db6680 0xc820db66a0 0xc820db66d8] [0xc820db6680 0xc820db66a0 0xc820db66d8] [0xc820db6698 0xc820db66b0] [0xa97590 0xa97590] 0xc820c0d140}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-qp0hd\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-qp0hd/services/redis-master\", \"uid\":\"9003088d-c14c-11e6-bd1f-42010af0001f\", \"resourceVersion\":\"15219\", \"creationTimestamp\":\"2016-12-13T15:55:25Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.248.194\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://35.184.36.127 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-qp0hd -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-qp0hd", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-qp0hd/services/redis-master", "uid":"9003088d-c14c-11e6-bd1f-42010af0001f", "resourceVersion":"15219", "creationTimestamp":"2016-12-13T15:55:25Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.248.194"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820a32ec0 exit status 1 <nil> true [0xc820db6680 0xc820db66a0 0xc820db66d8] [0xc820db6680 0xc820db66a0 0xc820db66d8] [0xc820db6698 0xc820db66b0] [0xa97590 0xa97590] 0xc820c0d140}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-qp0hd", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-qp0hd/services/redis-master", "uid":"9003088d-c14c-11e6-bd1f-42010af0001f", "resourceVersion":"15219", "creationTimestamp":"2016-12-13T15:55:25Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.248.194"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
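
The nodePort failures above are not a jsonpath problem as such: the dumped object shows the service has "type":"ClusterIP" after the apply, and a ClusterIP port entry carries no nodePort field, so {.spec.ports[0].nodePort} has nothing to resolve. A small sketch illustrating the missing key on a ClusterIP-style payload (the payload is trimmed down from the dump above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Trimmed-down stand-in for the service object dumped in the failure:
	// a ClusterIP service, whose port entries have no nodePort field.
	payload := []byte(`{
		"spec": {
			"type": "ClusterIP",
			"ports": [{"protocol": "TCP", "port": 6379, "targetPort": "redis-server"}]
		}
	}`)

	var svc struct {
		Spec struct {
			Type  string                   `json:"type"`
			Ports []map[string]interface{} `json:"ports"`
		} `json:"spec"`
	}
	if err := json.Unmarshal(payload, &svc); err != nil {
		panic(err)
	}

	// Mirrors what the jsonpath lookup {.spec.ports[0].nodePort} has to do.
	if _, ok := svc.Spec.Ports[0]["nodePort"]; !ok {
		fmt.Printf("type=%s: nodePort is not found\n", svc.Spec.Type)
	}
}
```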

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/41/

Multiple broken tests:

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:360
Expected
    <string>: 
to equal
    <string>: 7391845177666625874

Issues about this test specifically: #28010 #28427 #33997 #37952

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821256940>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.212.205 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-7s2gq -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-7s2gq\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-7s2gq/services/redis-master\", \"uid\":\"5df87af9-c18d-11e6-b6b1-42010af00027\", \"resourceVersion\":\"19509\", \"creationTimestamp\":\"2016-12-13T23:39:19Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.243.87\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8211d9f80 exit status 1 <nil> true [0xc821acc060 0xc821acc078 0xc821acc090] [0xc821acc060 0xc821acc078 0xc821acc090] [0xc821acc070 0xc821acc088] [0xa97590 0xa97590] 0xc823185c20}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-7s2gq\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-7s2gq/services/redis-master\", \"uid\":\"5df87af9-c18d-11e6-b6b1-42010af00027\", \"resourceVersion\":\"19509\", \"creationTimestamp\":\"2016-12-13T23:39:19Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.243.87\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.212.205 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-7s2gq -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-7s2gq", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-7s2gq/services/redis-master", "uid":"5df87af9-c18d-11e6-b6b1-42010af00027", "resourceVersion":"19509", "creationTimestamp":"2016-12-13T23:39:19Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.243.87", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8211d9f80 exit status 1 <nil> true [0xc821acc060 0xc821acc078 0xc821acc090] [0xc821acc060 0xc821acc078 0xc821acc090] [0xc821acc070 0xc821acc088] [0xa97590 0xa97590] 0xc823185c20}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-7s2gq", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-7s2gq/services/redis-master", "uid":"5df87af9-c18d-11e6-b6b1-42010af00027", "resourceVersion":"19509", "creationTimestamp":"2016-12-13T23:39:19Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.243.87", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc8210a95c0>: {
        s: "service verification failed for: 10.99.248.157\nexpected [service1-chcdd service1-cz8pb service1-tk3lj]\nreceived [service1-chcdd service1-tk3lj]",
    }
    service verification failed for: 10.99.248.157
    expected [service1-chcdd service1-cz8pb service1-tk3lj]
    received [service1-chcdd service1-tk3lj]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 13 14:08:03.794: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-557b383b-69j2:
 container "runtime": expected RSS memory (MB) < 314572800; got 518090752
node gke-bootstrap-e2e-default-pool-557b383b-7d74:
 container "runtime": expected RSS memory (MB) < 314572800; got 522440704
node gke-bootstrap-e2e-default-pool-557b383b-hof2:
 container "runtime": expected RSS memory (MB) < 314572800; got 529199104

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc820cc7a60>: {
        s: "failed to wait for pods responding: pod with UID f26b6185-c198-11e6-b6b1-42010af00027 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-4tgfc/pods 31608} [{{ } {my-hostname-delete-node-5lrk4 my-hostname-delete-node- e2e-tests-resize-nodes-4tgfc /api/v1/namespaces/e2e-tests-resize-nodes-4tgfc/pods/my-hostname-delete-node-5lrk4 f26b45cb-c198-11e6-b6b1-42010af00027 31263 0 {2016-12-13 17:02:12 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-4tgfc\",\"name\":\"my-hostname-delete-node\",\"uid\":\"f2688318-c198-11e6-b6b1-42010af00027\",\"apiVersion\":\"v1\",\"resourceVersion\":\"31248\"}}\n] [{v1 ReplicationController my-hostname-delete-node f2688318-c198-11e6-b6b1-42010af00027 0xc821ab3317}] []} {[{default-token-9mkfb {<nil> <nil> <nil> <nil> <nil> 0xc8218f34a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-9mkfb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821ab3410 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-557b383b-h6vx 0xc82314d000 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:02:12 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:02:13 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:02:12 -0800 PST}  }]   10.240.0.5 10.96.3.5 2016-12-13T17:02:12-08:00 [] [{my-hostname-delete-node {<nil> 0xc821a767c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://9c5b05a3353d1224fe6a4dce83a3be6940d632cbf2475574379ce0cfda0a1a4e}]}} {{ } {my-hostname-delete-node-h3hd9 my-hostname-delete-node- e2e-tests-resize-nodes-4tgfc /api/v1/namespaces/e2e-tests-resize-nodes-4tgfc/pods/my-hostname-delete-node-h3hd9 2f903d75-c199-11e6-b6b1-42010af00027 31452 0 {2016-12-13 17:03:55 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-4tgfc\",\"name\":\"my-hostname-delete-node\",\"uid\":\"f2688318-c198-11e6-b6b1-42010af00027\",\"apiVersion\":\"v1\",\"resourceVersion\":\"31348\"}}\n] [{v1 ReplicationController my-hostname-delete-node f2688318-c198-11e6-b6b1-42010af00027 0xc821ab36b7}] []} {[{default-token-9mkfb {<nil> <nil> <nil> <nil> <nil> 0xc8218f3500 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-9mkfb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821ab37c0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-557b383b-hof2 0xc82314d0c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:03:55 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-13 17:03:56 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:03:55 -0800 PST}  }]   10.240.0.3 10.96.1.3 2016-12-13T17:03:55-08:00 [] [{my-hostname-delete-node {<nil> 0xc821a767e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3608c21265b3b447d04f2ebe66436851b8e444428be7250dd41708ab25d9b650}]}} {{ } {my-hostname-delete-node-zq6kt my-hostname-delete-node- e2e-tests-resize-nodes-4tgfc /api/v1/namespaces/e2e-tests-resize-nodes-4tgfc/pods/my-hostname-delete-node-zq6kt f26b1df6-c198-11e6-b6b1-42010af00027 31261 0 {2016-12-13 17:02:12 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-4tgfc\",\"name\":\"my-hostname-delete-node\",\"uid\":\"f2688318-c198-11e6-b6b1-42010af00027\",\"apiVersion\":\"v1\",\"resourceVersion\":\"31248\"}}\n] [{v1 ReplicationController my-hostname-delete-node f2688318-c198-11e6-b6b1-42010af00027 0xc821ab3b87}] []} {[{default-token-9mkfb {<nil> <nil> <nil> <nil> <nil> 0xc8218f3560 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-9mkfb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821ab3cb0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-557b383b-h6vx 0xc82314d180 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:02:12 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:02:13 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:02:12 -0800 PST}  }]   10.240.0.5 10.96.3.2 2016-12-13T17:02:12-08:00 [] [{my-hostname-delete-node {<nil> 0xc821a76800 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f8cce335abf27d23598f2e68f320756cb6d586307ee2c72550c3b6f41dcc8b54}]}}]}",
    }
    failed to wait for pods responding: pod with UID f26b6185-c198-11e6-b6b1-42010af00027 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-4tgfc/pods 31608} [{{ } {my-hostname-delete-node-5lrk4 my-hostname-delete-node- e2e-tests-resize-nodes-4tgfc /api/v1/namespaces/e2e-tests-resize-nodes-4tgfc/pods/my-hostname-delete-node-5lrk4 f26b45cb-c198-11e6-b6b1-42010af00027 31263 0 {2016-12-13 17:02:12 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-4tgfc","name":"my-hostname-delete-node","uid":"f2688318-c198-11e6-b6b1-42010af00027","apiVersion":"v1","resourceVersion":"31248"}}
    ] [{v1 ReplicationController my-hostname-delete-node f2688318-c198-11e6-b6b1-42010af00027 0xc821ab3317}] []} {[{default-token-9mkfb {<nil> <nil> <nil> <nil> <nil> 0xc8218f34a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-9mkfb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821ab3410 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-557b383b-h6vx 0xc82314d000 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:02:12 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:02:13 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:02:12 -0800 PST}  }]   10.240.0.5 10.96.3.5 2016-12-13T17:02:12-08:00 [] [{my-hostname-delete-node {<nil> 0xc821a767c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://9c5b05a3353d1224fe6a4dce83a3be6940d632cbf2475574379ce0cfda0a1a4e}]}} {{ } {my-hostname-delete-node-h3hd9 my-hostname-delete-node- e2e-tests-resize-nodes-4tgfc /api/v1/namespaces/e2e-tests-resize-nodes-4tgfc/pods/my-hostname-delete-node-h3hd9 2f903d75-c199-11e6-b6b1-42010af00027 31452 0 {2016-12-13 17:03:55 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-4tgfc","name":"my-hostname-delete-node","uid":"f2688318-c198-11e6-b6b1-42010af00027","apiVersion":"v1","resourceVersion":"31348"}}
    ] [{v1 ReplicationController my-hostname-delete-node f2688318-c198-11e6-b6b1-42010af00027 0xc821ab36b7}] []} {[{default-token-9mkfb {<nil> <nil> <nil> <nil> <nil> 0xc8218f3500 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-9mkfb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821ab37c0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-557b383b-hof2 0xc82314d0c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:03:55 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:03:56 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:03:55 -0800 PST}  }]   10.240.0.3 10.96.1.3 2016-12-13T17:03:55-08:00 [] [{my-hostname-delete-node {<nil> 0xc821a767e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3608c21265b3b447d04f2ebe66436851b8e444428be7250dd41708ab25d9b650}]}} {{ } {my-hostname-delete-node-zq6kt my-hostname-delete-node- e2e-tests-resize-nodes-4tgfc /api/v1/namespaces/e2e-tests-resize-nodes-4tgfc/pods/my-hostname-delete-node-zq6kt f26b1df6-c198-11e6-b6b1-42010af00027 31261 0 {2016-12-13 17:02:12 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-4tgfc","name":"my-hostname-delete-node","uid":"f2688318-c198-11e6-b6b1-42010af00027","apiVersion":"v1","resourceVersion":"31248"}}
    ] [{v1 ReplicationController my-hostname-delete-node f2688318-c198-11e6-b6b1-42010af00027 0xc821ab3b87}] []} {[{default-token-9mkfb {<nil> <nil> <nil> <nil> <nil> 0xc8218f3560 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-9mkfb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821ab3cb0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-557b383b-h6vx 0xc82314d180 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:02:12 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:02:13 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 17:02:12 -0800 PST}  }]   10.240.0.5 10.96.3.2 2016-12-13T17:02:12-08:00 [] [{my-hostname-delete-node {<nil> 0xc821a76800 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f8cce335abf27d23598f2e68f320756cb6d586307ee2c72550c3b6f41dcc8b54}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/42/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 14 00:49:46.034: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-6997ca62-62kx:
 container "runtime": expected RSS memory (MB) < 314572800; got 511721472
node gke-bootstrap-e2e-default-pool-6997ca62-jdgf:
 container "runtime": expected RSS memory (MB) < 314572800; got 521109504
node gke-bootstrap-e2e-default-pool-6997ca62-kiwg:
 container "runtime": expected RSS memory (MB) < 314572800; got 538939392

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007df90>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc821895af0>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.210.133 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-w4274 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.246.34\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-w4274\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-w4274/services/redis-master\", \"uid\":\"a52d164b-c1cd-11e6-bf0b-42010af00014\", \"resourceVersion\":\"25778\", \"creationTimestamp\":\"2016-12-14T07:19:26Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820f40de0 exit status 1 <nil> true [0xc8200389f0 0xc820038a98 0xc820038ac0] [0xc8200389f0 0xc820038a98 0xc820038ac0] [0xc820038a70 0xc820038ab0] [0xa97590 0xa97590] 0xc82146fd40}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.246.34\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-w4274\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-w4274/services/redis-master\", \"uid\":\"a52d164b-c1cd-11e6-bf0b-42010af00014\", \"resourceVersion\":\"25778\", \"creationTimestamp\":\"2016-12-14T07:19:26Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.210.133 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-w4274 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.246.34", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-w4274", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-w4274/services/redis-master", "uid":"a52d164b-c1cd-11e6-bf0b-42010af00014", "resourceVersion":"25778", "creationTimestamp":"2016-12-14T07:19:26Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820f40de0 exit status 1 <nil> true [0xc8200389f0 0xc820038a98 0xc820038ac0] [0xc8200389f0 0xc820038a98 0xc820038ac0] [0xc820038a70 0xc820038ab0] [0xa97590 0xa97590] 0xc82146fd40}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.246.34", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-w4274", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-w4274/services/redis-master", "uid":"a52d164b-c1cd-11e6-bf0b-42010af00014", "resourceVersion":"25778", "creationTimestamp":"2016-12-14T07:19:26Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007df90>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007df90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007df90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc82187b550>: {
        s: "failed to wait for pods responding: pod with UID 784d2771-c1ce-11e6-bf0b-42010af00014 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-8rzwm/pods 27005} [{{ } {my-hostname-delete-node-brz21 my-hostname-delete-node- e2e-tests-resize-nodes-8rzwm /api/v1/namespaces/e2e-tests-resize-nodes-8rzwm/pods/my-hostname-delete-node-brz21 784aed10-c1ce-11e6-bf0b-42010af00014 26468 0 {2016-12-13 23:25:20 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-8rzwm\",\"name\":\"my-hostname-delete-node\",\"uid\":\"7848d9c6-c1ce-11e6-bf0b-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"26454\"}}\n] [{v1 ReplicationController my-hostname-delete-node 7848d9c6-c1ce-11e6-bf0b-42010af00014 0xc82183f6f7}] []} {[{default-token-0bg0c {<nil> <nil> <nil> <nil> <nil> 0xc821c465a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-0bg0c true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82183f7f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-6997ca62-jdgf 0xc82233b280 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:25:20 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:25:21 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:25:20 -0800 PST}  }]   10.240.0.4 10.96.2.3 2016-12-13T23:25:20-08:00 [] [{my-hostname-delete-node {<nil> 0xc820f58400 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://4f74945c1d3ac779c81c0b6dffe8c6047273b9ab374f872e5e18b31a2ac52876}]}} {{ } {my-hostname-delete-node-c9n66 my-hostname-delete-node- e2e-tests-resize-nodes-8rzwm /api/v1/namespaces/e2e-tests-resize-nodes-8rzwm/pods/my-hostname-delete-node-c9n66 a667291e-c1ce-11e6-bf0b-42010af00014 26847 0 {2016-12-13 23:26:38 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-8rzwm\",\"name\":\"my-hostname-delete-node\",\"uid\":\"7848d9c6-c1ce-11e6-bf0b-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"26789\"}}\n] [{v1 ReplicationController my-hostname-delete-node 7848d9c6-c1ce-11e6-bf0b-42010af00014 0xc82183fab7}] []} {[{default-token-0bg0c {<nil> <nil> <nil> <nil> <nil> 0xc821c46600 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-0bg0c true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82183fbb0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-6997ca62-kiwg 0xc82233b340 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:26:38 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2016-12-13 23:26:39 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:26:38 -0800 PST}  }]   10.240.0.5 10.96.3.5 2016-12-13T23:26:38-08:00 [] [{my-hostname-delete-node {<nil> 0xc820f58420 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://dc86af6f6a614259d13d938b5845ce10a86463435703342a89e5022fa4baa894}]}} {{ } {my-hostname-delete-node-fh5s4 my-hostname-delete-node- e2e-tests-resize-nodes-8rzwm /api/v1/namespaces/e2e-tests-resize-nodes-8rzwm/pods/my-hostname-delete-node-fh5s4 784ad346-c1ce-11e6-bf0b-42010af00014 26470 0 {2016-12-13 23:25:20 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-8rzwm\",\"name\":\"my-hostname-delete-node\",\"uid\":\"7848d9c6-c1ce-11e6-bf0b-42010af00014\",\"apiVersion\":\"v1\",\"resourceVersion\":\"26454\"}}\n] [{v1 ReplicationController my-hostname-delete-node 7848d9c6-c1ce-11e6-bf0b-42010af00014 0xc82183fee7}] []} {[{default-token-0bg0c {<nil> <nil> <nil> <nil> <nil> 0xc821c46660 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-0bg0c true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82187a100 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-6997ca62-kiwg 0xc82233b400 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:25:20 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:25:22 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:25:20 -0800 PST}  }]   10.240.0.5 10.96.3.3 2016-12-13T23:25:20-08:00 [] [{my-hostname-delete-node {<nil> 0xc820f58440 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3d674057efb1d90710dd286ccf9004115be113c9cafd22558e2c2c7cc8514401}]}}]}",
    }
    failed to wait for pods responding: pod with UID 784d2771-c1ce-11e6-bf0b-42010af00014 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-8rzwm/pods 27005} [{{ } {my-hostname-delete-node-brz21 my-hostname-delete-node- e2e-tests-resize-nodes-8rzwm /api/v1/namespaces/e2e-tests-resize-nodes-8rzwm/pods/my-hostname-delete-node-brz21 784aed10-c1ce-11e6-bf0b-42010af00014 26468 0 {2016-12-13 23:25:20 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-8rzwm","name":"my-hostname-delete-node","uid":"7848d9c6-c1ce-11e6-bf0b-42010af00014","apiVersion":"v1","resourceVersion":"26454"}}
    ] [{v1 ReplicationController my-hostname-delete-node 7848d9c6-c1ce-11e6-bf0b-42010af00014 0xc82183f6f7}] []} {[{default-token-0bg0c {<nil> <nil> <nil> <nil> <nil> 0xc821c465a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-0bg0c true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82183f7f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-6997ca62-jdgf 0xc82233b280 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:25:20 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:25:21 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:25:20 -0800 PST}  }]   10.240.0.4 10.96.2.3 2016-12-13T23:25:20-08:00 [] [{my-hostname-delete-node {<nil> 0xc820f58400 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://4f74945c1d3ac779c81c0b6dffe8c6047273b9ab374f872e5e18b31a2ac52876}]}} {{ } {my-hostname-delete-node-c9n66 my-hostname-delete-node- e2e-tests-resize-nodes-8rzwm /api/v1/namespaces/e2e-tests-resize-nodes-8rzwm/pods/my-hostname-delete-node-c9n66 a667291e-c1ce-11e6-bf0b-42010af00014 26847 0 {2016-12-13 23:26:38 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-8rzwm","name":"my-hostname-delete-node","uid":"7848d9c6-c1ce-11e6-bf0b-42010af00014","apiVersion":"v1","resourceVersion":"26789"}}
    ] [{v1 ReplicationController my-hostname-delete-node 7848d9c6-c1ce-11e6-bf0b-42010af00014 0xc82183fab7}] []} {[{default-token-0bg0c {<nil> <nil> <nil> <nil> <nil> 0xc821c46600 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-0bg0c true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82183fbb0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-6997ca62-kiwg 0xc82233b340 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:26:38 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:26:39 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:26:38 -0800 PST}  }]   10.240.0.5 10.96.3.5 2016-12-13T23:26:38-08:00 [] [{my-hostname-delete-node {<nil> 0xc820f58420 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://dc86af6f6a614259d13d938b5845ce10a86463435703342a89e5022fa4baa894}]}} {{ } {my-hostname-delete-node-fh5s4 my-hostname-delete-node- e2e-tests-resize-nodes-8rzwm /api/v1/namespaces/e2e-tests-resize-nodes-8rzwm/pods/my-hostname-delete-node-fh5s4 784ad346-c1ce-11e6-bf0b-42010af00014 26470 0 {2016-12-13 23:25:20 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-8rzwm","name":"my-hostname-delete-node","uid":"7848d9c6-c1ce-11e6-bf0b-42010af00014","apiVersion":"v1","resourceVersion":"26454"}}
    ] [{v1 ReplicationController my-hostname-delete-node 7848d9c6-c1ce-11e6-bf0b-42010af00014 0xc82183fee7}] []} {[{default-token-0bg0c {<nil> <nil> <nil> <nil> <nil> 0xc821c46660 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-0bg0c true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82187a100 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-6997ca62-kiwg 0xc82233b400 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:25:20 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:25:22 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2016-12-13 23:25:20 -0800 PST}  }]   10.240.0.5 10.96.3.3 2016-12-13T23:25:20-08:00 [] [{my-hostname-delete-node {<nil> 0xc820f58440 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3d674057efb1d90710dd286ccf9004115be113c9cafd22558e2c2c7cc8514401}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
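
For context on this failure mode: the resize-nodes test records the UIDs of the ReplicationController's original pods and, before waiting for them to respond, checks that every recorded UID still exists; it bails out with the message above as soon as one is missing (the pod was recreated with a new UID after the node was deleted). A minimal, hypothetical sketch of that membership check (UIDs made up), not the actual e2e code:

```go
package main

import (
	"fmt"
)

// checkReplicaSetMembership is a hypothetical stand-in for part of the e2e
// helper behind this failure: it compares the pod UIDs recorded when the
// ReplicationController was created against the pods that exist now, and
// returns an error as soon as a recorded UID is missing.
func checkReplicaSetMembership(recorded, current []string) error {
	live := map[string]bool{}
	for _, uid := range current {
		live[uid] = true
	}
	for _, uid := range recorded {
		if !live[uid] {
			return fmt.Errorf("pod with UID %s is no longer a member of the replica set. Must have been restarted for some reason", uid)
		}
	}
	return nil
}

func main() {
	// Simulate the failure: the pod with "uid-b" was recreated as "uid-new".
	recorded := []string{"uid-a", "uid-b", "uid-c"}
	current := []string{"uid-a", "uid-c", "uid-new"}
	fmt.Println(checkReplicaSetMembership(recorded, current))
}
```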

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/43/

Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
    <*errors.errorString | 0xc822a18750>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465
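
The init-container failures in this run all exercise the same ordering guarantee: init containers run to completion, in order, before any app container starts, and on a RestartNever pod a failed init container fails the whole pod. A toy model of that contract (all names below are made up for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// runPod models the init-container contract: every init step must succeed, in
// order, before any app step runs; a failed init step fails the whole pod on a
// RestartNever pod. This is an illustration, not kubelet code.
func runPod(initSteps, appSteps []func() error) error {
	for i, step := range initSteps {
		if err := step(); err != nil {
			// App containers are never started in this case.
			return fmt.Errorf("init container %d failed; pod fails (RestartNever): %w", i, err)
		}
	}
	for i, step := range appSteps {
		if err := step(); err != nil {
			return fmt.Errorf("app container %d failed: %w", i, err)
		}
	}
	return nil
}

func main() {
	ok := func() error { return nil }
	bad := func() error { return errors.New("exit 1") }

	// "should invoke init containers": both init steps succeed, the app runs.
	fmt.Println(runPod([]func() error{ok, ok}, []func() error{ok}))
	// "should not start app containers and fail the pod if init containers fail".
	fmt.Println(runPod([]func() error{ok, bad}, []func() error{ok}))
}
```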

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:471
Expected error:
    <*errors.errorString | 0xc820e62d50>: {
        s: "Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.61.69 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-5982p -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-5982p\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-5982p/services/redis-master\", \"uid\":\"4f2064ea-c1f6-11e6-955b-42010af00017\", \"resourceVersion\":\"12699\", \"creationTimestamp\":\"2016-12-14T12:10:31Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.242.233\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8208aef00 exit status 1 <nil> true [0xc8223c02e8 0xc8223c0300 0xc8223c0318] [0xc8223c02e8 0xc8223c0300 0xc8223c0318] [0xc8223c02f8 0xc8223c0310] [0xa97590 0xa97590] 0xc821042cc0}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-5982p\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-5982p/services/redis-master\", \"uid\":\"4f2064ea-c1f6-11e6-955b-42010af00017\", \"resourceVersion\":\"12699\", \"creationTimestamp\":\"2016-12-14T12:10:31Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.99.242.233\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
    }
    Error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.61.69 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-5982p -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-5982p", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-5982p/services/redis-master", "uid":"4f2064ea-c1f6-11e6-955b-42010af00017", "resourceVersion":"12699", "creationTimestamp":"2016-12-14T12:10:31Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.242.233"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8208aef00 exit status 1 <nil> true [0xc8223c02e8 0xc8223c0300 0xc8223c0318] [0xc8223c02e8 0xc8223c0300 0xc8223c0318] [0xc8223c02f8 0xc8223c0310] [0xa97590 0xa97590] 0xc821042cc0}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-5982p", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-5982p/services/redis-master", "uid":"4f2064ea-c1f6-11e6-955b-42010af00017", "resourceVersion":"12699", "creationTimestamp":"2016-12-14T12:10:31Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.99.242.233"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred

Issues about this test specifically: #28523 #35741 #37820
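
The Service dumped in this error is type ClusterIP, so its port entry carries no nodePort field and the jsonpath lookup `{.spec.ports[0].nodePort}` has nothing to resolve. The sketch below walks the same object shape by hand to show the miss (the JSON literal is trimmed from the dump above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A ClusterIP Service has no nodePort on its ports, so looking up
// spec.ports[0].nodePort fails, exactly as the jsonpath template did.
func main() {
	svc := []byte(`{
		"kind": "Service",
		"spec": {
			"type": "ClusterIP",
			"ports": [{"protocol": "TCP", "port": 6379, "targetPort": "redis-server"}]
		}
	}`)

	var obj map[string]interface{}
	if err := json.Unmarshal(svc, &obj); err != nil {
		panic(err)
	}

	spec := obj["spec"].(map[string]interface{})
	port := spec["ports"].([]interface{})[0].(map[string]interface{})

	if v, ok := port["nodePort"]; ok {
		fmt.Println("nodePort:", v)
	} else {
		// Matches the failure: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
		fmt.Println("nodePort is not found")
	}
}
```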

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200bf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec 14 04:47:16.445: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-7b69d09f-0dk1:
 container "runtime": expected RSS memory (MB) < 314572800; got 511168512
node gke-bootstrap-e2e-default-pool-7b69d09f-aydb:
 container "runtime": expected RSS memory (MB) < 314572800; got 532389888
node gke-bootstrap-e2e-default-pool-7b69d09f-q9fp:
 container "runtime": expected RSS memory (MB) < 314572800; got 524951552

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
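
Despite the "(MB)" label, both figures in these lines are raw bytes: 314572800 bytes is exactly a 300 MiB limit for the "runtime" container, and the observed RSS values are roughly 490-510 MiB. A quick conversion using the numbers from this run:

```go
package main

import "fmt"

// Convert the byte counts from the kubelet_perf failure into MiB to make the
// overshoot obvious: 314572800 B = 300 MiB limit vs ~490-510 MiB observed.
func main() {
	const limitBytes = 314572800
	observed := []int64{511168512, 532389888, 524951552} // from this run
	fmt.Printf("limit: %.0f MiB\n", float64(limitBytes)/(1<<20))
	for _, rss := range observed {
		fmt.Printf("got:   %.1f MiB (over by %.1f MiB)\n",
			float64(rss)/(1<<20), float64(rss-limitBytes)/(1<<20))
	}
}
```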

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/210/
Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc820be1720>: {
        s: "service verification failed for: 10.99.246.63\nexpected [service2-4zrj8 service2-wbzz1 service2-wl5bw]\nreceived [service2-4zrj8 service2-wl5bw]",
    }
    service verification failed for: 10.99.246.63
    expected [service2-4zrj8 service2-wbzz1 service2-wl5bw]
    received [service2-4zrj8 service2-wl5bw]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298
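
The verification step behind this message hits the service VIP repeatedly and expects every backing pod's hostname to show up at least once; here service2-wbzz1 never answered. A rough sketch of that loop, with the caveat that the URL below is a placeholder and the real check runs from a pod inside the cluster:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sort"
	"time"
)

// hostnamesSeen hits a serve_hostname-style service repeatedly and records
// every distinct hostname returned. The test expects all backend pod names to
// appear; a missing name means that endpoint never served a request.
func hostnamesSeen(url string, attempts int) []string {
	client := &http.Client{Timeout: 2 * time.Second}
	seen := map[string]bool{}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err != nil {
			continue // a missed attempt just means no backend answered this time
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		seen[string(body)] = true
	}
	names := make([]string, 0, len(seen))
	for n := range seen {
		names = append(names, n)
	}
	sort.Strings(names)
	return names
}

func main() {
	fmt.Println(hostnamesSeen("http://10.99.246.63:80", 30))
}
```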

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 22 02:18:41.182: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-5ffa320a-8vv4:
 container "runtime": expected RSS memory (MB) < 314572800; got 528617472
node gke-bootstrap-e2e-default-pool-5ffa320a-5qsx:
 container "runtime": expected RSS memory (MB) < 314572800; got 533319680
node gke-bootstrap-e2e-default-pool-5ffa320a-6nhl:
 container "runtime": expected RSS memory (MB) < 314572800; got 533831680

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc8209e4700>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-22 04:28:35 -0800 PST} FinishedAt:{Time:2017-01-22 04:28:45 -0800 PST} ContainerID:docker://e55b974465ed5263e6eceaeb305f3b4687b75ff4eb3c297406aec788e36b5903}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-22 04:28:35 -0800 PST} FinishedAt:{Time:2017-01-22 04:28:45 -0800 PST} ContainerID:docker://e55b974465ed5263e6eceaeb305f3b4687b75ff4eb3c297406aec788e36b5903}
not to have occurred

Issues about this test specifically: #30131 #31402
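
The different-node-wget pod is a short-lived probe that fetches a pod IP on the other node and exits 1 when it gets no answer, which is the ExitCode:1 captured above. A minimal equivalent probe; the target address below is hypothetical:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// A wget-style reachability probe: exit 0 if the remote pod answers within the
// timeout, exit 1 otherwise (mirroring the container status in the failure).
func main() {
	target := "http://10.96.2.3:8080" // hypothetical pod IP:port on the other node
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(target)
	if err != nil {
		fmt.Fprintln(os.Stderr, "probe failed:", err)
		os.Exit(1) // mirrors the container's ExitCode:1
	}
	resp.Body.Close()
	fmt.Println("probe succeeded:", resp.Status)
}
```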

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:420
Expected
    <string>: 
to equal
    <string>: 6290361066942119415

Issues about this test specifically: #26127 #28081
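
The PD tests write a random token to a file on the mounted disk, recreate the pod, and expect to read the same token back; an empty string, as in the failure above, means the file was gone or empty after the remount. A local sketch of that round trip, with a temp directory standing in for the PD mount:

```go
package main

import (
	"fmt"
	"math/rand"
	"os"
	"path/filepath"
)

// Write a random token to a file, then read it back and compare, the same
// shape of check the PD test performs across pod deletion and recreation.
func main() {
	mount := os.TempDir() // stand-in for the PD mount path inside the pod
	path := filepath.Join(mount, "pd-test-token")

	token := fmt.Sprintf("%d", rand.Int63())
	if err := os.WriteFile(path, []byte(token), 0644); err != nil {
		panic(err)
	}

	// ...in the real test the pod is deleted and recreated here...

	data, err := os.ReadFile(path)
	if err != nil || string(data) != token {
		fmt.Printf("Expected %q to equal %q (err=%v)\n", string(data), token, err)
		return
	}
	fmt.Println("token verified:", token)
}
```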

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:360
Expected
    <string>: 
to equal
    <string>: 2266906886570183545

Issues about this test specifically: #28010 #28427 #33997 #37952

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/211/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007bf80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007bf80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007bf80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007bf80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 22 09:45:55.532: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-6bec26d8-q1mh:
 container "runtime": expected RSS memory (MB) < 314572800; got 515194880
node gke-bootstrap-e2e-default-pool-6bec26d8-5fzt:
 container "runtime": expected RSS memory (MB) < 314572800; got 526045184
node gke-bootstrap-e2e-default-pool-6bec26d8-bm3k:
 container "runtime": expected RSS memory (MB) < 314572800; got 532172800

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/212/
Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 22 18:25:10.353: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-3adfd856-hdf5:
 container "runtime": expected RSS memory (MB) < 314572800; got 536489984
node gke-bootstrap-e2e-default-pool-3adfd856-pj27:
 container "runtime": expected RSS memory (MB) < 314572800; got 514007040
node gke-bootstrap-e2e-default-pool-3adfd856-wqrt:
 container "runtime": expected RSS memory (MB) < 314572800; got 524767232

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc8214954f0>: {
        s: "failed to wait for pods responding: pod with UID d36cb81a-e0fa-11e6-a4e1-42010af00003 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-6zsxr/pods 12041} [{{ } {my-hostname-delete-node-49l0h my-hostname-delete-node- e2e-tests-resize-nodes-6zsxr /api/v1/namespaces/e2e-tests-resize-nodes-6zsxr/pods/my-hostname-delete-node-49l0h d36cdc5b-e0fa-11e6-a4e1-42010af00003 11715 0 {2017-01-22 15:30:57 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-6zsxr\",\"name\":\"my-hostname-delete-node\",\"uid\":\"d36b51d4-e0fa-11e6-a4e1-42010af00003\",\"apiVersion\":\"v1\",\"resourceVersion\":\"11698\"}}\n] [{v1 ReplicationController my-hostname-delete-node d36b51d4-e0fa-11e6-a4e1-42010af00003 0xc821e0ef07}] []} {[{default-token-wjrtd {<nil> <nil> <nil> <nil> <nil> 0xc82145d500 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wjrtd true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821e0f010 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-3adfd856-hdf5 0xc821bd0c40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:30:57 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:30:58 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:30:57 -0800 PST}  }]   10.240.0.2 10.96.2.4 2017-01-22T15:30:57-08:00 [] [{my-hostname-delete-node {<nil> 0xc8218a3bc0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://9fd559854917ef3f3d50e589a10302658b6673dd5fe34557452694e39066cc35}]}} {{ } {my-hostname-delete-node-c61x2 my-hostname-delete-node- e2e-tests-resize-nodes-6zsxr /api/v1/namespaces/e2e-tests-resize-nodes-6zsxr/pods/my-hostname-delete-node-c61x2 1bb56161-e0fb-11e6-a4e1-42010af00003 11893 0 {2017-01-22 15:32:58 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-6zsxr\",\"name\":\"my-hostname-delete-node\",\"uid\":\"d36b51d4-e0fa-11e6-a4e1-42010af00003\",\"apiVersion\":\"v1\",\"resourceVersion\":\"11840\"}}\n] [{v1 ReplicationController my-hostname-delete-node d36b51d4-e0fa-11e6-a4e1-42010af00003 0xc821e0f2a7}] []} {[{default-token-wjrtd {<nil> <nil> <nil> <nil> <nil> 0xc82145d560 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wjrtd true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821e0f3a0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-3adfd856-040g 0xc821bd0d00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:32:58 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2017-01-22 15:32:59 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:32:58 -0800 PST}  }]   10.240.0.4 10.96.1.3 2017-01-22T15:32:58-08:00 [] [{my-hostname-delete-node {<nil> 0xc8218a3be0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://25b3c659a7790020efd0866b3c84423f91f4a6a2a640747f11571ed4366eef15}]}} {{ } {my-hostname-delete-node-vzr92 my-hostname-delete-node- e2e-tests-resize-nodes-6zsxr /api/v1/namespaces/e2e-tests-resize-nodes-6zsxr/pods/my-hostname-delete-node-vzr92 1bbc28c7-e0fb-11e6-a4e1-42010af00003 11896 0 {2017-01-22 15:32:58 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-6zsxr\",\"name\":\"my-hostname-delete-node\",\"uid\":\"d36b51d4-e0fa-11e6-a4e1-42010af00003\",\"apiVersion\":\"v1\",\"resourceVersion\":\"11882\"}}\n] [{v1 ReplicationController my-hostname-delete-node d36b51d4-e0fa-11e6-a4e1-42010af00003 0xc821e0f637}] []} {[{default-token-wjrtd {<nil> <nil> <nil> <nil> <nil> 0xc82145d5c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wjrtd true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821e0f730 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-3adfd856-hdf5 0xc821bd0dc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:32:58 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:32:59 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:32:58 -0800 PST}  }]   10.240.0.2 10.96.2.8 2017-01-22T15:32:58-08:00 [] [{my-hostname-delete-node {<nil> 0xc8218a3c00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://efe2c237565b3c266cec9ca8300d191885be82d3c01b710bb7c20441c3ab6a15}]}}]}",
    }
    failed to wait for pods responding: pod with UID d36cb81a-e0fa-11e6-a4e1-42010af00003 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-6zsxr/pods 12041} [{{ } {my-hostname-delete-node-49l0h my-hostname-delete-node- e2e-tests-resize-nodes-6zsxr /api/v1/namespaces/e2e-tests-resize-nodes-6zsxr/pods/my-hostname-delete-node-49l0h d36cdc5b-e0fa-11e6-a4e1-42010af00003 11715 0 {2017-01-22 15:30:57 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-6zsxr","name":"my-hostname-delete-node","uid":"d36b51d4-e0fa-11e6-a4e1-42010af00003","apiVersion":"v1","resourceVersion":"11698"}}
    ] [{v1 ReplicationController my-hostname-delete-node d36b51d4-e0fa-11e6-a4e1-42010af00003 0xc821e0ef07}] []} {[{default-token-wjrtd {<nil> <nil> <nil> <nil> <nil> 0xc82145d500 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wjrtd true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821e0f010 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-3adfd856-hdf5 0xc821bd0c40 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:30:57 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:30:58 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:30:57 -0800 PST}  }]   10.240.0.2 10.96.2.4 2017-01-22T15:30:57-08:00 [] [{my-hostname-delete-node {<nil> 0xc8218a3bc0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://9fd559854917ef3f3d50e589a10302658b6673dd5fe34557452694e39066cc35}]}} {{ } {my-hostname-delete-node-c61x2 my-hostname-delete-node- e2e-tests-resize-nodes-6zsxr /api/v1/namespaces/e2e-tests-resize-nodes-6zsxr/pods/my-hostname-delete-node-c61x2 1bb56161-e0fb-11e6-a4e1-42010af00003 11893 0 {2017-01-22 15:32:58 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-6zsxr","name":"my-hostname-delete-node","uid":"d36b51d4-e0fa-11e6-a4e1-42010af00003","apiVersion":"v1","resourceVersion":"11840"}}
    ] [{v1 ReplicationController my-hostname-delete-node d36b51d4-e0fa-11e6-a4e1-42010af00003 0xc821e0f2a7}] []} {[{default-token-wjrtd {<nil> <nil> <nil> <nil> <nil> 0xc82145d560 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wjrtd true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821e0f3a0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-3adfd856-040g 0xc821bd0d00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:32:58 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:32:59 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:32:58 -0800 PST}  }]   10.240.0.4 10.96.1.3 2017-01-22T15:32:58-08:00 [] [{my-hostname-delete-node {<nil> 0xc8218a3be0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://25b3c659a7790020efd0866b3c84423f91f4a6a2a640747f11571ed4366eef15}]}} {{ } {my-hostname-delete-node-vzr92 my-hostname-delete-node- e2e-tests-resize-nodes-6zsxr /api/v1/namespaces/e2e-tests-resize-nodes-6zsxr/pods/my-hostname-delete-node-vzr92 1bbc28c7-e0fb-11e6-a4e1-42010af00003 11896 0 {2017-01-22 15:32:58 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-6zsxr","name":"my-hostname-delete-node","uid":"d36b51d4-e0fa-11e6-a4e1-42010af00003","apiVersion":"v1","resourceVersion":"11882"}}
    ] [{v1 ReplicationController my-hostname-delete-node d36b51d4-e0fa-11e6-a4e1-42010af00003 0xc821e0f637}] []} {[{default-token-wjrtd {<nil> <nil> <nil> <nil> <nil> 0xc82145d5c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wjrtd true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821e0f730 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-3adfd856-hdf5 0xc821bd0dc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:32:58 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:32:59 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-22 15:32:58 -0800 PST}  }]   10.240.0.2 10.96.2.8 2017-01-22T15:32:58-08:00 [] [{my-hostname-delete-node {<nil> 0xc8218a3c00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://efe2c237565b3c266cec9ca8300d191885be82d3c01b710bb7c20441c3ab6a15}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200bd060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200bd060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc821dfde90>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-22 19:00:45 -0800 PST} FinishedAt:{Time:2017-01-22 19:00:55 -0800 PST} ContainerID:docker://d164e34a25a7bf0bae88f43a726b8c2f41922e9a57563a5b64a2155c945a6e7d}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-22 19:00:45 -0800 PST} FinishedAt:{Time:2017-01-22 19:00:55 -0800 PST} ContainerID:docker://d164e34a25a7bf0bae88f43a726b8c2f41922e9a57563a5b64a2155c945a6e7d}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200bd060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200bd060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc82131f0b0>: {
        s: "service verification failed for: 10.99.245.224\nexpected [service1-5sdfj service1-rt4fj service1-s8x52]\nreceived [service1-5sdfj service1-s8x52]",
    }
    service verification failed for: 10.99.245.224
    expected [service1-5sdfj service1-rt4fj service1-s8x52]
    received [service1-5sdfj service1-s8x52]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:281
Expected
    <bool>: false
to be true

Issues about this test specifically: #28371 #29604 #37496

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/213/
Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200e20b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
    <*errors.errorString | 0xc82118e5e0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc82215a350>: {
        s: "service verification failed for: 10.99.250.233\nexpected [service2-78822 service2-h5364 service2-hpffx]\nreceived [service2-h5364 service2-hpffx]",
    }
    service verification failed for: 10.99.250.233
    expected [service2-78822 service2-h5364 service2-hpffx]
    received [service2-h5364 service2-hpffx]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200e20b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc821980910>: {
        s: "failed to wait for pods responding: pod with UID d08b0b4f-e145-11e6-9df9-42010af00031 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-14rkg/pods 22496} [{{ } {my-hostname-delete-node-3dkbq my-hostname-delete-node- e2e-tests-resize-nodes-14rkg /api/v1/namespaces/e2e-tests-resize-nodes-14rkg/pods/my-hostname-delete-node-3dkbq 280fec43-e146-11e6-9df9-42010af00031 22342 0 {2017-01-23 00:30:11 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-14rkg\",\"name\":\"my-hostname-delete-node\",\"uid\":\"d088ed8c-e145-11e6-9df9-42010af00031\",\"apiVersion\":\"v1\",\"resourceVersion\":\"22287\"}}\n] [{v1 ReplicationController my-hostname-delete-node d088ed8c-e145-11e6-9df9-42010af00031 0xc82152f147}] []} {[{default-token-c5k94 {<nil> <nil> <nil> <nil> <nil> 0xc821bc3800 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-c5k94 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82152f250 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8c6ecd42-k13x 0xc8228d3040 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:30:11 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:30:12 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:30:11 -0800 PST}  }]   10.240.0.4 10.96.1.3 2017-01-23T00:30:11-08:00 [] [{my-hostname-delete-node {<nil> 0xc8220121c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://bcc66c1bc1a4de1a6b3606a6159c446ea79970e6e3466665004bec27a33ba004}]}} {{ } {my-hostname-delete-node-3sj7p my-hostname-delete-node- e2e-tests-resize-nodes-14rkg /api/v1/namespaces/e2e-tests-resize-nodes-14rkg/pods/my-hostname-delete-node-3sj7p d08ab0bd-e145-11e6-9df9-42010af00031 22124 0 {2017-01-23 00:27:44 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-14rkg\",\"name\":\"my-hostname-delete-node\",\"uid\":\"d088ed8c-e145-11e6-9df9-42010af00031\",\"apiVersion\":\"v1\",\"resourceVersion\":\"22111\"}}\n] [{v1 ReplicationController my-hostname-delete-node d088ed8c-e145-11e6-9df9-42010af00031 0xc82152f537}] []} {[{default-token-c5k94 {<nil> <nil> <nil> <nil> <nil> 0xc821bc3860 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-c5k94 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82152f640 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8c6ecd42-lrg0 0xc8228d3100 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:27:44 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2017-01-23 00:27:45 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:27:44 -0800 PST}  }]   10.240.0.3 10.96.2.3 2017-01-23T00:27:44-08:00 [] [{my-hostname-delete-node {<nil> 0xc8220121e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://09d5589cd1660250613db1df10369765487a5cc537c1a1142c139b0284eade5e}]}} {{ } {my-hostname-delete-node-81q3h my-hostname-delete-node- e2e-tests-resize-nodes-14rkg /api/v1/namespaces/e2e-tests-resize-nodes-14rkg/pods/my-hostname-delete-node-81q3h 2815bea6-e146-11e6-9df9-42010af00031 22344 0 {2017-01-23 00:30:11 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-14rkg\",\"name\":\"my-hostname-delete-node\",\"uid\":\"d088ed8c-e145-11e6-9df9-42010af00031\",\"apiVersion\":\"v1\",\"resourceVersion\":\"22326\"}}\n] [{v1 ReplicationController my-hostname-delete-node d088ed8c-e145-11e6-9df9-42010af00031 0xc82152f8d7}] []} {[{default-token-c5k94 {<nil> <nil> <nil> <nil> <nil> 0xc821bc38c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-c5k94 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82152f9d0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8c6ecd42-lrg0 0xc8228d31c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:30:11 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:30:13 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:30:11 -0800 PST}  }]   10.240.0.3 10.96.2.5 2017-01-23T00:30:11-08:00 [] [{my-hostname-delete-node {<nil> 0xc822012200 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://e038260a7161c476d52fe319e8d29d70c178e402875c0461c2c22c45a084ff5b}]}}]}",
    }
    failed to wait for pods responding: pod with UID d08b0b4f-e145-11e6-9df9-42010af00031 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-14rkg/pods 22496} [{{ } {my-hostname-delete-node-3dkbq my-hostname-delete-node- e2e-tests-resize-nodes-14rkg /api/v1/namespaces/e2e-tests-resize-nodes-14rkg/pods/my-hostname-delete-node-3dkbq 280fec43-e146-11e6-9df9-42010af00031 22342 0 {2017-01-23 00:30:11 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-14rkg","name":"my-hostname-delete-node","uid":"d088ed8c-e145-11e6-9df9-42010af00031","apiVersion":"v1","resourceVersion":"22287"}}
    ] [{v1 ReplicationController my-hostname-delete-node d088ed8c-e145-11e6-9df9-42010af00031 0xc82152f147}] []} {[{default-token-c5k94 {<nil> <nil> <nil> <nil> <nil> 0xc821bc3800 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-c5k94 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82152f250 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8c6ecd42-k13x 0xc8228d3040 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:30:11 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:30:12 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:30:11 -0800 PST}  }]   10.240.0.4 10.96.1.3 2017-01-23T00:30:11-08:00 [] [{my-hostname-delete-node {<nil> 0xc8220121c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://bcc66c1bc1a4de1a6b3606a6159c446ea79970e6e3466665004bec27a33ba004}]}} {{ } {my-hostname-delete-node-3sj7p my-hostname-delete-node- e2e-tests-resize-nodes-14rkg /api/v1/namespaces/e2e-tests-resize-nodes-14rkg/pods/my-hostname-delete-node-3sj7p d08ab0bd-e145-11e6-9df9-42010af00031 22124 0 {2017-01-23 00:27:44 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-14rkg","name":"my-hostname-delete-node","uid":"d088ed8c-e145-11e6-9df9-42010af00031","apiVersion":"v1","resourceVersion":"22111"}}
    ] [{v1 ReplicationController my-hostname-delete-node d088ed8c-e145-11e6-9df9-42010af00031 0xc82152f537}] []} {[{default-token-c5k94 {<nil> <nil> <nil> <nil> <nil> 0xc821bc3860 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-c5k94 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82152f640 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8c6ecd42-lrg0 0xc8228d3100 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:27:44 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:27:45 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:27:44 -0800 PST}  }]   10.240.0.3 10.96.2.3 2017-01-23T00:27:44-08:00 [] [{my-hostname-delete-node {<nil> 0xc8220121e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://09d5589cd1660250613db1df10369765487a5cc537c1a1142c139b0284eade5e}]}} {{ } {my-hostname-delete-node-81q3h my-hostname-delete-node- e2e-tests-resize-nodes-14rkg /api/v1/namespaces/e2e-tests-resize-nodes-14rkg/pods/my-hostname-delete-node-81q3h 2815bea6-e146-11e6-9df9-42010af00031 22344 0 {2017-01-23 00:30:11 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-14rkg","name":"my-hostname-delete-node","uid":"d088ed8c-e145-11e6-9df9-42010af00031","apiVersion":"v1","resourceVersion":"22326"}}
    ] [{v1 ReplicationController my-hostname-delete-node d088ed8c-e145-11e6-9df9-42010af00031 0xc82152f8d7}] []} {[{default-token-c5k94 {<nil> <nil> <nil> <nil> <nil> 0xc821bc38c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-c5k94 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82152f9d0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8c6ecd42-lrg0 0xc8228d31c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:30:11 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:30:13 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 00:30:11 -0800 PST}  }]   10.240.0.3 10.96.2.5 2017-01-23T00:30:11-08:00 [] [{my-hostname-delete-node {<nil> 0xc822012200 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://e038260a7161c476d52fe319e8d29d70c178e402875c0461c2c22c45a084ff5b}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 23 02:21:56.304: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-8c6ecd42-lrg0:
 container "runtime": expected RSS memory (MB) < 314572800; got 539676672
node gke-bootstrap-e2e-default-pool-8c6ecd42-k13x:
 container "runtime": expected RSS memory (MB) < 314572800; got 537395200
node gke-bootstrap-e2e-default-pool-8c6ecd42-ktq0:
 container "runtime": expected RSS memory (MB) < 314572800; got 525590528

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
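
A note on the numbers in the kubelet resource-tracking failures: despite the "(MB)" label, both the limit and the observed values appear to be raw byte counts. 314572800 bytes is exactly 300 MiB, and the reported readings of roughly 5.1–5.4 × 10^8 bytes correspond to about 485–520 MiB of "runtime" container RSS. A minimal sketch of that comparison (illustrative only, not the kubelet_perf.go code):

```go
package main

import "fmt"

const mib = 1 << 20 // 1 MiB in bytes

func main() {
	limitBytes := int64(314572800) // 300 MiB, printed with a misleading "(MB)" label
	gotBytes := int64(539676672)   // example "runtime" container RSS from the report

	fmt.Printf("limit: %d bytes (%.0f MiB)\n", limitBytes, float64(limitBytes)/mib)
	fmt.Printf("got:   %d bytes (%.0f MiB)\n", gotBytes, float64(gotBytes)/mib)
	if gotBytes >= limitBytes {
		fmt.Printf("container %q: expected RSS memory (MB) < %d; got %d\n",
			"runtime", limitBytes, gotBytes)
	}
}
```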

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200e20b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200e20b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391
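
The recurring "timed out waiting for the condition" message is the generic error the e2e suites surface when a polling wait gives up before its condition becomes true (in upstream Kubernetes it comes from the wait utility package). The sketch below is a self-contained approximation of that pattern, assuming a simple poll loop; it is not the actual job e2e helper.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errWaitTimeout mirrors the message seen throughout these reports.
var errWaitTimeout = errors.New("timed out waiting for the condition")

// pollUntil re-checks condition every interval until it returns true or
// the timeout elapses, roughly what the job tests do while waiting for
// the job to reach its expected failed state.
func pollUntil(interval, timeout time.Duration, condition func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		done, err := condition()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(interval)
	}
	return errWaitTimeout
}

func main() {
	// Illustrative condition that never becomes true, reproducing the error.
	err := pollUntil(10*time.Millisecond, 50*time.Millisecond, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err) // timed out waiting for the condition
}
```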

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

@k8s-github-robot

Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/214/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 23 06:04:42.652: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-fa7cce72-5g9j:
 container "runtime": expected RSS memory (MB) < 314572800; got 509063168
node gke-bootstrap-e2e-default-pool-fa7cce72-jv7p:
 container "runtime": expected RSS memory (MB) < 314572800; got 526860288
node gke-bootstrap-e2e-default-pool-fa7cce72-t00l:
 container "runtime": expected RSS memory (MB) < 314572800; got 529485824

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc82071e230>: {
        s: "failed to wait for pods responding: pod with UID 3e0915dc-e16c-11e6-b942-42010af0003a is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-7tlzm/pods 8495} [{{ } {my-hostname-delete-node-1z51z my-hostname-delete-node- e2e-tests-resize-nodes-7tlzm /api/v1/namespaces/e2e-tests-resize-nodes-7tlzm/pods/my-hostname-delete-node-1z51z 3e093d39-e16c-11e6-b942-42010af0003a 8162 0 {2017-01-23 05:02:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-7tlzm\",\"name\":\"my-hostname-delete-node\",\"uid\":\"3e075628-e16c-11e6-b942-42010af0003a\",\"apiVersion\":\"v1\",\"resourceVersion\":\"8144\"}}\n] [{v1 ReplicationController my-hostname-delete-node 3e075628-e16c-11e6-b942-42010af0003a 0xc821513747}] []} {[{default-token-1m618 {<nil> <nil> <nil> <nil> <nil> 0xc821d933b0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-1m618 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821513850 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-fa7cce72-jv7p 0xc821b3ccc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:02:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:02:51 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:02:49 -0800 PST}  }]   10.240.0.3 10.96.0.5 2017-01-23T05:02:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c3df00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://2710b9df5dcb712a9e8243a17d8c0566b9a174957bdd0f11bb0257ae422adf19}]}} {{ } {my-hostname-delete-node-f6qpl my-hostname-delete-node- e2e-tests-resize-nodes-7tlzm /api/v1/namespaces/e2e-tests-resize-nodes-7tlzm/pods/my-hostname-delete-node-f6qpl 3e08f6f2-e16c-11e6-b942-42010af0003a 8157 0 {2017-01-23 05:02:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-7tlzm\",\"name\":\"my-hostname-delete-node\",\"uid\":\"3e075628-e16c-11e6-b942-42010af0003a\",\"apiVersion\":\"v1\",\"resourceVersion\":\"8144\"}}\n] [{v1 ReplicationController my-hostname-delete-node 3e075628-e16c-11e6-b942-42010af0003a 0xc821513ae7}] []} {[{default-token-1m618 {<nil> <nil> <nil> <nil> <nil> 0xc821d93410 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-1m618 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821513c00 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-fa7cce72-t00l 0xc821b3cd80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:02:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 
05:02:50 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:02:49 -0800 PST}  }]   10.240.0.4 10.96.1.4 2017-01-23T05:02:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c3df20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://9cd43604a7aacf21baeeaee1471b2a6e249156141734b5709bf300620e5e7f61}]}} {{ } {my-hostname-delete-node-t7fhr my-hostname-delete-node- e2e-tests-resize-nodes-7tlzm /api/v1/namespaces/e2e-tests-resize-nodes-7tlzm/pods/my-hostname-delete-node-t7fhr 7e9fbe62-e16c-11e6-b942-42010af0003a 8339 0 {2017-01-23 05:04:37 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-7tlzm\",\"name\":\"my-hostname-delete-node\",\"uid\":\"3e075628-e16c-11e6-b942-42010af0003a\",\"apiVersion\":\"v1\",\"resourceVersion\":\"8239\"}}\n] [{v1 ReplicationController my-hostname-delete-node 3e075628-e16c-11e6-b942-42010af0003a 0xc821513ea7}] []} {[{default-token-1m618 {<nil> <nil> <nil> <nil> <nil> 0xc821d93470 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-1m618 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821513fa0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-fa7cce72-t00l 0xc821b3ce80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:04:37 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:04:38 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:04:37 -0800 PST}  }]   10.240.0.4 10.96.1.6 2017-01-23T05:04:37-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c3df40 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://dfe8f0910314555b034f9691802a0f2024a969ab3b579d23eb7a707275844a1c}]}}]}",
    }
    failed to wait for pods responding: pod with UID 3e0915dc-e16c-11e6-b942-42010af0003a is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-7tlzm/pods 8495} [{{ } {my-hostname-delete-node-1z51z my-hostname-delete-node- e2e-tests-resize-nodes-7tlzm /api/v1/namespaces/e2e-tests-resize-nodes-7tlzm/pods/my-hostname-delete-node-1z51z 3e093d39-e16c-11e6-b942-42010af0003a 8162 0 {2017-01-23 05:02:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-7tlzm","name":"my-hostname-delete-node","uid":"3e075628-e16c-11e6-b942-42010af0003a","apiVersion":"v1","resourceVersion":"8144"}}
    ] [{v1 ReplicationController my-hostname-delete-node 3e075628-e16c-11e6-b942-42010af0003a 0xc821513747}] []} {[{default-token-1m618 {<nil> <nil> <nil> <nil> <nil> 0xc821d933b0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-1m618 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821513850 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-fa7cce72-jv7p 0xc821b3ccc0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:02:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:02:51 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:02:49 -0800 PST}  }]   10.240.0.3 10.96.0.5 2017-01-23T05:02:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c3df00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://2710b9df5dcb712a9e8243a17d8c0566b9a174957bdd0f11bb0257ae422adf19}]}} {{ } {my-hostname-delete-node-f6qpl my-hostname-delete-node- e2e-tests-resize-nodes-7tlzm /api/v1/namespaces/e2e-tests-resize-nodes-7tlzm/pods/my-hostname-delete-node-f6qpl 3e08f6f2-e16c-11e6-b942-42010af0003a 8157 0 {2017-01-23 05:02:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-7tlzm","name":"my-hostname-delete-node","uid":"3e075628-e16c-11e6-b942-42010af0003a","apiVersion":"v1","resourceVersion":"8144"}}
    ] [{v1 ReplicationController my-hostname-delete-node 3e075628-e16c-11e6-b942-42010af0003a 0xc821513ae7}] []} {[{default-token-1m618 {<nil> <nil> <nil> <nil> <nil> 0xc821d93410 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-1m618 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821513c00 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-fa7cce72-t00l 0xc821b3cd80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:02:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:02:50 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:02:49 -0800 PST}  }]   10.240.0.4 10.96.1.4 2017-01-23T05:02:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c3df20 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://9cd43604a7aacf21baeeaee1471b2a6e249156141734b5709bf300620e5e7f61}]}} {{ } {my-hostname-delete-node-t7fhr my-hostname-delete-node- e2e-tests-resize-nodes-7tlzm /api/v1/namespaces/e2e-tests-resize-nodes-7tlzm/pods/my-hostname-delete-node-t7fhr 7e9fbe62-e16c-11e6-b942-42010af0003a 8339 0 {2017-01-23 05:04:37 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-7tlzm","name":"my-hostname-delete-node","uid":"3e075628-e16c-11e6-b942-42010af0003a","apiVersion":"v1","resourceVersion":"8239"}}
    ] [{v1 ReplicationController my-hostname-delete-node 3e075628-e16c-11e6-b942-42010af0003a 0xc821513ea7}] []} {[{default-token-1m618 {<nil> <nil> <nil> <nil> <nil> 0xc821d93470 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-1m618 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821513fa0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-fa7cce72-t00l 0xc821b3ce80 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:04:37 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:04:38 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 05:04:37 -0800 PST}  }]   10.240.0.4 10.96.1.6 2017-01-23T05:04:37-08:00 [] [{my-hostname-delete-node {<nil> 0xc821c3df40 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://dfe8f0910314555b034f9691802a0f2024a969ab3b579d23eb7a707275844a1c}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/215/
Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc8204dc850>: {
        s: "failed to wait for pods responding: pod with UID ad4206a3-e19d-11e6-bdf5-42010af0001b is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-7gt8f/pods 3937} [{{ } {my-hostname-delete-node-5dpq4 my-hostname-delete-node- e2e-tests-resize-nodes-7gt8f /api/v1/namespaces/e2e-tests-resize-nodes-7gt8f/pods/my-hostname-delete-node-5dpq4 ad424b4f-e19d-11e6-bdf5-42010af0001b 3754 0 {2017-01-23 10:56:41 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-7gt8f\",\"name\":\"my-hostname-delete-node\",\"uid\":\"ad404503-e19d-11e6-bdf5-42010af0001b\",\"apiVersion\":\"v1\",\"resourceVersion\":\"3739\"}}\n] [{v1 ReplicationController my-hostname-delete-node ad404503-e19d-11e6-bdf5-42010af0001b 0xc820576517}] []} {[{default-token-00vtr {<nil> <nil> <nil> <nil> <nil> 0xc820f658f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-00vtr true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820576680 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-24f39745-nrjb 0xc820806340 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:56:41 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:56:42 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:56:41 -0800 PST}  }]   10.240.0.4 10.96.1.4 2017-01-23T10:56:41-08:00 [] [{my-hostname-delete-node {<nil> 0xc820a642c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://0322c527502bd44bb489d278ba7632256f4607a8cc31bf38ba75b36d1b3e4342}]}} {{ } {my-hostname-delete-node-76h0c my-hostname-delete-node- e2e-tests-resize-nodes-7gt8f /api/v1/namespaces/e2e-tests-resize-nodes-7gt8f/pods/my-hostname-delete-node-76h0c ad427f44-e19d-11e6-bdf5-42010af0001b 3756 0 {2017-01-23 10:56:41 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-7gt8f\",\"name\":\"my-hostname-delete-node\",\"uid\":\"ad404503-e19d-11e6-bdf5-42010af0001b\",\"apiVersion\":\"v1\",\"resourceVersion\":\"3739\"}}\n] [{v1 ReplicationController my-hostname-delete-node ad404503-e19d-11e6-bdf5-42010af0001b 0xc8205769d7}] []} {[{default-token-00vtr {<nil> <nil> <nil> <nil> <nil> 0xc820f65950 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-00vtr true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820576ba0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-24f39745-bl23 0xc820806400 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:56:41 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 
10:56:43 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:56:41 -0800 PST}  }]   10.240.0.3 10.96.2.3 2017-01-23T10:56:41-08:00 [] [{my-hostname-delete-node {<nil> 0xc820a64300 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://986a8c85a64e2553e31d142e0137773558ff370cc9c68d07e9f4ee8428883935}]}} {{ } {my-hostname-delete-node-7jxh7 my-hostname-delete-node- e2e-tests-resize-nodes-7gt8f /api/v1/namespaces/e2e-tests-resize-nodes-7gt8f/pods/my-hostname-delete-node-7jxh7 f995563f-e19d-11e6-bdf5-42010af0001b 3936 0 {2017-01-23 10:58:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-7gt8f\",\"name\":\"my-hostname-delete-node\",\"uid\":\"ad404503-e19d-11e6-bdf5-42010af0001b\",\"apiVersion\":\"v1\",\"resourceVersion\":\"3882\"}}\n] [{v1 ReplicationController my-hostname-delete-node ad404503-e19d-11e6-bdf5-42010af0001b 0xc820576e47}] []} {[{default-token-00vtr {<nil> <nil> <nil> <nil> <nil> 0xc820f659b0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-00vtr true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820576f60 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-24f39745-nrjb 0xc8208064c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:58:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:58:50 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:58:49 -0800 PST}  }]   10.240.0.4 10.96.1.5 2017-01-23T10:58:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc820a64320 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://756bb74edc23de516119d19437de921576d752d9d8019a5ec04f643fb58e856c}]}}]}",
    }
    failed to wait for pods responding: pod with UID ad4206a3-e19d-11e6-bdf5-42010af0001b is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-7gt8f/pods 3937} [{{ } {my-hostname-delete-node-5dpq4 my-hostname-delete-node- e2e-tests-resize-nodes-7gt8f /api/v1/namespaces/e2e-tests-resize-nodes-7gt8f/pods/my-hostname-delete-node-5dpq4 ad424b4f-e19d-11e6-bdf5-42010af0001b 3754 0 {2017-01-23 10:56:41 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-7gt8f","name":"my-hostname-delete-node","uid":"ad404503-e19d-11e6-bdf5-42010af0001b","apiVersion":"v1","resourceVersion":"3739"}}
    ] [{v1 ReplicationController my-hostname-delete-node ad404503-e19d-11e6-bdf5-42010af0001b 0xc820576517}] []} {[{default-token-00vtr {<nil> <nil> <nil> <nil> <nil> 0xc820f658f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-00vtr true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820576680 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-24f39745-nrjb 0xc820806340 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:56:41 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:56:42 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:56:41 -0800 PST}  }]   10.240.0.4 10.96.1.4 2017-01-23T10:56:41-08:00 [] [{my-hostname-delete-node {<nil> 0xc820a642c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://0322c527502bd44bb489d278ba7632256f4607a8cc31bf38ba75b36d1b3e4342}]}} {{ } {my-hostname-delete-node-76h0c my-hostname-delete-node- e2e-tests-resize-nodes-7gt8f /api/v1/namespaces/e2e-tests-resize-nodes-7gt8f/pods/my-hostname-delete-node-76h0c ad427f44-e19d-11e6-bdf5-42010af0001b 3756 0 {2017-01-23 10:56:41 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-7gt8f","name":"my-hostname-delete-node","uid":"ad404503-e19d-11e6-bdf5-42010af0001b","apiVersion":"v1","resourceVersion":"3739"}}
    ] [{v1 ReplicationController my-hostname-delete-node ad404503-e19d-11e6-bdf5-42010af0001b 0xc8205769d7}] []} {[{default-token-00vtr {<nil> <nil> <nil> <nil> <nil> 0xc820f65950 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-00vtr true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820576ba0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-24f39745-bl23 0xc820806400 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:56:41 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:56:43 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:56:41 -0800 PST}  }]   10.240.0.3 10.96.2.3 2017-01-23T10:56:41-08:00 [] [{my-hostname-delete-node {<nil> 0xc820a64300 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://986a8c85a64e2553e31d142e0137773558ff370cc9c68d07e9f4ee8428883935}]}} {{ } {my-hostname-delete-node-7jxh7 my-hostname-delete-node- e2e-tests-resize-nodes-7gt8f /api/v1/namespaces/e2e-tests-resize-nodes-7gt8f/pods/my-hostname-delete-node-7jxh7 f995563f-e19d-11e6-bdf5-42010af0001b 3936 0 {2017-01-23 10:58:49 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-7gt8f","name":"my-hostname-delete-node","uid":"ad404503-e19d-11e6-bdf5-42010af0001b","apiVersion":"v1","resourceVersion":"3882"}}
    ] [{v1 ReplicationController my-hostname-delete-node ad404503-e19d-11e6-bdf5-42010af0001b 0xc820576e47}] []} {[{default-token-00vtr {<nil> <nil> <nil> <nil> <nil> 0xc820f659b0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-00vtr true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820576f60 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-24f39745-nrjb 0xc8208064c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:58:49 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:58:50 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-23 10:58:49 -0800 PST}  }]   10.240.0.4 10.96.1.5 2017-01-23T10:58:49-08:00 [] [{my-hostname-delete-node {<nil> 0xc820a64320 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://756bb74edc23de516119d19437de921576d752d9d8019a5ec04f643fb58e856c}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 23 15:50:22.826: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-24f39745-plj9:
 container "runtime": expected RSS memory (MB) < 314572800; got 509857792
node gke-bootstrap-e2e-default-pool-24f39745-nrjb:
 container "runtime": expected RSS memory (MB) < 314572800; got 531574784
node gke-bootstrap-e2e-default-pool-24f39745-pjdt:
 container "runtime": expected RSS memory (MB) < 314572800; got 544980992

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc8211497a0>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-23 15:05:47 -0800 PST} FinishedAt:{Time:2017-01-23 15:05:57 -0800 PST} ContainerID:docker://7ec07dc6901c330c7e4ef606ffd486305a1105e37c70fc5e695cc07eb846b956}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-23 15:05:47 -0800 PST} FinishedAt:{Time:2017-01-23 15:05:57 -0800 PST} ContainerID:docker://7ec07dc6901c330c7e4ef606ffd486305a1105e37c70fc5e695cc07eb846b956}
not to have occurred

Issues about this test specifically: #30131 #31402
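
In the granular networking check, the "different-node-wget" pod fetches content from a pod on another node and exits non-zero if the fetch fails; the ten-second gap between StartedAt and FinishedAt here is consistent with a request that never got a reply. Below is a minimal sketch of that kind of cross-node reachability probe, assuming an HTTP endpoint and a 10-second timeout; the target URL and port are placeholders, not the test's actual values.

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// Placeholder target; the real test fetches from a peer pod's IP and port.
	target := "http://10.96.2.3:8080/"

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(target)
	if err != nil {
		// An unreachable peer surfaces here and the pod exits 1, which the
		// e2e suite then reports as "terminated with failure".
		fmt.Fprintln(os.Stderr, "wget-style probe failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("reachable:", resp.Status)
}
```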

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:420
Expected
    <string>: 
to equal
    <string>: 8795027006630628250

Issues about this test specifically: #26127 #28081
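
The Pod Disks failure above asserts that a value written to the persistent disk by one pod is read back verbatim after the pod is deleted and recreated; an empty string on the "Expected" side means the read-back produced nothing. A minimal sketch of that write-then-verify pattern against a mounted path (the file path and value here are illustrative, not the test's):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Stand-in for the PD mount point used by the test pods.
	mount := os.TempDir()
	file := filepath.Join(mount, "pd-test-file")
	written := "8795027006630628250" // the test writes a random number as a string

	if err := os.WriteFile(file, []byte(written), 0644); err != nil {
		panic(err)
	}

	// In the real test the read happens from a recreated pod; an empty
	// read-back is what produces the `Expected <string>: "" to equal ...` failure.
	data, err := os.ReadFile(file)
	if err != nil {
		panic(err)
	}
	if string(data) != written {
		fmt.Printf("Expected %q to equal %q\n", string(data), written)
		os.Exit(1)
	}
	fmt.Println("read-back matches")
}
```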

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/216/
Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:420
Expected
    <string>: 
to equal
    <string>: 5570263462220471998

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc821991f50>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-23 21:47:56 -0800 PST} FinishedAt:{Time:2017-01-23 21:48:06 -0800 PST} ContainerID:docker://af45f8d9602393566f97d48640dc4575cfb8939c24716f64b07879ec62a89c0a}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-23 21:47:56 -0800 PST} FinishedAt:{Time:2017-01-23 21:48:06 -0800 PST} ContainerID:docker://af45f8d9602393566f97d48640dc4575cfb8939c24716f64b07879ec62a89c0a}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 23 18:26:05.662: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-9e93bb80-mszp:
 container "runtime": expected RSS memory (MB) < 314572800; got 522530816
node gke-bootstrap-e2e-default-pool-9e93bb80-ps5q:
 container "runtime": expected RSS memory (MB) < 314572800; got 525357056
node gke-bootstrap-e2e-default-pool-9e93bb80-0c2g:
 container "runtime": expected RSS memory (MB) < 314572800; got 515612672

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/217/
Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
    <*errors.errorString | 0xc82359a250>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 24 05:43:40.697: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-cbb91bb6-7tc5:
 container "runtime": expected RSS memory (MB) < 314572800; got 519090176
node gke-bootstrap-e2e-default-pool-cbb91bb6-hdcj:
 container "runtime": expected RSS memory (MB) < 314572800; got 532619264
node gke-bootstrap-e2e-default-pool-cbb91bb6-q662:
 container "runtime": expected RSS memory (MB) < 314572800; got 534450176

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/218/
Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc822679c70>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-24 12:45:04 -0800 PST} FinishedAt:{Time:2017-01-24 12:45:14 -0800 PST} ContainerID:docker://a3ad5b48f9962f7d195e7327faf5c01c0fa99aaa84600042218c00e2f835dd07}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-24 12:45:04 -0800 PST} FinishedAt:{Time:2017-01-24 12:45:14 -0800 PST} ContainerID:docker://a3ad5b48f9962f7d195e7327faf5c01c0fa99aaa84600042218c00e2f835dd07}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc820263920>: {
        s: "failed to wait for pods responding: pod with UID fa437460-e249-11e6-a332-42010af00007 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-1tf99/pods 7763} [{{ } {my-hostname-delete-node-65h2x my-hostname-delete-node- e2e-tests-resize-nodes-1tf99 /api/v1/namespaces/e2e-tests-resize-nodes-1tf99/pods/my-hostname-delete-node-65h2x fa4382fe-e249-11e6-a332-42010af00007 7445 0 {2017-01-24 07:30:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-1tf99\",\"name\":\"my-hostname-delete-node\",\"uid\":\"fa41d146-e249-11e6-a332-42010af00007\",\"apiVersion\":\"v1\",\"resourceVersion\":\"7428\"}}\n] [{v1 ReplicationController my-hostname-delete-node fa41d146-e249-11e6-a332-42010af00007 0xc8220eb1a7}] []} {[{default-token-wnctj {<nil> <nil> <nil> <nil> <nil> 0xc8208977a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wnctj true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8220eb2a0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-03d7c0e2-wj15 0xc822285000 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:30:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:30:04 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:30:04 -0800 PST}  }]   10.240.0.3 10.96.2.4 2017-01-24T07:30:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc82248a140 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://7f6918944ec3b9471df902f5feb21c313ad1fe0a4e878f0e20257a304a067b71}]}} {{ } {my-hostname-delete-node-gnq66 my-hostname-delete-node- e2e-tests-resize-nodes-1tf99 /api/v1/namespaces/e2e-tests-resize-nodes-1tf99/pods/my-hostname-delete-node-gnq66 3243b7ff-e24a-11e6-a332-42010af00007 7609 0 {2017-01-24 07:31:38 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-1tf99\",\"name\":\"my-hostname-delete-node\",\"uid\":\"fa41d146-e249-11e6-a332-42010af00007\",\"apiVersion\":\"v1\",\"resourceVersion\":\"7527\"}}\n] [{v1 ReplicationController my-hostname-delete-node fa41d146-e249-11e6-a332-42010af00007 0xc8220eb537}] []} {[{default-token-wnctj {<nil> <nil> <nil> <nil> <nil> 0xc820897800 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wnctj true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8220eb630 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-03d7c0e2-m1gx 0xc8222850c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:31:38 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 
07:31:38 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:31:38 -0800 PST}  }]   10.240.0.4 10.96.1.3 2017-01-24T07:31:38-08:00 [] [{my-hostname-delete-node {<nil> 0xc82248a160 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a43c5593bc59141f2dbfae82c3cc291917c2351a755e924ed946cd35b2f3cb1c}]}} {{ } {my-hostname-delete-node-zjq2k my-hostname-delete-node- e2e-tests-resize-nodes-1tf99 /api/v1/namespaces/e2e-tests-resize-nodes-1tf99/pods/my-hostname-delete-node-zjq2k fa4353d1-e249-11e6-a332-42010af00007 7443 0 {2017-01-24 07:30:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-1tf99\",\"name\":\"my-hostname-delete-node\",\"uid\":\"fa41d146-e249-11e6-a332-42010af00007\",\"apiVersion\":\"v1\",\"resourceVersion\":\"7428\"}}\n] [{v1 ReplicationController my-hostname-delete-node fa41d146-e249-11e6-a332-42010af00007 0xc8220eb8d7}] []} {[{default-token-wnctj {<nil> <nil> <nil> <nil> <nil> 0xc820897860 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wnctj true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8220eb9d0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-03d7c0e2-wj15 0xc822285180 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:30:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:30:04 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:30:04 -0800 PST}  }]   10.240.0.3 10.96.2.3 2017-01-24T07:30:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc82248a180 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://6168e117e10803431dfa8e06dc38e80c6588b1c6d377ea579dffa955f6bbcd46}]}}]}",
    }
    failed to wait for pods responding: pod with UID fa437460-e249-11e6-a332-42010af00007 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-1tf99/pods 7763} [{{ } {my-hostname-delete-node-65h2x my-hostname-delete-node- e2e-tests-resize-nodes-1tf99 /api/v1/namespaces/e2e-tests-resize-nodes-1tf99/pods/my-hostname-delete-node-65h2x fa4382fe-e249-11e6-a332-42010af00007 7445 0 {2017-01-24 07:30:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-1tf99","name":"my-hostname-delete-node","uid":"fa41d146-e249-11e6-a332-42010af00007","apiVersion":"v1","resourceVersion":"7428"}}
    ] [{v1 ReplicationController my-hostname-delete-node fa41d146-e249-11e6-a332-42010af00007 0xc8220eb1a7}] []} {[{default-token-wnctj {<nil> <nil> <nil> <nil> <nil> 0xc8208977a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wnctj true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8220eb2a0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-03d7c0e2-wj15 0xc822285000 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:30:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:30:04 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:30:04 -0800 PST}  }]   10.240.0.3 10.96.2.4 2017-01-24T07:30:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc82248a140 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://7f6918944ec3b9471df902f5feb21c313ad1fe0a4e878f0e20257a304a067b71}]}} {{ } {my-hostname-delete-node-gnq66 my-hostname-delete-node- e2e-tests-resize-nodes-1tf99 /api/v1/namespaces/e2e-tests-resize-nodes-1tf99/pods/my-hostname-delete-node-gnq66 3243b7ff-e24a-11e6-a332-42010af00007 7609 0 {2017-01-24 07:31:38 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-1tf99","name":"my-hostname-delete-node","uid":"fa41d146-e249-11e6-a332-42010af00007","apiVersion":"v1","resourceVersion":"7527"}}
    ] [{v1 ReplicationController my-hostname-delete-node fa41d146-e249-11e6-a332-42010af00007 0xc8220eb537}] []} {[{default-token-wnctj {<nil> <nil> <nil> <nil> <nil> 0xc820897800 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wnctj true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8220eb630 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-03d7c0e2-m1gx 0xc8222850c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:31:38 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:31:38 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:31:38 -0800 PST}  }]   10.240.0.4 10.96.1.3 2017-01-24T07:31:38-08:00 [] [{my-hostname-delete-node {<nil> 0xc82248a160 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a43c5593bc59141f2dbfae82c3cc291917c2351a755e924ed946cd35b2f3cb1c}]}} {{ } {my-hostname-delete-node-zjq2k my-hostname-delete-node- e2e-tests-resize-nodes-1tf99 /api/v1/namespaces/e2e-tests-resize-nodes-1tf99/pods/my-hostname-delete-node-zjq2k fa4353d1-e249-11e6-a332-42010af00007 7443 0 {2017-01-24 07:30:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-1tf99","name":"my-hostname-delete-node","uid":"fa41d146-e249-11e6-a332-42010af00007","apiVersion":"v1","resourceVersion":"7428"}}
    ] [{v1 ReplicationController my-hostname-delete-node fa41d146-e249-11e6-a332-42010af00007 0xc8220eb8d7}] []} {[{default-token-wnctj {<nil> <nil> <nil> <nil> <nil> 0xc820897860 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wnctj true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8220eb9d0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-03d7c0e2-wj15 0xc822285180 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:30:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:30:04 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 07:30:04 -0800 PST}  }]   10.240.0.3 10.96.2.3 2017-01-24T07:30:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc82248a180 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://6168e117e10803431dfa8e06dc38e80c6588b1c6d377ea579dffa955f6bbcd46}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200b3060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 24 11:38:59.241: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-03d7c0e2-m1gx:
 container "runtime": expected RSS memory (MB) < 314572800; got 540827648
node gke-bootstrap-e2e-default-pool-03d7c0e2-wj15:
 container "runtime": expected RSS memory (MB) < 314572800; got 533340160
node gke-bootstrap-e2e-default-pool-03d7c0e2-c6lm:
 container "runtime": expected RSS memory (MB) < 314572800; got 521564160

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
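
A note on the recurring kubelet resource-tracking failures: despite the "(MB)" label in the message, both numbers are raw bytes, and 314572800 B is exactly 300 MiB, so the docker "runtime" container is sitting well above the limit on every node. A minimal Go sketch of the comparison being made (the helper name is illustrative, not the e2e code itself):

```go
package main

import "fmt"

// The test's per-node limit for the "runtime" container, in bytes
// (314572800 B = 300 MiB), even though the failure message says "(MB)".
const runtimeRSSLimitBytes = 300 * 1024 * 1024

// checkRSS mirrors the comparison behind the failure message above.
// Illustrative helper only, not the kubelet_perf code.
func checkRSS(node string, gotBytes uint64) error {
	if gotBytes < runtimeRSSLimitBytes {
		return nil
	}
	return fmt.Errorf("node %s:\n container %q: expected RSS memory (MB) < %d; got %d",
		node, "runtime", runtimeRSSLimitBytes, gotBytes)
}

func main() {
	// Value taken from the run above (~516 MiB).
	fmt.Println(checkRSS("gke-bootstrap-e2e-default-pool-03d7c0e2-m1gx", 540827648))
}
```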

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/219/
Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:281
Expected
    <bool>: false
to be true

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 24 14:08:59.427: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-b6bc5109-w84c:
 container "runtime": expected RSS memory (MB) < 314572800; got 529514496
node gke-bootstrap-e2e-default-pool-b6bc5109-b7sq:
 container "runtime": expected RSS memory (MB) < 314572800; got 525287424
node gke-bootstrap-e2e-default-pool-b6bc5109-mpzp:
 container "runtime": expected RSS memory (MB) < 314572800; got 513982464

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070
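
"timed out waiting for the condition" is the generic error a polling wait returns when its condition function never reports success before the deadline; every Job/V1Job and init-container failure in these runs is that same wait giving up. A stdlib-only sketch of the pattern (not the actual wait utility the suite uses):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errWaitTimeout mirrors the message seen in the failures above.
var errWaitTimeout = errors.New("timed out waiting for the condition")

// pollUntil keeps calling cond every interval until it returns true or the
// timeout elapses. Stdlib sketch of the pattern, not the e2e helper.
func pollUntil(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		done, err := cond()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errWaitTimeout
		}
		time.Sleep(interval)
	}
}

func main() {
	// A condition that never becomes true surfaces exactly this error.
	err := pollUntil(10*time.Millisecond, 50*time.Millisecond,
		func() (bool, error) { return false, nil })
	fmt.Println(err) // timed out waiting for the condition
}
```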

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/220/
Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 25 02:06:12.421: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-9b8d32c7-dkdj:
 container "runtime": expected RSS memory (MB) < 314572800; got 528723968
node gke-bootstrap-e2e-default-pool-9b8d32c7-k7sm:
 container "runtime": expected RSS memory (MB) < 314572800; got 521367552
node gke-bootstrap-e2e-default-pool-9b8d32c7-rlqs:
 container "runtime": expected RSS memory (MB) < 314572800; got 528392192

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Jan 24 21:50:36.232: timeout waiting 15m0s for pods size to be 3

Issues about this test specifically: #28657 #30519 #33878
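
For reference when triaging the HPA flakes: the autoscaler roughly recomputes the replica count as ceil(currentReplicas × observedUtilization / targetUtilization), so once the generated load is removed the count should step down toward the 3 pods the test waits 15m for. A small illustrative sketch (not the controller's code):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies, roughly, the CPU-utilization scaling rule the HPA
// uses: scale the current replica count by observed/target utilization and
// round up. Illustration only, not the controller's exact logic.
func desiredReplicas(current int, observedUtilization, targetUtilization float64) int {
	return int(math.Ceil(float64(current) * observedUtilization / targetUtilization))
}

func main() {
	// With utilization well under target, 5 replicas should shrink to 3,
	// and 3 should shrink to 1, which is the progression this test expects.
	fmt.Println(desiredReplicas(5, 25, 50)) // 3
	fmt.Println(desiredReplicas(3, 10, 50)) // 1
}
```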

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc820e480f0>: {
        s: "service verification failed for: 10.99.242.3\nexpected [service1-b6np8 service1-ccspk service1-g9g0d]\nreceived [service1-g9g0d wget: download timed out]",
    }
    service verification failed for: 10.99.242.3
    expected [service1-b6np8 service1-ccspk service1-g9g0d]
    received [service1-g9g0d wget: download timed out]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc820efc060>: {
        s: "service verification failed for: 10.99.255.228\nexpected [service1-2sbwv service1-30pw3 service1-5rvb8]\nreceived [service1-2sbwv service1-30pw3 wget: download timed out]",
    }
    service verification failed for: 10.99.255.228
    expected [service1-2sbwv service1-30pw3 service1-5rvb8]
    received [service1-2sbwv service1-30pw3 wget: download timed out]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298
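
The "service verification failed" messages come from the test repeatedly fetching the service's cluster IP from inside the cluster and collecting the hostname each backend returns; "wget: download timed out" entries mean some endpoints never answered. A rough Go sketch of that bookkeeping (the URL/port are assumptions; the real test shells out to wget from a pod):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// verifyService fetches the service's cluster IP until every expected backend
// hostname has been seen or the deadline passes. Sketch only.
func verifyService(url string, expected []string, timeout time.Duration) error {
	seen := map[string]bool{}
	client := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			seen[strings.TrimSpace(string(body))] = true // serve_hostname replies with the pod name
		}
		missing := 0
		for _, name := range expected {
			if !seen[name] {
				missing++
			}
		}
		if missing == 0 {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("service verification failed for: %s\nexpected %v\nreceived %v", url, expected, seen)
}

func main() {
	// Cluster IP taken from the failure above; the port and names are illustrative.
	err := verifyService("http://10.99.255.228:80/",
		[]string{"service1-2sbwv", "service1-30pw3", "service1-5rvb8"}, 30*time.Second)
	fmt.Println(err)
}
```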

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc82146b640>: {
        s: "failed to wait for pods responding: pod with UID af959f3f-e2c9-11e6-9bdd-42010af0002a is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-r9ls8/pods 21954} [{{ } {my-hostname-delete-node-1989c my-hostname-delete-node- e2e-tests-resize-nodes-r9ls8 /api/v1/namespaces/e2e-tests-resize-nodes-r9ls8/pods/my-hostname-delete-node-1989c 82c7ba42-e2ca-11e6-9bdd-42010af0002a 21814 0 {2017-01-24 22:50:08 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-r9ls8\",\"name\":\"my-hostname-delete-node\",\"uid\":\"af93b154-e2c9-11e6-9bdd-42010af0002a\",\"apiVersion\":\"v1\",\"resourceVersion\":\"21746\"}}\n] [{v1 ReplicationController my-hostname-delete-node af93b154-e2c9-11e6-9bdd-42010af0002a 0xc8222dcfc7}] []} {[{default-token-j22f6 {<nil> <nil> <nil> <nil> <nil> 0xc8220d9230 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-j22f6 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8222dd0c0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9b8d32c7-k7sm 0xc8226af040 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:50:08 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:50:09 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:50:08 -0800 PST}  }]   10.240.0.4 10.96.0.7 2017-01-24T22:50:08-08:00 [] [{my-hostname-delete-node {<nil> 0xc821d68340 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ed9953ff4ca9d65d201ca168eac0ab054da83207bd149c42bf388ea81ff21648}]}} {{ } {my-hostname-delete-node-8zppz my-hostname-delete-node- e2e-tests-resize-nodes-r9ls8 /api/v1/namespaces/e2e-tests-resize-nodes-r9ls8/pods/my-hostname-delete-node-8zppz af95d31f-e2c9-11e6-9bdd-42010af0002a 21339 0 {2017-01-24 22:44:14 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-r9ls8\",\"name\":\"my-hostname-delete-node\",\"uid\":\"af93b154-e2c9-11e6-9bdd-42010af0002a\",\"apiVersion\":\"v1\",\"resourceVersion\":\"21322\"}}\n] [{v1 ReplicationController my-hostname-delete-node af93b154-e2c9-11e6-9bdd-42010af0002a 0xc8222dd357}] []} {[{default-token-j22f6 {<nil> <nil> <nil> <nil> <nil> 0xc8220d9290 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-j22f6 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8222dd450 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9b8d32c7-7103 0xc8226af140 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:44:14 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2017-01-24 22:44:15 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:44:14 -0800 PST}  }]   10.240.0.2 10.96.2.3 2017-01-24T22:44:14-08:00 [] [{my-hostname-delete-node {<nil> 0xc821d68360 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ebe8b25b64bfce5b120794542fb75b4a5855614460f1486f93b808ff257d5a3e}]}} {{ } {my-hostname-delete-node-v86db my-hostname-delete-node- e2e-tests-resize-nodes-r9ls8 /api/v1/namespaces/e2e-tests-resize-nodes-r9ls8/pods/my-hostname-delete-node-v86db af95b039-e2c9-11e6-9bdd-42010af0002a 21335 0 {2017-01-24 22:44:14 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-r9ls8\",\"name\":\"my-hostname-delete-node\",\"uid\":\"af93b154-e2c9-11e6-9bdd-42010af0002a\",\"apiVersion\":\"v1\",\"resourceVersion\":\"21322\"}}\n] [{v1 ReplicationController my-hostname-delete-node af93b154-e2c9-11e6-9bdd-42010af0002a 0xc8222dd6e7}] []} {[{default-token-j22f6 {<nil> <nil> <nil> <nil> <nil> 0xc8220d92f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-j22f6 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8222dd7e0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9b8d32c7-k7sm 0xc8226af240 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:44:14 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:44:14 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:44:14 -0800 PST}  }]   10.240.0.4 10.96.0.4 2017-01-24T22:44:14-08:00 [] [{my-hostname-delete-node {<nil> 0xc821d68380 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3df81e890c01091f7a213d9f0354c2452afb3f0c867c0e6bca54bb0bb416f205}]}}]}",
    }
    failed to wait for pods responding: pod with UID af959f3f-e2c9-11e6-9bdd-42010af0002a is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-r9ls8/pods 21954} [{{ } {my-hostname-delete-node-1989c my-hostname-delete-node- e2e-tests-resize-nodes-r9ls8 /api/v1/namespaces/e2e-tests-resize-nodes-r9ls8/pods/my-hostname-delete-node-1989c 82c7ba42-e2ca-11e6-9bdd-42010af0002a 21814 0 {2017-01-24 22:50:08 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-r9ls8","name":"my-hostname-delete-node","uid":"af93b154-e2c9-11e6-9bdd-42010af0002a","apiVersion":"v1","resourceVersion":"21746"}}
    ] [{v1 ReplicationController my-hostname-delete-node af93b154-e2c9-11e6-9bdd-42010af0002a 0xc8222dcfc7}] []} {[{default-token-j22f6 {<nil> <nil> <nil> <nil> <nil> 0xc8220d9230 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-j22f6 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8222dd0c0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9b8d32c7-k7sm 0xc8226af040 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:50:08 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:50:09 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:50:08 -0800 PST}  }]   10.240.0.4 10.96.0.7 2017-01-24T22:50:08-08:00 [] [{my-hostname-delete-node {<nil> 0xc821d68340 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ed9953ff4ca9d65d201ca168eac0ab054da83207bd149c42bf388ea81ff21648}]}} {{ } {my-hostname-delete-node-8zppz my-hostname-delete-node- e2e-tests-resize-nodes-r9ls8 /api/v1/namespaces/e2e-tests-resize-nodes-r9ls8/pods/my-hostname-delete-node-8zppz af95d31f-e2c9-11e6-9bdd-42010af0002a 21339 0 {2017-01-24 22:44:14 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-r9ls8","name":"my-hostname-delete-node","uid":"af93b154-e2c9-11e6-9bdd-42010af0002a","apiVersion":"v1","resourceVersion":"21322"}}
    ] [{v1 ReplicationController my-hostname-delete-node af93b154-e2c9-11e6-9bdd-42010af0002a 0xc8222dd357}] []} {[{default-token-j22f6 {<nil> <nil> <nil> <nil> <nil> 0xc8220d9290 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-j22f6 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8222dd450 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9b8d32c7-7103 0xc8226af140 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:44:14 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:44:15 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:44:14 -0800 PST}  }]   10.240.0.2 10.96.2.3 2017-01-24T22:44:14-08:00 [] [{my-hostname-delete-node {<nil> 0xc821d68360 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://ebe8b25b64bfce5b120794542fb75b4a5855614460f1486f93b808ff257d5a3e}]}} {{ } {my-hostname-delete-node-v86db my-hostname-delete-node- e2e-tests-resize-nodes-r9ls8 /api/v1/namespaces/e2e-tests-resize-nodes-r9ls8/pods/my-hostname-delete-node-v86db af95b039-e2c9-11e6-9bdd-42010af0002a 21335 0 {2017-01-24 22:44:14 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-r9ls8","name":"my-hostname-delete-node","uid":"af93b154-e2c9-11e6-9bdd-42010af0002a","apiVersion":"v1","resourceVersion":"21322"}}
    ] [{v1 ReplicationController my-hostname-delete-node af93b154-e2c9-11e6-9bdd-42010af0002a 0xc8222dd6e7}] []} {[{default-token-j22f6 {<nil> <nil> <nil> <nil> <nil> 0xc8220d92f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-j22f6 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8222dd7e0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-9b8d32c7-k7sm 0xc8226af240 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:44:14 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:44:14 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-24 22:44:14 -0800 PST}  }]   10.240.0.4 10.96.0.4 2017-01-24T22:44:14-08:00 [] [{my-hostname-delete-node {<nil> 0xc821d68380 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3df81e890c01091f7a213d9f0354c2452afb3f0c867c0e6bca54bb0bb416f205}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
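
The resize-nodes failure means that after the node was deleted and the pool resized back, one of the originally created pods (identified by UID) was missing from the re-listed replica set, i.e. it had been replaced rather than still responding. A stand-in sketch of that membership check (the UIDs below are illustrative):

```go
package main

import "fmt"

// uidsStillPresent reports which of the originally created pod UIDs are still
// present in a freshly listed set of pods. Sketch only: the real test also
// checks that each surviving pod still responds through the service proxy.
func uidsStillPresent(original []string, current map[string]bool) (missing []string) {
	for _, uid := range original {
		if !current[uid] {
			missing = append(missing, uid)
		}
	}
	return
}

func main() {
	original := []string{"uid-a", "uid-b", "uid-c"}
	current := map[string]bool{"uid-a": true, "uid-c": true, "uid-d": true}
	// "uid-b" would trigger "no longer a member of the replica set".
	fmt.Println(uidsStillPresent(original, current)) // [uid-b]
}
```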

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc820a543a0>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-24 20:03:52 -0800 PST} FinishedAt:{Time:2017-01-24 20:04:02 -0800 PST} ContainerID:docker://b8ee860ed90aea208e23a1b2263c6207690555e85f699466f324e1637b6f9732}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-24 20:03:52 -0800 PST} FinishedAt:{Time:2017-01-24 20:04:02 -0800 PST} ContainerID:docker://b8ee860ed90aea208e23a1b2263c6207690555e85f699466f324e1637b6f9732}
not to have occurred

Issues about this test specifically: #30131 #31402
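
The 'different-node-wget' pod simply fetches a URL served by a pod scheduled on another node and exits non-zero when the fetch times out; the ~10s gap between StartedAt and FinishedAt matches the wget timeout window. A stdlib sketch of the same probe (the target IP and port are placeholders, not values from this run):

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// Placeholder target: a pod IP on the other node and whatever port its
	// test server listens on (both hypothetical here).
	url := "http://10.96.1.3:8080/"
	client := &http.Client{Timeout: 10 * time.Second} // mirrors the ~10s window in the failure
	resp, err := client.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cross-node fetch failed:", err)
		os.Exit(1) // the e2e test records this as ExitCode:1
	}
	resp.Body.Close()
	fmt.Println("cross-node fetch succeeded:", resp.Status)
}
```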

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc821d38260>: {
        s: "service verification failed for: 10.99.246.253\nexpected [service1-9z6qq service1-dm8qn service1-ffjhj]\nreceived [service1-9z6qq service1-ffjhj]",
    }
    service verification failed for: 10.99.246.253
    expected [service1-9z6qq service1-dm8qn service1-ffjhj]
    received [service1-9z6qq service1-ffjhj]
not to have occurred

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected
    <int>: 0
to be ==
    <int>: 1

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jan 24 21:26:36.686: timeout waiting 15m0s for pods size to be 3

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:725
Jan 24 22:36:25.989: Could not reach HTTP service through 104.198.223.30:30618 after 5m0s: timed out waiting for the condition

Issues about this test specifically: #26134

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:280
0 (0; 2m7.368756642s): path /api/v1/namespaces/e2e-tests-proxy-2gwdz/pods/proxy-service-nkzv4-4kxts:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.1.3:1080/'" field:"" > retryAfterSeconds:0  Code:503}
0 (0; 2m7.36905389s): path /api/v1/namespaces/e2e-tests-proxy-2gwdz/pods/https:proxy-service-nkzv4-4kxts:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.1.3:462/'" field:"" > retryAfterSeconds:0  Code:503}
0 (0; 2m7.370774844s): path /api/v1/namespaces/e2e-tests-proxy-2gwdz/pods/http:proxy-service-nkzv4-4kxts:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.1.3:1080/'" field:"" > retryAfterSeconds:0  Code:503}
0 (0; 2m7.370918277s): path /api/v1/namespaces/e2e-tests-proxy-2gwdz/services/https:proxy-service-nkzv4:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.1.3:462/'" field:"" > retryAfterSeconds:0  Code:503}
0 (0; 2m7.439112406s): path /api/v1/proxy/namespaces/e2e-tests-proxy-2gwdz/pods/http:proxy-service-nkzv4-4kxts:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.1.3:1080/'" field:"" > retryAfterSeconds:0  Code:503}
0 (0; 2m7.439050298s): path /api/v1/proxy/namespaces/e2e-tests-proxy-2gwdz/services/https:proxy-service-nkzv4:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.1.3:462/'" field:"" > retryAfterSeconds:0  Code:503}
0 (0; 2m7.438970085s): path /api/v1/namespaces/e2e-tests-proxy-2gwdz/pods/https:proxy-service-nkzv4-4kxts:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.1.3:443/'" field:"" > retryAfterSeconds:0  Code:503}
0 (0; 2m7.439933945s): path /api/v1/proxy/namespaces/e2e-tests-proxy-2gwdz/pods/proxy-service-nkzv4-4kxts:1080/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.1.3:1080/'" field:"" > retryAfterSeconds:0  Code:503}
0 (0; 2m7.44040536s): path /api/v1/proxy/namespaces/e2e-tests-proxy-2gwdz/services/https:proxy-service-nkzv4:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.1.3:462/'" field:"" > retryAfterSeconds:0  Code:503}
1 (0; 2m7.292682633s): path /api/v1/namespaces/e2e-tests-proxy-2gwdz/pods/https:proxy-service-nkzv4-4kxts:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.1.3:443/'" field:"" > retryAfterSeconds:0  Code:503}
1 (0; 2m7.292923942s): path /api/v1/namespaces/e2e-tests-proxy-2gwdz/services/https:proxy-service-nkzv4:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.1.3:462/'" field:"" > retryAfterSeconds:0  Code:503}
1 (0; 2m7.363722929s): path /api/v1/proxy/namespaces/e2e-tests-proxy-2gwdz/services/https:proxy-service-nkzv4:444/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.1.3:462/'" field:"" > retryAfterSeconds:0  Code:503}
1 (0; 2m7.36274302s): path /api/v1/proxy/namespaces/e2e-tests-proxy-2gwdz/services/https:proxy-service-nkzv4:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.1.3:462/'" field:"" > retryAfterSeconds:0  Code:503}
1 (0; 2m7.419236489s): path /api/v1/namespaces/e2e-tests-proxy-2gwdz/pods/https:proxy-service-nkzv4-4kxts:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.1.3:462/'" field:"" > retryAfterSeconds:0  Code:503}
2 (0; 2m7.309855121s): path /api/v1/namespaces/e2e-tests-proxy-2gwdz/pods/https:proxy-service-nkzv4-4kxts:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.1.3:443/'" field:"" > retryAfterSeconds:0  Code:503}
3 (0; 2m7.295595702s): path /api/v1/namespaces/e2e-tests-proxy-2gwdz/pods/https:proxy-service-nkzv4-4kxts:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.1.3:443/'" field:"" > retryAfterSeconds:0  Code:503}

Issues about this test specifically: #26164 #26210 #33998 #37158
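
All of the failing requests above go through the apiserver's proxy subresource, whose path encodes the scheme, target name, and port, e.g. /api/v1/namespaces/<ns>/pods/https:<pod>:<port>/proxy/. A small sketch that reconstructs the paths seen in the log (the helper itself is just illustrative):

```go
package main

import "fmt"

// proxyPath builds an apiserver proxy path of the form used above:
//   /api/v1/namespaces/<ns>/<resource>/<scheme>:<name>:<port>/proxy/<rest>
// scheme may be empty, in which case the "scheme:" prefix is omitted.
func proxyPath(ns, resource, scheme, name, port, rest string) string {
	target := name
	if scheme != "" {
		target = scheme + ":" + name
	}
	if port != "" {
		target += ":" + port
	}
	return fmt.Sprintf("/api/v1/namespaces/%s/%s/%s/proxy/%s", ns, resource, target, rest)
}

func main() {
	// Reproduces two of the paths from the failure output.
	fmt.Println(proxyPath("e2e-tests-proxy-2gwdz", "pods", "https", "proxy-service-nkzv4-4kxts", "462", ""))
	fmt.Println(proxyPath("e2e-tests-proxy-2gwdz", "services", "https", "proxy-service-nkzv4", "tlsportname2", ""))
}
```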

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/221/
Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc82105f490>: {
        s: "failed to wait for pods responding: pod with UID 30f6da27-e2eb-11e6-b38d-42010af00018 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-pcnvk/pods 1586} [{{ } {my-hostname-delete-node-hqgcg my-hostname-delete-node- e2e-tests-resize-nodes-pcnvk /api/v1/namespaces/e2e-tests-resize-nodes-pcnvk/pods/my-hostname-delete-node-hqgcg 30f6ebc8-e2eb-11e6-b38d-42010af00018 1261 0 {2017-01-25 02:44:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-pcnvk\",\"name\":\"my-hostname-delete-node\",\"uid\":\"30f52c59-e2eb-11e6-b38d-42010af00018\",\"apiVersion\":\"v1\",\"resourceVersion\":\"1242\"}}\n] [{v1 ReplicationController my-hostname-delete-node 30f52c59-e2eb-11e6-b38d-42010af00018 0xc8212f9277}] []} {[{default-token-7xbp1 {<nil> <nil> <nil> <nil> <nil> 0xc82134aba0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-7xbp1 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212f9370 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-bb9d0353-81x9 0xc8215e5440 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:44:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:44:06 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:44:04 -0800 PST}  }]   10.240.0.3 10.96.1.3 2017-01-25T02:44:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc8215c5ec0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://60ef025a5633062a90c5a72de9fe9565ddaf892390a27a5295c999d7f7523b1d}]}} {{ } {my-hostname-delete-node-ml9f1 my-hostname-delete-node- e2e-tests-resize-nodes-pcnvk /api/v1/namespaces/e2e-tests-resize-nodes-pcnvk/pods/my-hostname-delete-node-ml9f1 30f6fd6f-e2eb-11e6-b38d-42010af00018 1258 0 {2017-01-25 02:44:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-pcnvk\",\"name\":\"my-hostname-delete-node\",\"uid\":\"30f52c59-e2eb-11e6-b38d-42010af00018\",\"apiVersion\":\"v1\",\"resourceVersion\":\"1242\"}}\n] [{v1 ReplicationController my-hostname-delete-node 30f52c59-e2eb-11e6-b38d-42010af00018 0xc8212f9607}] []} {[{default-token-7xbp1 {<nil> <nil> <nil> <nil> <nil> 0xc82134ac00 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-7xbp1 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212f9700 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-bb9d0353-f1k1 0xc8215e5600 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:44:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 
02:44:06 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:44:04 -0800 PST}  }]   10.240.0.4 10.96.0.3 2017-01-25T02:44:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc8215c5ee0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://cf51f594f105e918ea58c287f883dd0d4a6e67afd42bd743eeccdc327718e337}]}} {{ } {my-hostname-delete-node-zv8pr my-hostname-delete-node- e2e-tests-resize-nodes-pcnvk /api/v1/namespaces/e2e-tests-resize-nodes-pcnvk/pods/my-hostname-delete-node-zv8pr 73aa4073-e2eb-11e6-b38d-42010af00018 1442 0 {2017-01-25 02:45:56 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-pcnvk\",\"name\":\"my-hostname-delete-node\",\"uid\":\"30f52c59-e2eb-11e6-b38d-42010af00018\",\"apiVersion\":\"v1\",\"resourceVersion\":\"1353\"}}\n] [{v1 ReplicationController my-hostname-delete-node 30f52c59-e2eb-11e6-b38d-42010af00018 0xc8212f9997}] []} {[{default-token-7xbp1 {<nil> <nil> <nil> <nil> <nil> 0xc82134ac60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-7xbp1 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212f9a90 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-bb9d0353-f1k1 0xc8215e5780 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:45:56 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:45:57 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:45:56 -0800 PST}  }]   10.240.0.4 10.96.0.5 2017-01-25T02:45:56-08:00 [] [{my-hostname-delete-node {<nil> 0xc8215c5f00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://418627162e735d2dbdf98d178e247460013d399afe7ee5217c57a767f980b017}]}}]}",
    }
    failed to wait for pods responding: pod with UID 30f6da27-e2eb-11e6-b38d-42010af00018 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-pcnvk/pods 1586} [{{ } {my-hostname-delete-node-hqgcg my-hostname-delete-node- e2e-tests-resize-nodes-pcnvk /api/v1/namespaces/e2e-tests-resize-nodes-pcnvk/pods/my-hostname-delete-node-hqgcg 30f6ebc8-e2eb-11e6-b38d-42010af00018 1261 0 {2017-01-25 02:44:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-pcnvk","name":"my-hostname-delete-node","uid":"30f52c59-e2eb-11e6-b38d-42010af00018","apiVersion":"v1","resourceVersion":"1242"}}
    ] [{v1 ReplicationController my-hostname-delete-node 30f52c59-e2eb-11e6-b38d-42010af00018 0xc8212f9277}] []} {[{default-token-7xbp1 {<nil> <nil> <nil> <nil> <nil> 0xc82134aba0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-7xbp1 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212f9370 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-bb9d0353-81x9 0xc8215e5440 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:44:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:44:06 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:44:04 -0800 PST}  }]   10.240.0.3 10.96.1.3 2017-01-25T02:44:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc8215c5ec0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://60ef025a5633062a90c5a72de9fe9565ddaf892390a27a5295c999d7f7523b1d}]}} {{ } {my-hostname-delete-node-ml9f1 my-hostname-delete-node- e2e-tests-resize-nodes-pcnvk /api/v1/namespaces/e2e-tests-resize-nodes-pcnvk/pods/my-hostname-delete-node-ml9f1 30f6fd6f-e2eb-11e6-b38d-42010af00018 1258 0 {2017-01-25 02:44:04 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-pcnvk","name":"my-hostname-delete-node","uid":"30f52c59-e2eb-11e6-b38d-42010af00018","apiVersion":"v1","resourceVersion":"1242"}}
    ] [{v1 ReplicationController my-hostname-delete-node 30f52c59-e2eb-11e6-b38d-42010af00018 0xc8212f9607}] []} {[{default-token-7xbp1 {<nil> <nil> <nil> <nil> <nil> 0xc82134ac00 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-7xbp1 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212f9700 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-bb9d0353-f1k1 0xc8215e5600 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:44:04 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:44:06 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:44:04 -0800 PST}  }]   10.240.0.4 10.96.0.3 2017-01-25T02:44:04-08:00 [] [{my-hostname-delete-node {<nil> 0xc8215c5ee0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://cf51f594f105e918ea58c287f883dd0d4a6e67afd42bd743eeccdc327718e337}]}} {{ } {my-hostname-delete-node-zv8pr my-hostname-delete-node- e2e-tests-resize-nodes-pcnvk /api/v1/namespaces/e2e-tests-resize-nodes-pcnvk/pods/my-hostname-delete-node-zv8pr 73aa4073-e2eb-11e6-b38d-42010af00018 1442 0 {2017-01-25 02:45:56 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-pcnvk","name":"my-hostname-delete-node","uid":"30f52c59-e2eb-11e6-b38d-42010af00018","apiVersion":"v1","resourceVersion":"1353"}}
    ] [{v1 ReplicationController my-hostname-delete-node 30f52c59-e2eb-11e6-b38d-42010af00018 0xc8212f9997}] []} {[{default-token-7xbp1 {<nil> <nil> <nil> <nil> <nil> 0xc82134ac60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-7xbp1 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8212f9a90 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-bb9d0353-f1k1 0xc8215e5780 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:45:56 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:45:57 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-25 02:45:56 -0800 PST}  }]   10.240.0.4 10.96.0.5 2017-01-25T02:45:56-08:00 [] [{my-hostname-delete-node {<nil> 0xc8215c5f00 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://418627162e735d2dbdf98d178e247460013d399afe7ee5217c57a767f980b017}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc820f7c630>: {
        s: "service verification failed for: 10.99.240.23\nexpected [service1-3bqg8 service1-d8rq3 service1-nv7kn]\nreceived [service1-3bqg8 service1-d8rq3]",
    }
    service verification failed for: 10.99.240.23
    expected [service1-3bqg8 service1-d8rq3 service1-nv7kn]
    received [service1-3bqg8 service1-d8rq3]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 25 08:39:41.490: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-bb9d0353-81x9:
 container "runtime": expected RSS memory (MB) < 314572800; got 530587648
node gke-bootstrap-e2e-default-pool-bb9d0353-f1k1:
 container "runtime": expected RSS memory (MB) < 314572800; got 517615616
node gke-bootstrap-e2e-default-pool-bb9d0353-hmrc:
 container "runtime": expected RSS memory (MB) < 314572800; got 524574720

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/222/
Multiple broken tests:

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc8226d6200>: {
        s: "service verification failed for: 10.99.244.164\nexpected [service2-6xs1n service2-gmnj5 service2-m8c8g]\nreceived [service2-gmnj5 service2-m8c8g]",
    }
    service verification failed for: 10.99.244.164
    expected [service2-6xs1n service2-gmnj5 service2-m8c8g]
    received [service2-gmnj5 service2-m8c8g]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:280
0 (0; 189.938971ms): path /api/v1/namespaces/e2e-tests-proxy-stgwx/pods/proxy-service-77cq7-rv4nr/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server has prevented the request from succeeding Reason:InternalError Details:name:"" group:"" kind:"" causes:<reason:"UnexpectedServerResponse" message:"Error: 'EOF'\nTrying to reach: 'http://10.96.3.2:80/'" field:"" > retryAfterSeconds:0  Code:503}

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 25 15:04:19.809: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-320026ae-3z09:
 container "runtime": expected RSS memory (MB) < 314572800; got 515756032
node gke-bootstrap-e2e-default-pool-320026ae-gv28:
 container "runtime": expected RSS memory (MB) < 314572800; got 526565376
node gke-bootstrap-e2e-default-pool-320026ae-qmd4:
 container "runtime": expected RSS memory (MB) < 314572800; got 539078656

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/223/
Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc8220c19a0>: {
        s: "service verification failed for: 10.99.244.122\nexpected [service1-gp4v3 service1-lmk2s service1-tx0tg]\nreceived [service1-gp4v3 service1-lmk2s]",
    }
    service verification failed for: 10.99.244.122
    expected [service1-gp4v3 service1-lmk2s service1-tx0tg]
    received [service1-gp4v3 service1-lmk2s]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 25 20:36:19.910: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-77ffed2b-611m:
 container "runtime": expected RSS memory (MB) < 314572800; got 525488128
node gke-bootstrap-e2e-default-pool-77ffed2b-7n82:
 container "runtime": expected RSS memory (MB) < 314572800; got 517222400
node gke-bootstrap-e2e-default-pool-77ffed2b-mz5w:
 container "runtime": expected RSS memory (MB) < 314572800; got 529358848

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/224/
Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 26 02:57:20.037: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-96e6ac8b-hl4d:
  container "runtime": expected RSS memory (MB) < 314572800; got 518115328
 node gke-bootstrap-e2e-default-pool-96e6ac8b-jbr1:
  container "runtime": expected RSS memory (MB) < 314572800; got 539996160
 node gke-bootstrap-e2e-default-pool-96e6ac8b-thp0:
  container "runtime": expected RSS memory (MB) < 314572800; got 537657344

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc820de8030>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-26 03:48:51 -0800 PST} FinishedAt:{Time:2017-01-26 03:49:01 -0800 PST} ContainerID:docker://0fcdf37fb2468235108a9d346a0c0b380bad4c8ef019891cfb617214c9489566}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-26 03:48:51 -0800 PST} FinishedAt:{Time:2017-01-26 03:49:01 -0800 PST} ContainerID:docker://0fcdf37fb2468235108a9d346a0c0b380bad4c8ef019891cfb617214c9489566}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/225/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Jan 26 11:48:05.595: timeout waiting 15m0s for pods size to be 3

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc822d800d0>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc822a2dc20>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #35279

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 26 10:13:48.891: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-71ff1c4d-nbl9:
  container "runtime": expected RSS memory (MB) < 314572800; got 536678400
 node gke-bootstrap-e2e-default-pool-71ff1c4d-tvwh:
  container "runtime": expected RSS memory (MB) < 314572800; got 517279744
 node gke-bootstrap-e2e-default-pool-71ff1c4d-z46x:
  container "runtime": expected RSS memory (MB) < 314572800; got 518139904

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Jan 26 11:08:46.469: At least one pod wasn't running and ready or succeeded at test start.

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc822eca2d0>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:639
Jan 26 11:50:58.143: Pods on node gke-bootstrap-e2e-default-pool-71ff1c4d-nbl9 are not ready and running within 2m0s: gave up waiting for matching pods to be 'Running and Ready' after 2m0s

Issues about this test specifically: #30187 #35293 #35845

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820532f40>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #29516

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/docker_containers.go:42
Jan 26 07:49:34.175: Failed to create pod: No API token found for service account "default", retry after the token is automatically created and added to the service account

Issues about this test specifically: #34520

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Jan 26 12:06:39.248: timeout waiting 15m0s for pods size to be 3

Issues about this test specifically: #28657 #30519 #33878

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/226/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 26 16:11:10.080: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-457f83f1-rbdl:
  container "runtime": expected RSS memory (MB) < 314572800; got 530653184
 node gke-bootstrap-e2e-default-pool-457f83f1-tfpv:
  container "runtime": expected RSS memory (MB) < 314572800; got 527347712
 node gke-bootstrap-e2e-default-pool-457f83f1-js98:
  container "runtime": expected RSS memory (MB) < 314572800; got 511582208

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8243da510>: {
        s: "Namespace e2e-tests-dns-5qc26 is active",
    }
    Namespace e2e-tests-dns-5qc26 is active
not to have occurred

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:725
Jan 26 15:06:17.708: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready

Issues about this test specifically: #26134

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc820b96f80>: {
        s: "failed to wait for pods responding: pod with UID a22ed51d-e413-11e6-a3dc-42010af0001e is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-05ltx/pods 9945} [{{ } {my-hostname-delete-node-jlkwq my-hostname-delete-node- e2e-tests-resize-nodes-05ltx /api/v1/namespaces/e2e-tests-resize-nodes-05ltx/pods/my-hostname-delete-node-jlkwq a22efd7e-e413-11e6-a3dc-42010af0001e 9564 0 {2017-01-26 14:06:05 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-05ltx\",\"name\":\"my-hostname-delete-node\",\"uid\":\"a22d7b62-e413-11e6-a3dc-42010af0001e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"9551\"}}\n] [{v1 ReplicationController my-hostname-delete-node a22d7b62-e413-11e6-a3dc-42010af0001e 0xc8211a17b7}] []} {[{default-token-j9z5x {<nil> <nil> <nil> <nil> <nil> 0xc8216ccc60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-j9z5x true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8211a18b0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-457f83f1-rbdl 0xc8216bd480 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:06:05 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:06:06 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:06:05 -0800 PST}  }]   10.240.0.2 10.96.0.3 2017-01-26T14:06:05-08:00 [] [{my-hostname-delete-node {<nil> 0xc82140e4a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://fa7fed84108968330b0a936a4938625b501194492d8f7d1320c6c3cfaaa2a262}]}} {{ } {my-hostname-delete-node-pb26z my-hostname-delete-node- e2e-tests-resize-nodes-05ltx /api/v1/namespaces/e2e-tests-resize-nodes-05ltx/pods/my-hostname-delete-node-pb26z 052706aa-e414-11e6-a3dc-42010af0001e 9798 0 {2017-01-26 14:08:51 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-05ltx\",\"name\":\"my-hostname-delete-node\",\"uid\":\"a22d7b62-e413-11e6-a3dc-42010af0001e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"9746\"}}\n] [{v1 ReplicationController my-hostname-delete-node a22d7b62-e413-11e6-a3dc-42010af0001e 0xc8211a1be7}] []} {[{default-token-j9z5x {<nil> <nil> <nil> <nil> <nil> 0xc8216cccc0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-j9z5x true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8211a1ce0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-457f83f1-rbdl 0xc8216bd540 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:08:51 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 
14:08:53 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:08:51 -0800 PST}  }]   10.240.0.2 10.96.0.8 2017-01-26T14:08:51-08:00 [] [{my-hostname-delete-node {<nil> 0xc82140e4c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://1cbf866e7343ac1ecae7b9d71d4ad61fb2c6582ee28be96e112f3bd9e0b27d76}]}} {{ } {my-hostname-delete-node-ws2k0 my-hostname-delete-node- e2e-tests-resize-nodes-05ltx /api/v1/namespaces/e2e-tests-resize-nodes-05ltx/pods/my-hostname-delete-node-ws2k0 a22f2105-e413-11e6-a3dc-42010af0001e 9569 0 {2017-01-26 14:06:05 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-05ltx\",\"name\":\"my-hostname-delete-node\",\"uid\":\"a22d7b62-e413-11e6-a3dc-42010af0001e\",\"apiVersion\":\"v1\",\"resourceVersion\":\"9551\"}}\n] [{v1 ReplicationController my-hostname-delete-node a22d7b62-e413-11e6-a3dc-42010af0001e 0xc8211a1fa7}] []} {[{default-token-j9z5x {<nil> <nil> <nil> <nil> <nil> 0xc8216ccd20 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-j9z5x true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820bd0190 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-457f83f1-tfpv 0xc8216bd600 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:06:05 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:06:07 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:06:05 -0800 PST}  }]   10.240.0.5 10.96.3.3 2017-01-26T14:06:05-08:00 [] [{my-hostname-delete-node {<nil> 0xc82140e4e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://23a10dc07aa2cf0b38c1d9d85fe7376a5f9551b357dc3a3c0d8e0253e0165f54}]}}]}",
    }
    failed to wait for pods responding: pod with UID a22ed51d-e413-11e6-a3dc-42010af0001e is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-05ltx/pods 9945} [{{ } {my-hostname-delete-node-jlkwq my-hostname-delete-node- e2e-tests-resize-nodes-05ltx /api/v1/namespaces/e2e-tests-resize-nodes-05ltx/pods/my-hostname-delete-node-jlkwq a22efd7e-e413-11e6-a3dc-42010af0001e 9564 0 {2017-01-26 14:06:05 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-05ltx","name":"my-hostname-delete-node","uid":"a22d7b62-e413-11e6-a3dc-42010af0001e","apiVersion":"v1","resourceVersion":"9551"}}
    ] [{v1 ReplicationController my-hostname-delete-node a22d7b62-e413-11e6-a3dc-42010af0001e 0xc8211a17b7}] []} {[{default-token-j9z5x {<nil> <nil> <nil> <nil> <nil> 0xc8216ccc60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-j9z5x true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8211a18b0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-457f83f1-rbdl 0xc8216bd480 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:06:05 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:06:06 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:06:05 -0800 PST}  }]   10.240.0.2 10.96.0.3 2017-01-26T14:06:05-08:00 [] [{my-hostname-delete-node {<nil> 0xc82140e4a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://fa7fed84108968330b0a936a4938625b501194492d8f7d1320c6c3cfaaa2a262}]}} {{ } {my-hostname-delete-node-pb26z my-hostname-delete-node- e2e-tests-resize-nodes-05ltx /api/v1/namespaces/e2e-tests-resize-nodes-05ltx/pods/my-hostname-delete-node-pb26z 052706aa-e414-11e6-a3dc-42010af0001e 9798 0 {2017-01-26 14:08:51 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-05ltx","name":"my-hostname-delete-node","uid":"a22d7b62-e413-11e6-a3dc-42010af0001e","apiVersion":"v1","resourceVersion":"9746"}}
    ] [{v1 ReplicationController my-hostname-delete-node a22d7b62-e413-11e6-a3dc-42010af0001e 0xc8211a1be7}] []} {[{default-token-j9z5x {<nil> <nil> <nil> <nil> <nil> 0xc8216cccc0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-j9z5x true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8211a1ce0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-457f83f1-rbdl 0xc8216bd540 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:08:51 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:08:53 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:08:51 -0800 PST}  }]   10.240.0.2 10.96.0.8 2017-01-26T14:08:51-08:00 [] [{my-hostname-delete-node {<nil> 0xc82140e4c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://1cbf866e7343ac1ecae7b9d71d4ad61fb2c6582ee28be96e112f3bd9e0b27d76}]}} {{ } {my-hostname-delete-node-ws2k0 my-hostname-delete-node- e2e-tests-resize-nodes-05ltx /api/v1/namespaces/e2e-tests-resize-nodes-05ltx/pods/my-hostname-delete-node-ws2k0 a22f2105-e413-11e6-a3dc-42010af0001e 9569 0 {2017-01-26 14:06:05 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-05ltx","name":"my-hostname-delete-node","uid":"a22d7b62-e413-11e6-a3dc-42010af0001e","apiVersion":"v1","resourceVersion":"9551"}}
    ] [{v1 ReplicationController my-hostname-delete-node a22d7b62-e413-11e6-a3dc-42010af0001e 0xc8211a1fa7}] []} {[{default-token-j9z5x {<nil> <nil> <nil> <nil> <nil> 0xc8216ccd20 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-j9z5x true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820bd0190 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-457f83f1-tfpv 0xc8216bd600 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:06:05 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:06:07 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-26 14:06:05 -0800 PST}  }]   10.240.0.5 10.96.3.3 2017-01-26T14:06:05-08:00 [] [{my-hostname-delete-node {<nil> 0xc82140e4e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://23a10dc07aa2cf0b38c1d9d85fe7376a5f9551b357dc3a3c0d8e0253e0165f54}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc8218e4830>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-26 17:21:50 -0800 PST} FinishedAt:{Time:2017-01-26 17:22:00 -0800 PST} ContainerID:docker://9e7e050b85f60a4a182a8dfd0dda1512716b3935a336dc48ab03242d5f41f25e}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-26 17:21:50 -0800 PST} FinishedAt:{Time:2017-01-26 17:22:00 -0800 PST} ContainerID:docker://9e7e050b85f60a4a182a8dfd0dda1512716b3935a336dc48ab03242d5f41f25e}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:132
Expected error:
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200d7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:227
Jan 26 18:36:53.230: Failed to create pod: No API token found for service account "default", retry after the token is automatically created and added to the service account

Issues about this test specifically: #33008

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc820e8b930>: {
        s: "service verification failed for: 10.99.241.132\nexpected [service3-7q791 service3-92rj4 service3-l7hxl]\nreceived [service3-92rj4 service3-l7hxl]",
    }
    service verification failed for: 10.99.241.132
    expected [service3-7q791 service3-92rj4 service3-l7hxl]
    received [service3-92rj4 service3-l7hxl]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/227/
Multiple broken tests:

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
Expected error:
    <*errors.errorString | 0xc82080cc60>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-26 22:39:51 -0800 PST} FinishedAt:{Time:2017-01-26 22:40:21 -0800 PST} ContainerID:docker://70004caecb05ff085f2e1f0d84013a0f387d7e155b876560ac433a3b2ee99d71}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-26 22:39:51 -0800 PST} FinishedAt:{Time:2017-01-26 22:40:21 -0800 PST} ContainerID:docker://70004caecb05ff085f2e1f0d84013a0f387d7e155b876560ac433a3b2ee99d71}
not to have occurred

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:725
Jan 26 23:07:43.053: Could not reach HTTP service through 130.211.148.126:31883 after 5m0s: timed out waiting for the condition

Issues about this test specifically: #26134

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jan 27 00:06:03.590: timeout waiting 15m0s for pods size to be 2

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1168
Jan 26 22:24:21.604: pod e2e-tests-pods-dldc0/liveness-http - expected number of restarts: %!t(int=0), found restarts: %!t(int32=2)

Issues about this test specifically: #29614

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Response was:map[errors:[reading from udp connection failed. err:'read udp 10.96.1.4:54052->10.96.0.4:8081: i/o timeout' reading from udp connection failed. err:'read udp 10.96.1.4:54418->10.96.0.4:8081: i/o timeout' reading from udp connection failed. err:'read udp 10.96.1.4:55621->10.96.0.4:8081: i/o timeout' reading from udp connection failed. err:'read udp 10.96.1.4:42240->10.96.0.4:8081: i/o timeout' reading from udp connection failed. err:'read udp 10.96.1.4:37029->10.96.0.4:8081: i/o timeout']]
Expected
    <int>: 0
to be ==
    <int>: 1

Issues about this test specifically: #26490 #33669
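
Each entry in that error list is a UDP read that hit its deadline while waiting for a datagram back from the peer pod at 10.96.0.4:8081. In Go, reading from a UDP socket after its read deadline has passed yields a net error whose text ends in "i/o timeout", which is exactly the form reported above. A minimal standalone sketch of that behaviour (the loopback address here is a placeholder, not the test's pod IPs):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A local UDP listener that never replies stands in for the unreachable peer.
        server, err := net.ListenUDP("udp", &net.UDPAddr{IP: net.IPv4(127, 0, 0, 1)})
        if err != nil {
            panic(err)
        }
        defer server.Close()

        conn, err := net.DialUDP("udp", nil, server.LocalAddr().(*net.UDPAddr))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Send a request, then wait only briefly for an answer that never comes.
        if _, err := conn.Write([]byte("hostName")); err != nil {
            panic(err)
        }
        conn.SetReadDeadline(time.Now().Add(500 * time.Millisecond))

        buf := make([]byte, 1024)
        if _, err := conn.Read(buf); err != nil {
            // err reads like: read udp 127.0.0.1:xxxxx->127.0.0.1:yyyyy: i/o timeout
            fmt.Println("reading from udp connection failed. err:", err)
        }
    }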

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:58
Expected error:
    <*errors.errorString | 0xc821fec300>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] Networking should function for intra-pod communication [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:225
Jan 26 23:15:18.436: Failed on attempt 16. Cleaning up. Details:
{
	"Hostname": "nettest-j87pv",
	"Sent": {
		"nettest-3cj38": 15,
		"nettest-j87pv": 15
	},
	"Received": {
		"nettest-3cj38": 15,
		"nettest-j87pv": 15
	},
	"Errors": null,
	"Log": [
		"e2e-tests-nettest-swz1m/nettest has 0 endpoints ([]), which is less than 3 as expected. Waiting for all endpoints to come up.",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.0.4:8080",
		"Warning: unable to contact the endpoint \"http://10.96.0.4:8080\": Post http://10.96.0.4:8080/write: dial tcp 10.96.0.4:8080: i/o timeout",
		"Attempting to contact http://10.96.0.4:8080",
		"Warning: unable to contact the endpoint \"http://10.96.0.4:8080\": Post http://10.96.0.4:8080/write: dial tcp 10.96.0.4:8080: i/o timeout",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.1.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Declaring failure for e2e-tests-nettest-swz1m/nettest with 2 sent and 2 received and 3 peers"
	],
	"StillContactingPeers": false
}

Issues about this test specifically: #26960 #27235

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:40
Jan 26 23:50:03.479: Did not get expected responses within the timeout period of 120.00 seconds.

Issues about this test specifically: #30981

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc821c3e040>: {
        s: "service verification failed for: 10.99.253.129\nexpected [service1-935m5 service1-b9zf6 service1-pklm1]\nreceived [service1-b9zf6 service1-pklm1 wget: download timed out]",
    }
    service verification failed for: 10.99.253.129
    expected [service1-935m5 service1-b9zf6 service1-pklm1]
    received [service1-b9zf6 service1-pklm1 wget: download timed out]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc820e20a30>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-26 22:28:44 -0800 PST} FinishedAt:{Time:2017-01-26 22:28:54 -0800 PST} ContainerID:docker://dcb337b5286296258932ed8040782314c88fd60e24a7ce3650ceaf7715e4b572}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-26 22:28:44 -0800 PST} FinishedAt:{Time:2017-01-26 22:28:54 -0800 PST} ContainerID:docker://dcb337b5286296258932ed8040782314c88fd60e24a7ce3650ceaf7715e4b572}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jan 27 01:08:36.116: timeout waiting 15m0s for pods size to be 3

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 26 22:15:24.987: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-88698579-41ql:
  container "runtime": expected RSS memory (MB) < 314572800; got 516284416
 node gke-bootstrap-e2e-default-pool-88698579-fwz6:
  container "runtime": expected RSS memory (MB) < 314572800; got 525025280
 node gke-bootstrap-e2e-default-pool-88698579-v03r:
  container "runtime": expected RSS memory (MB) < 314572800; got 530575360

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:406
Jan 27 00:15:35.799: Could not reach HTTP service through 130.211.148.126:30443 after 5m0s: timed out waiting for the condition

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc8221400b0>: {
        s: "service verification failed for: 10.99.245.222\nexpected [service1-6mndx service1-m7mj8 service1-wj1pk]\nreceived [service1-6mndx service1-m7mj8 wget: download timed out]",
    }
    service verification failed for: 10.99.245.222
    expected [service1-6mndx service1-m7mj8 service1-wj1pk]
    received [service1-6mndx service1-m7mj8 wget: download timed out]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Jan 26 23:44:44.462: timeout waiting 15m0s for pods size to be 1

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc82007ff80>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/228/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Jan 27 06:31:24.114: timeout waiting 15m0s for pods size to be 3

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:406
Jan 27 07:51:19.711: Could not reach HTTP service through 35.184.24.224:30259 after 5m0s: timed out waiting for the condition

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:82
Expected error:
    <*errors.errorString | 0xc820170930>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #29629 #36270 #37462

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc8211680d0>: {
        s: "service verification failed for: 10.99.241.182\nexpected [service1-dlz69 service1-q2m1n service1-rw68v]\nreceived [service1-q2m1n service1-rw68v wget: download timed out]",
    }
    service verification failed for: 10.99.241.182
    expected [service1-dlz69 service1-q2m1n service1-rw68v]
    received [service1-q2m1n service1-rw68v wget: download timed out]
not to have occurred

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:304
Expected error:
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Jan 27 05:33:48.832: timeout waiting 15m0s for pods size to be 3

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:389
Expected error:
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28337

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jan 27 07:40:39.189: timeout waiting 15m0s for pods size to be 2

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc82143a0e0>: {
        s: "service verification failed for: 10.99.247.19\nexpected [service2-98pcl service2-gdzgz service2-wj966]\nreceived [service2-98pcl service2-gdzgz wget: download timed out]",
    }
    service verification failed for: 10.99.247.19
    expected [service2-98pcl service2-gdzgz service2-wj966]
    received [service2-98pcl service2-gdzgz wget: download timed out]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Jan 27 03:40:09.913: Node gke-bootstrap-e2e-default-pool-1c0f2699-p1h8 did not become ready within 2m0s

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:725
Jan 27 06:10:11.749: Could not reach HTTP service through 35.184.24.224:31195 after 5m0s: timed out waiting for the condition

Issues about this test specifically: #26134

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc82081be70>: {
        s: "failed to wait for pods responding: pod with UID e474819e-e47f-11e6-84c2-42010af00023 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-2511g/pods 3559} [{{ } {my-hostname-delete-node-48356 my-hostname-delete-node- e2e-tests-resize-nodes-2511g /api/v1/namespaces/e2e-tests-resize-nodes-2511g/pods/my-hostname-delete-node-48356 e474eafc-e47f-11e6-84c2-42010af00023 3227 0 {2017-01-27 03:01:02 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-2511g\",\"name\":\"my-hostname-delete-node\",\"uid\":\"e4727ca7-e47f-11e6-84c2-42010af00023\",\"apiVersion\":\"v1\",\"resourceVersion\":\"3214\"}}\n] [{v1 ReplicationController my-hostname-delete-node e4727ca7-e47f-11e6-84c2-42010af00023 0xc82057fb67}] []} {[{default-token-f9rkk {<nil> <nil> <nil> <nil> <nil> 0xc820df08a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f9rkk true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82057fd20 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-1c0f2699-ks6t 0xc821067600 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:01:02 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:01:03 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:01:02 -0800 PST}  }]   10.240.0.4 10.96.2.3 2017-01-27T03:01:02-08:00 [] [{my-hostname-delete-node {<nil> 0xc820dfe100 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://bc3b205836e9dd7c7928b00e34e17d584f16924c94c120797a3755742f1a4a7a}]}} {{ } {my-hostname-delete-node-h1689 my-hostname-delete-node- e2e-tests-resize-nodes-2511g /api/v1/namespaces/e2e-tests-resize-nodes-2511g/pods/my-hostname-delete-node-h1689 198478dc-e480-11e6-84c2-42010af00023 3403 0 {2017-01-27 03:02:31 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-2511g\",\"name\":\"my-hostname-delete-node\",\"uid\":\"e4727ca7-e47f-11e6-84c2-42010af00023\",\"apiVersion\":\"v1\",\"resourceVersion\":\"3306\"}}\n] [{v1 ReplicationController my-hostname-delete-node e4727ca7-e47f-11e6-84c2-42010af00023 0xc82057ffe7}] []} {[{default-token-f9rkk {<nil> <nil> <nil> <nil> <nil> 0xc820df0c30 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f9rkk true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82081a2b0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-1c0f2699-p1h8 0xc821067740 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:02:31 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 
03:02:33 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:02:31 -0800 PST}  }]   10.240.0.3 10.96.0.8 2017-01-27T03:02:31-08:00 [] [{my-hostname-delete-node {<nil> 0xc820dfe120 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b1e07fccaa1b5a30201ce00b593fba1dae552d0264ecd1545c0b8f3484e41972}]}} {{ } {my-hostname-delete-node-nr06r my-hostname-delete-node- e2e-tests-resize-nodes-2511g /api/v1/namespaces/e2e-tests-resize-nodes-2511g/pods/my-hostname-delete-node-nr06r e4744961-e47f-11e6-84c2-42010af00023 3232 0 {2017-01-27 03:01:02 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-2511g\",\"name\":\"my-hostname-delete-node\",\"uid\":\"e4727ca7-e47f-11e6-84c2-42010af00023\",\"apiVersion\":\"v1\",\"resourceVersion\":\"3214\"}}\n] [{v1 ReplicationController my-hostname-delete-node e4727ca7-e47f-11e6-84c2-42010af00023 0xc82081a567}] []} {[{default-token-f9rkk {<nil> <nil> <nil> <nil> <nil> 0xc820df0c90 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f9rkk true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82081a670 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-1c0f2699-p1h8 0xc821067840 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:01:02 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:01:03 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:01:02 -0800 PST}  }]   10.240.0.3 10.96.0.3 2017-01-27T03:01:02-08:00 [] [{my-hostname-delete-node {<nil> 0xc820dfe140 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://127a96f3c5c3ec3d71a14e8dee4daa6ed4dd7d47331965502667ab00d1789716}]}}]}",
    }
    failed to wait for pods responding: pod with UID e474819e-e47f-11e6-84c2-42010af00023 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-2511g/pods 3559} [{{ } {my-hostname-delete-node-48356 my-hostname-delete-node- e2e-tests-resize-nodes-2511g /api/v1/namespaces/e2e-tests-resize-nodes-2511g/pods/my-hostname-delete-node-48356 e474eafc-e47f-11e6-84c2-42010af00023 3227 0 {2017-01-27 03:01:02 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-2511g","name":"my-hostname-delete-node","uid":"e4727ca7-e47f-11e6-84c2-42010af00023","apiVersion":"v1","resourceVersion":"3214"}}
    ] [{v1 ReplicationController my-hostname-delete-node e4727ca7-e47f-11e6-84c2-42010af00023 0xc82057fb67}] []} {[{default-token-f9rkk {<nil> <nil> <nil> <nil> <nil> 0xc820df08a0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f9rkk true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82057fd20 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-1c0f2699-ks6t 0xc821067600 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:01:02 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:01:03 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:01:02 -0800 PST}  }]   10.240.0.4 10.96.2.3 2017-01-27T03:01:02-08:00 [] [{my-hostname-delete-node {<nil> 0xc820dfe100 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://bc3b205836e9dd7c7928b00e34e17d584f16924c94c120797a3755742f1a4a7a}]}} {{ } {my-hostname-delete-node-h1689 my-hostname-delete-node- e2e-tests-resize-nodes-2511g /api/v1/namespaces/e2e-tests-resize-nodes-2511g/pods/my-hostname-delete-node-h1689 198478dc-e480-11e6-84c2-42010af00023 3403 0 {2017-01-27 03:02:31 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-2511g","name":"my-hostname-delete-node","uid":"e4727ca7-e47f-11e6-84c2-42010af00023","apiVersion":"v1","resourceVersion":"3306"}}
    ] [{v1 ReplicationController my-hostname-delete-node e4727ca7-e47f-11e6-84c2-42010af00023 0xc82057ffe7}] []} {[{default-token-f9rkk {<nil> <nil> <nil> <nil> <nil> 0xc820df0c30 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f9rkk true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82081a2b0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-1c0f2699-p1h8 0xc821067740 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:02:31 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:02:33 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:02:31 -0800 PST}  }]   10.240.0.3 10.96.0.8 2017-01-27T03:02:31-08:00 [] [{my-hostname-delete-node {<nil> 0xc820dfe120 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b1e07fccaa1b5a30201ce00b593fba1dae552d0264ecd1545c0b8f3484e41972}]}} {{ } {my-hostname-delete-node-nr06r my-hostname-delete-node- e2e-tests-resize-nodes-2511g /api/v1/namespaces/e2e-tests-resize-nodes-2511g/pods/my-hostname-delete-node-nr06r e4744961-e47f-11e6-84c2-42010af00023 3232 0 {2017-01-27 03:01:02 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-2511g","name":"my-hostname-delete-node","uid":"e4727ca7-e47f-11e6-84c2-42010af00023","apiVersion":"v1","resourceVersion":"3214"}}
    ] [{v1 ReplicationController my-hostname-delete-node e4727ca7-e47f-11e6-84c2-42010af00023 0xc82081a567}] []} {[{default-token-f9rkk {<nil> <nil> <nil> <nil> <nil> 0xc820df0c90 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-f9rkk true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc82081a670 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-1c0f2699-p1h8 0xc821067840 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:01:02 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:01:03 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 03:01:02 -0800 PST}  }]   10.240.0.3 10.96.0.3 2017-01-27T03:01:02-08:00 [] [{my-hostname-delete-node {<nil> 0xc820dfe140 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://127a96f3c5c3ec3d71a14e8dee4daa6ed4dd7d47331965502667ab00d1789716}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
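
For reference, the "no longer a member of the replica set" message above comes from a wait that records the UIDs of the controller's original pods before the node is deleted and only succeeds once every one of those exact UIDs is responding again; a pod recreated with a new UID makes the wait give up. A minimal, stdlib-only sketch of that membership check (not the suite's actual helper), using the UIDs visible in the dump above:

package main

import "fmt"

// stillMember reports whether originalUID is still among the UIDs currently
// owned by the replication controller.
func stillMember(originalUID string, currentUIDs []string) bool {
	for _, uid := range currentUIDs {
		if uid == originalUID {
			return true
		}
	}
	return false
}

func main() {
	original := "e474819e-e47f-11e6-84c2-42010af00023" // a UID recorded before the node was deleted
	current := []string{ // UIDs of the pods in the dump above
		"e474eafc-e47f-11e6-84c2-42010af00023",
		"198478dc-e480-11e6-84c2-42010af00023", // replacement pod with a brand-new UID
		"e4744961-e47f-11e6-84c2-42010af00023",
	}
	if !stillMember(original, current) {
		fmt.Println("pod with UID", original, "is no longer a member of the replica set")
	}
}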

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070
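
Both Job failures above end in the generic "timed out waiting for the condition" error, which is what the Kubernetes wait utilities return when a polled condition (here, the Job reaching its failed state) never becomes true within the budget. A stdlib-only sketch of that polling shape, with illustrative names and much shorter durations than the real tests use:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errWaitTimeout = errors.New("timed out waiting for the condition")

// pollUntil calls cond every interval until it returns true, returns an error,
// or the timeout elapses.
func pollUntil(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		done, err := cond()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		time.Sleep(interval)
	}
	return errWaitTimeout
}

func main() {
	// A condition that is never met -- e.g. a Job whose failed state is never
	// observed -- produces the same message seen in the failures above.
	err := pollUntil(100*time.Millisecond, 500*time.Millisecond, func() (bool, error) {
		return false, nil
	})
	fmt.Println(err) // timed out waiting for the condition
}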

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc82291e080>: {
        s: "service verification failed for: 10.99.255.248\nexpected [service1-30s6l service1-k6mq6 service1-n46qk]\nreceived [service1-30s6l service1-n46qk wget: download timed out]",
    }
    service verification failed for: 10.99.255.248
    expected [service1-30s6l service1-k6mq6 service1-n46qk]
    received [service1-30s6l service1-n46qk wget: download timed out]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298
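
"service verification failed" is reported after the test repeatedly fetches the service's ClusterIP and compares the set of serve_hostname replies against the expected pod names; a reply set missing a pod, or containing a wget timeout as above, fails the check. A rough stdlib sketch of that collect-and-compare step, with the VIP and pod names taken from the log and the HTTP details simplified:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"sort"
	"strings"
	"time"
)

// collectHostnames hits the service VIP several times and records each distinct
// hostname returned by the serve_hostname backends (or an error marker).
func collectHostnames(url string, attempts int) []string {
	seen := map[string]bool{}
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err != nil {
			seen["wget: download timed out"] = true // stand-in for the timeout entry seen above
			continue
		}
		body, _ := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		seen[strings.TrimSpace(string(body))] = true
	}
	var out []string
	for h := range seen {
		out = append(out, h)
	}
	sort.Strings(out)
	return out
}

func main() {
	expected := []string{"service1-30s6l", "service1-k6mq6", "service1-n46qk"} // pod names from the log
	received := collectHostnames("http://10.99.255.248/", 3*len(expected))
	fmt.Printf("expected %v\nreceived %v\n", expected, received)
}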

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:968
Jan 27 05:42:11.302: expected un-ready endpoint for Service webserver within 5m0s, stdout: 

Issues about this test specifically: #26172

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected
    <int>: 0
to be ==
    <int>: 1

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 27 09:12:36.629: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1c0f2699-p1h8:
 container "runtime": expected RSS memory (MB) < 314572800; got 533413888
node gke-bootstrap-e2e-default-pool-1c0f2699-ks6t:
 container "runtime": expected RSS memory (MB) < 314572800; got 532168704
node gke-bootstrap-e2e-default-pool-1c0f2699-l03h:
 container "runtime": expected RSS memory (MB) < 314572800; got 539750400

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
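
Despite the "(MB)" label in the message, the figures here are bytes: the limit 314572800 is exactly 300 MiB, and the observed 532-540 MB values work out to roughly 505-515 MiB, i.e. about 70% over the cap for the "runtime" container. A small sketch of the comparison, using the numbers from this run:

package main

import "fmt"

func main() {
	const limit = 300 * 1024 * 1024 // 314572800 bytes, i.e. 300 MiB
	observed := map[string]uint64{ // RSS of the "runtime" container, from the run above
		"gke-bootstrap-e2e-default-pool-1c0f2699-p1h8": 533413888,
		"gke-bootstrap-e2e-default-pool-1c0f2699-ks6t": 532168704,
		"gke-bootstrap-e2e-default-pool-1c0f2699-l03h": 539750400,
	}
	for node, rss := range observed {
		if rss >= limit {
			fmt.Printf("%s: %d bytes (%.1f MiB) exceeds the %d-byte (%.0f MiB) limit\n",
				node, rss, float64(rss)/(1024*1024), limit, float64(limit)/(1024*1024))
		}
	}
}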

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc8219d8630>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-27 05:46:16 -0800 PST} FinishedAt:{Time:2017-01-27 05:46:26 -0800 PST} ContainerID:docker://754d620ff335d313e5b340902f835b7bb2ab1aa2b4137b5ddd264e173a3b3413}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-27 05:46:16 -0800 PST} FinishedAt:{Time:2017-01-27 05:46:26 -0800 PST} ContainerID:docker://754d620ff335d313e5b340902f835b7bb2ab1aa2b4137b5ddd264e173a3b3413}
not to have occurred

Issues about this test specifically: #30131 #31402
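
The ExitCode:1 in the dump above is simply the wget-style probe pod failing to fetch an HTTP endpoint served by a pod on the other node within its roughly 10-second budget (note StartedAt and FinishedAt are 10 seconds apart). A stdlib stand-in for that probe, with a hypothetical peer pod address:

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	target := "http://10.96.2.3:8080/" // hypothetical pod IP on the other node
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(target)
	if err != nil {
		fmt.Fprintln(os.Stderr, "probe failed:", err)
		os.Exit(1) // surfaces as ExitCode:1 in the container termination state above
	}
	resp.Body.Close()
	fmt.Println("reached", target, "status", resp.StatusCode)
}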

Failed: [k8s.io] Networking should function for intra-pod communication [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:225
Jan 27 07:13:15.635: Failed on attempt 2. Cleaning up. Details:
{
	"Hostname": "nettest-8p4l3",
	"Sent": {
		"nettest-5tt4n": 15,
		"nettest-8p4l3": 15
	},
	"Received": {
		"nettest-5tt4n": 15,
		"nettest-8p4l3": 15
	},
	"Errors": null,
	"Log": [
		"e2e-tests-nettest-xtfvl/nettest has 0 endpoints ([]), which is less than 3 as expected. Waiting for all endpoints to come up.",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Warning: unable to contact the endpoint \"http://10.96.2.3:8080\": Post http://10.96.2.3:8080/write: dial tcp 10.96.2.3:8080: i/o timeout",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.0.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.4.3:8080",
		"Attempting to contact http://10.96.0.3:8080"
	],
	"StillContactingPeers": false
}

Issues about this test specifically: #26960 #27235
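
In the status JSON above, the test expects every one of the 3 nettest peers to appear in both the Sent and Received maps; only 2 ever do (one endpoint timed out early on), so attempt 2 is declared a failure. The decision reduces to a tally like this sketch:

package main

import "fmt"

func main() {
	const expectedPeers = 3
	// Values copied from the status JSON above: only two peers ever reported.
	sent := map[string]int{"nettest-5tt4n": 15, "nettest-8p4l3": 15}
	received := map[string]int{"nettest-5tt4n": 15, "nettest-8p4l3": 15}
	if len(sent) < expectedPeers || len(received) < expectedPeers {
		fmt.Printf("only %d of %d peers reporting; declaring failure\n", len(sent), expectedPeers)
	}
}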

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Jan 27 06:51:29.465: timeout waiting 15m0s for pods size to be 3

Issues about this test specifically: #27406 #27669 #29770 #32642
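
"timeout waiting 15m0s for pods size to be 3" is the HPA test's replica-count wait giving up: it keeps reading the workload's current replica count and only passes once it equals the target. The shape of that wait, sketched with a hypothetical countReplicas callback and a much shorter budget:

package main

import (
	"fmt"
	"time"
)

// waitForReplicas polls countReplicas until it equals target or the budget runs out.
func waitForReplicas(target int, timeout time.Duration, countReplicas func() int) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if countReplicas() == target {
			return nil
		}
		time.Sleep(1 * time.Second) // the real test polls on a much coarser interval
	}
	return fmt.Errorf("timeout waiting %v for pods size to be %d", timeout, target)
}

func main() {
	// The scale-down to 3 replicas is never observed, so the wait gives up.
	err := waitForReplicas(3, 3*time.Second, func() int { return 5 })
	fmt.Println(err)
}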

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200b1060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:270
Jan 27 08:49:20.167: Frontend service did not start serving content in 600 seconds.

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jan 27 07:08:02.467: timeout waiting 15m0s for pods size to be 3

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

@k8s-github-robot

Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/229/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Jan 27 12:42:27.178: Number of replicas has changed: expected 3, got 1

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc822260bc0>: {
        s: "service verification failed for: 10.99.241.1\nexpected [service1-kgh9f service1-qf1wc service1-qh93d]\nreceived [service1-kgh9f wget: download timed out]",
    }
    service verification failed for: 10.99.241.1
    expected [service1-kgh9f service1-qf1wc service1-qh93d]
    received [service1-kgh9f wget: download timed out]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 27 15:46:11.998: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-8ede1a92-17vg:
 container "runtime": expected RSS memory (MB) < 314572800; got 536543232
node gke-bootstrap-e2e-default-pool-8ede1a92-p2c7:
 container "runtime": expected RSS memory (MB) < 314572800; got 540442624
node gke-bootstrap-e2e-default-pool-8ede1a92-p9kq:
 container "runtime": expected RSS memory (MB) < 314572800; got 521895936

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:406
Jan 27 12:48:49.016: Could not reach HTTP service through 104.154.142.200:30810 after 5m0s: timed out waiting for the condition

Issues about this test specifically: #28064 #28569 #34036
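
The unreachable URL above is nodeIP:nodePort, and 30810 sits inside Kubernetes' default NodePort allocation range (30000-32767), so the port itself is valid; the 5-minute timeout points at a reachability problem (firewall rules, kube-proxy, or the backing pods) rather than an out-of-range port. A trivial range check for reference:

package main

import "fmt"

func main() {
	const nodePort = 30810      // from the failing URL 104.154.142.200:30810
	const lo, hi = 30000, 32767 // Kubernetes' default NodePort allocation range
	fmt.Printf("NodePort %d within default range %d-%d: %v\n", nodePort, lo, hi, nodePort >= lo && nodePort <= hi)
}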

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
Expected error:
    <*errors.errorString | 0xc822178a20>: {
        s: "pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-27 13:15:26 -0800 PST} FinishedAt:{Time:2017-01-27 13:15:56 -0800 PST} ContainerID:docker://89540629a8069c38e578122f26d3905896b06238b97c5727217beda57c113f4a}",
    }
    pod 'wget-test' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-27 13:15:26 -0800 PST} FinishedAt:{Time:2017-01-27 13:15:56 -0800 PST} ContainerID:docker://89540629a8069c38e578122f26d3905896b06238b97c5727217beda57c113f4a}
not to have occurred

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc822261e60>: {
        s: "failed to wait for pods responding: pod with UID 17f1a375-e4d8-11e6-a553-42010af00030 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-3rtv2/pods 25415} [{{ } {my-hostname-delete-node-b16q2 my-hostname-delete-node- e2e-tests-resize-nodes-3rtv2 /api/v1/namespaces/e2e-tests-resize-nodes-3rtv2/pods/my-hostname-delete-node-b16q2 9e353f12-e4d8-11e6-a553-42010af00030 25276 0 {2017-01-27 13:36:09 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-3rtv2\",\"name\":\"my-hostname-delete-node\",\"uid\":\"17efa04e-e4d8-11e6-a553-42010af00030\",\"apiVersion\":\"v1\",\"resourceVersion\":\"25206\"}}\n] [{v1 ReplicationController my-hostname-delete-node 17efa04e-e4d8-11e6-a553-42010af00030 0xc8225218d7}] []} {[{default-token-n2d7s {<nil> <nil> <nil> <nil> <nil> 0xc822092120 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-n2d7s true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8225219f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8ede1a92-p2c7 0xc820ee3480 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:36:10 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:36:10 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:36:09 -0800 PST}  }]   10.240.0.4 10.96.0.4 2017-01-27T13:36:10-08:00 [] [{my-hostname-delete-node {<nil> 0xc8213b2220 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://d1c8d9f3cb7c0f7853fb123fea8e6313e147452a807ea43f5b9a6105cab8c66b}]}} {{ } {my-hostname-delete-node-cw211 my-hostname-delete-node- e2e-tests-resize-nodes-3rtv2 /api/v1/namespaces/e2e-tests-resize-nodes-3rtv2/pods/my-hostname-delete-node-cw211 9e3cb772-e4d8-11e6-a553-42010af00030 25278 0 {2017-01-27 13:36:10 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-3rtv2\",\"name\":\"my-hostname-delete-node\",\"uid\":\"17efa04e-e4d8-11e6-a553-42010af00030\",\"apiVersion\":\"v1\",\"resourceVersion\":\"25265\"}}\n] [{v1 ReplicationController my-hostname-delete-node 17efa04e-e4d8-11e6-a553-42010af00030 0xc822521e27}] []} {[{default-token-n2d7s {<nil> <nil> <nil> <nil> <nil> 0xc822092180 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-n2d7s true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822521f60 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8ede1a92-p9kq 0xc820ee3540 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:36:10 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2017-01-27 13:36:11 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:36:10 -0800 PST}  }]   10.240.0.3 10.96.1.4 2017-01-27T13:36:10-08:00 [] [{my-hostname-delete-node {<nil> 0xc8213b2240 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://0dba8c37442552037a2d8dd8c868fb72203b20e8b5f1326c623aeee2e14cfc2d}]}} {{ } {my-hostname-delete-node-ddcm5 my-hostname-delete-node- e2e-tests-resize-nodes-3rtv2 /api/v1/namespaces/e2e-tests-resize-nodes-3rtv2/pods/my-hostname-delete-node-ddcm5 17f1916e-e4d8-11e6-a553-42010af00030 24962 0 {2017-01-27 13:32:24 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-3rtv2\",\"name\":\"my-hostname-delete-node\",\"uid\":\"17efa04e-e4d8-11e6-a553-42010af00030\",\"apiVersion\":\"v1\",\"resourceVersion\":\"24944\"}}\n] [{v1 ReplicationController my-hostname-delete-node 17efa04e-e4d8-11e6-a553-42010af00030 0xc8222602c7}] []} {[{default-token-n2d7s {<nil> <nil> <nil> <nil> <nil> 0xc8220921e0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-n2d7s true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8222603c0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8ede1a92-p9kq 0xc820ee3740 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:32:24 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:32:25 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:32:24 -0800 PST}  }]   10.240.0.3 10.96.1.3 2017-01-27T13:32:24-08:00 [] [{my-hostname-delete-node {<nil> 0xc8213b2260 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://351fa0eb09388c5ccdc48913c6869639a1e409b870f9493a8fa0419e0f1a2d5d}]}}]}",
    }
    failed to wait for pods responding: pod with UID 17f1a375-e4d8-11e6-a553-42010af00030 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-3rtv2/pods 25415} [{{ } {my-hostname-delete-node-b16q2 my-hostname-delete-node- e2e-tests-resize-nodes-3rtv2 /api/v1/namespaces/e2e-tests-resize-nodes-3rtv2/pods/my-hostname-delete-node-b16q2 9e353f12-e4d8-11e6-a553-42010af00030 25276 0 {2017-01-27 13:36:09 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-3rtv2","name":"my-hostname-delete-node","uid":"17efa04e-e4d8-11e6-a553-42010af00030","apiVersion":"v1","resourceVersion":"25206"}}
    ] [{v1 ReplicationController my-hostname-delete-node 17efa04e-e4d8-11e6-a553-42010af00030 0xc8225218d7}] []} {[{default-token-n2d7s {<nil> <nil> <nil> <nil> <nil> 0xc822092120 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-n2d7s true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8225219f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8ede1a92-p2c7 0xc820ee3480 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:36:10 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:36:10 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:36:09 -0800 PST}  }]   10.240.0.4 10.96.0.4 2017-01-27T13:36:10-08:00 [] [{my-hostname-delete-node {<nil> 0xc8213b2220 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://d1c8d9f3cb7c0f7853fb123fea8e6313e147452a807ea43f5b9a6105cab8c66b}]}} {{ } {my-hostname-delete-node-cw211 my-hostname-delete-node- e2e-tests-resize-nodes-3rtv2 /api/v1/namespaces/e2e-tests-resize-nodes-3rtv2/pods/my-hostname-delete-node-cw211 9e3cb772-e4d8-11e6-a553-42010af00030 25278 0 {2017-01-27 13:36:10 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-3rtv2","name":"my-hostname-delete-node","uid":"17efa04e-e4d8-11e6-a553-42010af00030","apiVersion":"v1","resourceVersion":"25265"}}
    ] [{v1 ReplicationController my-hostname-delete-node 17efa04e-e4d8-11e6-a553-42010af00030 0xc822521e27}] []} {[{default-token-n2d7s {<nil> <nil> <nil> <nil> <nil> 0xc822092180 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-n2d7s true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc822521f60 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8ede1a92-p9kq 0xc820ee3540 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:36:10 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:36:11 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:36:10 -0800 PST}  }]   10.240.0.3 10.96.1.4 2017-01-27T13:36:10-08:00 [] [{my-hostname-delete-node {<nil> 0xc8213b2240 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://0dba8c37442552037a2d8dd8c868fb72203b20e8b5f1326c623aeee2e14cfc2d}]}} {{ } {my-hostname-delete-node-ddcm5 my-hostname-delete-node- e2e-tests-resize-nodes-3rtv2 /api/v1/namespaces/e2e-tests-resize-nodes-3rtv2/pods/my-hostname-delete-node-ddcm5 17f1916e-e4d8-11e6-a553-42010af00030 24962 0 {2017-01-27 13:32:24 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-3rtv2","name":"my-hostname-delete-node","uid":"17efa04e-e4d8-11e6-a553-42010af00030","apiVersion":"v1","resourceVersion":"24944"}}
    ] [{v1 ReplicationController my-hostname-delete-node 17efa04e-e4d8-11e6-a553-42010af00030 0xc8222602c7}] []} {[{default-token-n2d7s {<nil> <nil> <nil> <nil> <nil> 0xc8220921e0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-n2d7s true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8222603c0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-8ede1a92-p9kq 0xc820ee3740 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:32:24 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:32:25 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-27 13:32:24 -0800 PST}  }]   10.240.0.3 10.96.1.3 2017-01-27T13:32:24-08:00 [] [{my-hostname-delete-node {<nil> 0xc8213b2260 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://351fa0eb09388c5ccdc48913c6869639a1e409b870f9493a8fa0419e0f1a2d5d}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/230/
Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 27 17:59:07.880: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-74e2fbb9-qd88:
 container "runtime": expected RSS memory (MB) < 314572800; got 514887680
node gke-bootstrap-e2e-default-pool-74e2fbb9-1pqb:
 container "runtime": expected RSS memory (MB) < 314572800; got 518004736
node gke-bootstrap-e2e-default-pool-74e2fbb9-3sld:
 container "runtime": expected RSS memory (MB) < 314572800; got 532787200

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc82154ed90>: {
        s: "service verification failed for: 10.99.244.148\nexpected [service2-688hs service2-k62k8 service2-vh4tp]\nreceived [service2-688hs service2-vh4tp]",
    }
    service verification failed for: 10.99.244.148
    expected [service2-688hs service2-k62k8 service2-vh4tp]
    received [service2-688hs service2-vh4tp]
not to have occurred

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc824d12de0>: {
        s: "service verification failed for: 10.99.242.120\nexpected [service2-23mpg service2-qf6p2 service2-whrcp]\nreceived [service2-23mpg service2-qf6p2]",
    }
    service verification failed for: 10.99.242.120
    expected [service2-23mpg service2-qf6p2 service2-whrcp]
    received [service2-23mpg service2-qf6p2]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:725
Jan 27 17:07:44.948: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready

Issues about this test specifically: #26134

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc82558a670>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-27 19:47:56 -0800 PST} FinishedAt:{Time:2017-01-27 19:48:06 -0800 PST} ContainerID:docker://a521a22d33e4f602e81aae337427ed2b67a909346ee30e3df44ef689621c5e10}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-27 19:47:56 -0800 PST} FinishedAt:{Time:2017-01-27 19:48:06 -0800 PST} ContainerID:docker://a521a22d33e4f602e81aae337427ed2b67a909346ee30e3df44ef689621c5e10}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/231/
Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Jan 28 06:26:17.658: timeout waiting 15m0s for pods size to be 3

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
    <*errors.errorString | 0xc8214740c0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
    <*errors.errorString | 0xc822d4a3d0>: {
        s: "service verification failed for: 10.99.248.155\nexpected [service2-2jgxd service2-n9f80 service2-pq786]\nreceived [service2-2jgxd service2-n9f80 wget: download timed out]",
    }
    service verification failed for: 10.99.248.155
    expected [service2-2jgxd service2-n9f80 service2-pq786]
    received [service2-2jgxd service2-n9f80 wget: download timed out]
not to have occurred

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 28 02:09:48.022: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-e4975718-3trg:
 container "runtime": expected RSS memory (MB) < 314572800; got 528371712
node gke-bootstrap-e2e-default-pool-e4975718-61ps:
 container "runtime": expected RSS memory (MB) < 314572800; got 517005312
node gke-bootstrap-e2e-default-pool-e4975718-bk8b:
 container "runtime": expected RSS memory (MB) < 314572800; got 525094912

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26168 #27450
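
The DNS failure above is the same condition-poll timeout: the test keeps resolving the probe names for the service from inside a test pod until the lookups succeed. A stdlib illustration of the lookup it waits on (the service and namespace names are placeholders; cluster.local is the default cluster domain):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Placeholder service/namespace; cluster.local is the default cluster domain.
	name := "dns-test-service.e2e-tests-dns-xxxxx.svc.cluster.local"
	addrs, err := net.LookupHost(name)
	if err != nil {
		fmt.Println("lookup not yet succeeding:", err)
		return
	}
	fmt.Println(name, "resolves to", addrs)
}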

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:725
Jan 28 04:56:16.287: Could not reach HTTP service through 104.154.208.210:31561 after 5m0s: timed out waiting for the condition

Issues about this test specifically: #26134

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:968
Jan 28 06:07:18.434: expected un-ready endpoint for Service webserver within 5m0s, stdout: 

Issues about this test specifically: #26172 #40644

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:270
Jan 28 03:45:55.022: Frontend service did not start serving content in 600 seconds.

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8218956b0>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509
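
"Not all pods in namespace 'kube-system' running and ready within 5m0s" comes from the gate the scheduler-predicate tests apply before starting: roughly, every kube-system pod must be Running with its Ready condition True. A simplified sketch of that per-pod check, with stand-in types and placeholder pod names:

package main

import "fmt"

type podStatus struct {
	Name  string
	Phase string
	Ready bool
}

// firstNotReady returns the name of the first pod that is not Running and Ready,
// or "" if all pods pass.
func firstNotReady(pods []podStatus) string {
	for _, p := range pods {
		if p.Phase != "Running" || !p.Ready {
			return p.Name
		}
	}
	return ""
}

func main() {
	pods := []podStatus{
		{Name: "kube-dns-v20-xxxxx", Phase: "Running", Ready: true},
		{Name: "kube-proxy-gke-node", Phase: "Running", Ready: false}, // still failing its readiness probe
	}
	if bad := firstNotReady(pods); bad != "" {
		fmt.Printf("Not all pods in namespace 'kube-system' running and ready (e.g. %s)\n", bad)
	}
}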

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc8264b2550>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-28 04:56:38 -0800 PST} FinishedAt:{Time:2017-01-28 04:56:48 -0800 PST} ContainerID:docker://8a0e1ace41c3ea1208cf2767a28da96ce7596ab12d3a831f0cab01cc18184829}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-28 04:56:38 -0800 PST} FinishedAt:{Time:2017-01-28 04:56:48 -0800 PST} ContainerID:docker://8a0e1ace41c3ea1208cf2767a28da96ce7596ab12d3a831f0cab01cc18184829}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82182cee0>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #33883

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Jan 28 05:24:41.980: timeout waiting 15m0s for pods size to be 3

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected
    <int>: 0
to be ==
    <int>: 1

Issues about this test specifically: #26490 #33669

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98
Jan 28 04:35:24.706: At least one pod wasn't running and ready or succeeded at test start.

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
    <*errors.errorString | 0xc822f5fd30>: {
        s: "service verification failed for: 10.99.243.102\nexpected [service1-6pdj0 service1-ck1gk service1-rz88w]\nreceived [service1-6pdj0 service1-rz88w]",
    }
    service verification failed for: 10.99.243.102
    expected [service1-6pdj0 service1-ck1gk service1-rz88w]
    received [service1-6pdj0 service1-rz88w]
not to have occurred

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc822f5f320>: {
        s: "Not all pods in namespace 'kube-system' running and ready within 5m0s",
    }
    Not all pods in namespace 'kube-system' running and ready within 5m0s
not to have occurred

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c7060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Networking should function for intra-pod communication [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:225
Jan 28 06:09:45.296: Failed on attempt 2. Cleaning up. Details:
{
	"Hostname": "nettest-1v146",
	"Sent": {
		"nettest-1v146": 15,
		"nettest-pjtm5": 15
	},
	"Received": {
		"nettest-1v146": 15,
		"nettest-pjtm5": 15
	},
	"Errors": null,
	"Log": [
		"e2e-tests-nettest-hjgjg/nettest has 0 endpoints ([]), which is less than 3 as expected. Waiting for all endpoints to come up.",
		"Attempting to contact http://10.96.1.10:8080",
		"Warning: unable to contact the endpoint \"http://10.96.1.10:8080\": Post http://10.96.1.10:8080/write: dial tcp 10.96.1.10:8080: i/o timeout",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Attempting to contact http://10.96.2.3:8080",
		"Attempting to contact http://10.96.3.3:8080",
		"Declaring failure for e2e-tests-nettest-hjgjg/nettest with 2 sent and 2 received and 3 peers"
	],
	"StillContactingPeers": false
}

Issues about this test specifically: #26960 #27235

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc8213f45e0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27233 #36204

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/232/
Multiple broken tests:

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c5060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
    <*errors.errorString | 0xc8214aaa20>: {
        s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-28 11:38:47 -0800 PST} FinishedAt:{Time:2017-01-28 11:38:57 -0800 PST} ContainerID:docker://8ec60ebb382a887d488967b65abc7232f34db4e8fca4a53bff6105f5fa9b8c79}",
    }
    pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2017-01-28 11:38:47 -0800 PST} FinishedAt:{Time:2017-01-28 11:38:57 -0800 PST} ContainerID:docker://8ec60ebb382a887d488967b65abc7232f34db4e8fca4a53bff6105f5fa9b8c79}
not to have occurred

Issues about this test specifically: #30131 #31402

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 28 09:42:46.084: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-1b18aa17-7x0d:
 container "runtime": expected RSS memory (MB) < 314572800; got 516198400
node gke-bootstrap-e2e-default-pool-1b18aa17-km60:
 container "runtime": expected RSS memory (MB) < 314572800; got 527335424
node gke-bootstrap-e2e-default-pool-1b18aa17-w2pt:
 container "runtime": expected RSS memory (MB) < 314572800; got 524152832

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
    <*errors.errorString | 0xc8214b8be0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c5060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c5060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/233/
Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 28 17:12:15.770: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-c571f131-2jn0:
 container "runtime": expected RSS memory (MB) < 314572800; got 531537920
node gke-bootstrap-e2e-default-pool-c571f131-h0k4:
 container "runtime": expected RSS memory (MB) < 314572800; got 510005248
node gke-bootstrap-e2e-default-pool-c571f131-kdtz:
 container "runtime": expected RSS memory (MB) < 314572800; got 519221248

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200ee0b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
    <*errors.errorString | 0xc820b6ae60>: {
        s: "service verification failed for: 10.99.245.128\nexpected [service1-d1f8j service1-hxwkh service1-t8l66]\nreceived [service1-d1f8j service1-t8l66]",
    }
    service verification failed for: 10.99.245.128
    expected [service1-d1f8j service1-hxwkh service1-t8l66]
    received [service1-d1f8j service1-t8l66]
not to have occurred

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
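
The "service verification failed" error means the test kept hitting the service's cluster IP after the apiserver restart and expected to eventually see a response from each backing pod, but service1-hxwkh never answered within the window. The final comparison is essentially a set difference, sketched below with the names from this run; the part that actually queries the VIP is omitted and the helper name is made up:

    // Hypothetical sketch of comparing expected versus observed backend pod
    // names behind a service VIP; set logic only, no cluster access.
    package main

    import (
        "fmt"
        "sort"
    )

    func missing(expected, received []string) []string {
        seen := make(map[string]bool, len(received))
        for _, name := range received {
            seen[name] = true
        }
        var out []string
        for _, name := range expected {
            if !seen[name] {
                out = append(out, name)
            }
        }
        sort.Strings(out)
        return out
    }

    func main() {
        expected := []string{"service1-d1f8j", "service1-hxwkh", "service1-t8l66"}
        received := []string{"service1-d1f8j", "service1-t8l66"}
        if m := missing(expected, received); len(m) > 0 {
            fmt.Printf("service verification failed for: 10.99.245.128\nexpected %v\nreceived %v\nmissing %v\n",
                expected, received, m)
        }
    }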

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc8213cce10>: {
        s: "failed to wait for pods responding: pod with UID 56929fb3-e5c9-11e6-adfa-42010af00007 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-3s656/pods 31435} [{{ } {my-hostname-delete-node-45vwq my-hostname-delete-node- e2e-tests-resize-nodes-3s656 /api/v1/namespaces/e2e-tests-resize-nodes-3s656/pods/my-hostname-delete-node-45vwq 8e3fdce4-e5c9-11e6-adfa-42010af00007 31288 0 {2017-01-28 18:20:51 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-3s656\",\"name\":\"my-hostname-delete-node\",\"uid\":\"5690d422-e5c9-11e6-adfa-42010af00007\",\"apiVersion\":\"v1\",\"resourceVersion\":\"31169\"}}\n] [{v1 ReplicationController my-hostname-delete-node 5690d422-e5c9-11e6-adfa-42010af00007 0xc8216d14e7}] []} {[{default-token-mrpk5 {<nil> <nil> <nil> <nil> <nil> 0xc823f0ff80 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mrpk5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8216d15e0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-c571f131-kdtz 0xc82370f3c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:20:51 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:21:00 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:20:51 -0800 PST}  }]   10.240.0.4 10.96.1.4 2017-01-28T18:20:51-08:00 [] [{my-hostname-delete-node {<nil> 0xc821f87ea0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://fe32e9052662471df7ca152034fd870fb61bc34af45b0d4880ed7827163c5d21}]}} {{ } {my-hostname-delete-node-g1vxb my-hostname-delete-node- e2e-tests-resize-nodes-3s656 /api/v1/namespaces/e2e-tests-resize-nodes-3s656/pods/my-hostname-delete-node-g1vxb 5692c902-e5c9-11e6-adfa-42010af00007 31088 0 {2017-01-28 18:19:18 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-3s656\",\"name\":\"my-hostname-delete-node\",\"uid\":\"5690d422-e5c9-11e6-adfa-42010af00007\",\"apiVersion\":\"v1\",\"resourceVersion\":\"31068\"}}\n] [{v1 ReplicationController my-hostname-delete-node 5690d422-e5c9-11e6-adfa-42010af00007 0xc8216d1877}] []} {[{default-token-mrpk5 {<nil> <nil> <nil> <nil> <nil> 0xc821f6a000 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mrpk5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8216d1970 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-c571f131-h0k4 0xc82370f4c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:19:18 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2017-01-28 18:19:21 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:19:18 -0800 PST}  }]   10.240.0.5 10.96.3.4 2017-01-28T18:19:18-08:00 [] [{my-hostname-delete-node {<nil> 0xc821f87ec0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://dd2a1f77ba2e580977460dbd8e234c2957d19cb17c090252e34bcc33be279140}]}} {{ } {my-hostname-delete-node-whlhn my-hostname-delete-node- e2e-tests-resize-nodes-3s656 /api/v1/namespaces/e2e-tests-resize-nodes-3s656/pods/my-hostname-delete-node-whlhn 56927d9b-e5c9-11e6-adfa-42010af00007 31090 0 {2017-01-28 18:19:18 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-3s656\",\"name\":\"my-hostname-delete-node\",\"uid\":\"5690d422-e5c9-11e6-adfa-42010af00007\",\"apiVersion\":\"v1\",\"resourceVersion\":\"31068\"}}\n] [{v1 ReplicationController my-hostname-delete-node 5690d422-e5c9-11e6-adfa-42010af00007 0xc8216d1c07}] []} {[{default-token-mrpk5 {<nil> <nil> <nil> <nil> <nil> 0xc821f6a060 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mrpk5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8216d1d00 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-c571f131-h0k4 0xc82370f580 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:19:18 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:19:21 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:19:18 -0800 PST}  }]   10.240.0.5 10.96.3.3 2017-01-28T18:19:18-08:00 [] [{my-hostname-delete-node {<nil> 0xc821f87ee0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://dc6b10b2b964c068a748a3b06e635520f76c745630ddca183be3da6be7427144}]}}]}",
    }
    failed to wait for pods responding: pod with UID 56929fb3-e5c9-11e6-adfa-42010af00007 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-3s656/pods 31435} [{{ } {my-hostname-delete-node-45vwq my-hostname-delete-node- e2e-tests-resize-nodes-3s656 /api/v1/namespaces/e2e-tests-resize-nodes-3s656/pods/my-hostname-delete-node-45vwq 8e3fdce4-e5c9-11e6-adfa-42010af00007 31288 0 {2017-01-28 18:20:51 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-3s656","name":"my-hostname-delete-node","uid":"5690d422-e5c9-11e6-adfa-42010af00007","apiVersion":"v1","resourceVersion":"31169"}}
    ] [{v1 ReplicationController my-hostname-delete-node 5690d422-e5c9-11e6-adfa-42010af00007 0xc8216d14e7}] []} {[{default-token-mrpk5 {<nil> <nil> <nil> <nil> <nil> 0xc823f0ff80 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mrpk5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8216d15e0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-c571f131-kdtz 0xc82370f3c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:20:51 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:21:00 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:20:51 -0800 PST}  }]   10.240.0.4 10.96.1.4 2017-01-28T18:20:51-08:00 [] [{my-hostname-delete-node {<nil> 0xc821f87ea0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://fe32e9052662471df7ca152034fd870fb61bc34af45b0d4880ed7827163c5d21}]}} {{ } {my-hostname-delete-node-g1vxb my-hostname-delete-node- e2e-tests-resize-nodes-3s656 /api/v1/namespaces/e2e-tests-resize-nodes-3s656/pods/my-hostname-delete-node-g1vxb 5692c902-e5c9-11e6-adfa-42010af00007 31088 0 {2017-01-28 18:19:18 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-3s656","name":"my-hostname-delete-node","uid":"5690d422-e5c9-11e6-adfa-42010af00007","apiVersion":"v1","resourceVersion":"31068"}}
    ] [{v1 ReplicationController my-hostname-delete-node 5690d422-e5c9-11e6-adfa-42010af00007 0xc8216d1877}] []} {[{default-token-mrpk5 {<nil> <nil> <nil> <nil> <nil> 0xc821f6a000 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mrpk5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8216d1970 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-c571f131-h0k4 0xc82370f4c0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:19:18 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:19:21 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:19:18 -0800 PST}  }]   10.240.0.5 10.96.3.4 2017-01-28T18:19:18-08:00 [] [{my-hostname-delete-node {<nil> 0xc821f87ec0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://dd2a1f77ba2e580977460dbd8e234c2957d19cb17c090252e34bcc33be279140}]}} {{ } {my-hostname-delete-node-whlhn my-hostname-delete-node- e2e-tests-resize-nodes-3s656 /api/v1/namespaces/e2e-tests-resize-nodes-3s656/pods/my-hostname-delete-node-whlhn 56927d9b-e5c9-11e6-adfa-42010af00007 31090 0 {2017-01-28 18:19:18 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-3s656","name":"my-hostname-delete-node","uid":"5690d422-e5c9-11e6-adfa-42010af00007","apiVersion":"v1","resourceVersion":"31068"}}
    ] [{v1 ReplicationController my-hostname-delete-node 5690d422-e5c9-11e6-adfa-42010af00007 0xc8216d1c07}] []} {[{default-token-mrpk5 {<nil> <nil> <nil> <nil> <nil> 0xc821f6a060 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-mrpk5 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8216d1d00 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-c571f131-h0k4 0xc82370f580 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:19:18 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:19:21 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 18:19:18 -0800 PST}  }]   10.240.0.5 10.96.3.3 2017-01-28T18:19:18-08:00 [] [{my-hostname-delete-node {<nil> 0xc821f87ee0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://dc6b10b2b964c068a748a3b06e635520f76c745630ddca183be3da6be7427144}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204
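
The resize-nodes failure is about pod identity rather than count: after the node was deleted, the ReplicationController restored the replica count by creating a new pod, so no pod with the original UID quoted above remains, and the test reports it as "no longer a member of the replica set". A small sketch of that UID membership check, using pod names and UIDs from the dump above and a made-up helper:

    // Hypothetical sketch of the UID membership check implied by
    // "pod with UID ... is no longer a member of the replica set".
    package main

    import "fmt"

    type pod struct {
        Name string
        UID  string
    }

    func stillMember(uid string, current []pod) bool {
        for _, p := range current {
            if p.UID == uid {
                return true
            }
        }
        return false
    }

    func main() {
        current := []pod{
            {"my-hostname-delete-node-45vwq", "8e3fdce4-e5c9-11e6-adfa-42010af00007"},
            {"my-hostname-delete-node-g1vxb", "5692c902-e5c9-11e6-adfa-42010af00007"},
            {"my-hostname-delete-node-whlhn", "56927d9b-e5c9-11e6-adfa-42010af00007"},
        }
        uid := "56929fb3-e5c9-11e6-adfa-42010af00007" // the UID named in the failure
        if !stillMember(uid, current) {
            fmt.Printf("pod with UID %s is no longer a member of the replica set\n", uid)
        }
    }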

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/234/
Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 28 23:43:09.747: Memory usage exceeding limits:
node gke-bootstrap-e2e-default-pool-4298daaa-hshz:
 container "runtime": expected RSS memory (MB) < 314572800; got 531202048
node gke-bootstrap-e2e-default-pool-4298daaa-vv7v:
 container "runtime": expected RSS memory (MB) < 314572800; got 510881792
node gke-bootstrap-e2e-default-pool-4298daaa-xt9c:
 container "runtime": expected RSS memory (MB) < 314572800; got 525565952

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200c1060>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:453
Expected error:
    <*errors.errorString | 0xc8211094c0>: {
        s: "failed to wait for pods responding: pod with UID 5a53e8dd-e5f7-11e6-aa08-42010af00018 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-hc6n2/pods 25727} [{{ } {my-hostname-delete-node-l6sp4 my-hostname-delete-node- e2e-tests-resize-nodes-hc6n2 /api/v1/namespaces/e2e-tests-resize-nodes-hc6n2/pods/my-hostname-delete-node-l6sp4 5a53c2de-e5f7-11e6-aa08-42010af00018 25313 0 {2017-01-28 23:48:41 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-hc6n2\",\"name\":\"my-hostname-delete-node\",\"uid\":\"5a51bf97-e5f7-11e6-aa08-42010af00018\",\"apiVersion\":\"v1\",\"resourceVersion\":\"25298\"}}\n] [{v1 ReplicationController my-hostname-delete-node 5a51bf97-e5f7-11e6-aa08-42010af00018 0xc820ff4f77}] []} {[{default-token-52mpp {<nil> <nil> <nil> <nil> <nil> 0xc826e1ba10 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-52mpp true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820ff50f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-4298daaa-vv7v 0xc82238ea00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:48:41 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:48:42 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:48:41 -0800 PST}  }]   10.240.0.5 10.96.3.3 2017-01-28T23:48:41-08:00 [] [{my-hostname-delete-node {<nil> 0xc82181b540 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://279a2a8d0e7ba845996d9fa750350667ba8d1a5e2fbd935b868da27924de9624}]}} {{ } {my-hostname-delete-node-lq2jv my-hostname-delete-node- e2e-tests-resize-nodes-hc6n2 /api/v1/namespaces/e2e-tests-resize-nodes-hc6n2/pods/my-hostname-delete-node-lq2jv aa6cd781-e5f7-11e6-aa08-42010af00018 25558 0 {2017-01-28 23:50:56 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-hc6n2\",\"name\":\"my-hostname-delete-node\",\"uid\":\"5a51bf97-e5f7-11e6-aa08-42010af00018\",\"apiVersion\":\"v1\",\"resourceVersion\":\"25432\"}}\n] [{v1 ReplicationController my-hostname-delete-node 5a51bf97-e5f7-11e6-aa08-42010af00018 0xc820ff54b7}] []} {[{default-token-52mpp {<nil> <nil> <nil> <nil> <nil> 0xc826e1bb90 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-52mpp true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820ff5600 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-4298daaa-vv7v 0xc82238eac0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:50:55 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} 
{2017-01-28 23:50:57 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:50:56 -0800 PST}  }]   10.240.0.5 10.96.3.4 2017-01-28T23:50:55-08:00 [] [{my-hostname-delete-node {<nil> 0xc82181b560 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://86aa758d38b7f818df8680039c6a9d527059ca02fc6e8432ec4c751f19ac4324}]}} {{ } {my-hostname-delete-node-n2vm6 my-hostname-delete-node- e2e-tests-resize-nodes-hc6n2 /api/v1/namespaces/e2e-tests-resize-nodes-hc6n2/pods/my-hostname-delete-node-n2vm6 5a53da45-e5f7-11e6-aa08-42010af00018 25319 0 {2017-01-28 23:48:41 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-hc6n2\",\"name\":\"my-hostname-delete-node\",\"uid\":\"5a51bf97-e5f7-11e6-aa08-42010af00018\",\"apiVersion\":\"v1\",\"resourceVersion\":\"25298\"}}\n] [{v1 ReplicationController my-hostname-delete-node 5a51bf97-e5f7-11e6-aa08-42010af00018 0xc820ff5997}] []} {[{default-token-52mpp {<nil> <nil> <nil> <nil> <nil> 0xc826e1bbf0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-52mpp true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820ff5a90 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-4298daaa-xt9c 0xc82238ec00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:48:41 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:48:44 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:48:41 -0800 PST}  }]   10.240.0.2 10.96.0.3 2017-01-28T23:48:41-08:00 [] [{my-hostname-delete-node {<nil> 0xc82181b580 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b389921eb551d216e146c16a47fa31376e45f8c6b043437e2459ee5f2ec2183e}]}}]}",
    }
    failed to wait for pods responding: pod with UID 5a53e8dd-e5f7-11e6-aa08-42010af00018 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-hc6n2/pods 25727} [{{ } {my-hostname-delete-node-l6sp4 my-hostname-delete-node- e2e-tests-resize-nodes-hc6n2 /api/v1/namespaces/e2e-tests-resize-nodes-hc6n2/pods/my-hostname-delete-node-l6sp4 5a53c2de-e5f7-11e6-aa08-42010af00018 25313 0 {2017-01-28 23:48:41 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-hc6n2","name":"my-hostname-delete-node","uid":"5a51bf97-e5f7-11e6-aa08-42010af00018","apiVersion":"v1","resourceVersion":"25298"}}
    ] [{v1 ReplicationController my-hostname-delete-node 5a51bf97-e5f7-11e6-aa08-42010af00018 0xc820ff4f77}] []} {[{default-token-52mpp {<nil> <nil> <nil> <nil> <nil> 0xc826e1ba10 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-52mpp true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820ff50f0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-4298daaa-vv7v 0xc82238ea00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:48:41 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:48:42 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:48:41 -0800 PST}  }]   10.240.0.5 10.96.3.3 2017-01-28T23:48:41-08:00 [] [{my-hostname-delete-node {<nil> 0xc82181b540 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://279a2a8d0e7ba845996d9fa750350667ba8d1a5e2fbd935b868da27924de9624}]}} {{ } {my-hostname-delete-node-lq2jv my-hostname-delete-node- e2e-tests-resize-nodes-hc6n2 /api/v1/namespaces/e2e-tests-resize-nodes-hc6n2/pods/my-hostname-delete-node-lq2jv aa6cd781-e5f7-11e6-aa08-42010af00018 25558 0 {2017-01-28 23:50:56 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-hc6n2","name":"my-hostname-delete-node","uid":"5a51bf97-e5f7-11e6-aa08-42010af00018","apiVersion":"v1","resourceVersion":"25432"}}
    ] [{v1 ReplicationController my-hostname-delete-node 5a51bf97-e5f7-11e6-aa08-42010af00018 0xc820ff54b7}] []} {[{default-token-52mpp {<nil> <nil> <nil> <nil> <nil> 0xc826e1bb90 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-52mpp true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820ff5600 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-4298daaa-vv7v 0xc82238eac0 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:50:55 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:50:57 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:50:56 -0800 PST}  }]   10.240.0.5 10.96.3.4 2017-01-28T23:50:55-08:00 [] [{my-hostname-delete-node {<nil> 0xc82181b560 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://86aa758d38b7f818df8680039c6a9d527059ca02fc6e8432ec4c751f19ac4324}]}} {{ } {my-hostname-delete-node-n2vm6 my-hostname-delete-node- e2e-tests-resize-nodes-hc6n2 /api/v1/namespaces/e2e-tests-resize-nodes-hc6n2/pods/my-hostname-delete-node-n2vm6 5a53da45-e5f7-11e6-aa08-42010af00018 25319 0 {2017-01-28 23:48:41 -0800 PST} <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-hc6n2","name":"my-hostname-delete-node","uid":"5a51bf97-e5f7-11e6-aa08-42010af00018","apiVersion":"v1","resourceVersion":"25298"}}
    ] [{v1 ReplicationController my-hostname-delete-node 5a51bf97-e5f7-11e6-aa08-42010af00018 0xc820ff5997}] []} {[{default-token-52mpp {<nil> <nil> <nil> <nil> <nil> 0xc826e1bbf0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-52mpp true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820ff5a90 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-4298daaa-xt9c 0xc82238ec00 []  } {Running [{Initialized True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:48:41 -0800 PST}  } {Ready True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:48:44 -0800 PST}  } {PodScheduled True {0001-01-01 00:00:00 +0000 UTC} {2017-01-28 23:48:41 -0800 PST}  }]   10.240.0.2 10.96.0.3 2017-01-28T23:48:41-08:00 [] [{my-hostname-delete-node {<nil> 0xc82181b580 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b389921eb551d216e146c16a47fa31376e45f8c6b043437e2459ee5f2ec2183e}]}}]}
not to have occurred

Issues about this test specifically: #27233 #36204

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/235/
Multiple broken tests:

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200e60b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 29 07:46:07.728: Memory usage exceeding limits:
node gke-bootstrap-e2e-default-pool-544ba067-19tw:
 container "runtime": expected RSS memory (MB) < 314572800; got 526327808
node gke-bootstrap-e2e-default-pool-544ba067-251k:
 container "runtime": expected RSS memory (MB) < 314572800; got 539086848
node gke-bootstrap-e2e-default-pool-544ba067-ngrp:
 container "runtime": expected RSS memory (MB) < 314572800; got 530649088

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200e60b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200e60b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200e60b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/236/
Multiple broken tests:

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #26490 #33669

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc8200e40b0>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 29 11:09:15.851: Memory usage exceeding limits:
node gke-bootstrap-e2e-default-pool-002618f8-pjbg:
 container "runtime": expected RSS memory (MB) < 314572800; got 527642624
node gke-bootstrap-e2e-default-pool-002618f8-4wcs:
 container "runtime": expected RSS memory (MB) < 314572800; got 513695744
node gke-bootstrap-e2e-default-pool-002618f8-n9z7:
 container "runtime": expected RSS memory (MB) < 314572800; got 526626816

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-master/237/
Multiple broken tests:

Failed: Test {e2e.go}

exit status 1

Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483

Failed: [k8s.io] Pods should invoke init containers on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:798
Expected
    <bool>: false
to be true

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] V1Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Issues about this test specifically: #27704 #30127 #30602 #31070

Failed: [k8s.io] Pods should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:913
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #29954

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Jan 29 17:22:14.227: Memory usage exceeding limits:
node gke-bootstrap-e2e-default-pool-1d6d2c10-05gr:
 container "runtime": expected RSS memory (MB) < 314572800; got 521457664
node gke-bootstrap-e2e-default-pool-1d6d2c10-499c:
 container "runtime": expected RSS memory (MB) < 314572800; got 520704000
node gke-bootstrap-e2e-default-pool-1d6d2c10-9s52:
 container "runtime": expected RSS memory (MB) < 314572800; got 523784192

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209

Failed: [k8s.io] Pods should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:1022
Expected
    <*errors.errorString | 0xc820019180>: {
        s: "timed out waiting for the condition",
    }
to be nil

Issues about this test specifically: #27465

Failed: [k8s.io] Pods should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:729
Expected
    <bool>: false
to be true

Issues about this test specifically: #26131
