kubernetes-e2e-gke-container_vm-1.4-gci-1.5-upgrade-cluster: broken test run #37924

Closed
k8s-github-robot opened this issue Dec 2, 2016 · 2 comments

Labels: area/test-infra, kind/flake, priority/backlog

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-gci-1.5-upgrade-cluster/595/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.212.155 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-7g33s -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-7g33s\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-7g33s/services/redis-master\", \"uid\":\"4c0acfdd-b857-11e6-bc27-42010af00016\", \"resourceVersion\":\"44974\", \"creationTimestamp\":\"2016-12-02T06:19:35Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.246.187\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8209f55e0 exit status 1 <nil> true [0xc8210be2c8 0xc8210be2e0 0xc8210be2f8] [0xc8210be2c8 0xc8210be2e0 0xc8210be2f8] [0xc8210be2d8 0xc8210be2f0] [0xaf84c0 0xaf84c0] 0xc8210d0360}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-7g33s\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-7g33s/services/redis-master\", \"uid\":\"4c0acfdd-b857-11e6-bc27-42010af00016\", \"resourceVersion\":\"44974\", \"creationTimestamp\":\"2016-12-02T06:19:35Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.246.187\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://130.211.212.155 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-7g33s -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-7g33s", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-7g33s/services/redis-master", "uid":"4c0acfdd-b857-11e6-bc27-42010af00016", "resourceVersion":"44974", "creationTimestamp":"2016-12-02T06:19:35Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.246.187"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8209f55e0 exit status 1 <nil> true [0xc8210be2c8 0xc8210be2e0 0xc8210be2f8] [0xc8210be2c8 0xc8210be2e0 0xc8210be2f8] [0xc8210be2d8 0xc8210be2f0] [0xaf84c0 0xaf84c0] 0xc8210d0360}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-7g33s", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-7g33s/services/redis-master", "uid":"4c0acfdd-b857-11e6-bc27-42010af00016", "resourceVersion":"44974", "creationTimestamp":"2016-12-02T06:19:35Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.246.187"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741 #37820
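
Context for this flake: kubectl is asked for `{.spec.ports[0].nodePort}`, but the Service it reads back is `type: ClusterIP`, and only NodePort/LoadBalancer Services carry a `nodePort`, so the template has nothing to resolve. A minimal sketch of the same lookup with `k8s.io/client-go/util/jsonpath`, run against a hand-built map mirroring the object in the log (the map literal and the program are illustrative, not the e2e test's code):

```go
package main

import (
	"os"

	"k8s.io/client-go/util/jsonpath"
)

func main() {
	// Hand-built stand-in for the Service printed above: type ClusterIP,
	// so the port entry has no "nodePort" key.
	svc := map[string]interface{}{
		"spec": map[string]interface{}{
			"type": "ClusterIP",
			"ports": []interface{}{
				map[string]interface{}{"port": 6379, "protocol": "TCP", "targetPort": "redis-server"},
			},
		},
	}

	jp := jsonpath.New("nodePort")
	if err := jp.Parse("{.spec.ports[0].nodePort}"); err != nil {
		panic(err)
	}
	// Execute fails with "nodePort is not found", the same message kubectl
	// surfaces in the failure above.
	if err := jp.Execute(os.Stdout, svc); err != nil {
		os.Stderr.WriteString("error: " + err.Error() + "\n")
		os.Exit(1)
	}
}
```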

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8220c2620>: {
        s: "Namespace e2e-tests-services-tv8v5 is active",
    }
    Namespace e2e-tests-services-tv8v5 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #30078 #30142
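
The [Serial] scheduler failures in this run share one shape: the test refuses to start while a namespace leaked by an earlier test, here e2e-tests-services-tv8v5, is still active, since a serial test needs the cluster to itself. A compilable sketch of that kind of gate with current client-go (the helper name and the modern List signature are mine, not the 1.4-era framework code):

```go
package e2esketch

import (
	"context"
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkNoLeakedE2ENamespaces is a hypothetical helper producing the same
// failure shape as above: any still-active e2e test namespace means some
// earlier test has not cleaned up, so the serial test must not proceed.
func checkNoLeakedE2ENamespaces(ctx context.Context, c kubernetes.Interface) error {
	nsList, err := c.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, ns := range nsList.Items {
		if strings.HasPrefix(ns.Name, "e2e-tests-") && ns.Status.Phase == corev1.NamespaceActive {
			return fmt.Errorf("Namespace %s is active", ns.Name)
		}
	}
	return nil
}
```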

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820018c30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947
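
The "timed out waiting for the condition" in this failure, and in the Job/V1Job failures below, is the message of `wait.ErrWaitTimeout` from `k8s.io/apimachinery/pkg/util/wait`: the test polls for a condition (here, the updated overlapping annotation becoming visible) and gives up at the deadline. A minimal sketch:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll a condition that never succeeds; at the timeout wait.Poll
	// returns wait.ErrWaitTimeout, whose Error() is exactly the string
	// seen in these failures.
	err := wait.Poll(100*time.Millisecond, 500*time.Millisecond, func() (bool, error) {
		return false, nil // condition never becomes true
	})
	fmt.Println(err) // timed out waiting for the condition
}
```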

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
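
The rescheduler assertion above is a bare Gomega equality check: the test counted 0 scheduled critical pods where it expected 1, and Gomega renders that as `Expected <int>: 0 to equal <int>: 1`. A sketch of the same assertion style, using standalone Gomega with a custom fail handler so it runs outside Ginkgo (not the test's actual code):

```go
package main

import (
	"fmt"

	"github.com/onsi/gomega"
)

func main() {
	// A fail handler that just prints, so the assertion failure is
	// visible without a test runner. Expect(0).To(Equal(1)) renders the
	// "Expected <int>: 0 to equal <int>: 1" message from the log above.
	g := gomega.NewGomega(func(message string, _ ...int) {
		fmt.Println(message)
	})
	g.Expect(0).To(gomega.Equal(1))
}
```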

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428
Expected error:
    <*net.OpError | 0xc820ba22d0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\x82\xd3ԛ",
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 130.211.212.155:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:419

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
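
For reference, the `*net.OpError` above is an ordinary TCP connection-refused from the master at 130.211.212.155:443 while the apiserver was restarting: the syscall errno 0x6f is 111, ECONNREFUSED on Linux. A small sketch that classifies that error shape (it dials a local closed port rather than the cluster, so it runs anywhere):

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	// Dialing a port with no listener yields a *net.OpError wrapping
	// ECONNREFUSED, the same shape as the failure above. Port 1 on
	// localhost is assumed closed here.
	_, err := net.DialTimeout("tcp", "127.0.0.1:1", 2*time.Second)
	var opErr *net.OpError
	if errors.As(err, &opErr) && errors.Is(err, syscall.ECONNREFUSED) {
		fmt.Println("connection refused:", opErr)
	} else if err != nil {
		fmt.Println("other dial error:", err)
	}
}
```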

Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:63
Expected error:
    <*errors.StatusError | 0xc8211d2500>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \\\"gke-e1c2ee1293a829f188b4\\\"?'\\nTrying to reach: 'https://gke-jenkins-e2e-default-pool-5a039e0c-ulfd:10250/logs/'\") has prevented the request from succeeding",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-e1c2ee1293a829f188b4\"?'\nTrying to reach: 'https://gke-jenkins-e2e-default-pool-5a039e0c-ulfd:10250/logs/'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'No SSH tunnels currently open. Were the targets able to accept an ssh-key for user \"gke-e1c2ee1293a829f188b4\"?'\nTrying to reach: 'https://gke-jenkins-e2e-default-pool-5a039e0c-ulfd:10250/logs/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:332

Issues about this test specifically: #32936

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820c859c0>: {
        s: "Namespace e2e-tests-services-tv8v5 is active",
    }
    Namespace e2e-tests-services-tv8v5 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821e12ce0>: {
        s: "Namespace e2e-tests-services-tv8v5 is active",
    }
    Namespace e2e-tests-services-tv8v5 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #34223

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820018c30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820cf1060>: {
        s: "Namespace e2e-tests-services-tv8v5 is active",
    }
    Namespace e2e-tests-services-tv8v5 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211

Issues about this test specifically: #28091

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820018c30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Previous issues for this suite: #37759

@k8s-github-robot added the kind/flake, priority/backlog, and area/test-infra labels on Dec 2, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-gci-1.5-upgrade-cluster/596/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.32.191 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-0dvmk -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-0dvmk\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-0dvmk/services/redis-master\", \"uid\":\"67f0db9c-b887-11e6-a0b4-42010af00017\", \"resourceVersion\":\"35830\", \"creationTimestamp\":\"2016-12-02T12:03:58Z\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.250.92\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8210b6de0 exit status 1 <nil> true [0xc8211be100 0xc8211be118 0xc8211be130] [0xc8211be100 0xc8211be118 0xc8211be130] [0xc8211be110 0xc8211be128] [0xaf84c0 0xaf84c0] 0xc820f7e660}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-0dvmk\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-0dvmk/services/redis-master\", \"uid\":\"67f0db9c-b887-11e6-a0b4-42010af00017\", \"resourceVersion\":\"35830\", \"creationTimestamp\":\"2016-12-02T12:03:58Z\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.127.250.92\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.32.191 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-0dvmk -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-0dvmk", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-0dvmk/services/redis-master", "uid":"67f0db9c-b887-11e6-a0b4-42010af00017", "resourceVersion":"35830", "creationTimestamp":"2016-12-02T12:03:58Z"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.250.92", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8210b6de0 exit status 1 <nil> true [0xc8211be100 0xc8211be118 0xc8211be130] [0xc8211be100 0xc8211be118 0xc8211be130] [0xc8211be110 0xc8211be128] [0xaf84c0 0xaf84c0] 0xc820f7e660}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-0dvmk", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-0dvmk/services/redis-master", "uid":"67f0db9c-b887-11e6-a0b4-42010af00017", "resourceVersion":"35830", "creationTimestamp":"2016-12-02T12:03:58Z"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.127.250.92", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc821ee2aa0>: {
        s: "failed to wait for pods responding: pod with UID 0baf3eef-b88f-11e6-a0b4-42010af00017 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-v4qv3/pods 42848} [{{ } {my-hostname-delete-node-qk8kf my-hostname-delete-node- e2e-tests-resize-nodes-v4qv3 /api/v1/namespaces/e2e-tests-resize-nodes-v4qv3/pods/my-hostname-delete-node-qk8kf 3a8e6a3d-b88f-11e6-a0b4-42010af00017 42681 0 2016-12-02 04:59:58 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-v4qv3\",\"name\":\"my-hostname-delete-node\",\"uid\":\"0baca032-b88f-11e6-a0b4-42010af00017\",\"apiVersion\":\"v1\",\"resourceVersion\":\"42581\"}}\n] [{v1 ReplicationController my-hostname-delete-node 0baca032-b88f-11e6-a0b4-42010af00017 0xc820745027}] [] } {[{default-token-234v1 {<nil> <nil> <nil> <nil> <nil> 0xc821e83260 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-234v1 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820745150 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-c351fdc4-xu77 0xc821a30d40 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:59:58 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 05:00:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:59:58 -0800 PST  }]   10.240.0.2 10.124.2.59 2016-12-02 04:59:58 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821c9d980 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://6a0c37ea54c0081ecb0a07e8da94056930ce29a6d8b79e27296b0d56782a8639}]}} {{ } {my-hostname-delete-node-qt76t my-hostname-delete-node- e2e-tests-resize-nodes-v4qv3 /api/v1/namespaces/e2e-tests-resize-nodes-v4qv3/pods/my-hostname-delete-node-qt76t 0baf5aed-b88f-11e6-a0b4-42010af00017 42510 0 2016-12-02 04:58:39 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-v4qv3\",\"name\":\"my-hostname-delete-node\",\"uid\":\"0baca032-b88f-11e6-a0b4-42010af00017\",\"apiVersion\":\"v1\",\"resourceVersion\":\"42493\"}}\n] [{v1 ReplicationController my-hostname-delete-node 0baca032-b88f-11e6-a0b4-42010af00017 0xc820745437}] [] } {[{default-token-234v1 {<nil> <nil> <nil> <nil> <nil> 0xc821e832c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-234v1 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820745530 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-c351fdc4-oqdv 0xc821a30e00 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:58:39 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2016-12-02 04:58:41 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:58:39 -0800 PST  }]   10.240.0.5 10.124.3.9 2016-12-02 04:58:39 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821c9d9a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://27d8b8fda2bac9642313d423ad98a706c2c5ec95ee1c9b31d8c9b834c6641701}]}} {{ } {my-hostname-delete-node-wzljf my-hostname-delete-node- e2e-tests-resize-nodes-v4qv3 /api/v1/namespaces/e2e-tests-resize-nodes-v4qv3/pods/my-hostname-delete-node-wzljf 0bafadf7-b88f-11e6-a0b4-42010af00017 42512 0 2016-12-02 04:58:39 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-v4qv3\",\"name\":\"my-hostname-delete-node\",\"uid\":\"0baca032-b88f-11e6-a0b4-42010af00017\",\"apiVersion\":\"v1\",\"resourceVersion\":\"42493\"}}\n] [{v1 ReplicationController my-hostname-delete-node 0baca032-b88f-11e6-a0b4-42010af00017 0xc8207457d7}] [] } {[{default-token-234v1 {<nil> <nil> <nil> <nil> <nil> 0xc821e83320 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-234v1 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820745950 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-c351fdc4-oqdv 0xc821a30ec0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:58:39 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:58:41 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:58:39 -0800 PST  }]   10.240.0.5 10.124.3.8 2016-12-02 04:58:39 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821c9d9c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://899a1eacf8af913c095ddfa109f34e3b90855ccb225f346e8b37152fd2aa9136}]}}]}",
    }
    failed to wait for pods responding: pod with UID 0baf3eef-b88f-11e6-a0b4-42010af00017 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-v4qv3/pods 42848} [{{ } {my-hostname-delete-node-qk8kf my-hostname-delete-node- e2e-tests-resize-nodes-v4qv3 /api/v1/namespaces/e2e-tests-resize-nodes-v4qv3/pods/my-hostname-delete-node-qk8kf 3a8e6a3d-b88f-11e6-a0b4-42010af00017 42681 0 2016-12-02 04:59:58 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-v4qv3","name":"my-hostname-delete-node","uid":"0baca032-b88f-11e6-a0b4-42010af00017","apiVersion":"v1","resourceVersion":"42581"}}
    ] [{v1 ReplicationController my-hostname-delete-node 0baca032-b88f-11e6-a0b4-42010af00017 0xc820745027}] [] } {[{default-token-234v1 {<nil> <nil> <nil> <nil> <nil> 0xc821e83260 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-234v1 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820745150 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-c351fdc4-xu77 0xc821a30d40 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:59:58 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 05:00:00 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:59:58 -0800 PST  }]   10.240.0.2 10.124.2.59 2016-12-02 04:59:58 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821c9d980 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://6a0c37ea54c0081ecb0a07e8da94056930ce29a6d8b79e27296b0d56782a8639}]}} {{ } {my-hostname-delete-node-qt76t my-hostname-delete-node- e2e-tests-resize-nodes-v4qv3 /api/v1/namespaces/e2e-tests-resize-nodes-v4qv3/pods/my-hostname-delete-node-qt76t 0baf5aed-b88f-11e6-a0b4-42010af00017 42510 0 2016-12-02 04:58:39 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-v4qv3","name":"my-hostname-delete-node","uid":"0baca032-b88f-11e6-a0b4-42010af00017","apiVersion":"v1","resourceVersion":"42493"}}
    ] [{v1 ReplicationController my-hostname-delete-node 0baca032-b88f-11e6-a0b4-42010af00017 0xc820745437}] [] } {[{default-token-234v1 {<nil> <nil> <nil> <nil> <nil> 0xc821e832c0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-234v1 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820745530 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-c351fdc4-oqdv 0xc821a30e00 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:58:39 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:58:41 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:58:39 -0800 PST  }]   10.240.0.5 10.124.3.9 2016-12-02 04:58:39 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821c9d9a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://27d8b8fda2bac9642313d423ad98a706c2c5ec95ee1c9b31d8c9b834c6641701}]}} {{ } {my-hostname-delete-node-wzljf my-hostname-delete-node- e2e-tests-resize-nodes-v4qv3 /api/v1/namespaces/e2e-tests-resize-nodes-v4qv3/pods/my-hostname-delete-node-wzljf 0bafadf7-b88f-11e6-a0b4-42010af00017 42512 0 2016-12-02 04:58:39 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-v4qv3","name":"my-hostname-delete-node","uid":"0baca032-b88f-11e6-a0b4-42010af00017","apiVersion":"v1","resourceVersion":"42493"}}
    ] [{v1 ReplicationController my-hostname-delete-node 0baca032-b88f-11e6-a0b4-42010af00017 0xc8207457d7}] [] } {[{default-token-234v1 {<nil> <nil> <nil> <nil> <nil> 0xc821e83320 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-234v1 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc820745950 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-c351fdc4-oqdv 0xc821a30ec0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:58:39 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:58:41 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 04:58:39 -0800 PST  }]   10.240.0.5 10.124.3.8 2016-12-02 04:58:39 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821c9d9c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://899a1eacf8af913c095ddfa109f34e3b90855ccb225f346e8b37152fd2aa9136}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:543
Expected error:
    <*errors.errorString | 0xc82125e180>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:338

Issues about this test specifically: #27324 #35852 #35880

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820178b90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc820178b90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #37177

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc820178b90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gke-container_vm-1.4-gci-1.5-upgrade-cluster/597/

Multiple broken tests:

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:269
Dec  2 11:36:15.823: CPU usage exceeding limits:
 node gke-jenkins-e2e-default-pool-7136b4de-m14s:
 container "kubelet": expected 50th% usage < 0.350; got 0.396, container "kubelet": expected 95th% usage < 0.500; got 0.839
node gke-jenkins-e2e-default-pool-7136b4de-pru0:
 container "kubelet": expected 95th% usage < 0.500; got 0.557, container "kubelet": expected 50th% usage < 0.350; got 0.380
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:187

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399
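
This kubelet-perf failure is a straightforward threshold check: per-container CPU usage samples are summarized at the 50th and 95th percentiles and compared against fixed ceilings (0.350 and 0.500 cores for the kubelet under 100 pods/node). A toy sketch of that comparison using the numbers from the log above (the types and names are illustrative, not the framework's):

```go
package main

import "fmt"

func main() {
	// Percentile ceilings and observed usage for container "kubelet" on
	// node ...-m14s, taken from the failure message above.
	limits := map[int]float64{50: 0.350, 95: 0.500}
	usage := map[int]float64{50: 0.396, 95: 0.839}
	for _, p := range []int{50, 95} {
		if usage[p] > limits[p] {
			fmt.Printf("container %q: expected %dth%% usage < %.3f; got %.3f\n",
				"kubelet", p, limits[p], usage[p])
		}
	}
}
```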

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.68.204 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-q8m0x -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-02T15:01:20Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-q8m0x\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-q8m0x/services/redis-master\", \"uid\":\"2ee994ae-b8a0-11e6-a938-42010af00028\", \"resourceVersion\":\"5518\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.127.242.192\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82159c220 exit status 1 <nil> true [0xc820f223d0 0xc820f22408 0xc820f224b0] [0xc820f223d0 0xc820f22408 0xc820f224b0] [0xc820f223f8 0xc820f224a0] [0xaf84c0 0xaf84c0] 0xc8215b5380}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-12-02T15:01:20Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-q8m0x\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-q8m0x/services/redis-master\", \"uid\":\"2ee994ae-b8a0-11e6-a938-42010af00028\", \"resourceVersion\":\"5518\"}, \"spec\":map[string]interface {}{\"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"clusterIP\":\"10.127.242.192\", \"type\":\"ClusterIP\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.197.68.204 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-q8m0x -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-02T15:01:20Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-q8m0x", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-q8m0x/services/redis-master", "uid":"2ee994ae-b8a0-11e6-a938-42010af00028", "resourceVersion":"5518"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.127.242.192", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82159c220 exit status 1 <nil> true [0xc820f223d0 0xc820f22408 0xc820f224b0] [0xc820f223d0 0xc820f22408 0xc820f224b0] [0xc820f223f8 0xc820f224a0] [0xaf84c0 0xaf84c0] 0xc8215b5380}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-12-02T15:01:20Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-q8m0x", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-q8m0x/services/redis-master", "uid":"2ee994ae-b8a0-11e6-a938-42010af00028", "resourceVersion":"5518"}, "spec":map[string]interface {}{"sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"role":"master", "app":"redis"}, "clusterIP":"10.127.242.192", "type":"ClusterIP"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc82141b340>: {
        s: "failed to wait for pods responding: pod with UID 023fa02e-b8af-11e6-84da-42010af00028 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-5n0xb/pods 19038} [{{ } {my-hostname-delete-node-84k9h my-hostname-delete-node- e2e-tests-resize-nodes-5n0xb /api/v1/namespaces/e2e-tests-resize-nodes-5n0xb/pods/my-hostname-delete-node-84k9h 023f852e-b8af-11e6-84da-42010af00028 18693 0 2016-12-02 08:47:27 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-5n0xb\",\"name\":\"my-hostname-delete-node\",\"uid\":\"023d9e23-b8af-11e6-84da-42010af00028\",\"apiVersion\":\"v1\",\"resourceVersion\":\"18667\"}}\n] [{v1 ReplicationController my-hostname-delete-node 023d9e23-b8af-11e6-84da-42010af00028 0xc821b91897}] [] } {[{default-token-552tb {<nil> <nil> <nil> <nil> <nil> 0xc82179bb00 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-552tb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b919b0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-7136b4de-c385 0xc821a8f2c0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:47:27 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:47:38 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:47:27 -0800 PST  }]   10.240.0.4 10.124.2.146 2016-12-02 08:47:27 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8217d67a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://977036887cc64971fecfffa272f2ee759e4dd1865f560d6e65f083a6914eb64b}]}} {{ } {my-hostname-delete-node-8lxx7 my-hostname-delete-node- e2e-tests-resize-nodes-5n0xb /api/v1/namespaces/e2e-tests-resize-nodes-5n0xb/pods/my-hostname-delete-node-8lxx7 40d3b8e6-b8af-11e6-84da-42010af00028 18874 0 2016-12-02 08:49:12 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-5n0xb\",\"name\":\"my-hostname-delete-node\",\"uid\":\"023d9e23-b8af-11e6-84da-42010af00028\",\"apiVersion\":\"v1\",\"resourceVersion\":\"18778\"}}\n] [{v1 ReplicationController my-hostname-delete-node 023d9e23-b8af-11e6-84da-42010af00028 0xc821b91c67}] [] } {[{default-token-552tb {<nil> <nil> <nil> <nil> <nil> 0xc82179bb60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-552tb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b91d70 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-7136b4de-pru0 0xc821a8f380 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:49:12 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2016-12-02 08:49:14 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:49:12 -0800 PST  }]   10.240.0.2 10.124.3.5 2016-12-02 08:49:12 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8217d67c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://db1735f681bae25e72cf59e4dd74e702d55cface2cbedd18e665f24986001222}]}} {{ } {my-hostname-delete-node-t7g7r my-hostname-delete-node- e2e-tests-resize-nodes-5n0xb /api/v1/namespaces/e2e-tests-resize-nodes-5n0xb/pods/my-hostname-delete-node-t7g7r 023faed5-b8af-11e6-84da-42010af00028 18714 0 2016-12-02 08:47:27 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-5n0xb\",\"name\":\"my-hostname-delete-node\",\"uid\":\"023d9e23-b8af-11e6-84da-42010af00028\",\"apiVersion\":\"v1\",\"resourceVersion\":\"18667\"}}\n] [{v1 ReplicationController my-hostname-delete-node 023d9e23-b8af-11e6-84da-42010af00028 0xc821b8c037}] [] } {[{default-token-552tb {<nil> <nil> <nil> <nil> <nil> 0xc82179bbf0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-552tb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b8c130 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-7136b4de-pru0 0xc821a8f480 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:47:27 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:47:50 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:47:27 -0800 PST  }]   10.240.0.2 10.124.3.253 2016-12-02 08:47:27 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8217d67e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://28f3e93702e9325056ff5864699cfa77fae336c41f10b27f2b06952f9d3eb13f}]}}]}",
    }
    failed to wait for pods responding: pod with UID 023fa02e-b8af-11e6-84da-42010af00028 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-5n0xb/pods 19038} [{{ } {my-hostname-delete-node-84k9h my-hostname-delete-node- e2e-tests-resize-nodes-5n0xb /api/v1/namespaces/e2e-tests-resize-nodes-5n0xb/pods/my-hostname-delete-node-84k9h 023f852e-b8af-11e6-84da-42010af00028 18693 0 2016-12-02 08:47:27 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-5n0xb","name":"my-hostname-delete-node","uid":"023d9e23-b8af-11e6-84da-42010af00028","apiVersion":"v1","resourceVersion":"18667"}}
    ] [{v1 ReplicationController my-hostname-delete-node 023d9e23-b8af-11e6-84da-42010af00028 0xc821b91897}] [] } {[{default-token-552tb {<nil> <nil> <nil> <nil> <nil> 0xc82179bb00 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-552tb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b919b0 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-7136b4de-c385 0xc821a8f2c0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:47:27 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:47:38 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:47:27 -0800 PST  }]   10.240.0.4 10.124.2.146 2016-12-02 08:47:27 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8217d67a0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://977036887cc64971fecfffa272f2ee759e4dd1865f560d6e65f083a6914eb64b}]}} {{ } {my-hostname-delete-node-8lxx7 my-hostname-delete-node- e2e-tests-resize-nodes-5n0xb /api/v1/namespaces/e2e-tests-resize-nodes-5n0xb/pods/my-hostname-delete-node-8lxx7 40d3b8e6-b8af-11e6-84da-42010af00028 18874 0 2016-12-02 08:49:12 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-5n0xb","name":"my-hostname-delete-node","uid":"023d9e23-b8af-11e6-84da-42010af00028","apiVersion":"v1","resourceVersion":"18778"}}
    ] [{v1 ReplicationController my-hostname-delete-node 023d9e23-b8af-11e6-84da-42010af00028 0xc821b91c67}] [] } {[{default-token-552tb {<nil> <nil> <nil> <nil> <nil> 0xc82179bb60 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-552tb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b91d70 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-7136b4de-pru0 0xc821a8f380 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:49:12 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:49:14 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:49:12 -0800 PST  }]   10.240.0.2 10.124.3.5 2016-12-02 08:49:12 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8217d67c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://db1735f681bae25e72cf59e4dd74e702d55cface2cbedd18e665f24986001222}]}} {{ } {my-hostname-delete-node-t7g7r my-hostname-delete-node- e2e-tests-resize-nodes-5n0xb /api/v1/namespaces/e2e-tests-resize-nodes-5n0xb/pods/my-hostname-delete-node-t7g7r 023faed5-b8af-11e6-84da-42010af00028 18714 0 2016-12-02 08:47:27 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-5n0xb","name":"my-hostname-delete-node","uid":"023d9e23-b8af-11e6-84da-42010af00028","apiVersion":"v1","resourceVersion":"18667"}}
    ] [{v1 ReplicationController my-hostname-delete-node 023d9e23-b8af-11e6-84da-42010af00028 0xc821b8c037}] [] } {[{default-token-552tb {<nil> <nil> <nil> <nil> <nil> 0xc82179bbf0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-552tb true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821b8c130 <nil> ClusterFirst map[] default gke-jenkins-e2e-default-pool-7136b4de-pru0 0xc821a8f480 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:47:27 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:47:50 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-02 08:47:27 -0800 PST  }]   10.240.0.2 10.124.3.253 2016-12-02 08:47:27 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8217d67e0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://28f3e93702e9325056ff5864699cfa77fae336c41f10b27f2b06952f9d3eb13f}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc8201aca50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
    <*errors.errorString | 0xc8201aca50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585

Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
    <*errors.errorString | 0xc8201aca50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201

Issues about this test specifically: #37427

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

@fejta closed this as completed on Dec 7, 2016