kubernetes-e2e-gce-1.4-1.5-upgrade-master: broken test run #37744

Closed
k8s-github-robot opened this issue Dec 1, 2016 · 9 comments
Labels: area/test-infra, kind/flake, priority/backlog

Comments

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-1.4-1.5-upgrade-master/66/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-aovxu] []  0xc821661980  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821661f80 exit status 1 <nil> true [0xc820bda2b8 0xc820bda4a8 0xc820bda4b8] [0xc820bda2b8 0xc820bda4a8 0xc820bda4b8] [0xc820bda3d0 0xc820bda4a0 0xc820bda4b0] [0xafa5c0 0xafa720 0xafa720] 0xc82163fa40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-aovxu] []  0xc821661980  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821661f80 exit status 1 <nil> true [0xc820bda2b8 0xc820bda4a8 0xc820bda4b8] [0xc820bda2b8 0xc820bda4a8 0xc820bda4b8] [0xc820bda3d0 0xc820bda4a0 0xc820bda4b0] [0xafa5c0 0xafa720 0xafa720] 0xc82163fa40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941
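
Most of the kubectl-client failures in this run share the stderr above: the skewed 1.5 kubectl refuses delete --grace-period=0 unless --force is also passed. A minimal sketch of the two invocations, reusing the server address and namespace from the log purely for illustration (not the harness's actual code):

    # What the skewed harness effectively runs today (fails on a 1.5 client):
    kubectl --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config \
      delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-aovxu

    # What the 1.5 client asks for, per the error text ("You must pass --force ..."):
    kubectl --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config \
      delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-aovxu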

Failed: [k8s.io] Addon update should propagate add-on file changes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:316
Expected error:
    <*errors.errorString | 0xc821657120>: {
        s: "error waiting for service e2e-tests-addon-update-test-b66cs/addon-test to appear: timed out waiting for the condition",
    }
    error waiting for service e2e-tests-addon-update-test-b66cs/addon-test to appear: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:320

Issues about this test specifically: #35600

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc820db1fa0>: {
        s: "failed to wait for pods responding: pod with UID b17ad014-b5f4-11e6-829e-42010af00002 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-ja3kh/pods 12478} [{{ } {my-hostname-delete-node-gkgfy my-hostname-delete-node- e2e-tests-resize-nodes-ja3kh /api/v1/namespaces/e2e-tests-resize-nodes-ja3kh/pods/my-hostname-delete-node-gkgfy edef930b-b5f4-11e6-829e-42010af00002 12308 0 2016-11-28 21:30:24 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-ja3kh\",\"name\":\"my-hostname-delete-node\",\"uid\":\"b17891ee-b5f4-11e6-829e-42010af00002\",\"apiVersion\":\"v1\",\"resourceVersion\":\"12201\"}}\n] [{v1 ReplicationController my-hostname-delete-node b17891ee-b5f4-11e6-829e-42010af00002 0xc8207caf17}] [] } {[{default-token-ngg49 {<nil> <nil> <nil> <nil> <nil> 0xc820af6450 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-ngg49 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8207cb060 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-yrkd 0xc820568880 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:30:24 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:30:26 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:30:25 -0800 PST  }]   10.240.0.3 10.180.1.56 2016-11-28 21:30:24 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc82139c940 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://4f62fd0f80f6bf1099ea6c6ec3a2e93609390b1bd63069035d37c04df20c8604}]}} {{ } {my-hostname-delete-node-ksovd my-hostname-delete-node- e2e-tests-resize-nodes-ja3kh /api/v1/namespaces/e2e-tests-resize-nodes-ja3kh/pods/my-hostname-delete-node-ksovd b17ae2c8-b5f4-11e6-829e-42010af00002 12110 0 2016-11-28 21:28:43 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-ja3kh\",\"name\":\"my-hostname-delete-node\",\"uid\":\"b17891ee-b5f4-11e6-829e-42010af00002\",\"apiVersion\":\"v1\",\"resourceVersion\":\"12092\"}}\n] [{v1 ReplicationController my-hostname-delete-node b17891ee-b5f4-11e6-829e-42010af00002 0xc8207cb307}] [] } {[{default-token-ngg49 {<nil> <nil> <nil> <nil> <nil> 0xc820af64b0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-ngg49 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8207cb530 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-yrkd 0xc820568a00 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:28:43 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:28:45 -0800 PST 
 } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:28:43 -0800 PST  }]   10.240.0.3 10.180.1.54 2016-11-28 21:28:43 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc82139c980 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://88d7f8c0e156a137a814cb823049fc7b434a4e9eb2fb5079c3f294ca0aeef96a}]}} {{ } {my-hostname-delete-node-phh4y my-hostname-delete-node- e2e-tests-resize-nodes-ja3kh /api/v1/namespaces/e2e-tests-resize-nodes-ja3kh/pods/my-hostname-delete-node-phh4y b17af3a7-b5f4-11e6-829e-42010af00002 12105 0 2016-11-28 21:28:43 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-ja3kh\",\"name\":\"my-hostname-delete-node\",\"uid\":\"b17891ee-b5f4-11e6-829e-42010af00002\",\"apiVersion\":\"v1\",\"resourceVersion\":\"12092\"}}\n] [{v1 ReplicationController my-hostname-delete-node b17891ee-b5f4-11e6-829e-42010af00002 0xc8207cb947}] [] } {[{default-token-ngg49 {<nil> <nil> <nil> <nil> <nil> 0xc820af6510 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-ngg49 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8207cba70 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-waov 0xc820568b40 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:28:43 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:28:44 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:28:43 -0800 PST  }]   10.240.0.6 10.180.4.10 2016-11-28 21:28:43 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc82139c9c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://4cb8918a3637ab09c13f2eb8811fda7521d635737c502312c897a9cbe31fdb24}]}}]}",
    }
    failed to wait for pods responding: pod with UID b17ad014-b5f4-11e6-829e-42010af00002 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-ja3kh/pods 12478} [{{ } {my-hostname-delete-node-gkgfy my-hostname-delete-node- e2e-tests-resize-nodes-ja3kh /api/v1/namespaces/e2e-tests-resize-nodes-ja3kh/pods/my-hostname-delete-node-gkgfy edef930b-b5f4-11e6-829e-42010af00002 12308 0 2016-11-28 21:30:24 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-ja3kh","name":"my-hostname-delete-node","uid":"b17891ee-b5f4-11e6-829e-42010af00002","apiVersion":"v1","resourceVersion":"12201"}}
    ] [{v1 ReplicationController my-hostname-delete-node b17891ee-b5f4-11e6-829e-42010af00002 0xc8207caf17}] [] } {[{default-token-ngg49 {<nil> <nil> <nil> <nil> <nil> 0xc820af6450 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-ngg49 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8207cb060 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-yrkd 0xc820568880 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:30:24 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:30:26 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:30:25 -0800 PST  }]   10.240.0.3 10.180.1.56 2016-11-28 21:30:24 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc82139c940 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://4f62fd0f80f6bf1099ea6c6ec3a2e93609390b1bd63069035d37c04df20c8604}]}} {{ } {my-hostname-delete-node-ksovd my-hostname-delete-node- e2e-tests-resize-nodes-ja3kh /api/v1/namespaces/e2e-tests-resize-nodes-ja3kh/pods/my-hostname-delete-node-ksovd b17ae2c8-b5f4-11e6-829e-42010af00002 12110 0 2016-11-28 21:28:43 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-ja3kh","name":"my-hostname-delete-node","uid":"b17891ee-b5f4-11e6-829e-42010af00002","apiVersion":"v1","resourceVersion":"12092"}}
    ] [{v1 ReplicationController my-hostname-delete-node b17891ee-b5f4-11e6-829e-42010af00002 0xc8207cb307}] [] } {[{default-token-ngg49 {<nil> <nil> <nil> <nil> <nil> 0xc820af64b0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-ngg49 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8207cb530 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-yrkd 0xc820568a00 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:28:43 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:28:45 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:28:43 -0800 PST  }]   10.240.0.3 10.180.1.54 2016-11-28 21:28:43 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc82139c980 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://88d7f8c0e156a137a814cb823049fc7b434a4e9eb2fb5079c3f294ca0aeef96a}]}} {{ } {my-hostname-delete-node-phh4y my-hostname-delete-node- e2e-tests-resize-nodes-ja3kh /api/v1/namespaces/e2e-tests-resize-nodes-ja3kh/pods/my-hostname-delete-node-phh4y b17af3a7-b5f4-11e6-829e-42010af00002 12105 0 2016-11-28 21:28:43 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-ja3kh","name":"my-hostname-delete-node","uid":"b17891ee-b5f4-11e6-829e-42010af00002","apiVersion":"v1","resourceVersion":"12092"}}
    ] [{v1 ReplicationController my-hostname-delete-node b17891ee-b5f4-11e6-829e-42010af00002 0xc8207cb947}] [] } {[{default-token-ngg49 {<nil> <nil> <nil> <nil> <nil> 0xc820af6510 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-ngg49 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8207cba70 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-waov 0xc820568b40 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:28:43 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:28:44 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-28 21:28:43 -0800 PST  }]   10.240.0.6 10.180.4.10 2016-11-28 21:28:43 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc82139c9c0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://4cb8918a3637ab09c13f2eb8811fda7521d635737c502312c897a9cbe31fdb24}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Expected error:
    <*errors.StatusError | 0xc82116b480>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:261

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: UpgradeTest {e2e.go}

error running Upgrade Ginkgo tests: exit status 1

Failed: [k8s.io] Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
Expected error:
    <*errors.StatusError | 0xc8211b9b80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"unknown\") has prevented the request from succeeding (get services kibana-logging)",
            Reason: "InternalError",
            Details: {
                Name: "kibana-logging",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "unknown",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("unknown") has prevented the request from succeeding (get services kibana-logging)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:98

Issues about this test specifically: #31420

Failed: [k8s.io] ScheduledJob should not schedule jobs when suspended [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:94
Expected error:
    <*runtime.notRegisteredErr | 0xc8218fb180>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:80

Issues about this test specifically: #32035 #34472
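
All the ScheduledJob failures in this run carry the same error: the client scheme in use has no CronJob kind registered for batch/v2alpha1 (CronJob is the 1.5 rename of 1.4's ScheduledJob, and the group is alpha and disabled by default). Purely as an illustrative check, not a confirmed diagnosis of this job, one can verify what the upgraded cluster is serving and how the alpha group gets enabled:

    # Check which batch API versions the upgraded apiserver is serving:
    kubectl api-versions | grep batch

    # batch/v2alpha1 has to be enabled explicitly on the apiserver, e.g.:
    kube-apiserver --runtime-config=batch/v2alpha1=true ...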

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-m9ths] []  0xc8201fbde0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820fee4e0 exit status 1 <nil> true [0xc8200ba098 0xc8200ba130 0xc8200ba150] [0xc8200ba098 0xc8200ba130 0xc8200ba150] [0xc8200ba0c0 0xc8200ba120 0xc8200ba140] [0xafa5c0 0xafa720 0xafa720] 0xc820cdb500}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-m9ths] []  0xc8201fbde0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820fee4e0 exit status 1 <nil> true [0xc8200ba098 0xc8200ba130 0xc8200ba150] [0xc8200ba098 0xc8200ba130 0xc8200ba150] [0xc8200ba0c0 0xc8200ba120 0xc8200ba140] [0xafa5c0 0xafa720 0xafa720] 0xc820cdb500}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] ScheduledJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:178
Expected error:
    <*runtime.notRegisteredErr | 0xc82128fa00>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:163

Issues about this test specifically: #32034 #34910

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820017ab0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-6qt0s -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-6qt0s\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-6qt0s/services/redis-master\", \"uid\":\"9be44f8d-b5fa-11e6-829e-42010af00002\", \"resourceVersion\":\"17026\", \"creationTimestamp\":\"2016-11-29T06:11:04Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.90.196\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820fd8820 exit status 1 <nil> true [0xc82072aa00 0xc82072aa18 0xc82072aa30] [0xc82072aa00 0xc82072aa18 0xc82072aa30] [0xc82072aa10 0xc82072aa28] [0xafa720 0xafa720] 0xc8207c3c80}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-6qt0s\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-6qt0s/services/redis-master\", \"uid\":\"9be44f8d-b5fa-11e6-829e-42010af00002\", \"resourceVersion\":\"17026\", \"creationTimestamp\":\"2016-11-29T06:11:04Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.90.196\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-6qt0s -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-6qt0s", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-6qt0s/services/redis-master", "uid":"9be44f8d-b5fa-11e6-829e-42010af00002", "resourceVersion":"17026", "creationTimestamp":"2016-11-29T06:11:04Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.90.196", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820fd8820 exit status 1 <nil> true [0xc82072aa00 0xc82072aa18 0xc82072aa30] [0xc82072aa00 0xc82072aa18 0xc82072aa30] [0xc82072aa10 0xc82072aa28] [0xafa720 0xafa720] 0xc8207c3c80}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-6qt0s", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-6qt0s/services/redis-master", "uid":"9be44f8d-b5fa-11e6-829e-42010af00002", "resourceVersion":"17026", "creationTimestamp":"2016-11-29T06:11:04Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.90.196", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
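
The nodePort failure above is not a jsonpath syntax problem: the Service object dumped in the log is type ClusterIP, so .spec.ports[0].nodePort has no value to print. A small illustration of the distinction, reusing the names from the log (hypothetical usage, not the test's own code):

    # Fails with "nodePort is not found" while the service is ClusterIP, as in the log:
    kubectl get service redis-master --namespace=e2e-tests-kubectl-6qt0s \
      -o jsonpath='{.spec.ports[0].nodePort}'

    # The field only exists once the service is of type NodePort; checking the type
    # first makes the failure mode obvious:
    kubectl get service redis-master --namespace=e2e-tests-kubectl-6qt0s \
      -o jsonpath='{.spec.type}'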

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-twy84] []  0xc82189a7c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82189a400 exit status 1 <nil> true [0xc8202d4920 0xc8202d49a0 0xc8202d49b0] [0xc8202d4920 0xc8202d49a0 0xc8202d49b0] [0xc8202d4928 0xc8202d4990 0xc8202d49a8] [0xafa5c0 0xafa720 0xafa720] 0xc8214fea20}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-twy84] []  0xc82189a7c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82189a400 exit status 1 <nil> true [0xc8202d4920 0xc8202d49a0 0xc8202d49b0] [0xc8202d4920 0xc8202d49a0 0xc8202d49b0] [0xc8202d4928 0xc8202d4990 0xc8202d49a8] [0xafa5c0 0xafa720 0xafa720] 0xc8214fea20}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-l12tq] []  0xc821b457e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8217940a0 exit status 1 <nil> true [0xc82072a0e8 0xc82072a120 0xc82072a140] [0xc82072a0e8 0xc82072a120 0xc82072a140] [0xc82072a0f0 0xc82072a118 0xc82072a138] [0xafa5c0 0xafa720 0xafa720] 0xc820cd5b60}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-l12tq] []  0xc821b457e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8217940a0 exit status 1 <nil> true [0xc82072a0e8 0xc82072a120 0xc82072a140] [0xc82072a0e8 0xc82072a120 0xc82072a140] [0xc82072a0f0 0xc82072a118 0xc82072a138] [0xafa5c0 0xafa720 0xafa720] 0xc820cd5b60}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-mkwwk] []  0xc8214c81a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8214c89e0 exit status 1 <nil> true [0xc820252c40 0xc820252c68 0xc820252c78] [0xc820252c40 0xc820252c68 0xc820252c78] [0xc820252c48 0xc820252c60 0xc820252c70] [0xafa5c0 0xafa720 0xafa720] 0xc8214fef60}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-mkwwk] []  0xc8214c81a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8214c89e0 exit status 1 <nil> true [0xc820252c40 0xc820252c68 0xc820252c78] [0xc820252c40 0xc820252c68 0xc820252c78] [0xc820252c48 0xc820252c60 0xc820252c70] [0xafa5c0 0xafa720 0xafa720] 0xc8214fef60}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-9y7jy] []  0xc8220e1c00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821b0a3a0 exit status 1 <nil> true [0xc8211b43a0 0xc8211b44c8 0xc8211b4540] [0xc8211b43a0 0xc8211b44c8 0xc8211b4540] [0xc8211b43a8 0xc8211b44b8 0xc8211b44d0] [0xafa5c0 0xafa720 0xafa720] 0xc8214ff500}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-9y7jy] []  0xc8220e1c00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821b0a3a0 exit status 1 <nil> true [0xc8211b43a0 0xc8211b44c8 0xc8211b4540] [0xc8211b43a0 0xc8211b44c8 0xc8211b4540] [0xc8211b43a8 0xc8211b44b8 0xc8211b44d0] [0xafa5c0 0xafa720 0xafa720] 0xc8214ff500}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-x3ze5] []  0xc820dd2960  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820dd32a0 exit status 1 <nil> true [0xc8211b4228 0xc8211b42c8 0xc8211b42e8] [0xc8211b4228 0xc8211b42c8 0xc8211b42e8] [0xc8211b4238 0xc8211b42c0 0xc8211b42d8] [0xafa5c0 0xafa720 0xafa720] 0xc82163f020}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-x3ze5] []  0xc820dd2960  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820dd32a0 exit status 1 <nil> true [0xc8211b4228 0xc8211b42c8 0xc8211b42e8] [0xc8211b4228 0xc8211b42c8 0xc8211b42e8] [0xc8211b4238 0xc8211b42c0 0xc8211b42d8] [0xafa5c0 0xafa720 0xafa720] 0xc82163f020}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zqoyj] []  0xc8211a7480  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8211a7ca0 exit status 1 <nil> true [0xc8211b4398 0xc8211b4510 0xc8211b4090] [0xc8211b4398 0xc8211b4510 0xc8211b4090] [0xc8211b43a0 0xc8211b43c0 0xc8211b4518] [0xafa5c0 0xafa720 0xafa720] 0xc8207c2780}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-zqoyj] []  0xc8211a7480  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8211a7ca0 exit status 1 <nil> true [0xc8211b4398 0xc8211b4510 0xc8211b4090] [0xc8211b4398 0xc8211b4510 0xc8211b4090] [0xc8211b43a0 0xc8211b43c0 0xc8211b4518] [0xafa5c0 0xafa720 0xafa720] 0xc8207c2780}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-p8zip] []  0xc820fe1360  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820fe1920 exit status 1 <nil> true [0xc820f4a7b8 0xc820f4a7e0 0xc820f4a7f0] [0xc820f4a7b8 0xc820f4a7e0 0xc820f4a7f0] [0xc820f4a7c0 0xc820f4a7d8 0xc820f4a7e8] [0xafa5c0 0xafa720 0xafa720] 0xc821183920}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-p8zip] []  0xc820fe1360  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820fe1920 exit status 1 <nil> true [0xc820f4a7b8 0xc820f4a7e0 0xc820f4a7f0] [0xc820f4a7b8 0xc820f4a7e0 0xc820f4a7f0] [0xc820f4a7c0 0xc820f4a7d8 0xc820f4a7e8] [0xafa5c0 0xafa720 0xafa720] 0xc821183920}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:125
Expected error:
    <*runtime.notRegisteredErr | 0xc8215bdd00>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:101

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:781
Nov 28 20:43:41.154: Timeout waiting for service "mutability-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1967

Issues about this test specifically: #26134

Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:72
Expected error:
    <*runtime.notRegisteredErr | 0xc820f73240>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:57

Issues about this test specifically: #31657 #35876

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-oqldx] []  0xc8213a3980  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821206c00 exit status 1 <nil> true [0xc8202d4f18 0xc8202d4f40 0xc8202d4f60] [0xc8202d4f18 0xc8202d4f40 0xc8202d4f60] [0xc8202d4f20 0xc8202d4f38 0xc8202d4f50] [0xafa5c0 0xafa720 0xafa720] 0xc821625aa0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.76.213 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-oqldx] []  0xc8213a3980  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821206c00 exit status 1 <nil> true [0xc8202d4f18 0xc8202d4f40 0xc8202d4f60] [0xc8202d4f18 0xc8202d4f40 0xc8202d4f60] [0xc8202d4f20 0xc8202d4f38 0xc8202d4f50] [0xafa5c0 0xafa720 0xafa720] 0xc821625aa0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] ScheduledJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:156
Expected error:
    <*runtime.notRegisteredErr | 0xc8217f2a80>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:132

Issues about this test specifically: #30542 #31460 #31479 #31552 #32032

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Nov 28 20:18:02.120: Timeout waiting for service "service-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2342
@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-1.4-1.5-upgrade-master/67/

Multiple broken tests:

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xoh69] []  0xc8207a94e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8207a9b40 exit status 1 <nil> true [0xc820ebc0c0 0xc820ebc0f8 0xc820ebc108] [0xc820ebc0c0 0xc820ebc0f8 0xc820ebc108] [0xc820ebc0c8 0xc820ebc0f0 0xc820ebc100] [0xafa5c0 0xafa720 0xafa720] 0xc82147fb60}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xoh69] []  0xc8207a94e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8207a9b40 exit status 1 <nil> true [0xc820ebc0c0 0xc820ebc0f8 0xc820ebc108] [0xc820ebc0c0 0xc820ebc0f8 0xc820ebc108] [0xc820ebc0c8 0xc820ebc0f0 0xc820ebc100] [0xafa5c0 0xafa720 0xafa720] 0xc82147fb60}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] ScheduledJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:156
Expected error:
    <*runtime.notRegisteredErr | 0xc821418b40>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:132

Issues about this test specifically: #30542 #31460 #31479 #31552 #32032

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-0p0ya -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-11-29T17:07:27Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-0p0ya\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-0p0ya/services/redis-master\", \"uid\":\"4e3135e0-b656-11e6-86dc-42010af00002\", \"resourceVersion\":\"51828\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.0.200.133\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820f60220 exit status 1 <nil> true [0xc820e9e970 0xc820e9ea20 0xc820e9ea38] [0xc820e9e970 0xc820e9ea20 0xc820e9ea38] [0xc820e9ea10 0xc820e9ea30] [0xafa720 0xafa720] 0xc8213f6f00}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-11-29T17:07:27Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-0p0ya\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-0p0ya/services/redis-master\", \"uid\":\"4e3135e0-b656-11e6-86dc-42010af00002\", \"resourceVersion\":\"51828\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.0.200.133\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-0p0ya -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-11-29T17:07:27Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-0p0ya", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-0p0ya/services/redis-master", "uid":"4e3135e0-b656-11e6-86dc-42010af00002", "resourceVersion":"51828"}, "spec":map[string]interface {}{"clusterIP":"10.0.200.133", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820f60220 exit status 1 <nil> true [0xc820e9e970 0xc820e9ea20 0xc820e9ea38] [0xc820e9e970 0xc820e9ea20 0xc820e9ea38] [0xc820e9ea10 0xc820e9ea30] [0xafa720 0xafa720] 0xc8213f6f00}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-11-29T17:07:27Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-0p0ya", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-0p0ya/services/redis-master", "uid":"4e3135e0-b656-11e6-86dc-42010af00002", "resourceVersion":"51828"}, "spec":map[string]interface {}{"clusterIP":"10.0.200.133", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
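
For reference, the jsonpath failure above is consistent with the service still being type ClusterIP when the test reads it back: a ClusterIP service's ports carry no nodePort field, so `{.spec.ports[0].nodePort}` resolves to nothing. A quick hedged check against a live cluster (the namespace below is a placeholder, since the e2e namespace is deleted once the test tears down):

```bash
# Print the service type and the first port's nodePort (empty for ClusterIP).
kubectl get service redis-master --namespace=<test-namespace> \
  -o jsonpath='{.spec.type}{"\t"}{.spec.ports[0].nodePort}{"\n"}'
```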

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-46jl0] []  0xc82119c260  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82119c840 exit status 1 <nil> true [0xc820e9e4a0 0xc820e9e590 0xc820e9e5c0] [0xc820e9e4a0 0xc820e9e590 0xc820e9e5c0] [0xc820e9e4a8 0xc820e9e588 0xc820e9e598] [0xafa5c0 0xafa720 0xafa720] 0xc8212332c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-46jl0] []  0xc82119c260  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82119c840 exit status 1 <nil> true [0xc820e9e4a0 0xc820e9e590 0xc820e9e5c0] [0xc820e9e4a0 0xc820e9e590 0xc820e9e5c0] [0xc820e9e4a8 0xc820e9e588 0xc820e9e598] [0xafa5c0 0xafa720 0xafa720] 0xc8212332c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496
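
For reference, all of the exit-status-1 teardowns in this run are the same skew problem: the newer kubectl rejects a bare `--grace-period=0` and, as the stderr says, wants `--force` alongside it. A sketch of the deletion the tests intend, with `--force` added (server, kubeconfig, and namespace are copied from the failure above; `<manifest.yaml>` is a placeholder for whatever the test had just created and pipes on stdin):

```bash
# Forced immediate deletion, as the newer kubectl requires:
cat <manifest.yaml> | /workspace/kubernetes_skew/cluster/kubectl.sh \
  --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config \
  delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-46jl0
```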

Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:125
Expected error:
    <*runtime.notRegisteredErr | 0xc821446900>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:101

Failed: UpgradeTest {e2e.go}

error running Upgrade Ginkgo tests: exit status 1

Failed: [k8s.io] Addon update should propagate add-on file changes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:316
Expected error:
    <*errors.errorString | 0xc82156ef30>: {
        s: "error waiting for service e2e-tests-addon-update-test-qb337/addon-test to appear: timed out waiting for the condition",
    }
    error waiting for service e2e-tests-addon-update-test-qb337/addon-test to appear: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:320

Issues about this test specifically: #35600

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-3n208] []  0xc821281b20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820b68260 exit status 1 <nil> true [0xc820ebd3e8 0xc820ebd418 0xc820ebd430] [0xc820ebd3e8 0xc820ebd418 0xc820ebd430] [0xc820ebd3f8 0xc820ebd4b0 0xc820ebd420] [0xafa5c0 0xafa720 0xafa720] 0xc821496660}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-3n208] []  0xc821281b20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820b68260 exit status 1 <nil> true [0xc820ebd3e8 0xc820ebd418 0xc820ebd430] [0xc820ebd3e8 0xc820ebd418 0xc820ebd430] [0xc820ebd3f8 0xc820ebd4b0 0xc820ebd420] [0xafa5c0 0xafa720 0xafa720] 0xc821496660}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
Expected error:
    <*errors.StatusError | 0xc820ab6680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"unknown\") has prevented the request from succeeding (get services kibana-logging)",
            Reason: "InternalError",
            Details: {
                Name: "kibana-logging",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "unknown",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("unknown") has prevented the request from succeeding (get services kibana-logging)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:98

Issues about this test specifically: #31420

Failed: [k8s.io] ScheduledJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:178
Expected error:
    <*runtime.notRegisteredErr | 0xc8211acc00>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:163

Issues about this test specifically: #32034 #34910

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-gd7iw] []  0xc821428b60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8214291e0 exit status 1 <nil> true [0xc8211ec640 0xc8211ec680 0xc8211ec690] [0xc8211ec640 0xc8211ec680 0xc8211ec690] [0xc8211ec658 0xc8211ec678 0xc8211ec688] [0xafa5c0 0xafa720 0xafa720] 0xc8219d73e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-gd7iw] []  0xc821428b60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8214291e0 exit status 1 <nil> true [0xc8211ec640 0xc8211ec680 0xc8211ec690] [0xc8211ec640 0xc8211ec680 0xc8211ec690] [0xc8211ec658 0xc8211ec678 0xc8211ec688] [0xafa5c0 0xafa720 0xafa720] 0xc8219d73e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-19zgx] []  0xc820da4860  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820da5180 exit status 1 <nil> true [0xc8200b6480 0xc8200b64b8 0xc8200b64e0] [0xc8200b6480 0xc8200b64b8 0xc8200b64e0] [0xc8200b6490 0xc8200b64b0 0xc8200b64d0] [0xafa5c0 0xafa720 0xafa720] 0xc8209e3c80}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-19zgx] []  0xc820da4860  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820da5180 exit status 1 <nil> true [0xc8200b6480 0xc8200b64b8 0xc8200b64e0] [0xc8200b6480 0xc8200b64b8 0xc8200b64e0] [0xc8200b6490 0xc8200b64b0 0xc8200b64d0] [0xafa5c0 0xafa720 0xafa720] 0xc8209e3c80}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-giv96] []  0xc8218bdec0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821024a60 exit status 1 <nil> true [0xc821204778 0xc8212047a0 0xc8212047b0] [0xc821204778 0xc8212047a0 0xc8212047b0] [0xc821204780 0xc821204798 0xc8212047a8] [0xafa5c0 0xafa720 0xafa720] 0xc8219d7d40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-giv96] []  0xc8218bdec0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821024a60 exit status 1 <nil> true [0xc821204778 0xc8212047a0 0xc8212047b0] [0xc821204778 0xc8212047a0 0xc8212047b0] [0xc821204780 0xc821204798 0xc8212047a8] [0xafa5c0 0xafa720 0xafa720] 0xc8219d7d40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-x5pui] []  0xc82183d680  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82183dd40 exit status 1 <nil> true [0xc8200b6c20 0xc8200b6e60 0xc8200b6e78] [0xc8200b6c20 0xc8200b6e60 0xc8200b6e78] [0xc8200b6df8 0xc8200b6e58 0xc8200b6e70] [0xafa5c0 0xafa720 0xafa720] 0xc8207b2120}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-x5pui] []  0xc82183d680  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82183dd40 exit status 1 <nil> true [0xc8200b6c20 0xc8200b6e60 0xc8200b6e78] [0xc8200b6c20 0xc8200b6e60 0xc8200b6e78] [0xc8200b6df8 0xc8200b6e58 0xc8200b6e70] [0xafa5c0 0xafa720 0xafa720] 0xc8207b2120}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Nov 29 03:12:08.350: Timeout waiting for service "service-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2342

Failed: [k8s.io] ScheduledJob should not schedule jobs when suspended [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:94
Expected error:
    <*runtime.notRegisteredErr | 0xc820e72140>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:80

Issues about this test specifically: #32035 #34472

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Expected error:
    <*errors.StatusError | 0xc8209dae00>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:261

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ancfr] []  0xc821856ee0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821857800 exit status 1 <nil> true [0xc820e9e740 0xc820e9e768 0xc820e9e778] [0xc820e9e740 0xc820e9e768 0xc820e9e778] [0xc820e9e748 0xc820e9e760 0xc820e9e770] [0xafa5c0 0xafa720 0xafa720] 0xc821155080}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ancfr] []  0xc821856ee0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821857800 exit status 1 <nil> true [0xc820e9e740 0xc820e9e768 0xc820e9e778] [0xc820e9e740 0xc820e9e768 0xc820e9e778] [0xc820e9e748 0xc820e9e760 0xc820e9e770] [0xafa5c0 0xafa720 0xafa720] 0xc821155080}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:781
Nov 29 08:36:01.223: Timeout waiting for service "mutability-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1967

Issues about this test specifically: #26134
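
For reference, this timeout means `.status.loadBalancer.ingress` was never populated for the service within the test's window. A hedged way to see whether GCE eventually provisioned the load balancer at all, run while the service still exists (plain kubectl, not part of the e2e framework; the namespace is a placeholder since the test namespace is not shown in the failure):

```bash
# Empty output means the cloud provider has not assigned an external IP yet.
kubectl get service mutability-test --namespace=<e2e-test-namespace> \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{"\n"}'

# Service events usually show why provisioning stalled (quota, firewall, forwarding rule).
kubectl describe service mutability-test --namespace=<e2e-test-namespace>
```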

Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:72
Expected error:
    <*runtime.notRegisteredErr | 0xc820a51a80>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:57

Issues about this test specifically: #31657 #35876

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-uajul] []  0xc820f156a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820f15fe0 exit status 1 <nil> true [0xc8212580f0 0xc821258118 0xc821258128] [0xc8212580f0 0xc821258118 0xc821258128] [0xc8212580f8 0xc821258110 0xc821258120] [0xafa5c0 0xafa720 0xafa720] 0xc8212922a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-uajul] []  0xc820f156a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820f15fe0 exit status 1 <nil> true [0xc8212580f0 0xc821258118 0xc821258128] [0xc8212580f0 0xc821258118 0xc821258128] [0xc8212580f8 0xc821258110 0xc821258120] [0xafa5c0 0xafa720 0xafa720] 0xc8212922a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-jzjh8] []  0xc8208b8f60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8208b9660 exit status 1 <nil> true [0xc82017a9c0 0xc82017ab40 0xc82017ab58] [0xc82017a9c0 0xc82017ab40 0xc82017ab58] [0xc82017aa08 0xc82017ab20 0xc82017ab50] [0xafa5c0 0xafa720 0xafa720] 0xc8205ba5a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://146.148.64.189 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-jzjh8] []  0xc8208b8f60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8208b9660 exit status 1 <nil> true [0xc82017a9c0 0xc82017ab40 0xc82017ab58] [0xc82017a9c0 0xc82017ab40 0xc82017ab58] [0xc82017aa08 0xc82017ab20 0xc82017ab50] [0xafa5c0 0xafa720 0xafa720] 0xc8205ba5a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc82017ca00>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

@k8s-github-robot added area/test-infra, kind/flake, and priority/backlog labels Dec 1, 2016
@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-1.4-1.5-upgrade-master/68/

Multiple broken tests:

Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:72
Expected error:
    <*runtime.notRegisteredErr | 0xc8210d5d80>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:57

Issues about this test specifically: #31657 #35876

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-sm99x] []  0xc8214ebb80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820d00460 exit status 1 <nil> true [0xc820190db8 0xc820190f58 0xc820190f68] [0xc820190db8 0xc820190f58 0xc820190f68] [0xc820190e38 0xc820190f48 0xc820190f60] [0xafa5c0 0xafa720 0xafa720] 0xc820c10480}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-sm99x] []  0xc8214ebb80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820d00460 exit status 1 <nil> true [0xc820190db8 0xc820190f58 0xc820190f68] [0xc820190db8 0xc820190f58 0xc820190f68] [0xc820190e38 0xc820190f48 0xc820190f60] [0xafa5c0 0xafa720 0xafa720] 0xc820c10480}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: UpgradeTest {e2e.go}

error running Upgrade Ginkgo tests: exit status 1

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-dhco5] []  0xc820c5aca0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820c5b340 exit status 1 <nil> true [0xc821868720 0xc821868758 0xc821868768] [0xc821868720 0xc821868758 0xc821868768] [0xc821868728 0xc821868748 0xc821868760] [0xafa5c0 0xafa720 0xafa720] 0xc8212f5e00}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-dhco5] []  0xc820c5aca0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820c5b340 exit status 1 <nil> true [0xc821868720 0xc821868758 0xc821868768] [0xc821868720 0xc821868758 0xc821868768] [0xc821868728 0xc821868748 0xc821868760] [0xafa5c0 0xafa720 0xafa720] 0xc8212f5e00}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Addon update should propagate add-on file changes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:316
Expected error:
    <*errors.errorString | 0xc82149f2a0>: {
        s: "error waiting for service e2e-tests-addon-update-test-hgr7o/addon-test to appear: timed out waiting for the condition",
    }
    error waiting for service e2e-tests-addon-update-test-hgr7o/addon-test to appear: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:320

Issues about this test specifically: #35600

Failed: [k8s.io] ScheduledJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:178
Expected error:
    <*runtime.notRegisteredErr | 0xc821390c00>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:163

Issues about this test specifically: #32034 #34910

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820016b40>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-t11rn] []  0xc8212be880  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8212bf440 exit status 1 <nil> true [0xc820090698 0xc8200902a8 0xc8200902e0] [0xc820090698 0xc8200902a8 0xc8200902e0] [0xc820090a60 0xc8200902a0 0xc8200902d8] [0xafa5c0 0xafa720 0xafa720] 0xc820e96b40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-t11rn] []  0xc8212be880  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8212bf440 exit status 1 <nil> true [0xc820090698 0xc8200902a8 0xc8200902e0] [0xc820090698 0xc8200902a8 0xc8200902e0] [0xc820090a60 0xc8200902a0 0xc8200902d8] [0xafa5c0 0xafa720 0xafa720] 0xc820e96b40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:781
Nov 29 14:18:32.303: Timeout waiting for service "mutability-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1967

Issues about this test specifically: #26134

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-61gef] []  0xc8208dee80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8208df440 exit status 1 <nil> true [0xc8210402a0 0xc8210402c8 0xc821040308] [0xc8210402a0 0xc8210402c8 0xc821040308] [0xc8210402a8 0xc8210402c0 0xc821040300] [0xafa5c0 0xafa720 0xafa720] 0xc8214c12c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-61gef] []  0xc8208dee80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8208df440 exit status 1 <nil> true [0xc8210402a0 0xc8210402c8 0xc821040308] [0xc8210402a0 0xc8210402c8 0xc821040308] [0xc8210402a8 0xc8210402c0 0xc821040300] [0xafa5c0 0xafa720 0xafa720] 0xc8214c12c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
Expected error:
    <*errors.StatusError | 0xc82147ab80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"unknown\") has prevented the request from succeeding (get services kibana-logging)",
            Reason: "InternalError",
            Details: {
                Name: "kibana-logging",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "unknown",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("unknown") has prevented the request from succeeding (get services kibana-logging)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:98

Issues about this test specifically: #31420

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Expected error:
    <*errors.StatusError | 0xc82123e300>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:261

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-gjor5 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-gjor5\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-gjor5/services/redis-master\", \"uid\":\"0bd488e9-b685-11e6-a51d-42010af00002\", \"resourceVersion\":\"42026\", \"creationTimestamp\":\"2016-11-29T22:42:02Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.7.148\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc821fbb320 exit status 1 <nil> true [0xc820ee2058 0xc820ee2070 0xc820ee2088] [0xc820ee2058 0xc820ee2070 0xc820ee2088] [0xc820ee2068 0xc820ee2080] [0xafa720 0xafa720] 0xc8212d2540}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-gjor5\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-gjor5/services/redis-master\", \"uid\":\"0bd488e9-b685-11e6-a51d-42010af00002\", \"resourceVersion\":\"42026\", \"creationTimestamp\":\"2016-11-29T22:42:02Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.7.148\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-gjor5 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-gjor5", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-gjor5/services/redis-master", "uid":"0bd488e9-b685-11e6-a51d-42010af00002", "resourceVersion":"42026", "creationTimestamp":"2016-11-29T22:42:02Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.7.148"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc821fbb320 exit status 1 <nil> true [0xc820ee2058 0xc820ee2070 0xc820ee2088] [0xc820ee2058 0xc820ee2070 0xc820ee2088] [0xc820ee2068 0xc820ee2080] [0xafa720 0xafa720] 0xc8212d2540}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-gjor5", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-gjor5/services/redis-master", "uid":"0bd488e9-b685-11e6-a51d-42010af00002", "resourceVersion":"42026", "creationTimestamp":"2016-11-29T22:42:02Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.7.148"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-cta3i] []  0xc82118b500  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82118bca0 exit status 1 <nil> true [0xc8200363c8 0xc8200364d0 0xc820036540] [0xc8200363c8 0xc8200364d0 0xc820036540] [0xc8200363d0 0xc8200364c8 0xc820036528] [0xafa5c0 0xafa720 0xafa720] 0xc82166dbc0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-cta3i] []  0xc82118b500  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82118bca0 exit status 1 <nil> true [0xc8200363c8 0xc8200364d0 0xc820036540] [0xc8200363c8 0xc8200364d0 0xc820036540] [0xc8200363d0 0xc8200364c8 0xc820036528] [0xafa5c0 0xafa720 0xafa720] 0xc82166dbc0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576
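
Most of the kubectl failures in this run share the same cleanup error: the framework deletes its fixtures with `--grace-period=0`, and the kubectl being used refuses that on its own, insisting on `--force` as well (see the stderr above). A minimal sketch of the invocation the error message asks for, using the namespace from the log above and leaving the piped manifest elided:

    # Pair --grace-period=0 with --force, as the error demands; kubectl still
    # warns that the resource may keep running until the kubelet confirms it.
    kubectl delete -f - --grace-period=0 --force --namespace=e2e-tests-kubectl-cta3i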

Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:125
Expected error:
    <*runtime.notRegisteredErr | 0xc821892700>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:101

Failed: [k8s.io] ScheduledJob should not schedule jobs when suspended [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:94
Expected error:
    <*runtime.notRegisteredErr | 0xc82104e500>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:80

Issues about this test specifically: #32035 #34472
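
Every ScheduledJob failure in this run is the same `*runtime.notRegisteredErr`: the decoder in use has no CronJob kind registered for batch/v2alpha1. That is plausible for a 1.4/1.5 skew job, since the kind was renamed from ScheduledJob to CronJob around 1.5 and the alpha group has to be switched on explicitly; a hedged way to see what the cluster itself advertises (as opposed to what the test binary has registered) is:

    # batch/v2alpha1 must be listed for CronJob objects to be served at all;
    # it is gated behind --runtime-config=batch/v2alpha1=true on the apiserver.
    kubectl api-versions | grep batch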

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-g8j4p] []  0xc820b696a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82141c1a0 exit status 1 <nil> true [0xc820ee2250 0xc820ee22d0 0xc820ee22e0] [0xc820ee2250 0xc820ee22d0 0xc820ee22e0] [0xc820ee2258 0xc820ee22c8 0xc820ee22d8] [0xafa5c0 0xafa720 0xafa720] 0xc8211a3a40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-g8j4p] []  0xc820b696a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82141c1a0 exit status 1 <nil> true [0xc820ee2250 0xc820ee22d0 0xc820ee22e0] [0xc820ee2250 0xc820ee22d0 0xc820ee22e0] [0xc820ee2258 0xc820ee22c8 0xc820ee22d8] [0xafa5c0 0xafa720 0xafa720] 0xc8211a3a40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-u0r7l] []  0xc82145a580  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82145ab40 exit status 1 <nil> true [0xc820037040 0xc820037080 0xc8200370a0] [0xc820037040 0xc820037080 0xc8200370a0] [0xc820037048 0xc820037078 0xc820037088] [0xafa5c0 0xafa720 0xafa720] 0xc821bf7500}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-u0r7l] []  0xc82145a580  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82145ab40 exit status 1 <nil> true [0xc820037040 0xc820037080 0xc8200370a0] [0xc820037040 0xc820037080 0xc8200370a0] [0xc820037048 0xc820037078 0xc820037088] [0xafa5c0 0xafa720 0xafa720] 0xc821bf7500}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-oizoz] []  0xc820f1d240  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820f1d9a0 exit status 1 <nil> true [0xc820036b58 0xc820036c30 0xc820036c40] [0xc820036b58 0xc820036c30 0xc820036c40] [0xc820036b68 0xc820036c28 0xc820036c38] [0xafa5c0 0xafa720 0xafa720] 0xc8214fecc0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-oizoz] []  0xc820f1d240  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820f1d9a0 exit status 1 <nil> true [0xc820036b58 0xc820036c30 0xc820036c40] [0xc820036b58 0xc820036c30 0xc820036c40] [0xc820036b68 0xc820036c28 0xc820036c38] [0xafa5c0 0xafa720 0xafa720] 0xc8214fecc0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-5a6tw] []  0xc8212f68c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8212f6f60 exit status 1 <nil> true [0xc820190cb8 0xc820190d70 0xc820190dd8] [0xc820190cb8 0xc820190d70 0xc820190dd8] [0xc820190cc0 0xc820190d68 0xc820190dd0] [0xafa5c0 0xafa720 0xafa720] 0xc8211982a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-5a6tw] []  0xc8212f68c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8212f6f60 exit status 1 <nil> true [0xc820190cb8 0xc820190d70 0xc820190dd8] [0xc820190cb8 0xc820190d70 0xc820190dd8] [0xc820190cc0 0xc820190d68 0xc820190dd0] [0xafa5c0 0xafa720 0xafa720] 0xc8211982a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-dfy3b] []  0xc820a0e2c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820a0e940 exit status 1 <nil> true [0xc8200903d0 0xc820090418 0xc820090428] [0xc8200903d0 0xc820090418 0xc820090428] [0xc8200903d8 0xc820090410 0xc820090420] [0xafa5c0 0xafa720 0xafa720] 0xc821324240}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.153.129 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-dfy3b] []  0xc820a0e2c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820a0e940 exit status 1 <nil> true [0xc8200903d0 0xc820090418 0xc820090428] [0xc8200903d0 0xc820090418 0xc820090428] [0xc8200903d8 0xc820090410 0xc820090420] [0xafa5c0 0xafa720 0xafa720] 0xc821324240}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] ScheduledJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:156
Expected error:
    <*runtime.notRegisteredErr | 0xc8211b8880>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:132

Issues about this test specifically: #30542 #31460 #31479 #31552 #32032

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Nov 29 09:55:51.095: Timeout waiting for service "service-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2342
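
The master-upgrade failure above is a provisioning timeout rather than an API error: the test waited for "service-test" to get an entry under status.loadBalancer and none appeared. A hedged way to watch the same condition by hand, with the service name from the log and the namespace left as a placeholder because it is not shown here:

    # Empty output means the cloud load balancer has not been provisioned yet.
    kubectl get service service-test --namespace=<test-namespace> \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'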

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc8218fa700>: {
        s: "failed to wait for pods responding: pod with UID 348b7ed6-b678-11e6-a51d-42010af00002 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-cxkc7/pods 31672} [{{ } {my-hostname-delete-node-2cskp my-hostname-delete-node- e2e-tests-resize-nodes-cxkc7 /api/v1/namespaces/e2e-tests-resize-nodes-cxkc7/pods/my-hostname-delete-node-2cskp 7c4dafe1-b678-11e6-a51d-42010af00002 31490 0 2016-11-29 13:12:07 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-cxkc7\",\"name\":\"my-hostname-delete-node\",\"uid\":\"34885917-b678-11e6-a51d-42010af00002\",\"apiVersion\":\"v1\",\"resourceVersion\":\"31374\"}}\n] [{v1 ReplicationController my-hostname-delete-node 34885917-b678-11e6-a51d-42010af00002 0xc821bb8877}] [] } {[{default-token-wxfy2 {<nil> <nil> <nil> <nil> <nil> 0xc8219d8630 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wxfy2 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821bb89a0 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-v8at 0xc821f09ec0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:12:07 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:12:09 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:12:07 -0800 PST  }]   10.240.0.6 10.180.4.152 2016-11-29 13:12:07 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8218d9d80 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://6376e70d445da43d8bee2011c0aace680dcb57e81597352d5961842b15e6f428}]}} {{ } {my-hostname-delete-node-k9dw3 my-hostname-delete-node- e2e-tests-resize-nodes-cxkc7 /api/v1/namespaces/e2e-tests-resize-nodes-cxkc7/pods/my-hostname-delete-node-k9dw3 348b0b50-b678-11e6-a51d-42010af00002 31267 0 2016-11-29 13:10:07 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-cxkc7\",\"name\":\"my-hostname-delete-node\",\"uid\":\"34885917-b678-11e6-a51d-42010af00002\",\"apiVersion\":\"v1\",\"resourceVersion\":\"31253\"}}\n] [{v1 ReplicationController my-hostname-delete-node 34885917-b678-11e6-a51d-42010af00002 0xc821bb8cf7}] [] } {[{default-token-wxfy2 {<nil> <nil> <nil> <nil> <nil> 0xc8219d8690 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wxfy2 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821bb8e20 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-v8at 0xc821f09f80 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:10:07 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:10:08 -0800 
PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:10:07 -0800 PST  }]   10.240.0.6 10.180.4.151 2016-11-29 13:10:07 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8218d9da0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a06653491660a17540cc6073fa74eb1f9200de681878851f255b32c28825f372}]}} {{ } {my-hostname-delete-node-oaly6 my-hostname-delete-node- e2e-tests-resize-nodes-cxkc7 /api/v1/namespaces/e2e-tests-resize-nodes-cxkc7/pods/my-hostname-delete-node-oaly6 348d2b37-b678-11e6-a51d-42010af00002 31271 0 2016-11-29 13:10:07 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-cxkc7\",\"name\":\"my-hostname-delete-node\",\"uid\":\"34885917-b678-11e6-a51d-42010af00002\",\"apiVersion\":\"v1\",\"resourceVersion\":\"31253\"}}\n] [{v1 ReplicationController my-hostname-delete-node 34885917-b678-11e6-a51d-42010af00002 0xc821bb9307}] [] } {[{default-token-wxfy2 {<nil> <nil> <nil> <nil> <nil> 0xc8219d86f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wxfy2 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821bb9480 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-eq85 0xc821e38080 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:10:07 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:10:08 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:10:07 -0800 PST  }]   10.240.0.3 10.180.1.174 2016-11-29 13:10:07 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8218d9de0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://1def04468c32133070e6611b114aac4ac55f80d0cce5e825d0cc12d78a3a3071}]}}]}",
    }
    failed to wait for pods responding: pod with UID 348b7ed6-b678-11e6-a51d-42010af00002 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-cxkc7/pods 31672} [{{ } {my-hostname-delete-node-2cskp my-hostname-delete-node- e2e-tests-resize-nodes-cxkc7 /api/v1/namespaces/e2e-tests-resize-nodes-cxkc7/pods/my-hostname-delete-node-2cskp 7c4dafe1-b678-11e6-a51d-42010af00002 31490 0 2016-11-29 13:12:07 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-cxkc7","name":"my-hostname-delete-node","uid":"34885917-b678-11e6-a51d-42010af00002","apiVersion":"v1","resourceVersion":"31374"}}
    ] [{v1 ReplicationController my-hostname-delete-node 34885917-b678-11e6-a51d-42010af00002 0xc821bb8877}] [] } {[{default-token-wxfy2 {<nil> <nil> <nil> <nil> <nil> 0xc8219d8630 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wxfy2 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821bb89a0 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-v8at 0xc821f09ec0 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:12:07 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:12:09 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:12:07 -0800 PST  }]   10.240.0.6 10.180.4.152 2016-11-29 13:12:07 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8218d9d80 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://6376e70d445da43d8bee2011c0aace680dcb57e81597352d5961842b15e6f428}]}} {{ } {my-hostname-delete-node-k9dw3 my-hostname-delete-node- e2e-tests-resize-nodes-cxkc7 /api/v1/namespaces/e2e-tests-resize-nodes-cxkc7/pods/my-hostname-delete-node-k9dw3 348b0b50-b678-11e6-a51d-42010af00002 31267 0 2016-11-29 13:10:07 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-cxkc7","name":"my-hostname-delete-node","uid":"34885917-b678-11e6-a51d-42010af00002","apiVersion":"v1","resourceVersion":"31253"}}
    ] [{v1 ReplicationController my-hostname-delete-node 34885917-b678-11e6-a51d-42010af00002 0xc821bb8cf7}] [] } {[{default-token-wxfy2 {<nil> <nil> <nil> <nil> <nil> 0xc8219d8690 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wxfy2 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821bb8e20 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-v8at 0xc821f09f80 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:10:07 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:10:08 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:10:07 -0800 PST  }]   10.240.0.6 10.180.4.151 2016-11-29 13:10:07 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8218d9da0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a06653491660a17540cc6073fa74eb1f9200de681878851f255b32c28825f372}]}} {{ } {my-hostname-delete-node-oaly6 my-hostname-delete-node- e2e-tests-resize-nodes-cxkc7 /api/v1/namespaces/e2e-tests-resize-nodes-cxkc7/pods/my-hostname-delete-node-oaly6 348d2b37-b678-11e6-a51d-42010af00002 31271 0 2016-11-29 13:10:07 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-cxkc7","name":"my-hostname-delete-node","uid":"34885917-b678-11e6-a51d-42010af00002","apiVersion":"v1","resourceVersion":"31253"}}
    ] [{v1 ReplicationController my-hostname-delete-node 34885917-b678-11e6-a51d-42010af00002 0xc821bb9307}] [] } {[{default-token-wxfy2 {<nil> <nil> <nil> <nil> <nil> 0xc8219d86f0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-wxfy2 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc821bb9480 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-eq85 0xc821e38080 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:10:07 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:10:08 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-11-29 13:10:07 -0800 PST  }]   10.240.0.3 10.180.1.174 2016-11-29 13:10:07 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc8218d9de0 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://1def04468c32133070e6611b114aac4ac55f80d0cce5e825d0cc12d78a3a3071}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451

Issues about this test specifically: #27233 #36204
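
The resize failure above reports that the pod with UID 348b7ed6-… dropped out of the replication controller while the node group was being shrunk; the dump also shows a newer pod (my-hostname-delete-node-2cskp, created two minutes after the others), which looks like a replacement the test counts as a restart. A hedged way to compare the surviving pods against that dump, with the label and namespace taken from the log:

    # Show which pods currently back the RC and which nodes they landed on.
    kubectl get pods --namespace=e2e-tests-resize-nodes-cxkc7 \
      -l name=my-hostname-delete-node -o wide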

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-1.4-1.5-upgrade-master/69/

Multiple broken tests:

Failed: [k8s.io] Addon update should propagate add-on file changes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:316
Expected error:
    <*errors.errorString | 0xc821acf670>: {
        s: "error waiting for service e2e-tests-addon-update-test-92qyv/addon-test to appear: timed out waiting for the condition",
    }
    error waiting for service e2e-tests-addon-update-test-92qyv/addon-test to appear: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:320

Issues about this test specifically: #35600

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc82017ab90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-v0vpb] []  0xc820c79940  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821086020 exit status 1 <nil> true [0xc820036850 0xc820036988 0xc8200369a0] [0xc820036850 0xc820036988 0xc8200369a0] [0xc820036858 0xc820036978 0xc8200369a8] [0xafa5c0 0xafa720 0xafa720] 0xc820e8fc80}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-v0vpb] []  0xc820c79940  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821086020 exit status 1 <nil> true [0xc820036850 0xc820036988 0xc8200369a0] [0xc820036850 0xc820036988 0xc8200369a0] [0xc820036858 0xc820036978 0xc8200369a8] [0xafa5c0 0xafa720 0xafa720] 0xc820e8fc80}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: UpgradeTest {e2e.go}

error running Upgrade Ginkgo tests: exit status 1

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-vqmsq] []  0xc820e10580  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820e10d00 exit status 1 <nil> true [0xc8216f8490 0xc8216f84c8 0xc8216f84d8] [0xc8216f8490 0xc8216f84c8 0xc8216f84d8] [0xc8216f84a0 0xc8216f84c0 0xc8216f84d0] [0xafa5c0 0xafa720 0xafa720] 0xc820b36600}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-vqmsq] []  0xc820e10580  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820e10d00 exit status 1 <nil> true [0xc8216f8490 0xc8216f84c8 0xc8216f84d8] [0xc8216f8490 0xc8216f84c8 0xc8216f84d8] [0xc8216f84a0 0xc8216f84c0 0xc8216f84d0] [0xafa5c0 0xafa720 0xafa720] 0xc820b36600}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:781
Nov 29 20:10:11.151: Timeout waiting for service "mutability-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1967

Issues about this test specifically: #26134

Failed: [k8s.io] Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
Expected error:
    <*errors.StatusError | 0xc820e55b80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"unknown\") has prevented the request from succeeding (get services kibana-logging)",
            Reason: "InternalError",
            Details: {
                Name: "kibana-logging",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "unknown",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("unknown") has prevented the request from succeeding (get services kibana-logging)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:98

Issues about this test specifically: #31420
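
The Kibana check failed with a 503 from the API server while it was proxying `get services kibana-logging`, which during a master upgrade can simply mean the apiserver was mid-restart. Assuming the standard addon layout (kube-system namespace, kibana-logging name; the namespace is not shown in the log), a hedged spot check once the apiserver answers again would be:

    # Is the addon Service present, and does it have ready endpoints behind it?
    kubectl get service kibana-logging --namespace=kube-system
    kubectl get endpoints kibana-logging --namespace=kube-system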

Failed: [k8s.io] ScheduledJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:178
Expected error:
    <*runtime.notRegisteredErr | 0xc8216d8600>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:163

Issues about this test specifically: #32034 #34910

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-4jqap] []  0xc820e88d60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820e893e0 exit status 1 <nil> true [0xc8200364b0 0xc8200364d8 0xc8200364e8] [0xc8200364b0 0xc8200364d8 0xc8200364e8] [0xc8200364b8 0xc8200364d0 0xc8200364e0] [0xafa5c0 0xafa720 0xafa720] 0xc8205648a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-4jqap] []  0xc820e88d60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820e893e0 exit status 1 <nil> true [0xc8200364b0 0xc8200364d8 0xc8200364e8] [0xc8200364b0 0xc8200364d8 0xc8200364e8] [0xc8200364b8 0xc8200364d0 0xc8200364e0] [0xafa5c0 0xafa720 0xafa720] 0xc8205648a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Expected error:
    <*errors.StatusError | 0xc820eee380>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:261

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-6s1si] []  0xc82155cc00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82155d260 exit status 1 <nil> true [0xc8212a0610 0xc8212a01c8 0xc8212a01d8] [0xc8212a0610 0xc8212a01c8 0xc8212a01d8] [0xc8212a0618 0xc8212a0728 0xc8212a01d0] [0xafa5c0 0xafa720 0xafa720] 0xc8216c7f80}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-6s1si] []  0xc82155cc00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82155d260 exit status 1 <nil> true [0xc8212a0610 0xc8212a01c8 0xc8212a01d8] [0xc8212a0610 0xc8212a01c8 0xc8212a01d8] [0xc8212a0618 0xc8212a0728 0xc8212a01d0] [0xafa5c0 0xafa720 0xafa720] 0xc8216c7f80}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] ScheduledJob should not schedule jobs when suspended [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:94
Expected error:
    <*runtime.notRegisteredErr | 0xc8216ce480>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:80

Issues about this test specifically: #32035 #34472

Failed: [k8s.io] ScheduledJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:156
Expected error:
    <*runtime.notRegisteredErr | 0xc82108e4c0>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:132

Issues about this test specifically: #30542 #31460 #31479 #31552 #32032

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Nov 29 16:36:13.493: Timeout waiting for service "service-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2342

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-5muzs] []  0xc820e9a940  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820e9af80 exit status 1 <nil> true [0xc820241080 0xc8202410a8 0xc8202410b8] [0xc820241080 0xc8202410a8 0xc8202410b8] [0xc820241088 0xc8202410a0 0xc8202410b0] [0xafa5c0 0xafa720 0xafa720] 0xc820a77c80}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-5muzs] []  0xc820e9a940  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820e9af80 exit status 1 <nil> true [0xc820241080 0xc8202410a8 0xc8202410b8] [0xc820241080 0xc8202410a8 0xc8202410b8] [0xc820241088 0xc8202410a0 0xc8202410b0] [0xafa5c0 0xafa720 0xafa720] 0xc820a77c80}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-u8cv4 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.30.57\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-11-30T04:52:31Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-u8cv4\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-u8cv4/services/redis-master\", \"uid\":\"cd4ada0f-b6b8-11e6-8054-42010af00002\", \"resourceVersion\":\"38500\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc82141fb20 exit status 1 <nil> true [0xc8216f8250 0xc8216f8280 0xc8216f8368] [0xc8216f8250 0xc8216f8280 0xc8216f8368] [0xc8216f8268 0xc8216f8360] [0xafa720 0xafa720] 0xc821549080}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.30.57\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2016-11-30T04:52:31Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-u8cv4\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-u8cv4/services/redis-master\", \"uid\":\"cd4ada0f-b6b8-11e6-8054-42010af00002\", \"resourceVersion\":\"38500\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-u8cv4 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.30.57", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-11-30T04:52:31Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-u8cv4", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-u8cv4/services/redis-master", "uid":"cd4ada0f-b6b8-11e6-8054-42010af00002", "resourceVersion":"38500"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc82141fb20 exit status 1 <nil> true [0xc8216f8250 0xc8216f8280 0xc8216f8368] [0xc8216f8250 0xc8216f8280 0xc8216f8368] [0xc8216f8268 0xc8216f8360] [0xafa720 0xafa720] 0xc821549080}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.30.57", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2016-11-30T04:52:31Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-u8cv4", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-u8cv4/services/redis-master", "uid":"cd4ada0f-b6b8-11e6-8054-42010af00002", "resourceVersion":"38500"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-18gdj] []  0xc82141f6a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82141fd60 exit status 1 <nil> true [0xc8200b86d0 0xc8200b8758 0xc8200b8768] [0xc8200b86d0 0xc8200b8758 0xc8200b8768] [0xc8200b8708 0xc8200b8750 0xc8200b8760] [0xafa5c0 0xafa720 0xafa720] 0xc820d11620}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-18gdj] []  0xc82141f6a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82141fd60 exit status 1 <nil> true [0xc8200b86d0 0xc8200b8758 0xc8200b8768] [0xc8200b86d0 0xc8200b8758 0xc8200b8768] [0xc8200b8708 0xc8200b8750 0xc8200b8760] [0xafa5c0 0xafa720 0xafa720] 0xc820d11620}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:125
Expected error:
    <*runtime.notRegisteredErr | 0xc820d2c580>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:101

Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:72
Expected error:
    <*runtime.notRegisteredErr | 0xc82117c500>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:57

Issues about this test specifically: #31657 #35876

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-y376e] []  0xc8212d7620  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8212d7be0 exit status 1 <nil> true [0xc820036528 0xc820036858 0xc820036970] [0xc820036528 0xc820036858 0xc820036970] [0xc820036530 0xc820036590 0xc820036968] [0xafa5c0 0xafa720 0xafa720] 0xc82168e960}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-y376e] []  0xc8212d7620  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8212d7be0 exit status 1 <nil> true [0xc820036528 0xc820036858 0xc820036970] [0xc820036528 0xc820036858 0xc820036970] [0xc820036530 0xc820036590 0xc820036968] [0xafa5c0 0xafa720 0xafa720] 0xc82168e960}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-pvhjz] []  0xc8219b2400  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8219b2a20 exit status 1 <nil> true [0xc820240760 0xc8202407b8 0xc8202407d0] [0xc820240760 0xc8202407b8 0xc8202407d0] [0xc820240768 0xc8202407a8 0xc8202407c8] [0xafa5c0 0xafa720 0xafa720] 0xc820e24660}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-pvhjz] []  0xc8219b2400  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8219b2a20 exit status 1 <nil> true [0xc820240760 0xc8202407b8 0xc8202407d0] [0xc820240760 0xc8202407b8 0xc8202407d0] [0xc820240768 0xc8202407a8 0xc8202407c8] [0xafa5c0 0xafa720 0xafa720] 0xc820e24660}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-z4bta] []  0xc821077000  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8210777e0 exit status 1 <nil> true [0xc820241068 0xc820241090 0xc8202410a0] [0xc820241068 0xc820241090 0xc8202410a0] [0xc820241070 0xc820241088 0xc820241098] [0xafa5c0 0xafa720 0xafa720] 0xc82117a7e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-z4bta] []  0xc821077000  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8210777e0 exit status 1 <nil> true [0xc820241068 0xc820241090 0xc8202410a0] [0xc820241068 0xc820241090 0xc8202410a0] [0xc820241070 0xc820241088 0xc820241098] [0xafa5c0 0xafa720 0xafa720] 0xc82117a7e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-wzrtf] []  0xc8210cc260  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8210cc820 exit status 1 <nil> true [0xc820240b18 0xc820240778 0xc8202407a8] [0xc820240b18 0xc820240778 0xc8202407a8] [0xc8202404f8 0xc820240528 0xc820240798] [0xafa5c0 0xafa720 0xafa720] 0xc820a764e0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-wzrtf] []  0xc8210cc260  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8210cc820 exit status 1 <nil> true [0xc820240b18 0xc820240778 0xc8202407a8] [0xc820240b18 0xc820240778 0xc8202407a8] [0xc8202404f8 0xc820240528 0xc820240798] [0xafa5c0 0xafa720 0xafa720] 0xc820a764e0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-1.4-1.5-upgrade-master/70/

Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Expected error:
    <*errors.StatusError | 0xc82098bd00>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:261

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] ScheduledJob should not schedule jobs when suspended [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:94
Expected error:
    <*runtime.notRegisteredErr | 0xc820ed7140>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:80

Issues about this test specifically: #32035 #34472

Failed: [k8s.io] Addon update should propagate add-on file changes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:316
Expected error:
    <*errors.errorString | 0xc8218af2b0>: {
        s: "error waiting for service e2e-tests-addon-update-test-r09h5/addon-test to appear: timed out waiting for the condition",
    }
    error waiting for service e2e-tests-addon-update-test-r09h5/addon-test to appear: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:320

Issues about this test specifically: #35600

Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:125
Expected error:
    <*runtime.notRegisteredErr | 0xc820da1700>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:101

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-8bmtr] []  0xc8212e1620  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8212e1be0 exit status 1 <nil> true [0xc820036258 0xc820036288 0xc8200362e8] [0xc820036258 0xc820036288 0xc8200362e8] [0xc820036260 0xc820036280 0xc8200362e0] [0xafa5c0 0xafa720 0xafa720] 0xc82112d500}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-8bmtr] []  0xc8212e1620  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8212e1be0 exit status 1 <nil> true [0xc820036258 0xc820036288 0xc8200362e8] [0xc820036258 0xc820036288 0xc8200362e8] [0xc820036260 0xc820036280 0xc8200362e0] [0xafa5c0 0xafa720 0xafa720] 0xc82112d500}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Nov 29 23:16:43.880: Timeout waiting for service "service-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2342

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-tm7h1] []  0xc82143cf80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82143d660 exit status 1 <nil> true [0xc8200363f8 0xc820036440 0xc820036450] [0xc8200363f8 0xc820036440 0xc820036450] [0xc820036400 0xc820036438 0xc820036448] [0xafa5c0 0xafa720 0xafa720] 0xc821793c80}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-tm7h1] []  0xc82143cf80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82143d660 exit status 1 <nil> true [0xc8200363f8 0xc820036440 0xc820036450] [0xc8200363f8 0xc820036440 0xc820036450] [0xc820036400 0xc820036438 0xc820036448] [0xafa5c0 0xafa720 0xafa720] 0xc821793c80}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:72
Expected error:
    <*runtime.notRegisteredErr | 0xc821220540>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:57

Issues about this test specifically: #31657 #35876

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-6wdr7] []  0xc821857200  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821857a80 exit status 1 <nil> true [0xc8200c4a28 0xc8200c4aa8 0xc8200c4ac8] [0xc8200c4a28 0xc8200c4aa8 0xc8200c4ac8] [0xc8200c4a30 0xc8200c4a90 0xc8200c4ab0] [0xafa5c0 0xafa720 0xafa720] 0xc820e19800}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-6wdr7] []  0xc821857200  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821857a80 exit status 1 <nil> true [0xc8200c4a28 0xc8200c4aa8 0xc8200c4ac8] [0xc8200c4a28 0xc8200c4aa8 0xc8200c4ac8] [0xc8200c4a30 0xc8200c4a90 0xc8200c4ab0] [0xafa5c0 0xafa720 0xafa720] 0xc820e19800}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-07qx8] []  0xc82124b0a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82124b660 exit status 1 <nil> true [0xc820d704b8 0xc820d704e0 0xc820d704f0] [0xc820d704b8 0xc820d704e0 0xc820d704f0] [0xc820d704c0 0xc820d704d8 0xc820d704e8] [0xafa5c0 0xafa720 0xafa720] 0xc820d27620}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-07qx8] []  0xc82124b0a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82124b660 exit status 1 <nil> true [0xc820d704b8 0xc820d704e0 0xc820d704f0] [0xc820d704b8 0xc820d704e0 0xc820d704f0] [0xc820d704c0 0xc820d704d8 0xc820d704e8] [0xafa5c0 0xafa720 0xafa720] 0xc820d27620}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-86rhp] []  0xc82118dd60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821820440 exit status 1 <nil> true [0xc8200ce258 0xc8200ce2c8 0xc8200ce3f8] [0xc8200ce258 0xc8200ce2c8 0xc8200ce3f8] [0xc8200ce270 0xc8200ce288 0xc8200ce308] [0xafa5c0 0xafa720 0xafa720] 0xc82115b020}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-86rhp] []  0xc82118dd60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821820440 exit status 1 <nil> true [0xc8200ce258 0xc8200ce2c8 0xc8200ce3f8] [0xc8200ce258 0xc8200ce2c8 0xc8200ce3f8] [0xc8200ce270 0xc8200ce288 0xc8200ce308] [0xafa5c0 0xafa720 0xafa720] 0xc82115b020}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc8201b4b80>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-x0qlk] []  0xc8214dfc40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8213263e0 exit status 1 <nil> true [0xc820ff2350 0xc820ff2378 0xc820ff2388] [0xc820ff2350 0xc820ff2378 0xc820ff2388] [0xc820ff2358 0xc820ff2370 0xc820ff2380] [0xafa5c0 0xafa720 0xafa720] 0xc82115bb00}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-x0qlk] []  0xc8214dfc40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8213263e0 exit status 1 <nil> true [0xc820ff2350 0xc820ff2378 0xc820ff2388] [0xc820ff2350 0xc820ff2378 0xc820ff2388] [0xc820ff2358 0xc820ff2370 0xc820ff2380] [0xafa5c0 0xafa720 0xafa720] 0xc82115bb00}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:781
Nov 30 00:22:57.069: Timeout waiting for service "mutability-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1967

Issues about this test specifically: #26134
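
Both load-balancer timeouts in this run ("service-test" and "mutability-test") mean the service never published an ingress address within the test's window. When reproducing by hand it helps to poll the service's `status.loadBalancer.ingress` field directly; a minimal sketch, with a hypothetical service name and namespace to be adjusted to the run being debugged:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the load-balancer ingress of a service until it appears or we give up.
	// Service name and namespace are hypothetical; adjust to the run being debugged.
	const jsonpath = "jsonpath={.status.loadBalancer.ingress[0].ip}"
	for i := 0; i < 30; i++ {
		out, err := exec.Command("kubectl", "get", "service", "mutability-test",
			"--namespace=default", "-o", jsonpath).Output()
		ip := strings.TrimSpace(string(out))
		if err == nil && ip != "" {
			fmt.Println("load balancer ingress:", ip)
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("no load balancer ingress after 5 minutes")
}
```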

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] ScheduledJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:178
Expected error:
    <*runtime.notRegisteredErr | 0xc820f2d3c0>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:163

Issues about this test specifically: #32034 #34910

Failed: [k8s.io] Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
Expected error:
    <*errors.StatusError | 0xc820c52f00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"unknown\") has prevented the request from succeeding (get services kibana-logging)",
            Reason: "InternalError",
            Details: {
                Name: "kibana-logging",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "unknown",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("unknown") has prevented the request from succeeding (get services kibana-logging)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:98

Issues about this test specifically: #31420

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-qgh4r] []  0xc8212e7740  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8212e7da0 exit status 1 <nil> true [0xc8203bca68 0xc8203bcaa0 0xc8203bcb60] [0xc8203bca68 0xc8203bcaa0 0xc8203bcb60] [0xc8203bca78 0xc8203bca98 0xc8203bcb58] [0xafa5c0 0xafa720 0xafa720] 0xc821524780}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-qgh4r] []  0xc8212e7740  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8212e7da0 exit status 1 <nil> true [0xc8203bca68 0xc8203bcaa0 0xc8203bcb60] [0xc8203bca68 0xc8203bcaa0 0xc8203bcb60] [0xc8203bca78 0xc8203bca98 0xc8203bcb58] [0xafa5c0 0xafa720 0xafa720] 0xc821524780}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-dnt55] []  0xc820eda0e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820eda720 exit status 1 <nil> true [0xc820d70070 0xc820d700c0 0xc820d700d8] [0xc820d70070 0xc820d700c0 0xc820d700d8] [0xc820d70088 0xc820d700b0 0xc820d700c8] [0xafa5c0 0xafa720 0xafa720] 0xc821974e40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-dnt55] []  0xc820eda0e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820eda720 exit status 1 <nil> true [0xc820d70070 0xc820d700c0 0xc820d700d8] [0xc820d70070 0xc820d700c0 0xc820d700d8] [0xc820d70088 0xc820d700b0 0xc820d700c8] [0xafa5c0 0xafa720 0xafa720] 0xc821974e40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-n8dmb] []  0xc820e90f60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820e918c0 exit status 1 <nil> true [0xc8200ceaa8 0xc8200cec50 0xc8200ce538] [0xc8200ceaa8 0xc8200cec50 0xc8200ce538] [0xc8200ceab8 0xc8200cec40 0xc8200ce528] [0xafa5c0 0xafa720 0xafa720] 0xc820d26420}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-n8dmb] []  0xc820e90f60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820e918c0 exit status 1 <nil> true [0xc8200ceaa8 0xc8200cec50 0xc8200ce538] [0xc8200ceaa8 0xc8200cec50 0xc8200ce538] [0xc8200ceab8 0xc8200cec40 0xc8200ce528] [0xafa5c0 0xafa720 0xafa720] 0xc820d26420}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-180rk -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-180rk\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-180rk/services/redis-master\", \"uid\":\"705e37ac-b6e5-11e6-85f4-42010af00002\", \"resourceVersion\":\"28376\", \"creationTimestamp\":\"2016-11-30T10:12:03Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.169.203\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820e09ba0 exit status 1 <nil> true [0xc8200ce3f8 0xc8200ce4c0 0xc8200ce500] [0xc8200ce3f8 0xc8200ce4c0 0xc8200ce500] [0xc8200ce480 0xc8200ce4f0] [0xafa720 0xafa720] 0xc8219a4180}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"namespace\":\"e2e-tests-kubectl-180rk\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-180rk/services/redis-master\", \"uid\":\"705e37ac-b6e5-11e6-85f4-42010af00002\", \"resourceVersion\":\"28376\", \"creationTimestamp\":\"2016-11-30T10:12:03Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.169.203\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-180rk -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-180rk", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-180rk/services/redis-master", "uid":"705e37ac-b6e5-11e6-85f4-42010af00002", "resourceVersion":"28376", "creationTimestamp":"2016-11-30T10:12:03Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.169.203"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820e09ba0 exit status 1 <nil> true [0xc8200ce3f8 0xc8200ce4c0 0xc8200ce500] [0xc8200ce3f8 0xc8200ce4c0 0xc8200ce500] [0xc8200ce480 0xc8200ce4f0] [0xafa720 0xafa720] 0xc8219a4180}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"e2e-tests-kubectl-180rk", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-180rk/services/redis-master", "uid":"705e37ac-b6e5-11e6-85f4-42010af00002", "resourceVersion":"28376", "creationTimestamp":"2016-11-30T10:12:03Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.169.203"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
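
The jsonpath failure directly above is not a kubectl bug: the `redis-master` service in that run came back as type `ClusterIP`, so `.spec.ports[0].nodePort` simply does not exist on the object handed to the jsonpath engine. When scripting around this it is safer to read the service type first and only ask for the port when one can exist; a minimal sketch (the namespace and service name are copied from the log purely for illustration):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// getField runs kubectl with a jsonpath expression against a service and returns trimmed output.
func getField(ns, svc, jsonpath string) (string, error) {
	out, err := exec.Command("kubectl", "get", "service", svc,
		"--namespace="+ns, "-o", "jsonpath="+jsonpath).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ns, svc := "e2e-tests-kubectl-180rk", "redis-master" // from the log above, illustrative only
	typ, err := getField(ns, svc, "{.spec.type}")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	// Only NodePort and LoadBalancer services carry a nodePort; anything else will
	// reproduce the "nodePort is not found" jsonpath error seen in the run.
	if typ != "NodePort" && typ != "LoadBalancer" {
		fmt.Printf("service is type %s; it has no nodePort to read\n", typ)
		return
	}
	port, err := getField(ns, svc, "{.spec.ports[0].nodePort}")
	fmt.Println("nodePort:", port, err)
}
```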

Failed: UpgradeTest {e2e.go}

error running Upgrade Ginkgo tests: exit status 1

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-6glwf] []  0xc8218a3a40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8215a83e0 exit status 1 <nil> true [0xc820c96458 0xc820c96128 0xc820c96328] [0xc820c96458 0xc820c96128 0xc820c96328] [0xc820c96460 0xc820c96120 0xc820c96130] [0xafa5c0 0xafa720 0xafa720] 0xc821792a80}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.52.239 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-6glwf] []  0xc8218a3a40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8215a83e0 exit status 1 <nil> true [0xc820c96458 0xc820c96128 0xc820c96328] [0xc820c96458 0xc820c96128 0xc820c96328] [0xc820c96460 0xc820c96120 0xc820c96130] [0xafa5c0 0xafa720 0xafa720] 0xc821792a80}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] ScheduledJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:156
Expected error:
    <*runtime.notRegisteredErr | 0xc8218c4800>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:132

Issues about this test specifically: #30542 #31460 #31479 #31552 #32032

@k8s-github-robot
Author

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-1.4-1.5-upgrade-master/71/

Multiple broken tests:

Failed: UpgradeTest {e2e.go}

error running Upgrade Ginkgo tests: exit status 1

Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:125
Expected error:
    <*runtime.notRegisteredErr | 0xc820d7b040>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:101

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ssk9j] []  0xc8208471e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8208477c0 exit status 1 <nil> true [0xc8200c4d18 0xc8200c4d40 0xc8200c4d58] [0xc8200c4d18 0xc8200c4d40 0xc8200c4d58] [0xc8200c4d20 0xc8200c4d38 0xc8200c4d50] [0xafa5c0 0xafa720 0xafa720] 0xc82124fe00}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ssk9j] []  0xc8208471e0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8208477c0 exit status 1 <nil> true [0xc8200c4d18 0xc8200c4d40 0xc8200c4d58] [0xc8200c4d18 0xc8200c4d40 0xc8200c4d58] [0xc8200c4d20 0xc8200c4d38 0xc8200c4d50] [0xafa5c0 0xafa720 0xafa720] 0xc82124fe00}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-65lnn] []  0xc821789980  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821789f60 exit status 1 <nil> true [0xc820e1e0a8 0xc820e1e0e8 0xc820e1e0f8] [0xc820e1e0a8 0xc820e1e0e8 0xc820e1e0f8] [0xc820e1e0b0 0xc820e1e0e0 0xc820e1e0f0] [0xafa5c0 0xafa720 0xafa720] 0xc8213df800}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-65lnn] []  0xc821789980  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821789f60 exit status 1 <nil> true [0xc820e1e0a8 0xc820e1e0e8 0xc820e1e0f8] [0xc820e1e0a8 0xc820e1e0e8 0xc820e1e0f8] [0xc820e1e0b0 0xc820e1e0e0 0xc820e1e0f0] [0xafa5c0 0xafa720 0xafa720] 0xc8213df800}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:72
Expected error:
    <*runtime.notRegisteredErr | 0xc821707440>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:57

Issues about this test specifically: #31657 #35876

Failed: [k8s.io] Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
Expected error:
    <*errors.StatusError | 0xc820a1ab00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"unknown\") has prevented the request from succeeding (get services kibana-logging)",
            Reason: "InternalError",
            Details: {
                Name: "kibana-logging",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "unknown",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("unknown") has prevented the request from succeeding (get services kibana-logging)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:98

Issues about this test specifically: #31420

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-f4lsr] []  0xc820cb6680  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820cb6ce0 exit status 1 <nil> true [0xc8200369f0 0xc820036a00 0xc820036a10] [0xc8200369f0 0xc820036a00 0xc820036a10] [0xc8200369f8 0xc820036940 0xc820036a08] [0xafa5c0 0xafa720 0xafa720] 0xc8211b4480}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-f4lsr] []  0xc820cb6680  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820cb6ce0 exit status 1 <nil> true [0xc8200369f0 0xc820036a00 0xc820036a10] [0xc8200369f0 0xc820036a00 0xc820036a10] [0xc8200369f8 0xc820036940 0xc820036a08] [0xafa5c0 0xafa720 0xafa720] 0xc8211b4480}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964
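
Note: the stderr in this failure (and the other kubectl delete failures in this run) states the fix directly: with --grace-period=0, the newer kubectl also requires --force. A minimal sketch of the delete invocation the skewed test client would need, with the namespace left as an illustrative placeholder:

    kubectl delete --force --grace-period=0 -f - --namespace=<e2e-test-namespace>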

Failed: [k8s.io] ScheduledJob should not schedule jobs when suspended [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:94
Expected error:
    <*runtime.notRegisteredErr | 0xc8210af700>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:80

Issues about this test specifically: #32035 #34472

Failed: [k8s.io] Addon update should propagate add-on file changes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:316
Expected error:
    <*errors.errorString | 0xc821b9b3e0>: {
        s: "error waiting for service e2e-tests-addon-update-test-v2h2s/addon-test to appear: timed out waiting for the condition",
    }
    error waiting for service e2e-tests-addon-update-test-v2h2s/addon-test to appear: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:320

Issues about this test specifically: #35600

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-v7vx1] []  0xc821243a40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820bf2160 exit status 1 <nil> true [0xc820dfa210 0xc820dfa260 0xc820dfa270] [0xc820dfa210 0xc820dfa260 0xc820dfa270] [0xc820dfa218 0xc820dfa258 0xc820dfa268] [0xafa5c0 0xafa720 0xafa720] 0xc8212b4240}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-v7vx1] []  0xc821243a40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820bf2160 exit status 1 <nil> true [0xc820dfa210 0xc820dfa260 0xc820dfa270] [0xc820dfa210 0xc820dfa260 0xc820dfa270] [0xc820dfa218 0xc820dfa258 0xc820dfa268] [0xafa5c0 0xafa720 0xafa720] 0xc8212b4240}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] ScheduledJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:156
Expected error:
    <*runtime.notRegisteredErr | 0xc82106ad80>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:132

Issues about this test specifically: #30542 #31460 #31479 #31552 #32032

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-mvqmg] []  0xc82134cba0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82134d160 exit status 1 <nil> true [0xc820dfa410 0xc820dfa470 0xc820dfa480] [0xc820dfa410 0xc820dfa470 0xc820dfa480] [0xc820dfa418 0xc820dfa468 0xc820dfa478] [0xafa5c0 0xafa720 0xafa720] 0xc8212588a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-mvqmg] []  0xc82134cba0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82134d160 exit status 1 <nil> true [0xc820dfa410 0xc820dfa470 0xc820dfa480] [0xc820dfa410 0xc820dfa470 0xc820dfa480] [0xc820dfa418 0xc820dfa468 0xc820dfa478] [0xafa5c0 0xafa720 0xafa720] 0xc8212588a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-t1v11 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"uid\":\"cd3ef53f-b72c-11e6-b18d-42010af00002\", \"resourceVersion\":\"40985\", \"creationTimestamp\":\"2016-11-30T18:42:53Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-t1v11\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-t1v11/services/redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.0.36.191\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8213745e0 exit status 1 <nil> true [0xc820de2008 0xc820de2020 0xc820de2088] [0xc820de2008 0xc820de2020 0xc820de2088] [0xc820de2018 0xc820de2080] [0xafa720 0xafa720] 0xc8213de180}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"metadata\":map[string]interface {}{\"uid\":\"cd3ef53f-b72c-11e6-b18d-42010af00002\", \"resourceVersion\":\"40985\", \"creationTimestamp\":\"2016-11-30T18:42:53Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-t1v11\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-t1v11/services/redis-master\"}, \"spec\":map[string]interface {}{\"clusterIP\":\"10.0.36.191\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-t1v11 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"uid":"cd3ef53f-b72c-11e6-b18d-42010af00002", "resourceVersion":"40985", "creationTimestamp":"2016-11-30T18:42:53Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-t1v11", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-t1v11/services/redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.0.36.191", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8213745e0 exit status 1 <nil> true [0xc820de2008 0xc820de2020 0xc820de2088] [0xc820de2008 0xc820de2020 0xc820de2088] [0xc820de2018 0xc820de2080] [0xafa720 0xafa720] 0xc8213de180}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"metadata":map[string]interface {}{"uid":"cd3ef53f-b72c-11e6-b18d-42010af00002", "resourceVersion":"40985", "creationTimestamp":"2016-11-30T18:42:53Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-t1v11", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-t1v11/services/redis-master"}, "spec":map[string]interface {}{"clusterIP":"10.0.36.191", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
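
Note: the jsonpath error is consistent with the service dump above: the redis-master service shown there is type ClusterIP, so spec.ports[0] carries no nodePort field. A sketch of the failing query (it only succeeds against a NodePort or LoadBalancer service; namespace placeholder is illustrative):

    kubectl get service redis-master --namespace=<e2e-test-namespace> -o jsonpath='{.spec.ports[0].nodePort}'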

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-tvvc1] []  0xc8211bcf00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8211bd4c0 exit status 1 <nil> true [0xc820dfa578 0xc820dfa5a0 0xc820dfa5b0] [0xc820dfa578 0xc820dfa5a0 0xc820dfa5b0] [0xc820dfa580 0xc820dfa598 0xc820dfa5a8] [0xafa5c0 0xafa720 0xafa720] 0xc8212b5bc0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-tvvc1] []  0xc8211bcf00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8211bd4c0 exit status 1 <nil> true [0xc820dfa578 0xc820dfa5a0 0xc820dfa5b0] [0xc820dfa578 0xc820dfa5a0 0xc820dfa5b0] [0xc820dfa580 0xc820dfa598 0xc820dfa5a8] [0xafa5c0 0xafa720 0xafa720] 0xc8212b5bc0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-63ndg] []  0xc821b99060  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821b99620 exit status 1 <nil> true [0xc820dfa0d0 0xc820dfa0f8 0xc820dfa108] [0xc820dfa0d0 0xc820dfa0f8 0xc820dfa108] [0xc820dfa0d8 0xc820dfa0f0 0xc820dfa100] [0xafa5c0 0xafa720 0xafa720] 0xc8212b6c00}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-63ndg] []  0xc821b99060  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821b99620 exit status 1 <nil> true [0xc820dfa0d0 0xc820dfa0f8 0xc820dfa108] [0xc820dfa0d0 0xc820dfa0f8 0xc820dfa108] [0xc820dfa0d8 0xc820dfa0f0 0xc820dfa100] [0xafa5c0 0xafa720 0xafa720] 0xc8212b6c00}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-k437b] []  0xc821bd1c60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821bba260 exit status 1 <nil> true [0xc8200ce970 0xc8200ce9b8 0xc8200ce9d0] [0xc8200ce970 0xc8200ce9b8 0xc8200ce9d0] [0xc8200ce978 0xc8200ce998 0xc8200ce9c8] [0xafa5c0 0xafa720 0xafa720] 0xc820d2fce0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-k437b] []  0xc821bd1c60  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821bba260 exit status 1 <nil> true [0xc8200ce970 0xc8200ce9b8 0xc8200ce9d0] [0xc8200ce970 0xc8200ce9b8 0xc8200ce9d0] [0xc8200ce978 0xc8200ce998 0xc8200ce9c8] [0xafa5c0 0xafa720 0xafa720] 0xc820d2fce0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] ScheduledJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:178
Expected error:
    <*runtime.notRegisteredErr | 0xc820eff2c0>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:163

Issues about this test specifically: #32034 #34910

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-27btk] []  0xc820ce0200  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820ce0b20 exit status 1 <nil> true [0xc8209be018 0xc8209be040 0xc8209be050] [0xc8209be018 0xc8209be040 0xc8209be050] [0xc8209be020 0xc8209be038 0xc8209be048] [0xafa5c0 0xafa720 0xafa720] 0xc820ea7b00}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-27btk] []  0xc820ce0200  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820ce0b20 exit status 1 <nil> true [0xc8209be018 0xc8209be040 0xc8209be050] [0xc8209be018 0xc8209be040 0xc8209be050] [0xc8209be020 0xc8209be038 0xc8209be048] [0xafa5c0 0xafa720 0xafa720] 0xc820ea7b00}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Nov 30 06:01:36.474: Timeout waiting for service "service-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2342
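
Note: a quick way to check whether the load balancer for "service-test" was ever provisioned, as a sketch assuming access to the test cluster's kubeconfig:

    kubectl get service service-test -o jsonpath='{.status.loadBalancer.ingress[0].ip}'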

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-58nr6] []  0xc82152b8c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82152be80 exit status 1 <nil> true [0xc820036e88 0xc820036eb0 0xc820036ec0] [0xc820036e88 0xc820036eb0 0xc820036ec0] [0xc820036e90 0xc820036ea8 0xc820036eb8] [0xafa5c0 0xafa720 0xafa720] 0xc820d12300}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.243.31 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-58nr6] []  0xc82152b8c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82152be80 exit status 1 <nil> true [0xc820036e88 0xc820036eb0 0xc820036ec0] [0xc820036e88 0xc820036eb0 0xc820036ec0] [0xc820036e90 0xc820036ea8 0xc820036eb8] [0xafa5c0 0xafa720 0xafa720] 0xc820d12300}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Expected error:
    <*errors.StatusError | 0xc821718980>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:261

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc8201ba760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:781
Nov 30 11:57:09.782: Timeout waiting for service "mutability-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1967

Issues about this test specifically: #26134

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-1.4-1.5-upgrade-master/72/

Multiple broken tests:

Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:125
Expected error:
    <*runtime.notRegisteredErr | 0xc82218a780>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:101

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:792
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-k6x0w] []  0xc821242da0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821243360 exit status 1 <nil> true [0xc820036510 0xc820036648 0xc820036690] [0xc820036510 0xc820036648 0xc820036690] [0xc820036550 0xc820036630 0xc820036668] [0xafa5c0 0xafa720 0xafa720] 0xc820b46600}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-k6x0w] []  0xc821242da0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821243360 exit status 1 <nil> true [0xc820036510 0xc820036648 0xc820036690] [0xc820036510 0xc820036648 0xc820036690] [0xc820036550 0xc820036630 0xc820036668] [0xafa5c0 0xafa720 0xafa720] 0xc820b46600}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:275
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-x9scz] []  0xc8215d8700  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8215d9440 exit status 1 <nil> true [0xc820037100 0xc820037128 0xc820037138] [0xc820037100 0xc820037128 0xc820037138] [0xc820037108 0xc820037120 0xc820037130] [0xafa5c0 0xafa720 0xafa720] 0xc820e15860}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-x9scz] []  0xc8215d8700  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8215d9440 exit status 1 <nil> true [0xc820037100 0xc820037128 0xc820037138] [0xc820037100 0xc820037128 0xc820037138] [0xc820037108 0xc820037120 0xc820037130] [0xafa5c0 0xafa720 0xafa720] 0xc820e15860}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc8201b2760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Addon update should propagate add-on file changes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:316
Expected error:
    <*errors.errorString | 0xc82119cfa0>: {
        s: "error waiting for service e2e-tests-addon-update-test-ghmfn/addon-test to appear: timed out waiting for the condition",
    }
    error waiting for service e2e-tests-addon-update-test-ghmfn/addon-test to appear: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:320

Issues about this test specifically: #35600

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-p4z64] []  0xc820a8f800  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820a8fe80 exit status 1 <nil> true [0xc820036668 0xc820036910 0xc820036928] [0xc820036668 0xc820036910 0xc820036928] [0xc820036670 0xc820036900 0xc820036920] [0xafa5c0 0xafa720 0xafa720] 0xc8213298c0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-p4z64] []  0xc820a8f800  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820a8fe80 exit status 1 <nil> true [0xc820036668 0xc820036910 0xc820036928] [0xc820036668 0xc820036910 0xc820036928] [0xc820036670 0xc820036900 0xc820036920] [0xafa5c0 0xafa720 0xafa720] 0xc8213298c0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] ScheduledJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:156
Expected error:
    <*runtime.notRegisteredErr | 0xc82120c580>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:132

Issues about this test specifically: #30542 #31460 #31479 #31552 #32032

Failed: [k8s.io] ScheduledJob should not schedule jobs when suspended [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:94
Expected error:
    <*runtime.notRegisteredErr | 0xc8214ccfc0>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:80

Issues about this test specifically: #32035 #34472

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bwt10] []  0xc8212f97a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8212f9d60 exit status 1 <nil> true [0xc82164a218 0xc82164a240 0xc82164a250] [0xc82164a218 0xc82164a240 0xc82164a250] [0xc82164a220 0xc82164a238 0xc82164a248] [0xafa5c0 0xafa720 0xafa720] 0xc8212b45a0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bwt10] []  0xc8212f97a0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8212f9d60 exit status 1 <nil> true [0xc82164a218 0xc82164a240 0xc82164a250] [0xc82164a218 0xc82164a240 0xc82164a250] [0xc82164a220 0xc82164a238 0xc82164a248] [0xafa5c0 0xafa720 0xafa720] 0xc8212b45a0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Nov 30 12:43:35.848: Timeout waiting for service "service-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2342

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xkw1x] []  0xc821cd3c00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82147e420 exit status 1 <nil> true [0xc820b1e5b0 0xc820b1e638 0xc820b1e648] [0xc820b1e5b0 0xc820b1e638 0xc820b1e648] [0xc820b1e5b8 0xc820b1e620 0xc820b1e640] [0xafa5c0 0xafa720 0xafa720] 0xc821913d40}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-xkw1x] []  0xc821cd3c00  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82147e420 exit status 1 <nil> true [0xc820b1e5b0 0xc820b1e638 0xc820b1e648] [0xc820b1e5b0 0xc820b1e638 0xc820b1e648] [0xc820b1e5b8 0xc820b1e620 0xc820b1e640] [0xafa5c0 0xafa720 0xafa720] 0xc821913d40}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-bvj94 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-bvj94/services/redis-master\", \"uid\":\"5f8e6c4b-b74e-11e6-a5fd-42010af00002\", \"resourceVersion\":\"17562\", \"creationTimestamp\":\"2016-11-30T22:43:11Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-bvj94\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.190.165\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8215973a0 exit status 1 <nil> true [0xc82164a050 0xc82164a070 0xc82164a088] [0xc82164a050 0xc82164a070 0xc82164a088] [0xc82164a068 0xc82164a080] [0xafa720 0xafa720] 0xc821056600}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-bvj94/services/redis-master\", \"uid\":\"5f8e6c4b-b74e-11e6-a5fd-42010af00002\", \"resourceVersion\":\"17562\", \"creationTimestamp\":\"2016-11-30T22:43:11Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-bvj94\"}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.190.165\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-bvj94 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-bvj94/services/redis-master", "uid":"5f8e6c4b-b74e-11e6-a5fd-42010af00002", "resourceVersion":"17562", "creationTimestamp":"2016-11-30T22:43:11Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-bvj94"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.190.165", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8215973a0 exit status 1 <nil> true [0xc82164a050 0xc82164a070 0xc82164a088] [0xc82164a050 0xc82164a070 0xc82164a088] [0xc82164a068 0xc82164a080] [0xafa720 0xafa720] 0xc821056600}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-bvj94/services/redis-master", "uid":"5f8e6c4b-b74e-11e6-a5fd-42010af00002", "resourceVersion":"17562", "creationTimestamp":"2016-11-30T22:43:11Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-bvj94"}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.190.165", "type":"ClusterIP", "sessionAffinity":"None"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-vp8fj] []  0xc8214fa120  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc8214fa980 exit status 1 <nil> true [0xc82164a1e0 0xc82164a268 0xc82164a278] [0xc82164a1e0 0xc82164a268 0xc82164a278] [0xc82164a248 0xc82164a260 0xc82164a270] [0xafa5c0 0xafa720 0xafa720] 0xc8208ca300}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-vp8fj] []  0xc8214fa120  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc8214fa980 exit status 1 <nil> true [0xc82164a1e0 0xc82164a268 0xc82164a278] [0xc82164a1e0 0xc82164a268 0xc82164a278] [0xc82164a248 0xc82164a260 0xc82164a270] [0xafa5c0 0xafa720 0xafa720] 0xc8208ca300}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Expected error:
    <*errors.StatusError | 0xc820e35180>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:261

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] ScheduledJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:178
Expected error:
    <*runtime.notRegisteredErr | 0xc820918880>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:163

Issues about this test specifically: #32034 #34910

Failed: UpgradeTest {e2e.go}

error running Upgrade Ginkgo tests: exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
Expected error:
    <*errors.StatusError | 0xc8212b6200>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"unknown\") has prevented the request from succeeding (get services kibana-logging)",
            Reason: "InternalError",
            Details: {
                Name: "kibana-logging",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "unknown",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("unknown") has prevented the request from succeeding (get services kibana-logging)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:98

Issues about this test specifically: #31420
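
The 503 here is the apiserver failing a plain GET on the kibana-logging service, which points more at the master being unavailable mid-upgrade than at Kibana itself. Once the master answers again, the addon objects can be checked directly; kube-system and the k8s-app label are the conventional addon placement and are assumptions, not values from this run:

    # Check the logging addon once the apiserver is reachable (namespace/label assumed):
    kubectl get svc kibana-logging --namespace=kube-system
    kubectl get pods --namespace=kube-system -l k8s-app=kibana-logging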

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bl729] []  0xc820de8f40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820de9500 exit status 1 <nil> true [0xc82048a318 0xc82048a400 0xc82048a418] [0xc82048a318 0xc82048a400 0xc82048a418] [0xc82048a328 0xc82048a3f8 0xc82048a410] [0xafa5c0 0xafa720 0xafa720] 0xc821056ea0}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-bl729] []  0xc820de8f40  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820de9500 exit status 1 <nil> true [0xc82048a318 0xc82048a400 0xc82048a418] [0xc82048a318 0xc82048a400 0xc82048a418] [0xc82048a328 0xc82048a3f8 0xc82048a410] [0xafa5c0 0xafa720 0xafa720] 0xc821056ea0}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:756
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ghq0j] []  0xc821cd27c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc821cd2f40 exit status 1 <nil> true [0xc820d66288 0xc820d662b0 0xc820d66310] [0xc820d66288 0xc820d662b0 0xc820d66310] [0xc820d66290 0xc820d662a8 0xc820d66308] [0xafa5c0 0xafa720 0xafa720] 0xc8212b9740}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-ghq0j] []  0xc821cd27c0  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc821cd2f40 exit status 1 <nil> true [0xc820d66288 0xc820d662b0 0xc820d66310] [0xc820d66288 0xc820d662b0 0xc820d66310] [0xc820d66290 0xc820d662a8 0xc820d66308] [0xafa5c0 0xafa720 0xafa720] 0xc8212b9740}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28493 #29964

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:781
Nov 30 16:35:11.117: Timeout waiting for service "mutability-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1967

Issues about this test specifically: #26134
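
The timeout is the test giving up on status.loadBalancer.ingress ever being populated for the service it created. When reproducing by hand, the same condition can be polled with jsonpath; the service name comes from the test, the namespace is a placeholder:

    # Poll until the cloud provider assigns an ingress IP (namespace is a placeholder):
    until ip=$(kubectl get svc mutability-test --namespace=my-namespace \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null) && [ -n "$ip" ]; do
      sleep 5
    done
    echo "load balancer ingress: $ip"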

Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:72
Expected error:
    <*runtime.notRegisteredErr | 0xc8215c9400>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:57

Issues about this test specifically: #31657 #35876

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:219
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-235f8] []  0xc820dddd80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc82087c400 exit status 1 <nil> true [0xc82164a6d8 0xc82164a328 0xc82164a338] [0xc82164a6d8 0xc82164a328 0xc82164a338] [0xc82164a300 0xc82164a320 0xc82164a330] [0xafa5c0 0xafa720 0xafa720] 0xc820e15260}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-235f8] []  0xc820dddd80  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc82087c400 exit status 1 <nil> true [0xc82164a6d8 0xc82164a328 0xc82164a338] [0xc82164a6d8 0xc82164a328 0xc82164a338] [0xc82164a300 0xc82164a320 0xc82164a330] [0xafa5c0 0xafa720 0xafa720] 0xc820e15260}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-l15mr] []  0xc82103dd20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n [] <nil> 0xc820e424c0 exit status 1 <nil> true [0xc8200c48b8 0xc8200c4918 0xc8200c4938] [0xc8200c48b8 0xc8200c4918 0xc8200c4938] [0xc8200c48c0 0xc8200c4910 0xc8200c4930] [0xafa5c0 0xafa720 0xafa720] 0xc8205d7200}:\nCommand stdout:\n\nstderr:\nerror: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.154.89.1 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-l15mr] []  0xc82103dd20  error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
     [] <nil> 0xc820e424c0 exit status 1 <nil> true [0xc8200c48b8 0xc8200c4918 0xc8200c4938] [0xc8200c48b8 0xc8200c4918 0xc8200c4938] [0xc8200c48c0 0xc8200c4910 0xc8200c4930] [0xafa5c0 0xafa720 0xafa720] 0xc8205d7200}:
    Command stdout:
    
    stderr:
    error: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. You must pass --force to delete with grace period 0.
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:430
Nov 30 14:26:33.096: Couldn't restore the original cluster size: timeout waiting 10m0s for cluster size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:421

Issues about this test specifically: #27470 #30156 #34304 #37620
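
On GCE the node count is driven by the node managed instance group, so when the test cannot restore the cluster to 3 nodes the group can be inspected and resized manually; the group name and zone below are placeholders for this jenkins project, not values read from this run:

    # Inspect and restore the node instance group size (group name and zone are placeholders):
    gcloud compute instance-groups managed list
    gcloud compute instance-groups managed resize jenkins-e2e-minion-group \
        --size=3 --zone=us-central1-f
    kubectl get nodes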

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-1.4-1.5-upgrade-master/73/

Multiple broken tests:

Failed: [k8s.io] Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
Expected error:
    <*errors.StatusError | 0xc820d21780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"unknown\") has prevented the request from succeeding (get services kibana-logging)",
            Reason: "InternalError",
            Details: {
                Name: "kibana-logging",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "unknown",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("unknown") has prevented the request from succeeding (get services kibana-logging)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:98

Issues about this test specifically: #31420

Failed: [k8s.io] Addon update should propagate add-on file changes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:316
Expected error:
    <*errors.errorString | 0xc82174d9f0>: {
        s: "error waiting for service e2e-tests-addon-update-test-k3zph/addon-test to appear: timed out waiting for the condition",
    }
    error waiting for service e2e-tests-addon-update-test-k3zph/addon-test to appear: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:320

Issues about this test specifically: #35600
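
This test pushes addon manifests onto the master and waits for kube-addon-manager to create the service; when that wait times out, the addon manager pod and its logs are the first thing to look at. The pod runs in kube-system by convention, and its exact name is taken from the listing rather than assumed:

    # Find the addon manager pod on the master, then read its logs:
    kubectl get pods --namespace=kube-system | grep addon-manager
    kubectl logs --namespace=kube-system <addon-manager-pod-from-previous-listing>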

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.155.153 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-ldsb0 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-ldsb0\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-ldsb0/services/redis-master\", \"uid\":\"f31f86fe-b7a2-11e6-9244-42010af00002\", \"resourceVersion\":\"44884\", \"creationTimestamp\":\"2016-12-01T08:48:37Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.87.250\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820820be0 exit status 1 <nil> true [0xc8204e80a0 0xc8204e8128 0xc8204e8198] [0xc8204e80a0 0xc8204e8128 0xc8204e8198] [0xc8204e80b8 0xc8204e8190] [0xafa720 0xafa720] 0xc8209aa360}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-ldsb0\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-ldsb0/services/redis-master\", \"uid\":\"f31f86fe-b7a2-11e6-9244-42010af00002\", \"resourceVersion\":\"44884\", \"creationTimestamp\":\"2016-12-01T08:48:37Z\", \"labels\":map[string]interface {}{\"role\":\"master\", \"app\":\"redis\"}}, \"spec\":map[string]interface {}{\"ports\":[]interface {}{map[string]interface {}{\"targetPort\":\"redis-server\", \"protocol\":\"TCP\", \"port\":6379}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.87.250\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\"}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.155.153 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-ldsb0 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-ldsb0", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-ldsb0/services/redis-master", "uid":"f31f86fe-b7a2-11e6-9244-42010af00002", "resourceVersion":"44884", "creationTimestamp":"2016-12-01T08:48:37Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.87.250", "type":"ClusterIP", "sessionAffinity":"None"}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820820be0 exit status 1 <nil> true [0xc8204e80a0 0xc8204e8128 0xc8204e8198] [0xc8204e80a0 0xc8204e8128 0xc8204e8198] [0xc8204e80b8 0xc8204e8190] [0xafa720 0xafa720] 0xc8209aa360}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"redis-master", "namespace":"e2e-tests-kubectl-ldsb0", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-ldsb0/services/redis-master", "uid":"f31f86fe-b7a2-11e6-9244-42010af00002", "resourceVersion":"44884", "creationTimestamp":"2016-12-01T08:48:37Z", "labels":map[string]interface {}{"role":"master", "app":"redis"}}, "spec":map[string]interface {}{"ports":[]interface {}{map[string]interface {}{"targetPort":"redis-server", "protocol":"TCP", "port":6379}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.87.250", "type":"ClusterIP", "sessionAffinity":"None"}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741
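
The jsonpath error is the symptom rather than the cause: the object dump in the failure shows the service was returned as type ClusterIP, so .spec.ports[0].nodePort simply does not exist for the template to read. Checking the two things separately makes that obvious; the service name is the test's, the namespace is a placeholder:

    # A ClusterIP service has no nodePort, so read the type first (namespace is a placeholder):
    kubectl get svc redis-master --namespace=my-namespace -o jsonpath='{.spec.type}'
    kubectl get svc redis-master --namespace=my-namespace -o jsonpath='{.spec.ports[0].nodePort}'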

Failed: UpgradeTest {e2e.go}

error running Upgrade Ginkgo tests: exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:125
Expected error:
    <*runtime.notRegisteredErr | 0xc8210860c0>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:101

Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:72
Expected error:
    <*runtime.notRegisteredErr | 0xc821106b40>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:57

Issues about this test specifically: #31657 #35876

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Nov 30 19:34:37.945: Timeout waiting for service "service-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2342

Failed: [k8s.io] ScheduledJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:156
Expected error:
    <*runtime.notRegisteredErr | 0xc8211ccf80>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:132

Issues about this test specifically: #30542 #31460 #31479 #31552 #32032

Failed: [k8s.io] ScheduledJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:178
Expected error:
    <*runtime.notRegisteredErr | 0xc820dddb00>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:163

Issues about this test specifically: #32034 #34910

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Expected error:
    <*errors.StatusError | 0xc82198c100>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:261

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:781
Nov 30 23:39:20.585: Timeout waiting for service "mutability-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1967

Issues about this test specifically: #26134

Failed: [k8s.io] ScheduledJob should not schedule jobs when suspended [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:94
Expected error:
    <*runtime.notRegisteredErr | 0xc820c94d00>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:80

Issues about this test specifically: #32035 #34472

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc82019e760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-1.4-1.5-upgrade-master/74/

Multiple broken tests:

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc820182a10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:781
Dec  1 07:35:43.289: Timeout waiting for service "mutability-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1967

Issues about this test specifically: #26134

Failed: [k8s.io] ScheduledJob should not schedule jobs when suspended [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:94
Expected error:
    <*runtime.notRegisteredErr | 0xc8212df800>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:80

Issues about this test specifically: #32035 #34472

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Expected error:
    <*errors.StatusError | 0xc8214ef000>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:261

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Dec  1 02:15:39.261: Timeout waiting for service "service-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2342

Failed: [k8s.io] ScheduledJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:178
Expected error:
    <*runtime.notRegisteredErr | 0xc820c74dc0>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:163

Issues about this test specifically: #32034 #34910

Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:125
Expected error:
    <*runtime.notRegisteredErr | 0xc8212de5c0>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:101

Failed: [k8s.io] ScheduledJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:156
Expected error:
    <*runtime.notRegisteredErr | 0xc82128a9c0>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:132

Issues about this test specifically: #30542 #31460 #31479 #31552 #32032

Failed: UpgradeTest {e2e.go}

error running Upgrade Ginkgo tests: exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
Expected error:
    <*errors.StatusError | 0xc821d84180>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"unknown\") has prevented the request from succeeding (get services kibana-logging)",
            Reason: "InternalError",
            Details: {
                Name: "kibana-logging",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "unknown",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("unknown") has prevented the request from succeeding (get services kibana-logging)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:98

Issues about this test specifically: #31420

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.155.142.128 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-ll4n2 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"a7393c9d-b7ca-11e6-ac75-42010af00002\", \"resourceVersion\":\"30418\", \"creationTimestamp\":\"2016-12-01T13:32:49Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-ll4n2\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-ll4n2/services/redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.91.246\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc8211d65e0 exit status 1 <nil> true [0xc8208a4020 0xc8208a4040 0xc8208a4058] [0xc8208a4020 0xc8208a4040 0xc8208a4058] [0xc8208a4030 0xc8208a4050] [0xafa720 0xafa720] 0xc821290480}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"kind\":\"Service\", \"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"uid\":\"a7393c9d-b7ca-11e6-ac75-42010af00002\", \"resourceVersion\":\"30418\", \"creationTimestamp\":\"2016-12-01T13:32:49Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-ll4n2\", \"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-ll4n2/services/redis-master\"}, \"spec\":map[string]interface {}{\"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"port\":6379, \"targetPort\":\"redis-server\", \"protocol\":\"TCP\"}}, \"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.91.246\"}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.155.142.128 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-ll4n2 -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"a7393c9d-b7ca-11e6-ac75-42010af00002", "resourceVersion":"30418", "creationTimestamp":"2016-12-01T13:32:49Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-ll4n2", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-ll4n2/services/redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.91.246"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc8211d65e0 exit status 1 <nil> true [0xc8208a4020 0xc8208a4040 0xc8208a4058] [0xc8208a4020 0xc8208a4040 0xc8208a4058] [0xc8208a4030 0xc8208a4050] [0xafa720 0xafa720] 0xc821290480}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"kind":"Service", "apiVersion":"v1", "metadata":map[string]interface {}{"uid":"a7393c9d-b7ca-11e6-ac75-42010af00002", "resourceVersion":"30418", "creationTimestamp":"2016-12-01T13:32:49Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-ll4n2", "selfLink":"/api/v1/namespaces/e2e-tests-kubectl-ll4n2/services/redis-master"}, "spec":map[string]interface {}{"type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"port":6379, "targetPort":"redis-server", "protocol":"TCP"}}, "selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.91.246"}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741 #37820

Failed: [k8s.io] Addon update should propagate add-on file changes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:316
Expected error:
    <*errors.errorString | 0xc8215b90e0>: {
        s: "error waiting for service e2e-tests-addon-update-test-djff4/addon-test to appear: timed out waiting for the condition",
    }
    error waiting for service e2e-tests-addon-update-test-djff4/addon-test to appear: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:320

Issues about this test specifically: #35600

Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:72
Expected error:
    <*runtime.notRegisteredErr | 0xc821192d40>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:57

Issues about this test specifically: #31657 #35876

@k8s-github-robot

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce-1.4-1.5-upgrade-master/75/

Multiple broken tests:

Failed: [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:43
Expected error:
    <*errors.StatusError | 0xc820b96a80>: {
        ErrStatus: {
            TypeMeta: {Kind: "Status", APIVersion: "v1"},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "the server could not find the requested resource",
            Reason: "NotFound",
            Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
            Code: 404,
        },
    }
    the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/monitoring.go:261

Issues about this test specifically: #29647 #35627

Failed: [k8s.io] Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:42
Expected error:
    <*errors.StatusError | 0xc821438280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"unknown\") has prevented the request from succeeding (get services kibana-logging)",
            Reason: "InternalError",
            Details: {
                Name: "kibana-logging",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "unknown",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("unknown") has prevented the request from succeeding (get services kibana-logging)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kibana_logging.go:98

Issues about this test specifically: #31420

Failed: [k8s.io] ScheduledJob should not emit unexpected warnings {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:178
Expected error:
    <*runtime.notRegisteredErr | 0xc8214d36c0>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:163

Issues about this test specifically: #32034 #34910

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:72
Expected error:
    <*runtime.notRegisteredErr | 0xc821606c00>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:57

Issues about this test specifically: #31657 #35876

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
    <*errors.errorString | 0xc820b5f2f0>: {
        s: "failed to wait for pods responding: pod with UID 18fda40b-b7f5-11e6-acc0-42010af00002 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-p66w9/pods 14790} [{{ } {my-hostname-delete-node-06dk5 my-hostname-delete-node- e2e-tests-resize-nodes-p66w9 /api/v1/namespaces/e2e-tests-resize-nodes-p66w9/pods/my-hostname-delete-node-06dk5 18fdbcf1-b7f5-11e6-acc0-42010af00002 14421 0 2016-12-01 10:36:39 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-p66w9\",\"name\":\"my-hostname-delete-node\",\"uid\":\"18fb6980-b7f5-11e6-acc0-42010af00002\",\"apiVersion\":\"v1\",\"resourceVersion\":\"14402\"}}\n] [{v1 ReplicationController my-hostname-delete-node 18fb6980-b7f5-11e6-acc0-42010af00002 0xc8210d3367}] [] } {[{default-token-3sgr8 {<nil> <nil> <nil> <nil> <nil> 0xc820b75410 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3sgr8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8210d34a0 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-j1oz 0xc82116b740 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:36:39 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:36:41 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:36:39 -0800 PST  }]   10.240.0.3 10.180.0.17 2016-12-01 10:36:39 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821082120 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3ba87aaf7f7e6424d21a864d3112e6592003c1467f3c23117c0ebe3baba60616}]}} {{ } {my-hostname-delete-node-0n8fg my-hostname-delete-node- e2e-tests-resize-nodes-p66w9 /api/v1/namespaces/e2e-tests-resize-nodes-p66w9/pods/my-hostname-delete-node-0n8fg 564ac6cf-b7f5-11e6-acc0-42010af00002 14615 0 2016-12-01 10:38:22 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-p66w9\",\"name\":\"my-hostname-delete-node\",\"uid\":\"18fb6980-b7f5-11e6-acc0-42010af00002\",\"apiVersion\":\"v1\",\"resourceVersion\":\"14522\"}}\n] [{v1 ReplicationController my-hostname-delete-node 18fb6980-b7f5-11e6-acc0-42010af00002 0xc8210d3817}] [] } {[{default-token-3sgr8 {<nil> <nil> <nil> <nil> <nil> 0xc820b75470 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3sgr8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8210d39d0 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-z5p4 0xc82116b880 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:38:22 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:38:24 -0800 PST 
 } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:38:22 -0800 PST  }]   10.240.0.4 10.180.1.73 2016-12-01 10:38:22 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821082140 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b477745434b0ab3d40380dafa8626431e45c341a796c54e06bb4868daf606366}]}} {{ } {my-hostname-delete-node-8t14m my-hostname-delete-node- e2e-tests-resize-nodes-p66w9 /api/v1/namespaces/e2e-tests-resize-nodes-p66w9/pods/my-hostname-delete-node-8t14m 18fd6af6-b7f5-11e6-acc0-42010af00002 14416 0 2016-12-01 10:36:39 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-p66w9\",\"name\":\"my-hostname-delete-node\",\"uid\":\"18fb6980-b7f5-11e6-acc0-42010af00002\",\"apiVersion\":\"v1\",\"resourceVersion\":\"14402\"}}\n] [{v1 ReplicationController my-hostname-delete-node 18fb6980-b7f5-11e6-acc0-42010af00002 0xc8210d3df7}] [] } {[{default-token-3sgr8 {<nil> <nil> <nil> <nil> <nil> 0xc820b754d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3sgr8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8210d3f60 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-z5p4 0xc82116b980 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:36:39 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:36:40 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:36:39 -0800 PST  }]   10.240.0.4 10.180.1.72 2016-12-01 10:36:39 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821082160 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://0bb701f75b109530eb8b3e572c4a7c0fc47a5849a9b93829befe225282025344}]}}]}",
    }
    failed to wait for pods responding: pod with UID 18fda40b-b7f5-11e6-acc0-42010af00002 is no longer a member of the replica set.  Must have been restarted for some reason.  Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-p66w9/pods 14790} [{{ } {my-hostname-delete-node-06dk5 my-hostname-delete-node- e2e-tests-resize-nodes-p66w9 /api/v1/namespaces/e2e-tests-resize-nodes-p66w9/pods/my-hostname-delete-node-06dk5 18fdbcf1-b7f5-11e6-acc0-42010af00002 14421 0 2016-12-01 10:36:39 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-p66w9","name":"my-hostname-delete-node","uid":"18fb6980-b7f5-11e6-acc0-42010af00002","apiVersion":"v1","resourceVersion":"14402"}}
    ] [{v1 ReplicationController my-hostname-delete-node 18fb6980-b7f5-11e6-acc0-42010af00002 0xc8210d3367}] [] } {[{default-token-3sgr8 {<nil> <nil> <nil> <nil> <nil> 0xc820b75410 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3sgr8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8210d34a0 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-j1oz 0xc82116b740 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:36:39 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:36:41 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:36:39 -0800 PST  }]   10.240.0.3 10.180.0.17 2016-12-01 10:36:39 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821082120 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://3ba87aaf7f7e6424d21a864d3112e6592003c1467f3c23117c0ebe3baba60616}]}} {{ } {my-hostname-delete-node-0n8fg my-hostname-delete-node- e2e-tests-resize-nodes-p66w9 /api/v1/namespaces/e2e-tests-resize-nodes-p66w9/pods/my-hostname-delete-node-0n8fg 564ac6cf-b7f5-11e6-acc0-42010af00002 14615 0 2016-12-01 10:38:22 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-p66w9","name":"my-hostname-delete-node","uid":"18fb6980-b7f5-11e6-acc0-42010af00002","apiVersion":"v1","resourceVersion":"14522"}}
    ] [{v1 ReplicationController my-hostname-delete-node 18fb6980-b7f5-11e6-acc0-42010af00002 0xc8210d3817}] [] } {[{default-token-3sgr8 {<nil> <nil> <nil> <nil> <nil> 0xc820b75470 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3sgr8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8210d39d0 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-z5p4 0xc82116b880 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:38:22 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:38:24 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:38:22 -0800 PST  }]   10.240.0.4 10.180.1.73 2016-12-01 10:38:22 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821082140 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://b477745434b0ab3d40380dafa8626431e45c341a796c54e06bb4868daf606366}]}} {{ } {my-hostname-delete-node-8t14m my-hostname-delete-node- e2e-tests-resize-nodes-p66w9 /api/v1/namespaces/e2e-tests-resize-nodes-p66w9/pods/my-hostname-delete-node-8t14m 18fd6af6-b7f5-11e6-acc0-42010af00002 14416 0 2016-12-01 10:36:39 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-p66w9","name":"my-hostname-delete-node","uid":"18fb6980-b7f5-11e6-acc0-42010af00002","apiVersion":"v1","resourceVersion":"14402"}}
    ] [{v1 ReplicationController my-hostname-delete-node 18fb6980-b7f5-11e6-acc0-42010af00002 0xc8210d3df7}] [] } {[{default-token-3sgr8 {<nil> <nil> <nil> <nil> <nil> 0xc820b754d0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] []  [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-3sgr8 true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8210d3f60 <nil> ClusterFirst map[] default jenkins-e2e-minion-group-z5p4 0xc82116b980 []  } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:36:39 -0800 PST  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:36:40 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-01 10:36:39 -0800 PST  }]   10.240.0.4 10.180.1.72 2016-12-01 10:36:39 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821082160 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://0bb701f75b109530eb8b3e572c4a7c0fc47a5849a9b93829befe225282025344}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Addon update should propagate add-on file changes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:316
Expected error:
    <*errors.errorString | 0xc8217ee1c0>: {
        s: "error waiting for service e2e-tests-addon-update-test-bcgxf/addon-test to appear: timed out waiting for the condition",
    }
    error waiting for service e2e-tests-addon-update-test-bcgxf/addon-test to appear: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/addon_update.go:320

Issues about this test specifically: #35600
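
The failure means the addon manager never materialized the service the test pushed. A minimal triage sketch, assuming access to the same cluster while the e2e namespace still exists (the service and namespace names come from the error above; the addon directory path is an assumption about the GCE master layout in this release range):

    # did the addon-test service ever get created?
    kubectl get service addon-test --namespace=e2e-tests-addon-update-test-bcgxf -o yaml
    # on the master, the addon manager watches a directory of manifests,
    # conventionally /etc/kubernetes/addons; listing it over SSH shows whether
    # the test's updated files actually arrived there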

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:781
Dec  1 09:51:28.993: Timeout waiting for service "mutability-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1967

Issues about this test specifically: #26134
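
The framework here is polling for the service's status.loadBalancer.ingress list to be populated and gave up. The same condition can be checked by hand; a minimal sketch (the service name comes from the failure above; run it in whatever e2e namespace the test created, which is not shown in this excerpt):

    # empty output means the cloud provider never reported an ingress IP
    kubectl get service mutability-test -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    # on GCE, a matching forwarding rule should exist once provisioning succeeds
    gcloud compute forwarding-rules list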

Failed: UpgradeTest {e2e.go}

error running Upgrade Ginkgo tests: exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse nodePort when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:521
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.158.107 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-mms3m -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-mms3m/services/redis-master\", \"uid\":\"e97bba75-b7eb-11e6-acc0-42010af00002\", \"resourceVersion\":\"7078\", \"creationTimestamp\":\"2016-12-01T17:30:54Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-mms3m\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.12.203\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n error: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n [] <nil> 0xc820a30780 exit status 1 <nil> true [0xc8202eebe8 0xc8202eec00 0xc8202eec18] [0xc8202eebe8 0xc8202eec00 0xc8202eec18] [0xc8202eebf8 0xc8202eec10] [0xafa720 0xafa720] 0xc820fbd860}:\nCommand stdout:\nError executing template: nodePort is not found. Printing more information for debugging the template:\n\ttemplate was:\n\t\t{.spec.ports[0].nodePort}\n\tobject given to jsonpath engine was:\n\t\tmap[string]interface {}{\"apiVersion\":\"v1\", \"metadata\":map[string]interface {}{\"selfLink\":\"/api/v1/namespaces/e2e-tests-kubectl-mms3m/services/redis-master\", \"uid\":\"e97bba75-b7eb-11e6-acc0-42010af00002\", \"resourceVersion\":\"7078\", \"creationTimestamp\":\"2016-12-01T17:30:54Z\", \"labels\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"name\":\"redis-master\", \"namespace\":\"e2e-tests-kubectl-mms3m\"}, \"spec\":map[string]interface {}{\"selector\":map[string]interface {}{\"app\":\"redis\", \"role\":\"master\"}, \"clusterIP\":\"10.0.12.203\", \"type\":\"ClusterIP\", \"sessionAffinity\":\"None\", \"ports\":[]interface {}{map[string]interface {}{\"protocol\":\"TCP\", \"port\":6379, \"targetPort\":\"redis-server\"}}}, \"status\":map[string]interface {}{\"loadBalancer\":map[string]interface {}{}}, \"kind\":\"Service\"}\n\n\nstderr:\nerror: error executing jsonpath \"{.spec.ports[0].nodePort}\": nodePort is not found\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/cluster/kubectl.sh [/workspace/kubernetes_skew/cluster/kubectl.sh --server=https://104.198.158.107 --kubeconfig=/workspace/.kube/config get service redis-master --namespace=e2e-tests-kubectl-mms3m -o jsonpath={.spec.ports[0].nodePort}] []  <nil> Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-mms3m/services/redis-master", "uid":"e97bba75-b7eb-11e6-acc0-42010af00002", "resourceVersion":"7078", "creationTimestamp":"2016-12-01T17:30:54Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-mms3m"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.12.203", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
     error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
     [] <nil> 0xc820a30780 exit status 1 <nil> true [0xc8202eebe8 0xc8202eec00 0xc8202eec18] [0xc8202eebe8 0xc8202eec00 0xc8202eec18] [0xc8202eebf8 0xc8202eec10] [0xafa720 0xafa720] 0xc820fbd860}:
    Command stdout:
    Error executing template: nodePort is not found. Printing more information for debugging the template:
    	template was:
    		{.spec.ports[0].nodePort}
    	object given to jsonpath engine was:
    		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/e2e-tests-kubectl-mms3m/services/redis-master", "uid":"e97bba75-b7eb-11e6-acc0-42010af00002", "resourceVersion":"7078", "creationTimestamp":"2016-12-01T17:30:54Z", "labels":map[string]interface {}{"app":"redis", "role":"master"}, "name":"redis-master", "namespace":"e2e-tests-kubectl-mms3m"}, "spec":map[string]interface {}{"selector":map[string]interface {}{"app":"redis", "role":"master"}, "clusterIP":"10.0.12.203", "type":"ClusterIP", "sessionAffinity":"None", "ports":[]interface {}{map[string]interface {}{"protocol":"TCP", "port":6379, "targetPort":"redis-server"}}}, "status":map[string]interface {}{"loadBalancer":map[string]interface {}{}}, "kind":"Service"}
    
    
    stderr:
    error: error executing jsonpath "{.spec.ports[0].nodePort}": nodePort is not found
    
    error:
    exit status 1
    
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183

Issues about this test specifically: #28523 #35741 #37820
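
The jsonpath error is consistent with the object dump above: the service came back as type ClusterIP, and a nodePort is only allocated for NodePort and LoadBalancer services, so the field genuinely is not there to read. A minimal way to confirm against the same object, assuming the e2e namespace still exists (service and namespace names are taken from the error above):

    # reproduces the failure: a ClusterIP service has no nodePort
    kubectl get service redis-master --namespace=e2e-tests-kubectl-mms3m \
        -o jsonpath='{.spec.ports[0].nodePort}'
    # confirms why: the apply left the service as ClusterIP instead of NodePort
    kubectl get service redis-master --namespace=e2e-tests-kubectl-mms3m \
        -o jsonpath='{.spec.type}'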

Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:125
Expected error:
    <*runtime.notRegisteredErr | 0xc821b22300>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:101
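
The <*runtime.notRegisteredErr> here (and in the other two ScheduledJob failures below) is a client-side decode error: the test's scheme has no CronJob kind registered for batch/v2alpha1. In a 1.4-to-1.5 skew run this most likely reflects the rename of ScheduledJob to CronJob in 1.5, with the older test client decoding objects returned by the upgraded server; that is an inference from the error type, not something this log confirms. A minimal sketch for checking what the server actually serves (the proxy port is illustrative):

    # is the alpha batch group served at all after the upgrade?
    kubectl api-versions | grep batch
    # inspect the group's discovery document; in 1.5 the kind listed there is CronJob
    kubectl proxy --port=8001 &
    curl -s http://127.0.0.1:8001/apis/batch/v2alpha1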

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
    <*errors.errorString | 0xc8201b2760>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244

Issues about this test specifically: #31502 #32947

Failed: [k8s.io] ScheduledJob should not schedule jobs when suspended [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:94
Expected error:
    <*runtime.notRegisteredErr | 0xc821c8e6c0>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:80

Issues about this test specifically: #32035 #34472

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] master upgrade should maintain responsive services [Feature:MasterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:53
Dec  1 09:02:09.519: Timeout waiting for service "service-test" to have a load balancer
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2342

Failed: [k8s.io] ScheduledJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:156
Expected error:
    <*runtime.notRegisteredErr | 0xc820ac1e40>: {
        gvk: {Group: "batch", Version: "v2alpha1", Kind: "CronJob"},
        t: nil,
    }
    no kind "CronJob" is registered for version "batch/v2alpha1"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:132

Issues about this test specifically: #30542 #31460 #31479 #31552 #32032
